
Winter 2014

Content-Enabling Your
Insurance Business
Using Oracle BPM and
WebCenter Content
ASM Metrics
Enforcing Principle of
Least Privilege
Maturity of Service
Oriented Architectures
Future is now, ODI 12c
And more:
18 authors, 17 articles,
4 ACEs, 6 ACE
Directors, ...
Foreword

OTech Magazine: Bigger & Better
When the first issue of OTech Magazine came out in September, I could not have dreamed about the things that were about to happen. I had set an initial target for myself of one thousand readers, and how conservative that turned out to be.
The first issue of OTech Magazine was released on Tuesday, September 24, 2013. In the heat of Oracle OpenWorld the magazine created a rush: the initial goal of one thousand hits was reached within hours. In the first week the magazine was viewed over 16 thousand times, after a month we had 21 thousand hits and counting, and the total number of views since release is now over 25 thousand. The peak came during the first week, with a massive 13,542 hits on the Thursday after publication.
With these numbers it was time to stop looking at this as an innocent hobby and to start treating it as a more and more professional venture.
So here we are. With the help of dozens of friends, relatives, pals and some of the finest Oracle folks in the world, we created an even bigger (and, I really do think, better) issue of the magazine. And it's an issue that we all (yes, you too, as a reader of this) can be very proud of. Every single person who helped with this magazine did it in his or her own valuable time. And look at the result:
- 136 pages of pure Oracle technology knowledge
- 18 hard working authors, the best in their field
- 4 Oracle ACEs, 6 ACE Directors
This magazine that started off as just a way to have some fun is turning into something magical: it might even turn out to become the Oracle glossy. Not some fancy-schmancy thing that's all nice pictures and only marketing, but something real. Something that you and I would like to read when we have a spare moment. Just put your feet on the table and read a bit of real, insightful knowledge about our field of work.
Enjoy. I know I did.
Cheers!
Douwe Pieter van den Bos
OTech Contents
SOA Made Simple: choosing the right SOA and BPM Suite
component based on classification
Ronald van Luttikhuizen - Vennster
Tips and tricks for installing and maintaining FMW
products
Peter Lorenzen - CGI
Enforcing Principle of Least Privilege
Biju Thomas - OneNeck IT Solutions
An Introduction to Design Considerations for a Highly
Available Oracle Access Manager Deployment
Robert Honeyman - Honeyman IT Consulting
Oracle WebCenter Experts Complete the IT Puzzle
Troy Allen - TekStream
Maturity of Service Oriented Architectures
Douwe Pieter van den Bos - Ome-B.nl Creative Software Solutions
The World According to Oracle: Oracle OpenWorld 2013
and beyond
Lucas Jellema - AMIS
From Requirements to Tool Choice
Sten Vesterli - Scott/Tiger
NoSQL and Oracle
James Anthony - e-DBA
Case Management or Business Process Management?
Lonneke Dikmans - Vennster
Enterprise Deployment of Oracle Fusion Middleware
Products, Part 1
Simon Haslam - Veriton Ltd
Data Security in Case Management
Marcel van de Glind & Aldo Schaap - AMIS
Oracle Business Intelligence and Essbase Together: You don't know what you don't know
Neil Sellers - Qubix International Ltd
Why I Don't Use WebLogic JMS Topics
Ahmed Aboulnaga - Raastech
ASM Metrics
Bertrand Drouvot
Future is now, ODI 12c
Gurcan Orhan - Global Maksimum
Content-Enabling Your Insurance Business using Oracle
BPM and WebCenter Content
Raoul Miller - TEAM Informatics
Information page
SOA Made Simple: choosing the right SOA
and BPM Suite component based on
classification
Ronald van Luttikhuizen - Vennster
Organizations that have just started their SOA effort usually
only have a couple of services in place. Services are
discovered in a process that is called service identification.
Services are either identified top-down, based on the business processes and the to-be architecture, or bottom-up, by projects based on the services needed at that moment. Service identification is usually an iterative approach, so after a number of iterations you'll have dozens of services.
Is it hard for clients to find the services they need?
Does it take too long to answer specific questions
from stakeholders such as security officers and IT
operations about the services in your organization?
Is it difficult to make consistent choices on the
design and implementation of services? If so, you
might benefit from creating a service classification.
What is a service classification and why use it?
Different stakeholders need different information
about the services in your organization, such as:
- The functionality that is offered by the service (service consumers);
- Channels through which the service can be accessed (service consumers, security officer);
- Contract and interface of the service (service consumers, service providers);
- Technology used to implement the service (service providers: software architects, developers, operations);
- Security constraints and measures such as authentication (service consumers, service providers, security officer);
- Visibility of the service to the outside world (service providers, security officer).
At some point it will become unmanageable to list all
metadata in big documents. We need to be able to focus on
certain aspects of a service, and leave other criteria out.
Basically, we create a certain viewpoint or filter on our
services. This is called a classification.
Figure 1: Criteria to use for your classification
Classification for service producers
There is no one true service classification. Different
stakeholders have different requirements and need to know
different things about the services. It is very well possible
that you need to create and maintain several classifications.
Arguably, (future) consumers of your services are the most
important stakeholders to consider. Without consumers, we might just as well discard a service. Consumers need to know what
functionality services offer, if they are allowed to use a
service and under what conditions, and should be able to
easily search for services. An example of a service
classification for consumers would be a matrix with the
functional domains of your organization (HR, CRM, Finance,
etc.) on one hand and the accessibility of the service (private,
internal use, external use) on the other hand. Such a
classification is often used in a Service Registry in which
consumers can search for existing services to use.
Although valid for service consumers, classification is also
very useful when you are designing, building, and
maintaining services. The following classification that is
based on granularity and the ability of services to be
combined into larger services is proposed for these
stakeholders. This article shows you how this classification
can be used to pick the right SOA and BPM Suite component
when building services.
Figure 2: Service Classification based on Granularity and Composition
Elementary services are the smallest possible building blocks
that still provide value on their own. They are typically short
running. Examples are a ClaimDataService used by an
insurance company for storing claim metadata and a
DocumentService used to store and retrieve documents.
Composite services are created when a particular combination of services is recurring. These services combine several associated actions in one transaction. An example is a
ClaimService used by an insurance company
to both register the claim metadata using
the ClaimDataService and to store the
associated documents using the
DocumentService as one operation. Process
services are longer running services that are
created by combining elementary and
composite services and often have a human
step associated with them to handle
exceptions or certain specific tasks. An
example is a ClaimToPaymentService that handles a claim from start to end; this involves both human and automated steps.
This classification appeals to designers and developers, since combining components into larger objects is a natural way of thinking for them, and guidelines and implementation choices differ between these service types.
Figure 3: Examples of elementary, composite, and process services
Classifying to know what Oracle
Fusion Middleware product to use
Oracle Fusion Middleware is a
comprehensive stack and consists of
various products and suites. The
following products are especially
interesting from a SOA point-of-view:
Oracle Service Bus (OSB): Oracle's strategic Enterprise Service Bus that can be used for protocol transformation (e.g. RMI to SOAP), data transformation, securing of services based on policies, virtualization of the underlying service implementation, integration of various components into services, and so on.
Oracle SOA Suite: a platform that
lets you combine building blocks such
as Business Rules, BPEL components,
Human Workflow, and so on into
composite applications that are called
SOA composites. The SOA Suite uses
the SCA standard to create SOA
composites.
Oracle BPM Suite: an extension to SOA Suite that provides BPMN and Adaptive Case Management capabilities to orchestrate activities into processes.
The big question is: which product to use in which scenario? This choice can have a big impact on the overall quality of the solution you're building. The following diagram shows how the classification is mapped to the various products.
Figure 4: Oracle Fusion Middleware
Figure 5: Mapping of Fusion Middleware onto Service Classification
A service is a capability; in SOA everything is
considered a service. A service consists of three
components: an interface (how the service can be
used and accessed), an implementation (how the
service is realized) and a contract (what consumers
can expect from a service and under what
conditions they can use the service). In the case of the DocumentService that we discussed earlier, the
implementation could be an off-the-shelf Document
Management System, the interface a SOAP Web
Service described by a WSDL and XSDs, and the
contract an SLA defining the cost of usage,
availability, owner, response time, and so on. The remainder
of the article will discuss what product to choose for the
implementation of services, and what product to use for
accessing the service; or exposing its interface.
Implementation of Elementary Services
In IT, the implementation can be anything: a packaged application such as Oracle EBS or Oracle Fusion Applications, or custom-built software using Java, PL/SQL, Oracle SOA Suite, or OSB.
Figure 6: Example of a simple calculation Web Service implemented in Java (J-WS)
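As an illustration of the kind of Java (JAX-WS) service Figure 6 describes, here is a minimal sketch; the class, method, and URL are hypothetical, not taken from the article:

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical calculation service; pure program logic like this
// belongs in Java rather than in BPEL or OSB.
@WebService
public class CalculationService {

    @WebMethod
    public double addPercentage(double amount, double percentage) {
        return amount * (1 + percentage / 100);
    }

    public static void main(String[] args) {
        // Publishes the service; the WSDL is generated at the URL + ?wsdl.
        Endpoint.publish("http://localhost:8080/calc", new CalculationService());
    }
}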
Important considerations when choosing Oracle SOA Suite or
OSB as implementation platform for elementary services:
Don't use Oracle BPEL or OSB as a general-purpose programming language for elementary services. Program
logic such as calculations or a high degree of conditional
statements are better suited for imperative programming
languages such as Java or PL/SQL. SOA Suite offers you the
capability to include Java logic in SOA Composites as Spring
components.
You can expose any component of SOA Suite as a service by
wrapping it in a SOA Composite of its own. For example,
Business Rules and Human Workflow are components that
are often used from other components such as BPEL, BPMN,
and Case Management and packaged together with these
components into SOA Composites. However, on their own they can also provide added value and be exposed and packaged independently as a SOA Composite using SOA Suite.
Figure 7: Example of a service that is implemented using SOA Suite (BPEL,
Mediator, and Business Rule)
This article is based on the book SOA Made Simple by Lonneke Dikmans and Ronald van Luttikhuizen: http://www.packtpub.com/service-oriented-architecture-made-simple/book.
Implementation of Composite Services
Composition is the combination of several smaller services
into a larger service that offers more added value. When we
combine only a few services in a straight-forward fashion
(e.g. sequentially invoke service operation A followed by the
invocation of service operation B) we call this aggregation.
When the composition is more complex and involves more
conditional logic, we use the term orchestration.
Use BPEL, which is part of SOA Suite, for orchestration.
Use OSB for simple aggregation flows only. Transformation and routing logic in OSB flows becomes cluttered and hard to maintain if it gets too unwieldy.
Figure 8: Putting too much (complex) composition logic into OSB results in
cluttered flows
Figure 9: Implementation of a Composite Service operation in BPEL
SOA Suite has more components available than OSB that
can be used to implement the composition logic. For
example the use of Domain-Value Maps (DVM) to map
various data elements onto each other, and the use of
Business Rules to encapsulate fast changing logic. Business
Rules as well as DVMs can be changed at runtime without
the need for software modifications which provides greater
flexibility. While SOA Suite offers more functionality, OSB is
more light-weight and performs better for simple services
that process large amounts of messages.
By default the operations of a service are implemented in
one message flow diagram in OSB (Proxy Service). That
means that if a composite service has several operations,
each containing a complex composition, the message flow
becomes very cluttered. You could create a separate Proxy
Service for every operation but that results in overhead. In
SOA Suite you can easily use a Mediator component that redirects every service operation invocation to its own BPEL component. The BPEL editor only shows the flow for one operation, as opposed to showing the flows for every operation.

Implementation of Process Services
For the implementation of longer running services Fusion
Middleware offers several choices:
Use Adaptive Case Management when there are many
different variations in the process flow. This is the case with
knowledge-driven processes in which users determine the
next action in the process as it progresses. Adaptive Case
Management was added to the BPM Suite in 11g PS 6.
For deterministic processes such as invoice processing in
which efficiency is important, you can either choose BPMN
(BPM Suite) or BPEL. BPEL is a more technical and rigid
notation, while BPMN is better suited for process modelling
by business analysts on a higher level and provides more
flexibility.
Figure 10: Difference between BPMN (upper) notation and BPEL (lower)
A best-practice for services, independent of their type, is to
have guidelines in place for the size of messages you allow in
OSB or SOA Suite. Small messages can be inline, larger
messages should be sent as an attachment, and big
messages should be handled with a claim check pattern. The actual handling of such large files is best left to tools that are better suited for this, like FTP servers, ODI, or the upcoming Oracle Managed File Transfer product.
Publishing the interfaces
So far you have seen what product to use for the
implementation of different types of services. Besides the
technology to build services, we also need to know how
service consumers will access the capabilities of the services.
There are several choices to
expose services to the outside
world:
Expose service
implementations by using their
proprietary interface. This is often the product or technology in which the service was built.
For example, you can expose a
PL/SQL package and use that as
the interface or use RMI to
expose Java components.
Expose the service implementations by using a standard
interface such as a SOAP Web Service or REST service. You
can for example use JAX-WS to expose Java components as
SOAP or REST services or use Oracle JCA adapters to
transform a (proprietary) technology to a standard interface.
Note that Oracle offers various JCA adapters for Relational
Databases, JMS, File, FTP, AQ, MQ, and so on that are built on
the Java JCA standard and expose those technologies as
SOAP Web Services. These JCA adapters are deployed on
Oracle WebLogic Server and can be invoked from both OSB
and SOA Suite.
Use an Enterprise Service Bus like OSB as a central platform
to expose your services to the outside world.
The latter approach increases the flexibility of your services.
Changes in the service implementations can be mediated in
the OSB. You can use it as a platform for versioning of
services, content-based routing to the appropriate service
implementation, transformation from a canonical data
format to a local data format, protocol transformation,
applying security measures that are defined in your service
contracts, and so on.
Figure 11: Exposing a service using Oracle Service
Bus
Summary
Oracle offers a number of components
that are part of the SOA and BPM
Suite. A technical service classification
helps you decide which of these
component to use for the
implementation and interfaces of your
services in what scenario. It is a good
practice to create service classifications
as part of your SOA governance. Create classifications based
on the needs of your stakeholders.
Ronald van Luttikhuizen
Vennster
Tips and tricks for installing and
maintaining FMW products
Peter Lorenzen - CGI
Introduction
The purpose of this article is to provide an overview of
information that I feel is important to know when you install
and maintain Oracle Fusion Middleware (FMW) products.
Documentation
Oracle has lots of FMW documentation and locating the right
one can be a challenge.
Getting started
The best place to start is with the FMW Download,
Installation, and Configuration Readme Files
(http://goo.gl/GygKSP). There is a readme file for each FMW
release.
The readme file will lead you to the rest of the
documentation. It will help you locate the right software as
well as install and configure FMW.
It is highly recommended to read the following manuals:
Fusion Middleware System Requirements and
Specifications (http://goo.gl/gOcMpS)
FMW Installation Planning Guide (11g)
(http://goo.gl/QR7dZJ)
Installing and Configuring the FMW Infrastructure (12c)
(http://goo.gl/J4xnZq)
Product installation guides
Each FMW product has its own installation guide. Make sure
you read these in detail, since some of them have unique
information. There are, for example, important differences between Java Components and System Components (http://goo.gl/NaEW5n). By the way, please note that the management of System Components has changed significantly between FMW 11g and 12c, as OPMN has been replaced with Node Manager (http://goo.gl/xPUZBz).
Release Notes
All products have Release Notes. They contain information about what to do when the software is not working as expected. The Release Notes are part of the documentation library for the product. They are updated periodically, so it is a good idea to check them regularly.
There is also a Known Issues list for some products. This is not part of the documentation library and is sometimes also referred to as Release Notes. An example is the Known
Issues for Oracle SOA Products and Oracle AIA Foundation
Pack (http://goo.gl/E1jBHL). It lists known issues for BAM,
BPEL, OSB, SOA, BPM Suite etc.
I assume that the reason for the two Release Notes is that
there are many known issues and it is easier to maintain this
list than the documentation library.
The Release Notes sometimes list patches that should be installed. An example is OSB 11.1.1.7, which lists four required WebLogic Server patches.
Repository Creation Utility
Many FMW products require a repository in a database. You cannot just use any database, or even any Oracle database. Some of the products have very specific requirements for the configuration of the database. Sometimes the Repository Creation Utility (RCU) will complain about a missing requirement, but for some products this can be ignored. It is therefore a good idea to read the
RCU documentation:
Creating Schemas with the Repository Creation
Utility (12c) (http://goo.gl/GFt89w)
FMW Repository Creation Utility User's Guide
(11g) (http://goo.gl/rYS7SY)
Downloading software
Oracle has three different locations for
downloading software.
Oracle Technology Network (OTN)
At the OTN site, you can download most Oracle
FMW software for free. Although you do not
have to pay to download the software, it is not free to use! You can use the software while doing a proof of concept or developing a new application. As soon as you go into production, however, proper licensing is required for all the software used, including environments used for maintenance and the development of new releases, etc. This is also true for software installed on developers' laptops. Make sure that you understand the OTN Developer License (http://goo.gl/He919f) before downloading software from OTN.
There is another license model - the OTN Free Developer
License (http://goo.gl/1CUW97). It only covers the WebLogic
Server. It allows a single developer to use the WebLogic Server for free, even after going into production. This is nice, but the license only covers WebLogic; there is no OSB, SOA Suite etc. Therefore, it is only for pure Java applications.
Oracle Software Delivery Cloud
The http://edelivery.oracle.com site has existed for some time, and is now, not surprisingly, a cloud service.
When you download software from
the site, you acknowledge that you
have already obtained a valid
license or that the 30 day Oracle
Software Delivery Cloud Trial
License is used.
You are not required to use the site
and as long as you have your
licenses in order, it does not matter
from which site you get the
software. I normally use OTN for everything.
My Oracle Support (MOS)
You need a valid support agreement to access the MOS site.
From MOS you can download patches, updates and other
fixes.
OTN and eDelivery only contain the latest releases of a product. If you need an older release or legacy software from one of Oracle's acquisitions, it is necessary to create a MOS Service Request in order to obtain a download link.
Location, Location, Location
When you install FMW you should follow a strict standard of
directory naming.
This is what I normally do:
Oracle Base  /u01/app/oracle
Products     /u01/app/oracle/products
Domains      /u01/app/oracle/domains
The products directory contains the software installations
and the Oracle Homes. The domains directory contains the
domain homes, with all the configuration data.
For example:
MW_HOME /u01/app/oracle/products/wls1212
DOMAIN_HOME /u01/app/oracle/domains/myDomain
Whatever you do, make sure you do not keep your domains
under the Middleware Home. It makes good sense to keep
binaries and configurations separate, as you can run into
problems when you have to upgrade to a new major FMW
version in the future.
The only exception is Portal, Forms, Reports and Discoverer,
where the domains must be under the Middleware Home.
Otherwise the software will not work! For example:
/u01/app/oracle/product/pfrd11.1/user_projects/myDomain
Hopefully this will be fixed in a coming release.
If you keep the domains separate from the software, you can
harden your installation, by having a special OS user that
owns the software installation. Another user will own the
domain home and will only be granted read and execute
rights to the software.
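As a minimal sketch of this hardening approach (the user and group names are assumptions, not a standard):

# Software owner and domain owner as separate OS users (assumed names)
groupadd oinstall
useradd -g oinstall swowner     # owns the product installations
useradd -g oinstall domowner    # owns the domain homes
chown -R swowner:oinstall /u01/app/oracle/products
chmod -R 750 /u01/app/oracle/products   # group gets read/execute only
chown -R domowner:oinstall /u01/app/oracle/domains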
In WebLogic 12.1.2 this works without any problems, but in previous releases the Node Manager configuration and log files were located under the Middleware home. Therefore, you need to move the Node Manager configuration files and change the configuration so the log files are written to a different location.
Remember to check out Oracles recommendations for
selecting directories:
http://goo.gl/vxdy4x
Java
These days you have to reinstall Java at least every quarter,
because of security patches and new releases. Here are two
tips that can make life a bit easier.
Soft links
Since the Java installation is referenced in several files under
the Middleware home and the domain homes, you need to
change these references every time you install a new
release. It is easy to create a script that does this, but I prefer to create a soft link to the current Java installation and reference this everywhere.
# Java Home
/u01/app/oracle/product/jdk1.7.0_45
# Create soft link
cd /u01/app/oracle/product
ln -s jdk1.7.0_45 java_current
/u01/app/oracle/product/java_current now points
to /u01/app/oracle/product/jdk1.7.0_45.
It is not a perfect solution, as some programs will use the
real location when they access the soft link, which can get
you into trouble now and then.
You can do the same on Windows with symlinks.
@REM Java Home
D:\oracle\product\jdk1.7.0_45
@REM Create symlink
cd D:\oracle\product
mklink /d java_current jdk1.7.0_45
D:\oracle\product\java_current now points to
D:\oracle\product\jdk1.7.0_45.
Whatever you do, please make sure you always remove the old Java installation. I have experienced situations where customers were using an old Java installation because they had forgotten to change the references to the old one.
cacerts
When you install Java, it contains a default keystore called
cacerts. If you use this, you must remember to copy it to the
new Java installations every time you reinstall. This is bound
to go wrong and you will have to spend time figuring out
why. Never use cacerts, but instead use a custom keystore.
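A minimal sketch of the custom keystore approach (the alias, file names and password are placeholders):

# Import your CA certificate into a custom keystore kept outside the JDK
keytool -importcert -alias mycorp-ca -file mycorp-ca.pem \
  -keystore /u01/app/oracle/config/trust.jks -storepass <password>

# Point the JVM at the custom keystore instead of relying on cacerts
-Djavax.net.ssl.trustStore=/u01/app/oracle/config/trust.jks
-Djavax.net.ssl.trustStorePassword=<password>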
Less is more
You should always install as little as
possible. This goes for the OS, the
database, FMW etc. Maintenance is
easier and security is better. Do not
install products, options,
demos/examples etc. that you do not
need.
When you create a WebLogic domain,
you should select as few products as
possible. If you select all possible
products, you might even end up in a
situation where a domain does not
work. Some products conflict and the
domain wizard does not warn you
about this. For example, the SOA
Suite conflicts with the SIP Server (http://goo.gl/MFU0qA).
It can be a bit difficult to figure out which product to select
when creating a domain. The Domain Template Reference
(http://goo.gl/2m3Xjy) will be of some help.
Silence is golden
Installing everything manually is fine as long as you only
have a couple of environments, but as soon as you have
more, it will be difficult to ensure that the environments are
identical. If they are not identical, there is a good chance that
developers, testers etc. will run into problems, because of
small discrepancies.
To maximize predictability in your environments, you can
script everything. Oracle has tools that will enable silent
installation, scripted domain
creation and deployment.
Here is an example of a silent
installation and scripted domain
creation of the OSB 11.1.1.6 on
Red Hat 6:
http://theheat.dk/blog/?p=771
You can use the Oracle WebLogic
Scripting Tool (WLST) to create
data sources, JMS queues etc.
This blog post contains a WLST
script for creating a data source:
http://theheat.dk/blog/?p=1467
You can deploy applications via
WLST both online and offline. For more information, check
the documentation (http://goo.gl/2Ivt5G).
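As an illustration, a minimal WLST online-deployment sketch (the URL, credentials, application name and paths are placeholders):

# Connect to the Admin Server and deploy an application to a cluster
connect('weblogic', '<password>', 't3://adminhost:7001')
deploy('myApp', '/u01/app/oracle/deploy/myApp.ear', targets='myCluster')
# redeploy('myApp') and undeploy('myApp') follow the same pattern
disconnect()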
The WebLogic server, of course, also supports ant and
maven.
It is a good idea to use scripted deployment for all
environments. Make sure scripts are continuously created and maintained from the start of a project. It should be a continuous process and not done at the last minute.
Configuration File Archiving
You can configure the WebLogic Server to make a backup of
the configuration whenever you change it. The configuration
files live in the DOMAIN_HOME/config directory and its subdirectories.
If you enable Configuration Archiving, all the configuration
files will be stored in a jar file every time a change is
activated.
The jar files are placed in the DOMAIN_HOME/configArchive
directory.
When the Archive Configuration Count limit is reached, the
oldest file is overwritten.
I consider it best practice to use Configuration File Archiving.
The files do not take up much space and they can provide
you with valuable information about changes you might not
be aware of.
For more details read this blog post:
http://theheat.dk/blog/?p=1385
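The same settings can also be made from WLST; a sketch, assuming the DomainMBean attributes ConfigBackupEnabled and ArchiveConfigurationCount (connection details are placeholders):

connect('weblogic', '<password>', 't3://adminhost:7001')
edit()
startEdit()
cd('/')
# Enable Configuration Archiving and keep the last 50 archives
cmo.setConfigBackupEnabled(true)
cmo.setArchiveConfigurationCount(50)
save()
activate()
disconnect()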
Entropy problems on Linux servers
I install WebLogic on many new servers and have frequently
encountered problems because of low entropy. The
symptom is that it takes a long time for a WebLogic server to
start. The CPU has no load and the log files do not reveal any
problems. It can also happen when you start the Node
Manager for the first time. I once waited 10 minutes for the
Node Manager to start on a powerful blade server, with
nothing else running. It happens both for physical and virtual
servers.
It is one of the things that can be both puzzling and
frustrating until you find out what is going on.
The problem is with the way random numbers are
calculated. It is not a FMW problem, but a Linux problem. I
often meet people who do not know about this, so if you use
Linux you might want to have a look at the details and the
workarounds in this blog post:
http://theheat.dk/blog/?p=1539
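A widely used workaround, which the blog post discusses, is to point the JVM at /dev/./urandom so it does not block waiting for entropy (test this in your environment first):

# Add to setDomainEnv.sh or the server start arguments
JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.security.egd=file:/dev/./urandom"
export JAVA_OPTIONS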
Error Correction Support Policies
In my experience, most people know about Oracle Lifetime
Support. An Oracle product generally moves through three different support stages: Premier (5 years) > Extended (3 years) > Sustaining (perpetual). Each stage has different benefits, and there are fewer and fewer benefits as you move to Extended and Sustaining, and at the same time the price goes up.
What is not so well known is that this is not the whole story. If we look at the WebLogic Server 11g, aka 10.3.x, it is in Premier Support until December 2018. The latest Patch Set is 10.3.6, so if you use 10.3.6 you are OK for some years. But what if you use 10.3.5? If you look in the Critical Patch Update (CPU) from October 2013:
Patch Set Update and Critical Patch Update October 2013
Availability Document (Doc ID 1571391.1)
In the Final Patch History section you will see that July 2013
was the final CPU for WebLogic Server 10.3.5. Even though
WebLogic Server 11g is supported for years, CPUs etc. are
only available if you are running the latest Patch Set, e.g. 10.3.6. This is governed by Oracle's Error Correction Support Policy (ECP). The ECP states that when a new Patch
Set is released, Oracle will only deliver error corrections to
the previous Patch Set within a certain grace period.
The grace period is to provide the customer with time to plan
and apply the Patch Set:
Grace Period: up to 1 year for first patch set (minimum 3 months), and up to 2 years for second and subsequent patch sets.
This means that if you use WebLogic Server 10.3.5 you have
support, but Oracle will not make any new patches, leaving
you in a vulnerable situation.
It is my experience that the ECP is often forgotten. Make sure
that your customers are aware of the benefits of installing
the latest Patch Sets and understand the risks of not doing
so.
You can find lists of the ECP grace periods on MOS.
For more information, check this blog post:
http://theheat.dk/blog/?p=1753
Patching
A big part of installing and maintaining FMW is to
continuously apply the right patches.
To do this you need to know the terminology and the different kinds of patches Oracle supplies.
Security patches
Each quarter Oracle releases security updates. The patch
program is called Critical Patch Update (CPU). It includes all Oracle products, including Java; Java was added with the October 2013 CPU.
In the beginning the patches released by the CPU program
were also called CPU patches, but from October 2012 the
name was changed to Security Patch Updates (SPU). The
program is still called CPU, but the patches are called SPU
patches.
An SPU patch is a cumulative patch consisting of security fixes.
Oracle announces the release dates for the CPUs around a
year in advance.
Sometimes Oracle will release one-off security patches if
particularly nasty bugs are found. It does not happen often,
and as far as I can recall there have not been any in 2013.
Proactive patches
Oracle releases proactive patches on the same quarterly
schedule as CPU patches. Proactive patches come in three
flavors.
Patch Set Updates (PSU)
For some products the SPU patches have been replaced with
Patch Set Updates (PSU). This is true for the database and
the WebLogic server. You can see the full product list in:
Patch Set Updates for Oracle Products (Doc ID 854428.1).
A PSU patch is a cumulative patch consisting of security fixes and other stabilizing changes. It is an SPU plus other non-security related changes. No enhancements are included.
Bundle Patches (BP)
BPs are cumulative patches that are issued between patch
sets. They usually only include bug fixes, but may contain
minor enhancements. For example, the OSB 11.1.1.6
currently has two BPs and the SOA Suite 11.1.1.7 has one.
You can find a list of all FMW BPs and PSUs here:
Master Note on Fusion Middleware Proactive Patching - Patch Set Updates (PSUs) and Bundle Patches (BPs) (Doc ID 1494151.1)
Suite Bundle Patches (SBP)
An SBP is a collection of product BPs for a suite. For example,
an Oracle Identity Management SBP consists of OAM, OAAM,
and OIM BPs.
Version numbers
When you apply a proactive patch, the fifth number in the
product version is incremented.
Here is a WebLogic server 10.3.6 with the October PSU:
. /u01/app/oracle/product/wls103/wlserver_10.3/server/bin/setWLSEnv.sh
java weblogic.version
WebLogic Server 10.3.6.0.6 PSU Patch for BUG17071663 Tue OCT 02 13:01:30 IST 2013
WebLogic Server 10.3.6.0 Tue Nov 15 08:52:36 PST 2011 1441050
Conflicts
Since SPU/PSU patches are cumulative, they will conflict with
each other, so you will need to remove older SPU/PSU
patches before applying new ones.
Overlay patches
Sometimes you need to install one-off patches that are not
included in the proactive patches. These patches are now
called interim patches.
If a proactive patch makes changes to the same code as an
interim patch, they will conflict. If they do, you need a new
version of the interim patch that matches the version
number of the proactive patch. These are called overlay
patches.
If no overlay patch exists for an interim patch, you can
request that Oracle creates one.
Conflict resolution
Resolving conflicts can be difficult. The following MOS note
has a section called PSUs and Patch Conflict Resolution that
can help you:
Announcing Oracle WebLogic Server PSUs (Patch Set
Updates) (Doc ID 1306505.1)
Here is an example I recently encountered. The latest release
of the WebLogic Portal is 10.3.6. Currently no security or
proactive patches exist for the Portal, but you can install
WebLogic PSU patches. However, when you try to apply the
latest PSU you get an error:
Conflict condition details follow:
Patch BYJ1 is mutually exclusive and cannot
coexist with patch(es): CYXN
CYXN is a fix for the 13000612 bug. If you check:
Oracle WebLogic Server Patch Set Update 10.3.6.0.6 Fixed
Bugs List (Doc ID 1589769.1)
You will see that the fix for this bug was included in the first
10.3.6 PSU (10.3.6.0.1). This means you can remove the CYXN
patch as it was already included in the latest PSU.
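For WebLogic 10.3.x these patches are managed with Smart Update (bsu); a sketch using the patch IDs from the example above (paths are placeholders):

cd $MW_HOME/utils/bsu
# Remove the superseded interim patch, then apply the PSU
./bsu.sh -remove -patchlist=CYXN -prod_dir=$MW_HOME/wlserver_10.3
./bsu.sh -install -patch_download_dir=$MW_HOME/utils/bsu/cache_dir \
  -patchlist=BYJ1 -prod_dir=$MW_HOME/wlserver_10.3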
MOS Recommended Patch Advisor
It can be difficult to figure out which patches to install. Oracle has tried to help with the Recommended Patch Advisor on MOS. It is a work-in-progress initiative. I have not used it much yet, but it seems to be working fine.
Keep current
It is not an easy job to keep an FMW installation updated and
secure. Things are moving fast and if you are not up-to-date
and proactive, you can run into problems.
A current example is the Java 7 update 51 that will be
released in January 2014. If an Oracle Forms installation is
not patched before the update is installed on the clients, it
will break Forms.
For more details, check this blog post:
http://theheat.dk/blog/?p=1681
Unfortunately, there is no silver bullet for staying current,
but the list below will help you.
Critical Patch Update Alert E-mails
Sign up for Oracle's CPU Alert e-mails. Oracle will inform you when a new CPU has been released. You will also get an e-mail if Oracle releases one-off security patches.
MOS information centers
Check the product specific information centers on MOS:
OSB (Doc ID 1293368.2)
SOA Suite 11g (Doc ID 1369339.2)
Weblogic Server Patching & Maintenance Information
Center (Doc ID 1573509.2)
Master Note on FMW Proactive Patching PSUs and BPs (Doc
ID 1494151.1)
Make sure you check this MOS note, it contains a list of all
FMW PSUs and BPs.
Blogs, Twitter etc.
I follow many blogs and read a lot of tweets to keep up.
Wrap up
In this article, I have collected an overview of various subjects
I believe you should be aware of as an FMW administrator.
The subjects are broad and encompass various issues that can be hard to come by if you are new to FMW administration.
If you have questions or comments, please feel free to drop by - http://theheat.dk or https://twitter.com/theheatDK.
Peter Lorenzen
CGI Denmark
Enforcing Principle of Least Privilege
Biju Thomas - OneNeck IT Solutions
One of the top features of Oracle Database 12c that attracted me is the ability to enforce the principle of least privilege with ease. Ever since database vendors started taking security seriously, the principle of least privilege has been in play. Identifying the privileges required by an application or user in Oracle Database versions prior to 12c was a tedious trial-and-error process. Many applications I have come across run with DBA or DBA-like privileges, because no privilege analysis was done at application design and development time. For application design and development teams the focus is always on getting the development work completed and delivering the project. Security, especially least privilege, is not an area where the team wants to spend time. It is easy to grant system privileges (especially DBA or ANY privileges like INSERT ANY TABLE) to get the application working.
Oracle Database 12c brings the Privilege Analysis feature to
clearly identify the privileges required by an application for
its functioning and tells the DBA which privileges can be
revoked, to enforce the principle of least privilege and make
the database and application more secure. The privilege analysis feature is available only in Enterprise Edition and requires a Database Vault license, which is an extra-cost option. The good thing is that Database Vault need not be enabled to use Privilege Analysis - one less thing to worry about.
In a nutshell, privilege analysis works as below:
- Define a capture, to identify what needs to be analyzed
- Enable the capture, to start capturing
- Run the application or utility whose privileges need to be analyzed
- Disable the capture
- Generate results from the capture for review
- Implement the findings
I will explain the steps using SQL command line as well as
using Enterprise Manager Cloud Control 12c. To do the privilege analysis you need the CAPTURE_ADMIN role. This role is granted to the DBA role, so if you have DBA privileges on the 12c database, you can perform the analysis.
Figure 1: Privilege Analysis
Demo Environment
For demonstration purposes I am going to use the OE
schema that comes with Oracle Database 12c examples - it
has 14 tables and several other objects. We want to analyze
the privileges of the OE_ADM user, who currently has the following privileges.
- SELECT ANY TABLE
- INSERT ANY TABLE
- UPDATE ANY TABLE
- DELETE ANY TABLE
- ALTER ANY TRIGGER
- CREATE PROCEDURE
- CREATE TABLE
- CREATE SYNONYM
- CREATE ANY INDEX
- ALL privs on ORDERS and ORDER_ITEMS tables
- CONNECT and DBA Roles
SQL> select object_type, count(*) from dba_objects
where owner = 'OE' group by object_type;
OBJECT_TYPE               COUNT(*)
----------------------- ----------
SEQUENCE                         1
LOB                             15
TYPE BODY                        3
TRIGGER                          4
TABLE                           14
INDEX                           48
SYNONYM                          6
VIEW                            13
FUNCTION                         1
TYPE                            37
The OE_ADM user connects using SQL Developer to run the scripts and reports. Our objective is to remove the ANY
privileges from OE_ADM user and grant appropriate
privileges based on the tasks performed during the analysis
period.
The new DBMS_PRIVILEGE_CAPTURE package has the subprograms to manage the privilege analysis. The CAPTURE_ADMIN role has the execute privilege on this package.
Define and Start Capture
The very first step in privilege analysis is to create a capture,
to define what actions need to be monitored. Four types of
analysis can be defined in the capture:
- Database (G_DATABASE - 1): Analyzes used privileges on all objects within the whole database. No condition or roles parameter is specified for this type of capture.
- Role (G_ROLE - 2): Analyzes privileges exercised through a role. Specify the roles to analyze using the ROLES parameter.
- Context (G_CONTEXT - 3): Use this to analyze privileges that are used through an application module or specific context. Specify a CONDITION to analyze.
- Role and Context (G_ROLE_AND_CONTEXT - 4): Combination of role and context.
The CREATE_CAPTURE subprogram is used to define the
capture. For our demo, we want to use the Role and Context,
because we want to know which privileges from the DBA role are being used, as well as which other privileges granted to OE_ADM are used when the application is SQL Developer.
Figure 2: OEM Screen to Create a Privilege Analysis Policy
Figure 2 shows the OEM screen to create a capture policy. With a few clicks you can easily create the policy. Based on the capture type, additional input is requested.
The SQL to define the policy as shown in Figure 2 is:
BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name        => 'Analyze_OE_ADM',
    description => 'Review Privileges used by OE_ADM through SQL Developer',
    type        => DBMS_PRIVILEGE_CAPTURE.G_ROLE_AND_CONTEXT,
    roles       => ROLE_NAME_LIST('DBA','CONNECT'),
    condition   => 'SYS_CONTEXT(''USERENV'', ''MODULE'') = ''SQL Developer'' AND
                    SYS_CONTEXT(''USERENV'', ''SESSION_USER'') = ''OE_ADM''');
END;
/
Once the policy is defined, it shows up in the OEM Privilege Analysis main screen, from where you can enable, disable, generate a report for, and drop the policy. See Figure 3.
Figure 3: Privilege Analysis screen of OEM
You can click on the Start button, or use the SQL below to start the capture.
EXECUTE DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE (name => 'Analyze_OE_ADM');
Now run the application for a period of time, so that
Oracle can capture all the privileges used.
Stop Capture and Generate Reports
OK, now that the OE_ADM user has performed their tasks using SQL Developer, let us stop the capture and review the
privileges used.
EXECUTE DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE (name => 'Analyze_OE_ADM');
Using OEM you can click on the Stop Capture button as shown in Figure 3. Now click the Generate Report button. Using SQL you can accomplish this by:
EXECUTE DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT (name => 'Analyze_OE_ADM');
OEM shows the number of unused privileges in the summary screen, as shown in Figure 4.
Figure 4: Unused privileges
Once you run the Generate Results procedure, all the
DBA_USED_ views as well as DBA_UNUSED_ views are
populated. You may query these views to generate revoke
scripts or to prepare reports. The DBA_USED_ views show the privileges used by the user for the policy. The DBA_UNUSED_ views show the privileges that are assigned to the user but are not used. The _PATH views show the privilege path (how the privilege was granted to the user, and through which role).
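As an illustration, a query along these lines can generate revoke statements for directly granted, unused system privileges (a sketch; verify the view columns in your release):

-- Build REVOKE statements from the unused system privileges
SELECT 'REVOKE ' || sys_priv || ' FROM ' || username || ';' AS revoke_cmd
FROM   dba_unused_sysprivs
WHERE  capture = 'Analyze_OE_ADM';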
Capture Privilege - DBA Views Populated with Generate
Results Procedure
DBA_USED_OBJPRIVS
DBA_USED_OBJPRIVS_PATH
DBA_USED_PRIVS
DBA_USED_PUBPRIVS
DBA_USED_SYSPRIVS
DBA_USED_SYSPRIVS_PATH
DBA_USED_USERPRIVS
DBA_USED_USERPRIVS_PATH
DBA_UNUSED_COL_TABS
DBA_UNUSED_OBJPRIVS
DBA_UNUSED_OBJPRIVS_PATH
DBA_UNUSED_PRIVS
DBA_UNUSED_SYSPRIVS
DBA_UNUSED_SYSPRIVS_PATH
DBA_UNUSED_USERPRIVS
DBA_UNUSED_USERPRIVS_PATH
OEM makes it easy to view the reports and even generate a revoke script. Figure 5 shows the drop-down menu under Actions.
Figure 5: OEM Options under Actions
The Reports menu shows a summary, as well as used and unused privilege listings that you can export to an Excel file. To be able to use the Revoke Scripts option, OEM needs to complete a setup as shown in Figure 6.
Figure 6: OEM Setup for Revoke Scripts Generation
The revoke script revokes all unused roles and privileges from the roles granted to the user. In this case that is not desired, because we do not want to mess with the DBA role. Here the Create Role menu comes to the rescue. Figure 7 shows the OEM screen to create the role; you have the option to customize the role creation as well.
Figure 7: Create Role screen of OEM
This creates a new role for you with only the used privileges -
how sweet is that!
Biju Thomas
OneNeck IT Solutions
An Introduction to Design Considerations
for a Highly Available Oracle Access
Manager Deployment
Robert Honeyman - Honeyman IT Consulting
The use of Single Sign-On (SSO) and Access Management is an often requested feature when implementing web and middleware applications, to improve security and reduce administration. The primary reason SSO is desirable as an Enterprise technology is the centralization of user information and a single login / access control point for many applications. However, once a centralized SSO infrastructure is implemented it becomes service-critical for all applications using it. If the SSO infrastructure becomes inoperable then all dependent applications are also inaccessible, so the SSO infrastructure is a potential Single Point of Failure for the Enterprise. This means there is a responsibility to ensure resilience and High Availability for an SSO system beyond that of the individual dependent applications.
This article provides an introduction to the considerations
required to build a High Availability SSO infrastructure to
support Oracle Fusion Middleware deployments. An SSO
solution needs to ensure service continuity for all
components of the SSO and access control service and
associated data repositories.
The Oracle product offering for SSO and conventional access
control is Oracle Access Manager 11g, subsequently referred
to as OAM. In case you are not familiar with OAM, I provide
a quick summary of the OAM product features and general
deployment requirements.
OAM has capabilities beyond a conventional username and
password SSO and can support more sophisticated
authentication methods such as Security Token, Kerberos,
Windows Logon integration and Identity Federation. In order
to impose authentication and access control on an application, a resource definition is created in OAM. In the case of a web application, the resource would store a URL definition to protect. An authentication scheme is then
attached to the protected resource using a policy definition.
This would commonly impose a credential verification lookup
to an LDAP directory when authenticating, and subsequently
authorizing access. A web SSO authenticated and authorized
user receives OAM_ID and OAM_AuthnCookie_hostname
cookies from OAM to allow them ongoing access to
protected resources. OAM stores user session information
on the server-side so is able to ensure user session validity.
OAM supports multi-level authentication and has the ability
to create access control policies based on group
memberships and user attribute values. These features supplement basic authentication and allow more fine-grained access control to web and middleware application resources.
OAM also provides full integration with Fusion Middleware
products through Oracle specific SSO features and supports
native OAM and legacy Oracle Single Sign-On login agents for
older Oracle middleware deployments.
All these features aside, OAM is fundamentally an Oracle Fusion Middleware application and is built in the typical
fashion. It is deployed to Weblogic and uses an Oracle
Database repository created with the Repository Creation
Utility (RCU) to host access control policy data. The other
required component for an OAM SSO solution is an LDAP
directory to store user identity data, the SSO credentials and
user attribute information.
To summarize, when considering a High Availability OAM SSO configuration we need to consider the following main components and their back-end data stores:
LDAP User Identity Store
Oracle Access Manager
The main options for an OAM User Identity Store are shown
in Table 1. The table outlines the differences between the
options.
For the purposes of this article we specify configuration of a
High Availability (HA) Oracle Internet Directory (OID)
deployment. OID has adequate HA capabilities and has
comprehensive and proven integration options with Fusion
Middleware products, including legacy products such as
Forms and Reports. The Weblogic Embedded LDAP option
enabled by default when OAM is installed is not suitable for
an Enterprise HA solution due to its limited scalability. Oracle Unified Directory is specified by Oracle in the latest Enterprise Configuration Guides; however, it is not fully certified with all Oracle Fusion Middleware products at the time of writing.
To achieve High Availability all components need to be
configured with resilience. This means thinking about
databases, middleware, web servers and connections to all
tiers for the high-level components. Multiple targets must be
deployed for each tier of these high-level components and
connections routed in a resilient manner. This means load
balancing and failover is required for all connections to all
targets. Some of the sub-components have integrated software load balancing capabilities; others require external or hardware load balancing.
Table 2 outlines the connections which use load balancing
provided by the Oracle Fusion Middleware and Database
products. WebGate agents are configured to access multiple
Access Server Proxy targets through the OAM Admin
Console. Oracle HTTP Server (OHS) connections use a
standard Weblogic reverse proxy approach using
WebLogicCluster directives. Database connection failover is
handled through standard methods appropriate to the
connection type.
Table 3 outlines services which require external load
balancing. These services are the front-door access to OAM,
OID and management services.
Figure 1 illustrates a simplified layout of an OAM SSO
solution integrated with OID and a single dependent
application. To reduce complexity, the diagram omits the management services used to configure the live services involved.
The two Oracle databases shown, OIDDB and OAMDB, should be configured in Real Application Clusters (RAC) configurations. I will not elaborate on how to configure a RAC cluster here, but will reiterate that the connections from the OAM Weblogic cluster use JDBC whereas OID database connections use TNS. This means that OAM database connections must implement Multi Data Sources or GridLink Data Sources for connection resilience, whereas OID must use Transparent Application Failover (TAF). TAF can be configured on the client side in tnsnames.ora or, as is now recommended, on the server side as a TAF policy attached to the serving RAC cluster and database.
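As a sketch of the server-side approach, assuming 11gR2 srvctl syntax (database, service and instance names are placeholders):

# Add a service with a BASIC TAF policy and SELECT failover, then start it
srvctl add service -d OIDDB -s oid_taf -r oiddb1,oiddb2 \
  -P BASIC -e SELECT -m BASIC -w 5 -z 180
srvctl start service -d OIDDB -s oid_taf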
Once the database requirements have been
satisfied we must consider the OID LDAP
services to implement the User Identity Store
for OAM. Figure 1 shows OID implemented as
an Identity Management Cluster denoted by
IDM LDAP Cluster contained in IDMDomain.
An OID Identity Management Cluster is not a
Weblogic cluster. An OID Cluster shares state
through the ODS schema held in the OID
database, and the OID nodes do not replicate
state between nodes directly. As the OID
services do not use Weblogic the IDMDomain
is not strictly required, but typically
management services are also deployed
which do require Weblogic. In these typical
configurations an OID cluster is registered as
an OPMN managed target to the IDMDomain.
The management services are Directory
Services Manager, Fusion Middleware Control
and Weblogic Administration.
The OID LDAP server targets must be load balanced in a
round-robin fashion without stickiness and presented
through a virtual server denoted by the
ldap.mycompany.com box in Figure 1.
The following further requirements should be given
consideration when implementing a High Availability OID
cluster:
- A time service such as NTP to ensure cluster node time synchronization
- Port translation to present the service on industry-recognized LDAP ports
- Timeouts at the load balancer and OID level to prevent untimely connection drops
- Disabling LDAP entry caching to preserve data integrity across the cluster
The time service is critical for stable operation and is checked
during OID cluster installation so must be configured in
advance. NTP deployment is straightforward to implement
by a Systems Administrator and does not require further
elaboration.
Port translation allows services to run on Oracle default
unprivileged ports on OID hosts while still presenting the
service on standard registered ports through the Load
Balancer. The OID port translation mappings for
ldap.mycompany.com are shown in Table 3.
Load Balancer timeouts for the LDAP service may be
required by the Enterprise for compliance or operational
reasons. If this is the case the OID attribute
orclldapconntimeout should be used to set the OID idle timeout to less than the external Load Balancer timeout. This will prevent hard connection drops during quiet periods, which could affect service through OID hangs.
OID entry caching is a performance enhancing feature to
prevent unnecessary database round-trips. However, as the entry cache is not synchronized across an OID cluster, it must be disabled in a clustered configuration. To disable OID entry caching, set the OID attribute orclcacheisenabled=0.
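A sketch of making both changes with ldapmodify (the instance entry cn=oid1, host, port and timeout value are placeholders; check the attribute units in the OID documentation):

ldapmodify -h oidhost1 -p 3060 -D cn=orcladmin -w <password> <<EOF
dn: cn=oid1,cn=osdldapd,cn=subconfigsubentry
changetype: modify
replace: orclldapconntimeout
orclldapconntimeout: 20
-
replace: orclcacheisenabled
orclcacheisenabled: 0
EOF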
Having reviewed the requirements for the OAM User Identity
Store we must consider OAM itself. The OAM SSO and access
control services run as a Java application and are deployed in
a conventional Weblogic cluster denoted by WLS oamcluster
contained in IAMDomain in Figure 1. The OAM Weblogic
application runs the Access Service and Access Agent Proxies
and also serves standard content such as SSO login pages.
The OAM cluster implements an Oracle Coherence
distributed object cache to replicate shared state across the
managed servers in the cluster. The OAM Coherence
deployment replicates both configuration and policy changes
made from the OAM Admin Console and session state for
active user login sessions. This means policy, configuration
and session information is always synchronized and up to
date across the whole cluster, so failover from one OAM
node to another is seamless. The use of Coherence for
cluster replication dictates that clustered nodes should be
connected to each other by a high bandwidth and low
latency network connection. This means co-location in the
same data centre on a Gigabit or better network or possibly
a very good cross-site link. This being said there are
alternatives for multi-site configurations which are usually
preferable.
One exceptional data set that is not replicated by Coherence
is initial request data used by OAM prior to a logged on user
session being established. This includes original requests to
protected URLs to allow re-direction to protected resources
after login. By default the initial request data is stored in the OAM Server, but as it is not replicated by Coherence, the pre-login session data could be lost in the event of an OAM Weblogic managed server failure. To cater for this eventuality an OAM_REQ cookie can be used to store the pre-login session information. If a managed server fails, the data is still available in the user's browser session. The OAM_REQ cookie is enabled by setting the RequestCacheType to COOKIE in the OAM Admin Console.
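OAM 11g also exposes this setting through a WLST command; a sketch, assuming the documented configRequestCacheType command (connection details are placeholders):

connect('weblogic', '<password>', 't3://adminhost:7001')
configRequestCacheType(type="COOKIE")
displayRequestCacheType()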
As shown in Table 2, load balancing of the Oracle Access Servers is achieved through software; however, an Enterprise HA configuration requires a separate Oracle HTTP Server Web Tier. The Web Tier hosts must be externally load
balanced. The following further considerations apply to the
OAM high-level component.
- A time service such as NTP to ensure cluster node time synchronization
- Port translation for web tiers to present the service on the industry-standard HTTPS port
- Shared storage for OAM Admin Server domain directories
- OAM integration with OID as a User Identity Store
- Timeouts from OAM to the User Identity Store
- WebGate to Access Server connection resilience
- Timeouts from WebGates to OAM
The time service requirement is the same as for OID and all
multi-node HA configurations.
The port translation requirement applies to OAM Web Tier
hosts only. Port mappings for OAM Web Tiers on
sso.mycompany.com are provided in Table 3.
Resilient shared storage must be used for the OAM Admin
Server domain directories for two reasons.
1. The Admin Server needs to be able to start up on any server in the cluster in case of a failure; this is not possible if the storage is hosted locally.
2. The Admin Server domain directory is the master copy, so
we do not want to lose access to these files due to a host
failure.
OAM integration with OID is achieved by first configuring OID to operate as an Identity Store using an Oracle-supplied script, idmConfigTool.sh, on UNIX-derivative operating systems. This script loads the OID schema objects required by OAM, sets up user accounts to manage the OAM Admin Console, and creates an OAM access account to connect to OID. For security reasons the OAM access account is privileged but not a superuser. After OID has been prepared as an
Identity Store it needs to be created as an Identity Store in
the Data Sources section of the OAM Admin Console. The
load balanced ldap.mycompany.com OID cluster VIP and the
OAM access account cn=oamLDAP,dc=mydomain,dc=com
should be used to connect to OID. At this point the OID User
Identity Store should be set as the Default Store and System
Store for administrator credentials. Finally, to set OID to be used for user credential searches, the LDAP Authentication Module must be changed to use OID as the Identity Store instead of the WebLogic embedded LDAP server.
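To make the preparation step concrete, the sketch below drives idmConfigTool.sh from Python. The property names follow the tool's documented -prepareIDStore usage, but all host names, DNs and values are placeholders, and the exact property set required varies by version - treat this as an illustration rather than a complete configuration.

```python
# Illustrative only: writes a property file for idmConfigTool.sh and runs
# the OID preparation step. All values are placeholders.
import subprocess
from pathlib import Path

props = """\
IDSTORE_HOST: ldap.mycompany.com
IDSTORE_PORT: 389
IDSTORE_BINDDN: cn=orcladmin
IDSTORE_USERSEARCHBASE: cn=Users,dc=mydomain,dc=com
IDSTORE_GROUPSEARCHBASE: cn=Groups,dc=mydomain,dc=com
IDSTORE_OAMADMINUSER: oamadmin
IDSTORE_OAMSOFTWAREUSER: oamLDAP
"""
Path("oam_idstore.props").write_text(props)

# Prepare OID as an OAM Identity Store: loads the OAM schema objects and
# creates the admin and access accounts described above.
subprocess.run(
    ["./idmConfigTool.sh", "-prepareIDStore", "mode=OAM",
     "input_file=oam_idstore.props"],
    check=True,
)
```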
The Identity Store definition page in the OAM Admin Console
provides configurable timeout settings for OAM's connection
to the OID User Identity Store. In the single-site example
presented here there is no secondary Identity Store and High
Availability is provided through the OID cluster and Oracle
RAC. Nevertheless, it may be worthwhile to set some of the timeouts, since waiting indefinitely for a response can increase the load on a stressed or failing service. The following settings allow control over Identity Store wait times and operations:
- Wait Timeout: places a time limit on obtaining a connection
- Results Time Limit: limits the length of time for an Identity Store operation
The diagram in Figure 1 shows an integrated web application
with a WebGate Policy Enforcement Point module installed
on the web server. The WebGate in this example must be
registered with OAM either through the OAM Admin Console
or using another Oracle supplied tool called RREG. The RREG
tool uses XML configuration files to configure WebGates and
associated protected resources from the command line. The
point to note here is that to achieve High Availability a WebGate must be configured to load balance and fail over requests to multiple Access Servers in oamcluster. This is achieved by specifying multiple OAM servers, as a combination of primary and secondary servers, in the WebGate configuration. Typically
servers in the same oamcluster should be specified as
primary servers to the WebGate. Secondary servers are only
invoked on failover as a result of the Failover Threshold
being reached.
The amount of time a WebGate waits for a response from an
Access Server is also configurable in the AAA Timeout
Threshold setting. In some situations it may be worth setting
this timeout to avoid long pauses with a WebGate waiting for
a TCP timeout where an Access Server has failed.
This article has explored some of the considerations and
requirements for a Highly Available Oracle Access Manager
SSO solution, using Oracle Internet Directory as the User
Identity Store. Oracle Access Manager is a product with many
applications and configuration possibilities, and High
Availability configurations are inherently complex. As a result
there are many more options, aspects of OAM configurations
and related topics which I hope to cover in future.
Robert Honeyman
Honeyman IT Consulting
Oracle WebCenter Experts Complete the IT
Puzzle
Troy Allen - TekStream
Remember those rainy days when we were kids, nothing to do, can't play outside unless you want to get drenched and muddy (I'll admit, there were plenty of rainy days when that was just the ticket)? Puzzles were always a great alternative to having to wash the mud out from behind your ears, and I loved them. The only problem I had with puzzles was finding the right piece to start with. I'd look at the picture on the box and try to organize all the pieces, grouping them by putting all the ones with an edge together, all the ones that looked like clouds together, and so forth. The bigger the puzzle, the more planning I had to do. Nowadays, my puzzles don't include cardboard cutouts but computers and software that have to be organized and connected in just the right way to make the picture of an IT director's vision come to life.
Designing the infrastructure for Oracle's WebCenter products can be a daunting task for IT organizations new to the technology, or even for experienced 10g administrators wishing to deploy Oracle WebCenter 11g. Even those who are familiar with WebCenter 11g must discover the hidden surprises brought about by the latest Dot 8 release of the product set. It is a puzzle: figuring out what parts fit together and how they communicate, and it can be difficult to find that one piece to start with. While there are many products under the WebCenter banner, this article will focus on the latest release of WebCenter Content, with some highlights on WebCenter Portal.
Oracle has written hundreds of pages on how to implement the Oracle WebCenter product set, and I'm not going to dig into all the details that they have already disclosed. Instead, I think it's more valuable to give an overview of the elements of the infrastructure and the key decisions that need to be made, to explore some of the hidden "gotchas" that come with the product set, and to provide some ideas that can help make your implementation go more smoothly.
Oracle WebCenter
Puzzle Elements
The basic elements of the WebCenter puzzle can be broken
down into hardware, network, security, and software.
They are all dependent
upon each other, but part
of putting the puzzle
together is looking at each
one separately as well as
how they fit into the larger
picture.
WebCenter Content and WebCenter Portal both require, at a minimum, a database, a file store system, a security application (unless using what comes with WebLogic Server), an application server, a web tier, and the WebCenter application itself.
applications and software elements can run together on the
same servers, it is generally best-practice to have them
separated out. The following is a standard WebCenter
reference architecture provided by Oracle.
The reference architecture calls out several elements, but at its most basic level it denotes database, network, security, and software.
WebCenter supports several database platforms. You should check the supported types and versions for the specific versions of WebLogic Server and WebCenter Content/WebCenter Portal you will be deploying.
Specs for WebCenter Content can be found here:
http://www.oracle.com/technetwork/middleware/webcenter/
content/oracle-ecm-11gr1.xls.
Specs for WebCenter Portal and WebLogic Server can be
found here:
http://www.oracle.com/technetwork/middleware/downloads
/fmw-11gr1certmatrix.xls
WebCenter also supports several different operating systems and versions. Ensure that the operating system of choice matches those listed in the above certification matrices provided by Oracle.
Determining the hardware requirements for all the software elements can be tricky. For the database, keep in mind that unless you are planning to store files as BLOBs within the database, most of the data transactions will be for metadata storage, system detail and logging, and for searching. Storing content outside of the database usually results in a smaller database footprint. Database sizing for WebCenter Content/Portal, in this case, should be based on the searches that users are expected to perform. The database should also be configured for RAC or Grid.
Sizing hardware for WebCenter Content and WebCenter Portal is usually based on the number of transactions expected at any given time. Understanding the use cases of your systems and the volumes of user interactions will be critical. As a general rule of thumb, 40 to 50 transactions per second per CPU per GHz is a good basis for sizing WebCenter applications.
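As a quick worked example of that rule of thumb (with purely illustrative numbers):

```python
# Back-of-the-envelope sizing based on the 40-50 transactions per second
# per CPU per GHz rule of thumb quoted above; all inputs are illustrative.
import math

def cpus_needed(peak_tps, ghz_per_cpu, tps_per_cpu_ghz=40):
    """Round up the CPU count needed to absorb the expected peak load."""
    return math.ceil(peak_tps / (tps_per_cpu_ghz * ghz_per_cpu))

# Example: 2,000 peak transactions/second on 2.5 GHz CPUs, using the
# conservative end (40) of the rule of thumb -> 20 CPUs.
print(cpus_needed(2000, 2.5))
```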
Network and inter-application
communication plays a large
role in constructing the
WebCenter puzzle. WebCenter
relies on communication to
security applications,
application server(s), other
WebCenter products, and user
access. In most cases, SSL
encryption is supported and can be configured for outside
server access as well as inter-application access.
Communication ports are also configurable (even though
most installations utilize out-of-the-box ports). One of the
largest gotchas in the overall network for WebCenter is file
system access rights and permissions. There are many
interactions between the WebCenter products that require
file system access and this should be planned out in advance
of any installations.
Both WebCenter Content and Portal allow administrators to either use the built-in security that comes with WebLogic Server, or to utilize a third-party security application like Oracle's Identity Management or Microsoft's LDAP services. When deploying WebCenter Content and WebCenter Portal together as a single, unified application, it is best to deploy with Oracle Identity Management or Microsoft LDAP. Single Sign-On (SSO) is also a key factor in making the user experience of the application as smooth as possible. While there are many
options for SSO, WebCenter is an Oracle product and
requires less configuration when using SSO provided by
Oracle products. Configuring Kerberos and SAML can be a
challenge at times.
WebCenter Content has
several elements that make
up the overall application
including refinery services for
content conversion, imaging
for document capture from
fax and scanners, and
multiple components that can
be turned on including
Records Management.
WebCenter Portal provides
two primary options for operation: Spaces and Portal.
Understanding the overall use cases of the system you are
piecing together will help to determine what portions of the
products should be enabled. Some features, like enabling PDFConverter on WebCenter Content, require that the Inbound Refinery (also referred to as the Conversion Server) runs on Windows to support CVista's PDFCompressor. Reviewing the Installation and
Configuration guides for the WebCenter products (found
here http://docs.oracle.com/cd/E29542_01/index.htm) can
help determine the appropriate operating systems of the
hardware it will be installed on.
WebCenter Content Dot 8 (11.1.1.8) provides some new
features that will impact the servers, security, and network
aspects of the overall puzzle. Dot 8 introduces a new
WebCenter UI based on ADF (Application Development
Framework) that makes it a necessity to utilize a security
application outside of WebLogic Server.
Key Decisions
The following decisions,
among many that need to be
made, will help in determining
what puzzle pieces are needed
and how they will ultimately fit
together.
Decide on the Products
Based on established Business Requirements and Use Cases, align the appropriate WebCenter products and features to ensure needs are being met.
Decide on the Hardware
Based on the selected
WebCenter products and
features and how the system
will be utilized, first determine
the types of environments
that will need to be configured
(Development, QA/Testing,
Production, Disaster
Recovery). It will be important
to determine what
environments (if any) will be
configured for clustering to support Highly Available and
Highly Reliable access to the application. The WebCenter
products and features may dictate what operating systems
will be required and this should be included in the decision
tree.
Decide on the Database
Some companies rely heavily on Microsoft database applications and are not comfortable with Oracle database products. While WebCenter does support Microsoft as a database platform, there are some considerations to be made. Using a Microsoft database with WebCenter Content means that you will be limited to using Database Full Text for searching. While this is a valid search option, it does require that the index be completely rebuilt whenever new metadata fields are introduced; this can take considerable time if there are a large number of documents in the repository. Other considerations include some of the features that the Oracle database provides, and that have been tested against WebCenter, such as encryption at rest for metadata and content, database clustering (RAC and Grid), and de-duplication (removing multiple copies of the same file).
Decide on a Security Application
In most cases, it is best to utilize an external
security application to support single user
management across multiple instances of the
WebCenter products as well as making it easier
for SSO configuration. While third-party
applications can be utilized successfully, Oracle tends to
provide the greatest amount of support for Oracle on Oracle
applications.
Final Thoughts: Completing the Puzzle
The only way to really make sure that you get all the pieces together, get them sorted, line them up, and put them together to finish the WebCenter puzzle is to Analyze, Investigate, and Document, Document, and Document.
Analyze your requirements fully. Understand what the end
game of the application is meant to be and ensure that you
have all the details available.
Investigate all the options that are available to you from an
infrastructure and software perspective to ensure that you
have a configuration that will enable the outcome of your
analysis.
Document your findings. Document every step of the way and utilize revision controls so that you can look back on where you came from and where you wound up. Make notes along the way as to why certain decisions were made (this will help you down the road, especially when you expand or upgrade your systems). Document your final solution BEFORE you implement it. This will give you a dry run on paper (with logical and technical diagrams, process flow charts, and a requirements matrix). It is more cost-effective to walk through the process and catch the obvious issues than to perform a full install only to find major issues in the first steps of the solution's deployment.
One additional note that can make a huge difference: get an extra set of eyes and hands on the project. Even if you have already deployed WebCenter and are just doing an upgrade, this isn't something that most people do day-in and day-out. Find resources that focus solely on WebCenter applications and understand all the undocumented features and "gotchas" that come with enterprise-level applications. The cost upfront will save you big money in the maintenance and support of your solution in the years to come.
Troy Allen
TekStream
Maturity of Service Oriented Architectures
Douwe Pieter van den Bos, Ome-B.nl Creative Software Solutions
Introduction
Service Oriented Architecture (SOA) helps organizations become more agile and flexible, and can reduce the cost of ownership of the landscape. SOA can certainly help mid-size and large organizations get more control over their architecture, while creating opportunities on the business side.
However, there are some big challenges to be met. Because SOA is not only a way of working in the IT domain, the whole organization needs to be on track. Like all architectures, Service Oriented Architecture has maturity levels. Some organizations are very SOA-aware and the entire organization is built on the principles of the architecture; other organizations are just starting out and have only implemented a Service Bus.
Knowing what steps to take in the future can provide
valuable insights for both technology and business sponsors
in the organization. In short: SOA-Maturity is a keen way to
scope the next steps in further development (maturing) of
the entire organization in becoming more and more agile
and flexible.
SOA-Maturity: Why?
Knowing where you stand and where you're heading can help in a lot of ways. All organizations are very busy working on IT programs, business enhancement initiatives and alignment projects. But where do we stand? And which investment is the best effort to make?
Using SOA Maturity Models we answer a few essential "why" questions for any organization.
Complexity. Service Oriented Architectures are complex. To get insight into the complexity of an organization we need to know its ambitions and vision. This helps us to define where we're heading and how complex (and therefore costly) the road ahead is.
Future-proof. For members of the board there is no disappointment larger than realizing that large investments were invalid. When we know the maturity and the road ahead of the organization's architecture, we can define what steps are to be taken, and how future-proof they are. Taking into account the age and agility of all dimensions of SOA (from IT infrastructure to maintenance organization structures) we create a clear view of the sustainability of the environment.
Roadmap. Using the SOA-Maturity approach we create a clear and feasible roadmap for further development. This helps us to define the necessary next steps in the further maturing of the organization. The roadmap, as the final product of the maturity assessment, shows where to invest, and where not.
$s and !s. Eventually, it all comes down to numbers. Knowing the roadmap for further maturing the organization's architecture gives insight into which investments are necessities, and which are mere wishes. In other words: we now know how to put our money where it counts.
SOA-Maturity Models
There are various models available to measure SOA maturity. Two of them are widely used and have their own pros and cons. There's an extensive model that is published and maintained by The Open Group (the same organization behind TOGAF) and there's a model that Oracle uses itself. In the table below the differences between the models are explained.
That said, despite the differences, both models are quite similar. And for both models it's important to use only the parts necessary for the task at hand. The Open Group model (OSIMM) is very comprehensive, but if you use just the parts that you need it offers a lot of flexibility and maturity as a model itself. The Oracle model, on the other hand, is fairly understandable, but might be a bit too simple for very complex environments. And, of course, both models can be combined however you want.
The Open Group Service Integration Maturity Model (OSIMM) The Open Group
Although we discussed two different models, both work the same way. They both use various measurements:
Dimensions of SOA: technology and organization. These dimensions (IT infrastructure, information, governance, project management, etcetera) ensure that the measurement is not only done on the technology side of things.
Indicators: level of SOA maturity. All models work with various indicators to determine at what level of SOA maturity an organization is. These indicators give insight into the maturity level the organization is on (such as no SOA, ad hoc, opportunistic, systematic, managed and optimized) or the level it wants to be at.
Level of adoption. The level of adoption of SOA principles tells a lot about the maturity and is a very important indicator of the level of SOA maturity. These levels (such as adoption only at project level, or organization-wide adoption of the most important principles) offer insight into how the principles are embraced by the organization.
The Oracle SOA Maturity Model Oracle
But most important: SOA-Maturity Measurement is an assessment. It's a process that needs to be done with the right people, with the right will. It's not something you can do on your own, without backing from the organization (although then you will at least know that the level of adoption is quite low).
SOA-Maturity Measurement
SOA-Maturity Measurement is an assessment. This means
that it is a process that involves various stakeholders,
multiple actors, different dimensions and especially loads
and loads of questions.
The process of SOA-Maturity Measurement is as follows:
The SOA-Maturity Measurement process
First we need to know what the surroundings look like. Therefore we need to identify the dimensions and identify the stakeholders.
Looking at "Identify the Dimensions", we have a fairly comprehensive model to take into account. In the Oracle SOA-Maturity Model we see 8 dimensions; in The Open Group OSIMM model we can identify 7 layers / dimensions. These are all fairly similar, although Oracle recognizes projects and portfolios as a separate dimension. This is actually pretty smart, since we have to take into account that the environment may be changing as we speak. Plus, the way projects are governed is of interest to us at this stage.
The 8 dimensions in the Oracle SOA-Maturity Model Oracle
When we are identifying the stakeholders we have a few helpful models in place. The RACI table is the most important one here. Using the Responsible, Accountable, Consulted, Informed method we can quickly identify, per dimension, who the main stakeholders are. This helps with the workshop that will give an understanding of the current state and the future vision of the organization's architecture.
During the Assess Current State phase we have to investigate what the current state of the architecture is. Beware of the "nice-weather" answers you might get, because some of the stakeholders might not benefit from the (cold) truth. That said: per dimension there are questions in place that can be asked. Especially in The Open Group OSIMM model there are extensive questionnaires available to assess this state.
With the answers provided, either during a workshop or during various interviews, we can use indicators to determine what the current state is. In the OSIMM model we see a comprehensive list of which indicators and attributes, combined, yield a certain maturity level.
Example of maturity indicators in the OSIMM model The Open Group
Of course, the most fun part is to Define the Future Vision on Service Oriented Architecture. But this is also the trickiest part. It happens more than once that an organization forgets essential stakeholders, or defines a future vision that is not realistic.
So there's a nice challenge: we don't want to be too ambitious, but we also do not want to be too laid back about the things we want to do in the future - after all, we want the roadmap to contain real work to do. When we're looking at the future vision, take into account where the organization as a whole wants to stand in relation to Service Oriented Architecture. The OSIMM model works especially well in this case, since its relation to TOGAF is a very natural one.
Defining the future vision is ongoing work for an
organization itself. However, it is possible to create enough
insight in the matter within one intensive workshop as long
as the most relevant stakeholders are participating and
willing.
When we have a clear view of where we stand (Assess Current State) and know where we want to go (Define Future Vision), we can learn what the gap between those places is. During the GAP Analysis we create the view of where most of the work is to be done. This is the first step in which we learn where we should put our money.
Example of a GAP Analysis in a graph.
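As a sketch of how such a gap graph can be derived, the snippet below scores each dimension numerically; the dimensions, scale and scores are invented for illustration, not taken from a real assessment.

```python
# Maturity expressed as numbers per dimension (e.g. 1 = no SOA ...
# 6 = optimized); all scores below are illustrative.
current = {"Infrastructure": 4, "Information": 2, "Governance": 1,
           "Projects & Portfolios": 3, "Organization": 2}
target = {"Infrastructure": 5, "Information": 4, "Governance": 4,
          "Projects & Portfolios": 4, "Organization": 3}

# The gap per dimension shows where most of the work is to be done.
gaps = {dim: target[dim] - current[dim] for dim in current}
for dim, gap in sorted(gaps.items(), key=lambda item: -item[1]):
    print(f"{dim:22s} gap = {gap}")
```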
Using the insights gained during the GAP Analysis we can Identify Activities we need to carry out in order to close the gap. These activities need to be assigned to stakeholders and have a clear goal, purpose and date. All activities should follow SMART (Specific, Measurable, Assignable, Realistic and Time-bound).
Another method that is used a lot in the Scrum Development
method is also practical to use: INVEST. This stands for:
Independent, Negotiable, Valuable, Estimable, Sized and
Testable. Especially within an Agile development
environment this might help. In this article I will not go
further into this method, since it is very Scrum specific.
Create Roadmap
When we have done our SOA-Maturity Measurement we have enough ammunition to create a roadmap. The roadmap helps us make the right decisions and offers us insight into the projects that need to be done in order to grow as an organization. During this phase there are a few things to bear in mind.
- Steps in a roadmap always follow a certain order. Beware of this when you are shuffling the steps to take.
- Always keep in mind which activities add actual value, which offer nothing but constraints, and which can be seen as mere wishes from one or two stakeholders.
- Order the activities, not at random but by the value they add to the organization as a whole.
- Prioritize. To do so you will need to attach numbers to the activities, as in the sketch below. Think about the entire sum of things: both the necessary effort and the value it brings.
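A minimal sketch of that prioritization step, with invented activities and scores:

```python
# Score each identified activity by the value it adds and the effort it
# requires, then order by value-for-effort; all numbers are illustrative.
activities = [
    {"name": "Introduce service governance board", "value": 8, "effort": 3},
    {"name": "Organization-wide canonical data model", "value": 6, "effort": 8},
    {"name": "Roll out service registry", "value": 5, "effort": 2},
]
for act in sorted(activities, key=lambda a: a["value"] / a["effort"],
                  reverse=True):
    print(f'{act["name"]}: {act["value"] / act["effort"]:.1f}')
```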
Conclusion
SOA-Maturity Measurement is an effective way to stay on track and to see what needs to be done. It offers organizations a complete view of, and insight into, the way the organization is developing itself. Using various tools, the right stakeholders and smart questions, it is possible to give direction to the further development of your Service Oriented Architecture within the span of one day.
Douwe Pieter van den Bos
Ome-B.nl
The World According to Oracle
Oracle OpenWorld 2013 and beyond
Lucas Jellema - AMIS
What the short, mid and long term plans of Oracle Corporation are is interesting to many stakeholders. Among these are industry analysts, Oracle's customers, partners, competitors and of course the hundreds of thousands if not millions of technical specialists whose daily livelihoods depend on Oracle. Communications about these plans, as well as living proof of their execution, are ongoing. Every week brings press releases, product launches and roadmap updates. However, the best time of the year to get a complete overview of where Oracle stands and where it is going is during the Oracle OpenWorld conference. For close to a week, Oracle staff from all ranks and across all product offerings present, outline, demonstrate, defend and launch statuses and plans, roadmaps and decisions, new acquisitions and classic products. One week in September to get up to speed with Oracle's plans and actions.
This article summarizes status and future for many parts of
the Oracle technology stack, based on the official and
informal news, gathered during Oracle OpenWorld 2013.
What was said, what was intended, what was carefully
omitted and what could be read between the lines has been
assembled into this one overview.
The article however opens with a discussion of three major transitions - substantial course changes for that red super tanker called Oracle. These transitions dictate much of what is going on in Redwood Shores and will impact many different products, Oracle's position in various markets and the overall way customers do business with Oracle. The three transitions discussed here are not specific to Oracle - most of the IT industry and its customers face or will face similar challenges. Nor is Oracle necessarily the first to handle these challenges. In fact - as may be expected from super tankers - Oracle cannot rapidly react to quickly emerging trends.
When we look at the curve of technology (innovation) adoption, Oracle can hardly be considered an innovator. It may sometimes be an early adopter on the left side of the chasm; frequently it will be on the right side of that chasm. In terms of spending your investment dollar wisely, that is not necessarily a bad place to be. However, it usually means that you are a little late in each new game and have to try to catch up with the other players (or simply acquire one of them).
Figure: Oracle is not well positioned to be a true innovator; it frequently does well
as early adopter
Of course when Oracle starts to play in a certain area, it
usually means business. The tanker may not react quickly,
but its momentum is huge.
Oracle states that it wants not only to provide the full stack of mutually integrated platform components but also products that are "best in class" - a phrase that seems to replace "best of breed". This means that when a selection is made for a specific product - be it an RDBMS, a service bus, an enterprise content management system or an Identity & Access Management solution - the Oracle product should be one of the top options, even without the added benefits of the complete stack. Oracle products have to be leaders, firmly positioned in the Gartner Magic Quadrant shown below.
Figure: Oracle's products should be best in class, firmly positioned in the leader quadrant
Oracle's ability to execute is usually high, based on the breadth of the company portfolio and the size of its R&D budgets. Developing a vision that both does justice to the specific product or technology trend at hand and fits in with Oracle's overall strategy and stack can sometimes take a little longer. More an Early Adopter or Early Majority than an Innovator.
Sometimes however, Oracle does act on the cutting edge. For
example by setting up a group that is relatively independent
of the organization hierarchy and traditional lines of budget
and control, such as is the case with the Oracle Application
User Experience team. Or by having the innovation take
place outside of Oracle and then acquiring the innovator.
This phenomenon occurred dozens of times over the last
decade (to name just a few: KSplice, Collaxa, Oblix,
Moniforce, Nimbula, Xsigo Systems). Of course in some core
product lines, Oracle drives the innovation itself such as in
relational database technology.
Three transitions at Oracle drive many of the product roadmaps, providing the undercurrent for basically everything being planned, developed and rolled out. Three transitions that Oracle has embraced, made part of its strategy, and that should ensure the continued cornering of the magic quadrant.
1. Desktop => Mobile
An increasing number of enterprise users has an increasing
percentage of its interaction with enterprise systems not
through a connected desktop PC or laptop with full screen
browser but instead through a variety of different any-time
any-place devices including but not limited to smartphones
and tablets. This has huge implications for the user
experience, software delivery, security, data synchronization,
back-end infrastructure and many more aspects.
Oracle focuses on the enterprise market; it will not target
consumers directly. However, we see a blurring of the
traditional line between pure internal corporate users and
external agents. The internal users may be part of the
enterprise, however when they use their own device on a
location of their choosing, they are not all that internal
anymore. Additionally, many organizations invite business
partners, temporary workers, customers or citizens to
interact with their enterprise systems through web sites and
other channels tracking their orders, self-servicing their
account and even participating in business processes. These
outsiders are hard to tell apart from employees that roam
about.
Oracle has made mobility - or multi-channel interaction - even more prominent in its directions forward. Not just the tools to create user interfaces that run in on-device browsers, but also the back-end infrastructure required to run scalable mobile (multi-channel) applications. Part of the latter is the Oracle Mobile Cloud Service that was announced.
The Oracle Mobile Cloud Service provides a proxy - which can be cloud based or on-premises - that all mobile devices connect to. This proxy provides various services, such as caching, enrichment, authentication and authorization, and format and protocol adaptation, that are typically required to support enterprise-grade mobile apps and that are largely stateless. The Oracle Mobile Cloud Service proxy mediates between all the mobile devices out there in the world and the internal enterprise systems. Mobile devices do not directly access the enterprise systems.
Security is a major component of the mobile revolution.
Authentication and single sign on from external devices by
users with varying clearance levels in much larger numbers
than just the employee head count is an interesting
challenge. Security of data on devices outside the security
perimeter of the enterprise is another consideration. Oracle
announced support in its Identity and Access Management
Suite for both these challenges, including management of
secure containers on bring-your-own-devices that hold the
enterprise assets and have a form of remote management
that is compliant with privacy laws.
Oracle does not want to get into platform specific, native
mobile apps. However, it has a need for cross-platform
mobile applications that can also run in off-line conditions.
Such apps are provided with middleware products like BI
Foundation, BPM and WebCenter Content and for many
elements of the Applications portfolio. The technology Oracle
uses for the development of these mobile applications is
currently called ADF Mobile, based on Apache Cordova (aka
PhoneGap). This technology may shortly be rebranded to
Oracle Mobile Development Framework or something of
that nature perhaps in conjunction with the Mobile Cloud
platform that was mentioned before.
REST
An important part in the support for the variety of channels
and devices Oracle strives for is broad REST
(Representational State Transfer) & JSON (JavaScript Object
Notation) support. RESTful services have become the de facto standard for the interaction between modern user interfaces - running either as HTML5 in browsers or as native apps - and their enterprise back end. These services are invoked over HTTP using relatively simple messages and the basic verbs available in HTTP (GET, POST, PUT, DELETE - providing CRUD on resources), usually with JSON as the data format: the compact, browser-native alternative to XML.
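To illustrate the style, here is a minimal sketch using the Python requests library against a hypothetical JSON API; the endpoint and payload are invented for illustration.

```python
# Generic REST/JSON interaction; the endpoint is hypothetical.
import requests

base = "https://example.mycompany.com/api/customers"

# GET retrieves a resource; the JSON body maps straight to Python types.
customer = requests.get(f"{base}/42").json()
print(customer)

# POST creates a resource, PUT updates it, DELETE removes it (CRUD).
resp = requests.post(base, json={"name": "ACME", "region": "EMEA"})
print(resp.status_code, resp.json())
```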
Support for REST and JSON has become a common theme
across many components of the Oracle technology stack.
Some examples: the Mobile Cloud Service and other cloud
services from Oracle expose RESTful APIs that can be
consumed by mobile devices. Oracle's SOA Suite is currently
being enhanced with support for RESTful Web Services that
speak JSON. The Oracle Database is being extended to
support JSON in a way that is similar to the support for
XML(Type). ADF can consume RESTful/JSON based services as
of 12.1.2 (July 2013). The next release of ADF (12.1.3, sometime in the first half of 2014) is expected to also allow
easy publication of RESTful/JSON services from the ADF BC
framework. Coherence 12c exposes RESTful APIs for
retrieving and manipulating data. Products such as
WebCenter Content have had support for RESTful APIs for
some time and other products follow that lead. Shortly, most
administration actions we can perform on WebLogic through
WLST will also be available through RESTful APIs.
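For instance, WebLogic's RESTful Management Services already return monitoring data as JSON once enabled for a domain. The sketch below assumes placeholder host and credentials, and the exact resource paths vary by WebLogic version - verify them against your release's documentation.

```python
# Query WebLogic's RESTful Management Services for server status; host,
# port and credentials are placeholders, and paths vary by version.
import requests

resp = requests.get(
    "http://adminhost.mycompany.com:7001/management/tenant-monitoring/servers",
    auth=("weblogic", "<password>"),
    headers={"Accept": "application/json"},
)
print(resp.status_code)
print(resp.json())
```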
User Experience: simplicity, mobility and extensibility
Different channels, devices and user groups require diversity
in the user experience. The enterprise user of the last
decade typically accessed user interfaces from a browser
running on a desktop using a keyboard and a mouse. Most
applications were designed with power users in mind,
focusing on a wide scope of functionality. Users interacting with the enterprise systems of today and tomorrow do so in a variety of ways, including on touch devices without a keyboard, such as tablets and smart
phones. Most of them will typically require only a small
percentage of the full functional breadth of the application.
Oracle has made a bold statement: it wants to lead in the area of User Experience. It has put together a strong team that explores user experience in an out-of-the-box manner, embracing new technologies such as voice capture and Google Glass, and modern media traits such as infographics and eBooks including multimedia. This UX team will lead the way for all Oracle products in terms of how their user experience should be designed and implemented. Note that anyone can benefit from their ideas and processes using the resources on their public website: usableapps.oracle.com.
One important statement coming out of the UX team is that Oracle's Applications should keep the 90:90:10 ratio in mind: 10% of functionality that 90% of the users need for 90% of their interaction. This can be translated, for example, to a very attractive, very accessible, largely read-only layer that is wrapped around the core power-user parts of the applications. This layer exposes key information in an intuitive way, allowing for very easy navigation and providing the starting point for drilling down into core areas where more complex data manipulations are available. Self Service is an important topic supported in this layer: opening up application functionality for new user communities that only need access to specific parts, information and actions. These users should not require training to use the application; they should have an intuitive experience such as they get on an iPad and other tablets. This approach is implemented (under the name Fuse) in the Release 7 implementation of Fusion Applications HCM, heavily using the ADF components Spring Board, PanelDrawer and Vertical Tabs.
The mantra "simplicity, mobility, extensibility" very concisely summarizes the philosophy of the User Experience team. Mobility in this case means that the user interfaces are designed to support various channels and devices (mobile smart phone, small tablet, large tablet) as well as the desktop browser for the power user. Design of the user experience starts from the tablet - screen size, form factor, touch and gesture - which takes centre stage in the 90:90:10 approach. HTML5 is an important factor in the actual implementation of the tablet user interface.
Extensibility refers to the ability for end users to change the appearance and the behavior of applications, and even of business logic and processes. This functionality, available in the browser, mimics to a certain degree the behavior of the design-time IDEs. Examples are the Page Composer, Report Composer and Data Composer used in Fusion Applications, WebCenter Portal as a whole, and the BPM Process Composer through which business processes can be designed and modified.
2. On Premises => Cloud
Organizations will rapidly be using IT assets that they do not completely control themselves. The times of on-site data centres where all enterprise data and applications reside are gone. Before long, the vast majority of enterprises will have a mix of on-premises and public cloud-based applications and infrastructures. Drivers include elastic scalability, reduced upfront investment, quick deployment, outsourced administration and reduced around-the-world network latency. Supporting efficient and secure administration, migration and integration is among the greatest challenges.
All software from Oracle should run both on-premises as
well as in the cloud. The capabilities of the Oracle
infrastructure components should also be made available
through cloud services. Oracle wants to provide a complete
cloud stack including IaaS, PaaS and SaaS offerings as
well as enable enterprises to run their own private cloud
infrastructure. The essence of the cloud is multi-tenancy and
elastic scalability, quick start up time and pay for real usage.
Through efficient usage of machine and human resources,
the cost of cloud services can be very competitive as
opposed to dedicated, decentralized alternatives.
Moving components from on-premises to cloud or vice versa
should be painless: the infrastructure and the platform in the
cloud should be the same (so far as possible) as on-premises.
That is the strategy that Oracle is currently working on.
New in Oracle's view on the Cloud is the acknowledgement of the third-party public cloud. The collaboration with Microsoft that offers images with Oracle Database and WebLogic Server on the Microsoft Azure Cloud (on Windows or on Linux) is an example of this. Customers can either bring their own license or acquire a pay-as-you-go license.
SaaS
In the SaaS space, Oracle's cloud offerings are the most
tangible. Fusion Applications were published as cloud service
fairly early on and Oracle acquired a number of established
SaaS providers to further boost its SaaS portfolio and market
share. These include Taleo, Eloqua, RightNow, Compendium
and BigMachines.
In the spring of 2013, Fusion Applications ERP offerings in the
Oracle Cloud were extended with Oracle Financials Cloud,
Oracle Procurement Cloud, Oracle Project Portfolio
Management Cloud and Oracle Supply Chain Management
Cloud. These ERP services along with the HCM services will
be integrated with SalesForce as was announced in June.
Figure: overview of Oracle's (intended) cloud portfolio
PaaS
Platform as a Service offerings make elastically scalable database and application server capacity available from providers to remote consumers. Oracle's PaaS services expose the Oracle Database, WebLogic Server and other Fusion Middleware facilities to consumers. Until Oracle OpenWorld 2013, only the Java and Database Services were live. These offer limited access to a WebLogic Managed Server and an Oracle Database schema. During Oracle OpenWorld 2013, Oracle announced new PaaS services: Database Instance as a Service and WebLogic Server as an Instance.
With this DB Instance as a Service, consumers have access - over any protocol, including SQL*Net and JDBC - to their own database instance (11g or 12c). Customers have root access to the Virtual Machine's Operating System. SQL*Net access is available, for example, for loading or exporting large data volumes. The database instance runs inside a Nimbula environment that you cannot break out of. Three flavors with different levels of service management and uptime are available. The highest level includes a two-node RAC cluster.
The WebLogic Instance as a Service is similar to the Oracle Database Cloud Service. It offers the same WLS that you use on-premises. Oracle manages it, or you make use of your root-level access. It is the full-fledged Java EE container with ADF runtime - not the somewhat restricted edition offered in the Java Cloud.
Other PaaS Services that were announced in the roadmap
are database backup and recovery, Java, Developer, Mobile,
Documents, BI and the Cloud Marketplace where partners
and developers can publish and monetize their own
applications and extensions to the Oracle applications.
IaaS
Oracle announced two new IaaS services: Elastic Compute
and Elastic Storage as a Service (similar to what Amazon
offers with EC2 and S3). These services complement the SaaS
and PaaS offerings from Oracle by ensuring customers of the
SaaS and PaaS offerings that they can stay within the Oracle
Cloud if they want to simply store files or deploy a custom or
3rd party application. Oracle does not necessarily want to compete with other cloud vendors at the IaaS level - it cannot do so on price, and there really is no other way to compete for that type of low-level service - but it does want to make sure its customers can do all they want, in terms of cloud, in the Oracle Cloud.
Oracle will start offering a Storage service (late 2013) and a
Compute service (probably early 2014). The pricing will be
very similar to AWS and Google.
Identity as a Service is on the horizon, with support for authentication, access management and users/roles/permissions - eventually allowing you to replace on-premises IdM. Other planned IaaS services are Messaging, Naming, Elastic Load Balancing and Elastic IP Addresses. Messaging is currently the closest to going live: it provides messaging through queuing with publish/subscribe over HTTP. Messaging can play an important role in cloud-to-on-premises integration.
The Oracle Cloud is hosted across the globe, in regional data centers in the USA and Canada, the UK and The Netherlands, Singapore, Japan and Australia, with new data centers being considered in, for example, Germany, China and India.
3. Database managed, structured data => BigData &
Hadoop
Never did the world revolve around structured, relational data. However, enterprises have focused for decades on automating their own structured data without paying much attention to non-structured data and data outside their own systems. With technology advances in both hardware and software and the increasing availability of digital assets, it becomes attractive for many organizations to try to extract value from unstructured data from both within and beyond the enterprise boundaries. Additionally, near real-time responses to events in social media or in the execution of business processes have become an option: continuous evaluation of data - structured and unstructured - is starting to become feasible, also at enterprise business scale.
Oracle used to be on top of data management for nearly all
data relevant to enterprises. With the advent of big data and
the new ways in which that data is acquired, managed and
processed, Oracle has to radically re-establish itself to
achieve a similar position.
Internet of Things
The major theme of Oracle OpenWorld 2013 was Internet of
Things. This refers to the quickly growing number of devices
that are directly or indirectly connected to the Internet:
devices such as sensors, cameras, microphones and also
printers, refrigerators, cars and other equipment.
At present, the number of these connected devices is estimated at around 6 billion. This is expected to grow to 50 billion in 2020. Many of these devices are fairly simple - low power, simple I/O ports, low-end CPUs. However, what we consider simple today usually has the computing power of a standard desktop PC of a decade ago. This means that these devices can easily run a Java Virtual Machine with quite sophisticated logic.
The ability to run Java on embedded devices has been available for a long time, but with the increased capabilities of the devices, the easy application of common Java programming skills in that environment, the quickly growing connectivity (either local, through Bluetooth or other near field communication, or true internet connectivity) and the cheap availability of, for example, the Raspberry Pi, the stage seems set for rapid growth.
These devices produce data by measuring, sensing,
recording and otherwise registering values. The signals from
the edge devices are typically processed in local gateways
that turn raw data into more meaningful messages through
pattern matching, aggregation, filtering and other event
processing strategies. Often, local gateways will send their
output to enterprise backend services where all these
messages together represent big data or even huge data.
The combination of volume, variety and velocity presents a
serious challenge, especially when information must be
derived in real time and immediate action is required.
In addition to all the data streaming in from the Internet of
Things, there is of course quite a bit of data from the Internet
of People. Data from various social media constitutes an
important part of Big Data, and this constituent is typically
unstructured. Oracle, by the way, offers a range of products to collect social data, interpret this data and respond in near real time to, for example, complaints or questions on social media; these products together make up the Social Services offering - apparently called SRM, for Social Relationship Management.
A fairly common approach to handling Big Data is arising: data is gathered in NoSQL databases, processed using Map Reduce in a Hadoop distribution and then stored on the Hadoop File System (HDFS). The outcomes produced this way are frequently transferred to a relational database, such as the data warehouse.
New trends around Big Data seem to render this last step less common. One of these is Big SQL: the ability to use SQL to access the data on HDFS. Oracle intends to make it possible to call out from the database to HDFS in order to query the processed Big Data directly from within the database, and even to allow joins between relational tables and data sitting on the Hadoop File System. Similarly, Oracle will enable Enterprise R to combine analysis of relational data with Big Data. Another development is the ability to reach out to data on Hadoop directly from Endeca - Oracle's product for Information Discovery. Something similar is on the roadmap for BI Foundation (formerly known as OBI EE): the BI Server will generate native HiveQL and will be able to query data directly from Hadoop through Hive.
All in all it seems like the thresholds for gathering Big Data
and integrating information extracted from Big Data with
traditional data sources and infrastructures are lowering
rapidly. This will help many more organizations to leverage
value from data that currently goes to waste.
Platform Product Announcements
In addition to the three trends and their ramifications across
the product portfolio, there were a number of very specific
product announcements that captured the eyes of the
crowds. The In-Memory Database option was arguably the biggest single announcement of the Oracle OpenWorld 2013 conference. Oracle Database 12c will be extended sometime in 2014 with the in-memory option, which helps to speed up queries by up to a factor of 100 by keeping selected tables and partitions in memory, in a parallel-accessible read format. The secret sauce of this feature compared to the competition (SAP HANA among others) is the fact that Oracle uses a dual format (columnar for fast reads and row format for fast updates to disk), which in turn means that all applications and most administration tools are completely unaware of whether some data is held in memory in columnar format.
The M6-32 "Big Memory Machine" that was also announced is very suited to running the in-memory database option. It comes with up to 32 TB of DRAM and a brand new processor - the SPARC M6 - that doubles the cores of the M5 it replaces and offers 96 threads per processor. The M6 is available as a general-purpose computer and in super-cluster form.
This year's most striking new hardware announcement was the Oracle Database Backup and Recovery Appliance. This appliance is meant to create reliable copies of hundreds or thousands of databases in an enterprise.
Instead of using a long backup window, often in the middle
of the night with minimal database activity, the appliance is
geared to take time-stamped snapshots of a database
system. Once a full copy has been created, updates occur
through snapshots that capture only the changes since the
last snapshot. The appliance makes use of the existing
Recovery Manager (RMAN) feature in Oracle database
systems. By telling RMAN to run "in an incremental-forever"
configuration, the database system will send quick, periodic
snapshots to the storage-equipped appliance, where they
will be saved. If a restore is required, data loss is minimized.
In addition to this appliance, the same functionality is
available as cloud service.
Rising stars in the Oracle product catalogue are Endeca (Information Discovery) and GoldenGate (near real-time data change capture and transfer). These products' features were showcased across the Oracle stack.
Conclusion
The Oracle product evolution is moving forward along the mantras of its corporate strategy: complete, open, integrated, best in class and provided from the cloud. The evolution is heavily influenced by the three transitions: mobile, cloud and big data.
At Oracle OpenWorld, one can get a fairly good impression of
what is going to happen. It is quite another story to discover
when that is going to take place. Release dates for upcoming
products are only given in very vague and approximate
terms. The figure below indicates what seems to be the best
guess overview of imminent releases.
Currently slated for Spring 2014 are Fusion Middleware 12.1.3 and Java 8. The database update that ships the in-memory option is harder to forecast; somewhere mid-2014 is our current guess. Cloud services have been announced in droves, yet they have had some difficulty materializing in the past. Based on Oracle's statements, the Storage and Compute services, as well as the Database and WebLogic "as an instance" services, should become available around the time of publishing this article: late 2013. The Developer Cloud could be available in that same timeframe. APEX 5.0 seems mid-to-late 2014 material, but this is based on very little factual information.
Lucas Jellema
AMIS
From Requirements to Tool Choice
Sten Vesterli, Scott/Tiger
To the man who has only a hammer, everything looks like a nail.
And to an Oracle developer, every set of application
requirements used to look like a job for Oracle Forms.
Fortunately, we now have a full toolbox available. But
application requirements do not lead as clearly to the tool choice
as nails, screws and bolts. We will have to stop and think a bit in
order to identify the right tool for the job. This article covers the
main areas you need to consider.
Why do I need a tool?
An application at its simplest needs to accept data from the
user, store them, optionally perform some calculation, and
display results to the user. In this article, we will consider
traditional systems where the storage is handled by an Oracle
relational database.
The purpose of the development tool is to help you get data
from the database onto a screen for the user to see and
manipulate, and to move data from the screen back into the
database. Ideally, the application developer will spend all of his
or her time on implementing business logic and nothing on the
plumbing code that performs this simple data transportation.
What are my choices?
You have three tools available from Oracle:
- Oracle Forms
- Oracle Application Express (APEX)
- Oracle Application Development Framework (ADF)
Oracle Forms has been around for a long time, and there is a very large number of existing applications built in Oracle Forms. As the following graph from Google Trends shows, it is also still very much a searched-for topic.
NoSQL
The NoSQL movement is attempting to provide an alternative that is faster for simple inserts (because there is no indexing). On the other hand, NoSQL takes longer to retrieve data (because there is no indexing). This is useful for systems that receive massive amounts of data very quickly, but not for the normal administrative systems we consider here.
Another way to look at the actual developer interest in these tools is to look at the number of new discussion threads on the Oracle Technology Network. In November 2013, APEX was the most popular, followed by ADF.
Programming language choice
The framework you choose should take care of the fundamental
task of transporting data between the screen and the database
and preferably also handling more sophisticated features like list
of values and master-detail navigation. A good framework also
allows you to declaratively specify data validation without having
to write actual program code.
But at some point the capability of the framework ends and the developer has to start writing code. At that point, it becomes important which programming language the tool uses.
Oracle Forms and Oracle APEX both use the PL/SQL language
beloved of Oracle developers. That means that these
developers will immediately be able to transfer their database
programming skills to implementing business logic. However,
PL/SQL skills are actually quite rare among the general
developer population, and do not seem to be growing.
When was the last time you saw a young PL/SQL
programmer?
Oracle ADF, on the other hand, uses Java to implement
business logic. This skill is widely available among developers
but can be rare inside organizations that have been using the
Oracle database and traditional Oracle development tools like
Oracle Forms.
For many developers, this choice of language is an important
criterion for selecting a tool. However, that is quite misguided. A
programmer who knows one of these two programming
languages can easily learn enough of the other to be able to
implement the business logic functionality necessary. What is
much more important is what kind of application you are trying
to build.
Data-driven applications
Traditionally, a database application has had a user interface
that reflected the underlying data structure. If your system works
with invoices, there will be an INVOICES table, and because
there is an invoice table, there will be an INVOICE.FMB Oracle
Forms screen.
What happens if the user has a workflow where invoices have to
be mapped to project lines in the project module? Well, because
there is a PROJECTS table, there is a PROJECT.FMB screen.
And if the user needs to place invoice information on the project
screen, he will have to open the invoice screen, copy the invoice
number to Windows Notepad, open the project screen and
paste in the invoice number. The user interface is tied to the
data model.
The Oracle E-Business Suite, built with Oracle Forms, is an
example of a data-driven application.
This is a simpler approach for the tool builder, because the
developer will simply point to a database table and the tool can
then read the data dictionary to determine which columns are
available and create matching fields on the screen. It is also
simpler for the developer, because the architecture of the
application is defined in advance by the data modeler.
However, this is not necessarily simpler for the end user as the
above example shows.
User interface driven applications
The other way of building applications is by starting with the user. This is the approach that Oracle took when designing Oracle Fusion Applications: actually investigating the specific workflow needs of real users. In this approach, you create the screens first as paper prototypes or mockups and then test them with the users in several iterations until you have created an application that makes sense to the user and supports her workflow.
Some development approaches then go directly from the
screens to the data model. This leads to what data modelers
would consider a very poor data model. This type of model will
often perform very poorly when you want to query data across
entities, and it becomes brittle and hard to maintain as the
application changes.
So what do you do if you have screens that match the users' reality and requirements, and a good relational data model in third normal form, and need to put them together? You need to create a business object layer between your database and the user interface.
Because Oracle faced this challenge in building Oracle Fusion
Applications, they had to develop a tool that supported this
approach. And that tool is Oracle ADF, which uses ADF
Business Components in the middle and ADF Faces and ADF
Task Flows for the user interface.
This is a more complex approach for the tool builder, who has to
provide tools for handling this mapping. It is also more complex
for the application developer who now has to create an explicit
mapping between three layers and not just between the screens
and the database.
However, it allows you to implement exactly the screens your
users need.
Decision Support
The difference between the different programming languages is
very minor compared to the difference between data-driven and
user interface driven applications. Any experienced developer
that knows one language can learn the other. Therefore, the
programming language should not seriously affect your choice
of tool.
The really serious consideration that you have to make is
whether your application needs are so complex that you require
a business object layer between your user interface and your
data model. Your process should look like this:
You need to start with a good data model that represents the major entities that the application needs, as well as the relationships between them. This requires classic data modeling skills, as it has for as long as there have been relational databases.
No good application was ever built on a bad data
model
When you have the data model, you should figure out whether
screens built on these tables are good enough for your needs.
Create screen designs on paper or mockups and perform tests
with actual end users to see if this first design will meet their
needs.
- If data-driven screens meet the needs of your users, develop the application with a data-driven tool like Oracle APEX to get the benefit of faster development time.
- If data-driven screens do not meet the needs of your users, develop the application with a business component layer using Oracle ADF to get the benefit of freely connecting your screens with the data model.
We used to have only a hammer; now we have different tools. Use the right tool in order to meet the needs of your users with the least development effort.
Sten Vesterli
Scott/Tiger
NoSQL and Oracle
James Anthony - e-DBA
The NoSQL bandwagon is really rolling right now, but it always strikes me that there is a lot of confusion (mostly understandable) about exactly what NoSQL is, where it's used and how it replaces (or otherwise) the traditional RDBMS. A lot of the press and reporting of NoSQL databases seems to focus on the threat they pose to the RDBMS; indeed I think it's fair to say a few obituaries have already been written, all of which I think will come back to haunt the authors, just like those who predicted the death of the mainframe in the 90s, 00s etc.
This article sets out to be the first in a series that explains what NoSQL is, and how we see it as a technology. Hopefully we'll unpick some myths and set a few records straight, but don't take this as gospel either: do your own research, create your own use cases and see how NoSQL can benefit you (and I'm pretty sure it will).
So first let's get a few things straight, and first off: NoSQL is not BigData! So that's that stated very clearly! NoSQL databases (perhaps better called Data Stores in many cases) are often linked to BigData processing, perhaps as the store for MapReduce data, or perhaps as the store of BigData, but they aren't always the same thing. Indeed one of our own use cases for NoSQL is definitely small data (in the range of a few 10s of GB); the technology just suited our use case down to the ground, but more of that later. But the "what's the difference between NoSQL and BigData?" question is the one I'm asked the most at places like Oracle OpenWorld and the OUG.
The second thing I'd like to clear up: NoSQL isn't a single thing! Now this is going to be more confusing, and even as I write this I suspect someone, somewhere, is adding to the list; but right now I'll generalise NoSQL data stores into 4 categories:
1) Key Value
2) Column family
3) Document
4) Graph
In the first of these articles we'll discuss (briefly) each of these types to give you the grounding. Then in the next article we'll discuss Oracle's NoSQL database (imaginatively titled Oracle NoSQL Database); this grounding in the different types will allow you to see how Oracle's offering differs from some of the other databases out there. Finally, before we start, just a quick note: I will discuss a few NoSQL products in here. This shouldn't be considered an exhaustive list, nor does it indicate any preference on my part; it's simply perhaps showing you the most common ones we come across in our work. If you come from one of the other products don't shoot me on this one, and it certainly doesn't mean I've forgotten about you!
In this first article at least I'm not going to cover the underlying technical architecture; that's for another time. Forgive me if you want to dive straight into ring spaces, compaction, Avro, sharding etc.; we will get there, I promise!
But I think it's important at this stage to recognise that most NoSQL databases were created to address a perceived inability of the RDBMS to provide certain capabilities, be that multi-geography active-active configurations, very large data volumes or horizontal scale-out. Whether the perception of these weaknesses was more based around open-source as opposed to commercial RDBMS (most notably MySQL) I'll leave for another later article, but do note that most of our use cases for NoSQL deployments have been to address specific technical challenges with a specialised tool. One of the things we will explore in depth in the next article is how NoSQL databases often relax the CAP (Consistency, Availability, Partition Tolerance) and ACID (Atomicity, Consistency, Isolation, Durability) mechanisms of a normal RDBMS. Anyway, for now let's get back to the different types of NoSQL database/datastore.
Key Value Data Stores
Key Value (KV) NoSQL databases (of which the Oracle NoSQL database is one) are in some ways the easiest to discuss, so we can start with this type. KV is exactly what it says: you have a key and a value. You store stuff by key and you retrieve stuff by key, as simple as that! Let's look at some code (bear with me!)
Putting data into the NoSQL Database:

NoSQLDataAccess.getDB().put(myKey, value);

Getting data out of the NoSQL Database:

ValueVersion vv = NoSQLDataAccess.getDB().get(myKey);
For now, ignore the surrounding declarations (they aren't as relevant) and just focus on the put and get calls. See how simple that is? You store stuff with a key and you get stuff with a key! Ok, it's got to be more complex than that? Well obviously, otherwise I could teach my 5-year-old to do it, so let's deal with the obvious questions first:
Question 1: What does the key look like? For those of us from the RDBMS background, the key isn't like a primary key; it's probably not a single value. A key is much more likely to be a multipart string, perhaps with both a primary and secondary component, and we refer to this as the Key Path. Let's take a simple example first, as we'll explore this type specifically when we are looking at the Oracle NoSQL Database.
/stocktick/symbol/time
So the key is firstly a string, and contains multiple parts. The first part identifies the key as a stock tick; this isn't actually anything other than an identifier. It allows us to identify the data type we want when we're using the datastore for multiple different types of data, so if I stored session data in there my key path might look like /sessiondata/sessionid. The next two entries denote the stock symbol (ORCL for instance) and the time at which that tick occurred.
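To make this concrete, here is a minimal sketch using the Oracle NoSQL Database Java driver; the store name, host/port, symbol and timestamp are assumptions for illustration only:

import java.util.Arrays;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Value;
import oracle.kv.ValueVersion;

public class StockTickDemo {
    public static void main(String[] args) {
        // Connect to the store (store name and host:port are assumed values).
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));

        // Key Path /stocktick/symbol/time: the major component identifies
        // the data type and symbol, the minor component the tick time.
        Key key = Key.createKey(
            Arrays.asList("stocktick", "ORCL"),
            Arrays.asList("2013-11-29T16:00:00"));

        // Store by key...
        store.put(key, Value.createValue("38.50".getBytes()));

        // ...and retrieve by key.
        ValueVersion vv = store.get(key);
        System.out.println(new String(vv.getValue().getValue()));
    }
}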
Question 2: What's the value? This might sound like an obvious question, but actually it's very relevant. The value is whatever you want it to be, and as simple or as complex as you care to make it. It could be a simple String, but more likely it's going to be an object of some type, perhaps the serialisation of internet session data (a really great use case for NoSQL, and I'll come back to this in the next article), or a JSON document containing a lot more information in a structured format. Taking my previous example of a stock tick, the value might look like:
Value: {
  "name": "TickData",
  "namespace": "com.companyX.stockticker.avro",
  "type": "record",
  "fields": [
    {"name": "currBid", "type": "double", "default": ""},
    {"name": "currAsk", "type": "double", "default": ""},
    {"name": "currVolume", "type": "long", "default": ""}
  ]
}
Again, ignore the notational details for now. We have a value defined that is called TickData and contains a record with three fields: bid, ask and volume. We will come back to notation, JSON and Avro in the next article, so for now just notice how the value isn't just a simple string but allows for complexity.
Column family DataStores
Perhaps the most famous type of NoSQL database
(although MongoDB has to be up there from our next
category) is in the form of HBase, Google BigTable and
Cassandra, and certainly the most synonymous with BigData.
Column family data stores (not to be confused with columnar
store databases!) are formally defined as sparse,
distributed, persistent, multi-dimensional sorted map and
Im pretty sure that makes that entirely clear for everyone
and I can leave it at that? No? Alright then lets try and
explain.
I generally find when trying to explain this type of datastore within e-DBA it's best to use an example, so let's for one moment imagine we are a newly formed company looking to index web pages and provide a brand new web search facility at blazing speed; let's just call ourselves Goggle! What we need to do is index data in rows and columns, but we also want to add an extra dimension in time (because web pages and their embedded links change over time).
So now we have the following:
Index : (row, column, time)
And we've also crossed off the first part of that fairly long-winded formal definition, "sparse, distributed, persistent, multi-dimensional sorted map", as we now know where our multi-dimensional comes from.
Let's look at what a row might look like, and this time I'll use an example. For now ignore the way the URL has been represented; I'll explain that shortly. Notice how we allow for multiple versions of a search of the URL to be stored (multi-dimensional). Think also how for each URL we're going to have a totally different number of links embedded within the page that we want to store in columns, so each row can have an arbitrary number of columns, and this forms the sparse part of our definition.
What next? Well, we can deal with the distributed and persistent parts in a single explanation. Databases such as HBase use an underlying datastore, HDFS (Hadoop Distributed File System), to persist the data, and they actually persist to an immutable file in this case (changes are dealt with by creating new copies). These file systems are designed not just to persist the data but also to distribute it across multiple locations, both for data protection but also to allow processing to be moved to the locality of the data and parallelised across many nodes. Ok, so at this point we're up to... "sparse, distributed, persistent, multi-dimensional sorted map". Well, hopefully from the above image you can see the map portion too, and I promised to come back to the way the URL was portrayed. This is a good example of why sorting and locality work, and the reason I left this until after the discussion of distribution.
Let me illustrate: let's say I'm indexing www.e-dba.com as in the example above. The e-DBA domain has a bunch of subdomains hanging off it, not just www; perhaps something like this:

blogs.e-dba.com
demos.e-dba.com
mobile.e-dba.com
www.e-dba.com
etc.
Now add into the mix the millions (billions?) of other domains out there and you can see that if I don't reverse the sort order and just work off the lexicographical order, then if I want to re-index the whole of e-dba.com and all its subdomains I've got a lot of places to store this in. I've also got a lot of places (and by places I mean disk locations or different servers) to hit to service a search on all of the e-dba.com domains. Reversing the order:
com.e-dba.blogs
com.e-dba.demos
com.e-dba.mobile
com.e-dba.www
This means that my key values will all be stored in the same location, providing greater locality of data and allowing me to record and retrieve results much faster. So there we have an example of sorting, and we've finally covered the definition! Phew!
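As a toy illustration of that reversal, here is a minimal sketch (the helper name is mine, not taken from any particular product) showing how a hostname could be turned into a row key with better locality:

public class RowKeys {
    // Reverse a hostname into a row key: blogs.e-dba.com -> com.e-dba.blogs
    static String rowKey(String host) {
        String[] parts = host.split("\\.");
        StringBuilder sb = new StringBuilder();
        for (int i = parts.length - 1; i >= 0; i--) {
            sb.append(parts[i]);
            if (i > 0) sb.append('.');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // All e-dba.com subdomains now sort next to each other.
        System.out.println(rowKey("blogs.e-dba.com"));  // com.e-dba.blogs
        System.out.println(rowKey("www.e-dba.com"));    // com.e-dba.www
    }
}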
Finally, before we leave Column family data stores, a couple of points. Firstly, lots of other stuff happens inside these databases, such as compression (allowing faster scanning), MapReduce integration etc., but in general this form of database has limited query capabilities, with no joins (although mechanisms exist to work around this). If you expect to just fire up one of these and have the same sort of analytics as your Oracle RDBMS you'll probably be disappointed. As always, though, that's not to say things aren't changing rapidly, or that it won't fit your needs.
Asking for your input
As a quick note and suggestion: I tend to find this is the single hardest family of data stores to understand, but there is some really, really clever technology involved (again, not suggesting that other NoSQL databases don't have that; they are just perhaps easier to understand), so I'd be interested to know if you'd like me to drill into more detail on this type of database, in particular HBase, which is included within the Oracle BigData appliance.
Document Datastores
Recently we've seen a huge rise in the number of people exploring Document data stores, and in particular MongoDB. Much of this is fuelled by developers, who see the product as a very attractive, developer-led solution. Document databases such as MongoDB and CouchDB are actually among the easiest to explain, in that they are similar to KeyValue stores but the value is always a document, most often in the JSON or BSON format.
JSON document storage is a huge benefit for many developers in that it offers a schema-less design. This isn't to say there is no structure to the data, quite the opposite, but rather that the schema is flexible and can be modified by the developer. Again, perhaps a small example illustrates best. Let's start with a simple JSON document for storing customer information:
{
  "firstName": "James",
  "lastName": "Anthony",
  "age": 38,
  "address": {
    "streetAddress": "Farr House",
    "city": "Chelmsford",
    "county": "Essex",
    "postCode": "CM1 1QS"
  }
}
The first thing you'll probably realise from this, if you're from an RDBMS background, is that it's very much denormalised, and that's a key thing to remember: document stores are typically denormalised, with the document providing all of the data about the entity you're interested in. Clearly this has advantages and disadvantages, and we'll talk about some of these shortly.
Now let's say the developer has stored this information, but then the application scope changes and we also want to capture phone number information. In JSON-based development that's easy, we just change the document... no underlying fixed tables/columns to deal with:
{
  "firstName": "James",
  "lastName": "Anthony",
  "age": 38,
  "address": {
    "streetAddress": "Farr House",
    "city": "Chelmsford",
    "county": "Essex",
    "postCode": "CM1 1QS"
  },
  "phoneNumber": [
    {
      "type": "work",
      "number": "01245200510"
    }
  ]
}
Extending this further, one of our other customers provides
two phone numbers, and we allocate these to different fields
within the record type...
{
  "firstName": "Alex",
  "lastName": "Louth",
  "age": 37,
  "address": {
    "streetAddress": "Farr House",
    "city": "Chelmsford",
    "county": "Essex",
    "postCode": "CM1 1QS"
  },
  "phoneNumber": [
    {
      "type": "work",
      "number": "01245200510"
    },
    {
      "type": "mobile",
      "number": "0777 111 222"
    }
  ]
}
Hopefully you can see how this flexibility is something developers love: no need to keep going back to the design phase, no need to get DBAs to modify the structure and no ORM layers to deal with. Indeed, so popular is this model that at OOW2013 I attended a great session showing the upcoming JSON storage facilities within the Oracle 12c database that will provide exactly this sort of functionality, but with all the benefits of the RDBMS behind it and access to the data through both SQL and RESTful services. Personally I think this will change the game somewhat; the flexibility and developer-led drive for document databases will become more of a level playing field between the Oracle RDBMS and NoSQL, with Oracle offering all of the functionality, plus arguably more.
So what are some of the drawbacks of the JSON model? Well, denormalisation clearly increases the storage requirements, and you don't (easily) get the ability to do other operations such as scanning for all customers with a given record type (and clearly you'd have to retrieve a LOT of data). MongoDB and others are also now providing secondary indexing to address some of these issues, but it doesn't take much to realise that this denormalised, read-everything-about-an-entity model is somewhat contradictory when other databases, such as columnar storage databases (and the Oracle In-Memory option coming in 12c), show how reading individual columns when performing analytics provides massive performance gains.
For now I'm going to leave this topic, as in a future edition I'll be writing an article all about JSON document storage and databases, with specific reference to those upcoming 12c (12.1.0.2) features.
Graph Datastores
Graph databases are the final type of NoSQL database I'll cover, and probably the most niche. Having said that, they are becoming more prevalent, with people now using Facebook Graph Search and the release of the RDF Graph for Oracle NoSQL Database! So what is a Graph data model and how does it differ?
Graph databases are all about the relationships between entities rather than the entities themselves, and schemas evolve by adding new relationships. At this point you're probably thinking "but my RDBMS does this with Foreign Keys". Well, sort of, yes, but it is much more about the type of questions you ask of the database. Graph databases support query and discovery using graph patterns and traversals, meaning we ask questions about reachability, connectivity, "same as" and proximity. A classic example of this might be "Who is part of this group?", and extending this out, "Who is a friend of all the people within this group?"
The basic structure of graph storage is the triple, with triples connected to form the graph. Just like JSON, KeyValue and Column family databases, the schema doesn't have to be defined up front and is flexible in its implementation, with new triples being added and the relationships between entities defined as we go. In Graph databases it is the edges (the connections) between the vertices (the entities/nodes) we are interested in using for traversal.
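For illustration, here are a few made-up triples (subject, predicate, object); the names are invented, and connecting the triples is what forms the graph:

(james)   -[worksFor]->   (e-DBA)
(james)   -[friendOf]->   (alex)
(alex)    -[memberOf]->   (OracleUserGroup)
(e-DBA)   -[partnerOf]->  (Oracle)

A question like "who is a friend of a user group member?" is then answered by walking the friendOf and memberOf edges, rather than by joining tables.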
Personally I can see Graph databases becoming more popular over time, as we move toward modelling relationships between entities, but we'll leave this subject for now and perhaps ask you the question: would you like to see something more in-depth on the Graph capabilities of the Oracle NoSQL Database?
James Anthony
e-DBA
Case Management or Business Process
Management?
Lonneke Dikmans - Vennster
Oracle recently released a new component in the Oracle
Business Process Management Suite: the case. This is an
important step forward in supporting business processes in
your organization. In this article you will learn how and when
to use BPMN 2.0 as a way to support the business and when
and how to use case management.
Not all processes are the same
There are different reasons why organizations want a system
to support their business process:
- To automate the process or part of the process and make it more efficient;
- To trace the steps for future reference or legal reasons, a so-called audit trail;
- To help employees make decisions about actions they can take to increase the quality.
Example: Procure-to-Pay
The procurement process is a typical example of a process that is predictable. The number of possible steps is limited and the order of the steps is predictable as well.
1. The process starts when somebody in the organization needs to order an item (say pencils).
2. The purchasing department checks the order to ensure it complies with company policies and a supplier is selected.
3. The goods are ordered from the supplier.
4. The goods are delivered at the organization and the supplier sends an invoice.
5. The invoice is made payable in the ERP system and the invoice is paid in the next payment batch.
Of course there are exceptions to this process: the
organization might have decided to go paperless and not pay
for pencils anymore, the pencils could be out of stock, the
pencils could not be delivered or the invoice could be
incorrect. But these are exceptions, not part of the regular
process or happy flow.
This process is a typical example of a business process the
organization wants to run as efficiently as possible, both in
terms of time and in terms of cost.
A number of steps can be automated:
1. Selecting a supplier could be automated, if the purchasing department keeps a list of preferred suppliers;
2. The invoice could be made payable automatically in the ERP system without human interference if the goods are received as ordered and all supplier data are correct.
Systems that support these types of business processes are traditionally called BPMS: Business Process Management Systems. These systems typically are transaction-centric and focus on the automation of steps. The order and steps are known at design time.
Example: Court cases
An example of a very different process is a court case. The
predictability of the outcome varies greatly. Some court
cases are very predictable, for example undefended money
claims, or family cases where the parties propose an
agreement and the courts only examine them marginally.
Other cases have very unpredictable outcomes. The matrix below shows how many of the civil court cases in the Netherlands fall within the different categories. Even though only a minority of the cases are unpredictable, these cases take the most judicial time. There is a large amount of information. Judges spend time on understanding the facts and interpreting norms. To do this, they have a number of activities that they can perform, ranging from reading files and hearing witnesses to visiting the location (source: Technology for Justice, Dory Reiling, 2009).
Illustration 1. Matrix of Judicial Roles and Caseloads (source: Technology for Justice, Dory Reiling, 2009)
The goal of a system that supports court cases is three-fold:
- To make the process more efficient, not so much by automating steps, but by using self-service and digital means to structure the information;
- To trace the steps for future reference or legal reasons, a so-called audit trail;
- To help employees make decisions about actions they can take to increase the quality and efficiency.
These types of (unpredictable) processes are traditionally supported by Adaptive Case Management systems. These systems typically are document-centric or unstructured-data-centric, because they concern enterprise content, including video and pictures. The focus is on the knowledge worker, and the order and steps are determined at runtime, based on the characteristics of the case and the decisions of the knowledge worker.
BPM versus ACM
To summarize the differences, you can check the table
below. BPM stands for Business Process Management. ACM
is the acronym that is used for Adaptive Case Management.
Note that in real life a lot of processes are a combination of
predictable (sub) processes and adaptive case management.
Supporting business processes and cases
In the previous paragraph you have read that organizations want to improve or manage the way the business is run. The improvement can be either in terms of efficiency (BPM) or quality (ACM), or both. To accomplish this, organizations step through a number of phases:
- Modeling. In this phase you design and model the business process that needs to be supported by the BPM or ACM platform.
- Simulation. Simulate the designed processes to see if requirements are met, given the predicted load and resources (people and IT).
Illustration 2. Phases in BPM and ACM
- Implementation. Build the business process in the tooling that the BPM or ACM platform offers.
- Execution. Deploy the process to the BPM or ACM platform and run it.
- Monitoring. Monitor the processes and instances at runtime by collecting metrics and other information (BAM).
- Optimization. Based on runtime metrics, you can improve the design of your process, thereby starting another iteration of the BPM or ACM lifecycle.
The cycle is the same for BPM and for ACM. The type of
process you model, simulate, implement, execute and
monitor is different though, as you have seen in the first
paragraph.
BPM modeling: BPMN 2.0
There are several standards for process modeling available. In the past there were tools that modeled processes in one language and then executed them in another. For example, one would model the process in BPMN 1.1 and then execute it in XPDL or BPEL. Since the arrival of BPMN 2.0 this is no longer necessary. Apart from being a modeling notation, the standard also prescribes a file format (XML). This makes it possible for process engines to execute the process model. Our procure-to-pay process would look as follows in BPMN 2.0:
Illustration 3. BPMN 2.0 model for Procure-to-Pay
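Under the covers, such a diagram is exchanged as BPMN 2.0 XML. Here is a minimal, heavily abbreviated sketch (the element names come from the OMG specification; the ids and task names are illustrative, not taken from the illustration above):

<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             targetNamespace="http://example.com/procure-to-pay">
  <process id="ProcureToPay" isExecutable="true">
    <startEvent id="orderNeeded"/>
    <sequenceFlow id="flow1" sourceRef="orderNeeded" targetRef="checkOrder"/>
    <userTask id="checkOrder" name="Check order and select supplier"/>
    <sequenceFlow id="flow2" sourceRef="checkOrder" targetRef="orderGoods"/>
    <serviceTask id="orderGoods" name="Order goods from supplier"/>
    <sequenceFlow id="flow3" sourceRef="orderGoods" targetRef="done"/>
    <endEvent id="done"/>
  </process>
</definitions>

Because this XML format is part of the standard, the model that is designed can also be deployed to and executed by the process engine.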
This can be modeled using the process modeler (browser-based) that is part of the BPM Suite, or using the BPM Suite plugin of JDeveloper.
Case management modeling: CMMN
OMG, the group that has defined the BPMN standard, is in the process of defining a standard for case management: Case Management Model and Notation (CMMN). It is currently in beta.
A court case about a labor dispute would look something like
this:
Illustration 4. Labor Dispute modeled in CMMN
In addition to the CMMN model, you need to describe the
exit and entry criteria for the different tasks, stages and
milestones and the stakeholders (or case roles) that are
allowed to execute the tasks in order to develop the case.
At this moment there is only one tool available that supports
CMMN:
https://www.businessprocessincubator.com/cmmnwebmodeler.
Alternatives
Instead of creating a case model with CMMN, you can also use other means: BPMN 2.0 or a mind map. The advantage of BPMN 2.0 is that it shows a natural flow of the process, or the progress. It has the concept of ad hoc processes to support the non-deterministic nature of a process. See Case Management Part 1 for a comparison of BPMN and CMMN: http://blog.vennster.nl/2013/09/case-management-part-1.html.

The other option is to use a mind map to model the different elements of a case. Below you can see a mind map of a labor dispute case.
Illustration 5. Labor dispute in a mind map
The biggest advantage of mind maps is that business users are familiar with them; the disadvantage is that by their nature it is hard to enforce modeling standards. In addition to all these models you need an Excel document or other means to model the rules that are associated with the different activities, stages and case roles or stakeholders.
Building business processes and cases
After the business process has been modeled (as a case or
as a deterministic business process), it needs to be
implemented. This is done using JDeveloper.
BPM: using the BPMN 2.0 engine
In JDeveloper, every activity in your business process is
associated with an implementation.
Illustration 6. Implementing an activity in JDeveloper
The implementation is done using SCA, the Service Component Architecture. The BPMN process is a component in your composite application, just like a BPEL process is. The different activities are implemented calling external references (services) or components (human tasks, business rules) in the composite application.
Illustration 7. Composite with a BPMN process
ACM: Case component
The case is implemented in a similar way; it is a component
in the composite application. Activities are implemented
using the human task engine, or calling a business process
(BPMN). Rules that determine the availability of activities are
implemented using the business rule engine. Last but not
least, custom case activities can be defined, by implementing
a Java class. Below is a picture of the composite with a case.
The case has a rule dictionary associated with it, and two
activities: a human task and a BPMN process.
Illustration 8. Composite with Case component
When you double-click the case component, the property editor is opened. You can define milestones here, as well as data and documents, stakeholders and translations.
Illustration 9. Case editor in JDeveloper
Running business processes and cases
Since both BPMN processes and cases are implemented in SCA, the processes are visible in Enterprise Manager, for debugging and technical monitoring purposes.

For the end user, both cases and BPMN processes offer audit trails, milestones and task lists to keep track of the process. These can be viewed in the Process Workspace that is part of BPM Suite, or using a custom application that calls the APIs for BPM and case management. In my experience, organizations usually prefer the latter. Components from the Process Workspace can be used in the custom application, since these are realized as ADF regions. For business monitoring, BAM can be used.
Conclusion
The case component is a natural addition to the BPM Suite. Because it builds on the existing SCA infrastructure and components like the BPMN engine, the human task service and the rules engine, it is easy to learn for developers. For the business it adds the flexibility that is needed to adequately support the different types of processes in an organization.
The table below summarizes the different phases in BPM
and the corresponding tools that are used in the Oracle
Fusion Middleware stack.
Lonneke Dikmans
Vennster
Enterprise Deployment of Oracle Fusion
Middleware Products Part 1
Simon Haslam - Veriton Ltd
For quite a few years now Oracle has been producing what it
calls the Enterprise Deployment Guides (EDGs) for most of
the Fusion Middleware layered products.
This is the first article in a series that will cover what the
Enterprise Deployment Guides are, why you would want to
use them, and what areas you might choose to deviate from
them and why.
Firstly, what are the EDGs?
The best way to think of an EDG is as a recipe for building a production-ready, highly available and secure Fusion Middleware platform for one specific product set, such as SOA. The recipe doesn't justify why certain items are added, or give you any alternatives, but does provide step-by-step instructions. However, just like baking a chocolate cake, you might change the recipe to suit your own desires, depending on your skill, experience, or perhaps just whether you've got all the right ingredients. No one recipe is right or wrong; just some of them will be more successful than others.
So, back to Fusion Middleware: Oracle have written EDGs for the following product sets:
- Oracle Business Intelligence
- Oracle Identity Management
- Oracle SOA Suite
- Oracle WebCenter Content
- Oracle WebCenter Portal
Oracle's Exalogic engineered system, as you might have guessed coming from Oracle's middleware team, has some EDGs too; the current release includes Exalogic-specific WebLogic, Identity Management and SOA ones. For the brave, Oracle has also written a Fusion Applications EDG, and the full documentation set contains a smorgasbord of all of the above!
An EDG gives you the suggested steps to follow, but the assumption is that you will read it in conjunction with other relevant manuals, such as the Installation Planning Guide and the Installation Guide for your product set, the weighty High Availability Guide (1140 pages for 11.1.1.7) and the, comparably pamphlet-like, 188-page Disaster Recovery Guide (though admittedly the EDG doesn't address this topic). By this point you're just hoping you don't have to dip into the Administrator's Guide (770 pages) or the Administrator's Guide for your product (966 pages for SOA/BPM)...
Why would you choose to follow an EDG?
I'm assuming that you are installing Oracle products for mission-critical, enterprise-wide systems (or at least a pilot for such), so they will have to be both highly available and secure.
The first reason for using an EDG is if you don't yet feel confident enough to build such a platform on your own. As many readers will know, it's relatively easy to install Fusion Middleware in a non-clustered manner; usually this is something along the lines of: create the database and run RCU, install the software for the JDK, WebLogic and layered product (such as WebCenter), then create and configure a domain. Where it gets much more complicated is when your confidentiality, integrity and availability requirements dictate the need for encryption of important traffic using properly generated SSL certificates (not just demo ones), and drive redundancy for every component to guarantee uptime.
Furthermore, even when you do get such a multi-component, multi-layered system installed, you then need to test that the security and high availability features work as you expect and that you haven't misunderstood the purpose of some configuration or made any mistakes. From some recent product development work I now think there are well over a dozen failure conditions you need to test to be confident that a typical Fusion Middleware HA platform is as robust as you hope. So once again, having the EDG as a recipe of the steps you need to carry out to build such a system is a great help.
Secondly, having met some of the EDG authors, I know that these documents, blueprints if you will, are put together by highly skilled individuals and with broad input from engineering. Sometimes it is easy to look at a design decision taken in an EDG, think it is wrong and ignore it, only to later find out there was a subtle reason for it but you hadn't understood the product well enough (I'm certainly guilty of this myself!).
Why should your boss want you to follow an EDG?
Those coming from an Oracle Database background, and with a long memory, will remember OFA, or the Optimal Flexible Architecture, that Cary Millsap co-designed during his tenure at Oracle in the early 1990s. One key advantage of OFA is that it gave the DBA some directory naming conventions that didn't need further consideration; if any of you have wondered why, even today, Oracle software is most often installed under /u01/app/oracle/product, this is the reason. This also means that other experienced DBAs naturally know where to look and what, say, the redo log files will be called.
Likewise, if you build your important middleware platform following an EDG, then other experienced middleware administrators, especially those hired on a short-term basis, will more easily be able to find their way round your system. During a serious incident, or perhaps for a change taking place in the middle of the night when your body would much prefer to be tucked up in bed, the more consistent and easier the components are to navigate, the better.
Finally, let's consider Oracle Support. At various points, whether during installation, patching, application upgrades or just following unexpected errors, you may have to call upon Oracle for assistance. Oracle Support teams must have the same challenge as the rest of us when it comes to building complex, multi-tier test environments. Therefore they too use the EDGs as a basis, so if you are both following the same instructions it is more likely that Oracle Support will have a representative environment to reproduce problems against. In fact, last year Oracle Support ran a series of webcasts describing the EDGs (using SOA as an example) which I would encourage readers to view (see My Oracle Support Doc ID 1456204.1).
So, is that all I need to know about EDGs?
No, not exactly. Modern middleware platforms, such as Fusion Middleware, are highly sophisticated layered suites of software that have many different use cases. Whilst all this glowing praise is being heaped on the EDGs, please remember that they are just one suggested approach and may not suit all architectural circumstances or all sizes of organisation; in fact, in my project experience I've never actually implemented what might be called a 100% EDG build. There are many drivers which might make you choose to deviate from the EDGs, such as licence optimisation, administration roles, special security requirements, network design and so on; this will be the subject of my next article.
Simon Haslam
Veriton Ltd
Data Security in Case Management
Marcel van de Glind - AMIS
Aldo Schaap - AMIS
Most of the data within organizations is not meant for the outside world. It might even cause economic or political problems if specific information comes out on the street. Organizations need to take precautions to protect their valuable data from the outside world.
But also within organizations, not all data should, for all kinds of reasons, be available to everyone. For example, employee files should only be available to the employee him- or herself and a selected group of others within the organization.
This article describes how the fictitious company 4Security implements a method to protect its valuable data from the inside world.
The Challenge
How do you separate data access between the different stakeholders? E.g. a Customer Contact Center (CCC) employee is not allowed to see the work of the Analysis & Evaluation (A&E) employee. On the other hand, the A&E employee is allowed to see the work of the CCC employee but is not allowed to change it. Another example is the Operational Manager (OM). He has access to all data appended to the case. It depends on the case status whether the OM is allowed to change anything in the case.
With this in mind, how can we offer our employees a search facility to find specific cases? This search facility should only search through the data the employee is allowed to see. For a CCC employee this means that it should only search in the cases of his own department (the products they are authorized for), and for these cases only in the data that has been appended by his own department.
For maintainability and security reasons we want to do this in a central place and only once for every piece of data. How can this be done?
4Security, the fictitious company
4Security is a company in the insurance branch. They offer several insurance products, like life insurance and health care insurance. Every insurance request is handled according to a case-oriented approach. To support a uniform method of handling the cases, the company is implementing one standard business process for all products, supported by Oracle BPM. This standard business process is divided into 4 phases, as illustrated in the following picture. The human tasks are implemented with Oracle ADF and the structured data is stored in an Oracle database.
New insurance requests are handled by an employee of the
Customer Contact Center department responsible for the
requested insurance product. The employee adds case
details and information about the related companies and
people, performs a number of default checks and optionally
adds a note.
In the next phase a number of automated information collection requests are performed. The outcome of these requests is added as observations to the case file.
After the collection phase the Analysis & Evaluation
department of the product performs an analysis and
evaluation on the case. The employee can perform
additional information requests, add received documents to
the case and put his findings into notes.
Finally the actual insurance is implemented. In our fictitious company this will be an automated phase. Part of this phase is adding the created certificate to the case file.

Besides the mentioned operational force, there are also other stakeholders involved. The role of one of them, the operational manager, is covered in this article.
Problem Analysis
Data objects
Except for the documents, all case data is stored in a relational database. The documents are stored in a document management system. The database contains some metadata and references to these documents. The relational data is divided into eight different objects, namely:
- Cases
- Relations
- Information requests
- Observations
- Documents
- Notes
- Reference data
- Logging
On a database level each object consists of multiple tables. E.g. in the picture below you will see the observation object data model. For simplicity we will ignore the tables in this article and pretend that the objects are the actual tables.
User access
A Customer Contact Center employee is responsible for registering insurance requests. This is done during the so-called Intake phase. During the intake phase the CCC employee needs write access to the case, relation, document and note objects and read access to the reference data object to create a new case. At the same time the system also needs write access to the logging object.
To search for specific cases the CCC employee also needs read access to the same objects of all the cases he or she is authorized for. The search facility should search in all the relational data, including the logging, to determine the case status. It should not search in the information requests, observations, documents and notes appended during one of the subsequent phases.
The Analysis & Evaluation employee performs an analysis on the case data. Based on this analysis the employee decides whether the request is accepted or rejected. The A&E employee needs write access to the relation, information request, observation, document and note objects and read access to the reference data object to do their job. Like CCC employees they are allowed to add additional relations, but they are not allowed to create new cases. Case searching for the A&E employee is much richer than it is for the CCC employee. Now the searching takes place in all relational data the employee is authorized for: all cases his department is responsible for.
The Operational Manager of the CCC and A&E employees needs the system to perform his managerial tasks, like reporting, monitoring and intervening. He has access to all the cases for which he carries the responsibility. He is allowed to update the case data, except for the information request and observation related data; this is exclusively allocated to the A&E experts. The search facility for the OM is similar to that for the A&E employee.
Oracle Label Security
It is possible to separate access to data at the application level. This means that the BPM process and the ADF task screens are responsible for protecting the data against unauthorized usage. This approach is rather complex and it still does not protect against direct access to the database. For this reason 4Security has chosen to protect the data at the database level. They will use Label Security to protect the data in the relational database against unauthorized use.
Selective data access control based on a user's level of
security clearance can ensure confidentiality without
overbroad limitations. This level of access control ensures
that sensitive data will be unavailable to unauthorized
persons even while authorized users have access to needed
data, sometimes in the same tables. Oracle Label Security
(OLS) enables access control to reach specific (labeled) rows
of a database. With OLS in place, users with varying privilege
levels automatically have (or are excluded from) the right to
see or alter labeled rows of data.
Besides the following small summary, we will not get into the details of OLS and will only describe the implementation of 4Security. OLS makes use of policies. A policy contains, among other things, the definition of the label components, the data labels and the user labels. User profiles identify a user's labels and privileges. OLS controls access to the contents of a row by comparing that row's label with a user's label and privileges.
Labels
Labels enable sophisticated access control rules. When a policy is applied, a new column is added to each data row. This column will store the label reflecting each row's sensitivity within that policy. Access is then determined by comparing the user's identity and label with that of the row. Labels consist of three components, namely levels, compartments and groups.
The label implementation of 4Security.
Data Labels
A data row label indicates the level and nature of the row's
sensitivity and specifies the additional criteria that a user
must meet to gain access to that row. All data labels must
contain a level component, but the compartment and group
components are optional.
The data label implementation of 4Security.
User Labels
A user label specifies that user's sensitivity level plus any
compartments and groups that constrain the user's access to
labeled data. Each user is assigned a range of levels,
compartments, and groups, and each session can operate
within that authorized range to access labeled data within
that range.
A subset of the user label implementation of 4Security. For example, as you can see in this table, the operational manager has read and write access to all the data in the compartments REG and INV with a maximum level of Sensitive.
Label mapping
The following table brings all labelling of 4Security together. As mentioned before, the Operational Manager has access to all the cases for which he carries the responsibility. He is allowed to update the case data, except for the information request and observation related data. This is all reflected in this table.
Apply the policy
After defining the policy, the policy must be applied to the application tables. Once a policy is applied, no data will be accessible unless special privileges have been granted to the user or the legacy data is updated with appropriate data labels.

Applications need to have proxy accounts connect as (and assume the identity of) application users, for purposes of accessing labeled data. With the SET_ACCESS_PROFILE privilege, the proxy account can act on behalf of the application users. The SET_ACCESS_PROFILE procedure sets the Oracle Label Security authorizations and privileges of the database session to those of the specified user. This means that when the application (user) accesses the database, the correct access profile must be set before any read or write operation is performed. As an example:
-- Need to specify the applicable policy and user.
exec sa_session.set_access_profile('OLS_POLICY','CUSTOMER CONTACT EMP');

-- Policy is applied to the employees table.
select * from employees;
Another example:

-- Need to specify the applicable policy and user.
exec sa_session.set_access_profile('OLS_POLICY','Analysis and Evaluation Emp');

-- Label is part of the insert statement.
insert into employees (employee_id, last_name, email, hire_date, job_id, ols_column)
values (1000, 'van de glind', 'marcel.glind@amis.nl', sysdate, 'IT_PROG',
        char_to_label('OLS_POLICY','S:INV:PRD1'));

-- Label is not part of the insert statement. The default label is used.
insert into employees (employee_id, last_name, email, hire_date, job_id)
values (1001, 'van de glind', 'marcelvandeglind@amis.nl', sysdate, 'IT_PROG');
These examples show how to use the policy at the database level. In the situation of 4Security, the policy should be used within the BPM processes and the ADF task screens, together running on a WebLogic Server that uses a connection pool. The question is: is Label Security usable in this situation? The answer is yes. It is actually rather easy.
BPM
From BPM, data is accessed through a database adapter. It is a best practice to access the database content through wrapper functions and not directly on the tables. These wrapper functions make it really easy to apply a policy. When accessing a database with Label Security enabled, the database needs to know who's accessing the database and which policy to apply. This information should be sent with the request as metadata.
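As an example, here is a minimal sketch of such a wrapper function; the function, table and column names are illustrative, not 4Security's actual schema. The application user travels with the request as metadata, and the wrapper sets the access profile before touching the data:

create or replace function get_case_notes (
  p_app_user in varchar2,  -- application user, sent as request metadata
  p_case_id  in number
) return sys_refcursor
as
  l_result sys_refcursor;
begin
  -- Set the OLS authorizations of this session to those of the caller.
  sa_session.set_access_profile('OLS_POLICY', p_app_user);
  -- OLS now filters the rows this user is allowed to see.
  open l_result for
    select note_id, note_text
    from   notes
    where  case_id = p_case_id;
  return l_result;
end get_case_notes;
/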
ADF
The task screens built in ADF have interactions with the database. At 4Security the application uses Business Components (ADF BC) for this. In ADF, an Application Module is a logical container for coordinated objects related to a particular task, with optional programming logic. Application Modules provide a simple runtime data connection model (one connection per Application Module) and a context for defining and executing transactions. It defines a database session and a transaction boundary. By extending the base class of the application module with the required OLS functionality, there is a central location from where all data security can be arranged. By setting the required policy and the involved user (role) at the beginning of a session in this module, to be more specific in the ApplicationModule.prepareSession method, Label Security is prepared for use.
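A minimal sketch of such a base class, assuming the application user can be taken from the ADF security context (the class name and policy constant are illustrative):

import java.sql.CallableStatement;
import java.sql.SQLException;
import oracle.adf.share.ADFContext;
import oracle.jbo.JboException;
import oracle.jbo.Session;
import oracle.jbo.server.ApplicationModuleImpl;

public class OLSApplicationModuleImpl extends ApplicationModuleImpl {

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        // The authenticated application user, taken from the ADF security context.
        String appUser =
            ADFContext.getCurrent().getSecurityContext().getUserName();
        CallableStatement stmt = null;
        try {
            // Set the OLS access profile before any query runs on this connection.
            stmt = getDBTransaction().createCallableStatement(
                "begin sa_session.set_access_profile(?, ?); end;", 0);
            stmt.setString(1, "OLS_POLICY");
            stmt.setString(2, appUser);
            stmt.execute();
        } catch (SQLException e) {
            throw new JboException(e);
        } finally {
            if (stmt != null) {
                try { stmt.close(); } catch (SQLException ignore) { }
            }
        }
    }
}

Because all view objects in the module run on the same database session, every subsequent read and write is filtered by the labels of the user set here.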
Oracle Internet Directory
Till now 4Security has implemented the policy in the database. Managing Oracle Label Security metadata in a centralized LDAP repository provides many benefits: policies and user label authorizations can be easily provisioned and distributed throughout the enterprise. Implementing the policy in OID is out of scope for this article. For now it will suffice to mention the data that is stored in the directory. The following Oracle Label Security data is stored in the directory:
- Policy information, namely policy name, column name, policy enforcement options, and audit options
- User profiles identifying their labels and privileges
- Policy label components: levels, compartments, groups
- Policy data labels
Conclusion
This article describes how 4Security successfully implemented Oracle Label Security to protect their valuable data from the inside world. OLS can be used during the development of new applications, but can also be applied easily to existing applications running on an Oracle database. Because OLS is implemented at the database level, the technology of the application running above it does not really matter, as long as it is possible to set the access profile before accessing the data. The application of 4Security, running in a Fusion Middleware environment where the solution consists of several products like BPM, ADF and OID, perfectly fits this pattern. OLS offers all the required flexibility, is easy to maintain and offers the right level of security to protect data against unauthorized usage.
Aldo Schaap
AMIS
Marcel van de Glind
AMIS
Oracle Business Intelligence and Essbase
Together
You don't know what you don't know
Neil Sellers - Qubix International Ltd
Oracle's latest release of OBIEE (Oracle Business Intelligence Enterprise Edition), v11.1.1.7, has a lot of user improvements and more integration capabilities with Essbase and the Oracle EPM (Enterprise Performance Management) application suite of tools. This article for OTech Magazine will focus on the integration of OBI with EPM and Essbase, looking at tools like Hyperion Smart View, BI Publisher and Workspace and the integration between them.
Why Together?
Oracle have been marketing the Business Intelligence Foundation Suite (BIFS) for a couple of years and it has become the default purchase for anyone considering a license of either Essbase or OBIEE. The main reason for this is that out of the box with BI Foundation Suite comes:
- OBIEE+: the business intelligence platform delivering abilities for reporting, ad hoc query, analysis and dashboards
- Essbase: for online analytical processing (OLAP)
- Scorecard and Strategy Management: scorecards
- Essbase Application Link for Hyperion Financial Management
So on this basis many people have a number of these technologies in their license portfolio but perhaps are only using one of them, such as Essbase. There is clearly a lot to be gained by leveraging the whole suite of products, not only increasing the ROI but also taking advantage of the many features and benefits. This is what we are going to discuss.
When we talk about OBIEE and Essbase together it often involves discussions around Essbase as a data source for OBIEE, the Essbase-targeted Aggregate Persistence Wizard for relational reporting and the integration of Essbase with other relational sources of data using federated queries. Whilst these are key features, I am keen to come at this from a different angle with a focus on reporting from Essbase, building Essbase and fundamentally improving Essbase using OBIEE and the associated tools.
A shift towards Essbase and OBIEE
Essbase integration with OBIEE was first introduced in 2007,
in v10.1.3.3.1, as a hidden feature. A small tweak to the
registry on the server and a quick restart, and Essbase was
available as a data source in the physical layer of the BI
Server. Functionality was limited: a flat list of measures
rather than an intuitive hierarchy, together with MDX
generation issues and performance concerns, meant that it
was clearly something that had been started and then
switched off because it wasn't ready. This was the first step
in bringing together the technology acquisitions of Siebel and
Hyperion that Oracle had made.
Traditional reporting from Essbase in the Hyperion world
included Microsoft Office integration in the form of either
the Excel Add-in or Smart View, Hyperion Financial Reporting
for XBRL and pixel-perfect report creation from Essbase, and
Hyperion Web Analysis (formerly known as Analyzer) for all
your dashboard reporting. Third-party tools such as IBM
(Temtec) Executive Viewer also added a useful option to
report on data from Essbase.
OBIEE brings to the table a number of options which
enhance the reporting capability from Essbase; BI Publisher
especially, which allows a native connection directly to
Essbase with a new MDX Query Builder to aid the data
extract. I will focus more on this later in the article.
There are many methods of building an Essbase Cube from
simple Load Rules created in Essbase Administration Services
to the perhaps slightly dated but fully functional Essbase
Integration Services with a GUI drag and drop environment.
Additionally ETL tools such as Oracle Data Integrator (ODI)
and Informatica PowerCenter (DIM) both have adapters built
in for Essbase.
All these methods are further enhanced by Essbase Studio,
which consolidates cube construction activities, unifies
modelling of many sources and enables reuse of hierarchies,
metrics, dimensions and elements at a very granular level.
OBIEE brings more to the table, with Essbase Studio
supporting the BI Server as a source, allowing you to model
in the BI Server and then leverage this to build an Essbase
cube.
Essbase Integration Capabilities in 11.1.1.7.1
A number of beneficial features have been included in the
latest release of OBIEE which make the Essbase integration
even more seamless. There are too many to list out, but I
have tried to focus on what I consider the important ones,
with a bias towards Essbase. These can be broken down into
4 categories:
OBI / Essbase Integrated Infrastructure
The complete installation of Essbase, including the Essbase
tools, is delivered with the BI install, mitigating the need for
complex installs and post-install configuration; Financial
Reporting, EAS, Essbase Studio and Calculation Manager are
also installed with BI. This also allows for all services to be
managed through Oracle Enterprise Manager.
Support for BI security, negating the need for Shared
Services in BI-only use cases.
Integrating Essbase into the BI architecture platform allows
for all services to be managed through Oracle Enterprise
Manager, providing a central place to manage security and
diagnostic logs.
BI Server and Essbase Interaction
BI Server ODBC procedures to create and execute Essbase
calculations and MAXL scripts allow for a huge amount of
flexibility when implementing with Essbase. You could for
example run a business process (an Essbase calculation
script) to update some data in real time if there was a
requirement to support it.
Additionally, Logical SQL syntax for updating, deleting and
inserting single and multiple measures has been included,
giving a greater amount of control over Essbase from within
the BI Server.
Presentation Services Essbase-oriented features
An option is now available to control how hierarchies are
displayed, giving you a property that allows you to have your
totals at the bottom, or after the children, perhaps where
they should be in a lot of cases. The option applies to all axes
and impacts all applicable views.
A further option controls whether nulls in rows/columns are
suppressed in sparse data sets ("suppress #missing" in
traditional Essbase terminology). The default is to suppress,
but you can override this at view level; it applies to row and
column edges and impacts all applicable views.
Extended Essbase and EPM Integration Capabilities
Further to the above 4 categories, which I see as more
platform enhancements to OBIEE, I have summarised 3
further capabilities that I think will help you maximise your
return on investment in the Business Intelligence Foundation
Suite (BIFS).
The 3 at the top of my list are:
OBIEE Smart View Extension
The OBIEE Smart View extension provides a very good
alternative for querying the OBI RPD, also giving you the
option to publish that query back to the BI Server, all within
a familiar MS Excel interface. For example, if you were
carrying out your group financial budget or forecast using
Oracle Hyperion Planning through Smart View, you could
very easily switch in real time to another sheet in the
workbook to query that Essbase database through the OBIEE
extension, or perhaps look at the same through MS Word or
MS PowerPoint. This capability gives you a single user
interface to carry out the end-to-end needs of the process
you are working through, without the need to jump between
systems.
EPM Workspace Integration
Oracle have tried to embed OBIEE functionality within the
Hyperion Workspace before, but removed it when it didn't
really work. Their second attempt is much better and you are
now able to consume, create and edit BI content within
Workspace, with access to the BI Catalog, BI Home and BI
Interactive Dashboards, all SSO-enabled and able to pass and
consume HSS tokens. The result is a web portal giving you
access to tools such as Hyperion Planning and Financial
Management alongside reporting from BI, all under a single
roof, giving a great user experience.
BI Publisher Enhancements
The last capability I want to talk about is probably the most
exciting in my eyes and represents a significant reporting
step forward for Essbase users out there in need of boosting
the look and feel of their reporting solution with a slick,
easy-to-use front end.
The latest release of BI Publisher includes a new query
builder GUI giving you the capability to drag and drop
dimensions and members from an Essbase cube into a
reporting layout. It also gives you a read-out of the MDX as
you go, which is very clever and perfect for giving you some
pointers on writing MDX queries, which I don't mind
admitting are not my strongest point.
Further to this, you can easily report against Essbase and
Planning cubes without the need to introduce them into the
RPD, meaning it is effectively plug and play. This allows you
to combine data coming directly from Essbase on OBI
dashboards, combine Essbase data with data from other
sources in the same report, and ultimately use BI Publisher
for production-style reporting against Essbase.
Once you have a data model in place using the Essbase
Query Builder, you are then able to switch your attention to
the actual BI Publisher report and run through the
step-by-step wizard to create your desired report. This
brings the quick-start reporting capability on top of Essbase
that we have been missing for a number of years.
Closing Note
From the points raised in this article I hope you can see that
a lot of work has gone into making the BI Foundation Suite a
truly integrated suite of tools that talk to each other
seamlessly and work together. I have been working with
Essbase for many years and this latest integration with BI
gives users the most advanced platform yet for reporting
from Essbase and indeed building Essbase.
Neil Sellers
Qubix International Ltd
Solution-Architecture-in-a-Day SOA-Maturity-in-a-Day Oracle-Design-in-a-Day
At Ome-B.nl, Creative Software Solutions we
offer various one-day workshops. During
these workshops, where you are in the lead,
we set up a complete view of the challenge
ahead and how to make the project a
success. Within a week after the session
you're offered an extensive report on how
the solution is designed. This report is the
ideal starting point for a project, tender, IT
program or Enterprise Architecture.
The fixed fee for all sessions (including
material, workshop, pre-session intake,
report and evaluation) is 2.500 euro (ex.
taxes, transportation). All the sessions will be
organized in-house at your location.
For more information please contact us at
www.ome-b.nl
or call us at +31 6 149 143 43.
Why I Don't Use WebLogic JMS Topics
Ahmed Aboulnaga, Raastech
Lack of true high availability and the inability to create
durable subscribers on distributed topics are the two
primary reasons to avoid using WebLogic JMS topics.
WebLogic JMS is an enterprise-class messaging system
that is tightly integrated into the WebLogic Server
platform. ~Oracle documentation
Oracle WebLogic Server 11g provides the ability to create JMS
destinations, namely JMS topics or JMS queues. JMS
destinations are used extensively for integration
development, particularly in point-to-point and publish-
subscribe models. Topics are useful in the sense that they
allow a message to be published once and consumed by
multiple subscribers. The message stays in the topic until all
subscribers have consumed it. This is particularly useful in a
publish-subscribe model, as shown in the figure below.
Contrast this with queues, in which a message is produced
into the queue and consumed once (aka first-in-first-out).
This is usually used in a point-to-point type of integration.
This article assumes testing in a 2-node middleware cluster.
Some SOA (or Java) code is deployed to all nodes of the
cluster and a JMS topic is defined in this cluster.
The expected behavior is as follows:
1. If there is a single consumer to this topic, we expect that
even though this code is deployed to both nodes of the
cluster, the message is consumed only once. (Note: The
cluster is merely to provide high availability to our existing
code.)
2. The message is equally available to both nodes of the
cluster, so if any node fails, the message is still available and
can be consumed without manual intervention.
3. If one of the consumers is down at any point in time, when
the consumer is back up, it is able to consume the messages
from the topic.
In this article, I will attempt to describe the behavior of the
singleton property as well as the forwarding policies within
JMS topics to highlight the high availability challenges.
The Code
The screenshot below depicts a very simple SOA composite
that consists of a Mediator service and a BPEL project. These
are 2 separate flows; the Mediator service (top) accepts a
request and dumps it into a WebLogic JMS topic, shown in
the External References swimlane. The BPEL process
(bottom) consumes the message from the JMS topic.
In this example, we have a single producer of the message
and a single consumer. This code will be deployed to a 2-
node cluster.
The singleton property
Within composite.xml, simply adding the singleton property
to the JMS JCA adapter binding, as sketched below, is all that
is needed. This article demonstrates testing with and without
this property.
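A minimal sketch of that binding (the service name and JCA config file name are hypothetical):

<service name="ConsumeFromTopic">
  <binding.jca config="ConsumeFromTopic_jms.jca">
    <!-- activate the inbound endpoint on only one node of the cluster -->
    <property name="singleton">true</property>
  </binding.jca>
</service>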
Creating a Uniform Distributed Topic
In WebLogic Server, a Uniform Distributed Topic called
AhmedTopic is created and targeted to 2 JMS Servers, since
the current architecture assumes a 2-node cluster. Thus,
though the topic is available on both nodes (i.e., distributed),
it is seen by the code as a single entity.
Setting the JMS Forwarding Policy in Topics
JMS topics include a forwarding policy that can either be set
to Replicated or Partitioned. Both will be demonstrated
shortly.
Test #1: Replicated without Singleton
Recall that our SOA code example had a single producer (the
Mediator service) and a single consumer (the BPEL process).
Thus, every message enqueued to the JMS topic should be
dequeued only once. It should make no difference whether
our code is deployed to a single-node, 2-node, or 8-node
cluster.
Observing the instance flow in the screenshot above reveals
some quite unusual behavior. Recall that we are using a
forwarding policy of Replicated and are not using the
singleton property in our code.
Though the message is produced only once, this resulted in 4
consumer instances! Basically, this is what happens:
1. Mediator produces a single message.
2. Message is replicated across both JMS Servers on both
nodes (since the forwarding policy is replicated).
3. Since the code physically resides on 2 SOA servers,
each SOA server will create an instance, consuming the
message from each of the JMS Servers, resulting in 4
instances and 4 messages consumed total.
Test #2: Replicated with Singleton
The same test, but with the singleton property added to the
code, resulted in 2 instances.
Basically, this is what happens:
1. Mediator produces a single message.
2. Message is replicated across both JMS Servers on both
nodes.
3. Though the code physically resides on 2 SOA servers, the
code on only one of the servers will consume the message,
yet it still consumes it off of both JMS Servers, resulting in 2
instances and 2 messages consumed total.
Test #3: Partitioned without Singleton
This test changes the forwarding policy to partitioned
without the singleton property in the code. This resulted in
2 instances.
This is what happens:
1. Mediator produces a single message.
2. Message resides on only a single JMS Server.
3. Since the code physically resides on 2 SOA servers, each
SOA server will create an instance, finding and consuming
the message from the JMS Server which has the message,
resulting in 2 instances and 2 messages consumed total.
Test #4: Partitioned with Singleton
The final test uses a JMS forwarding policy of partitioned
with the code using the singleton property. This resulted in
1 instance, which is the behavior that we want!
Basically, this is what happens:
1. Mediator produces a single message.
2. Message resides on only a single JMS Server.
3. Though the code physically resides on 2 SOA servers, only
one of the JMS Servers will actually have the message and
the singleton property will enforce only a single instance
consumption as shown, resulting in 1 instance and 1
message consumed total.
Summary
Test #4 appears to deliver the behavior we are looking for,
but that is only partially true. Using a partitioned forwarding
policy in the JMS topic in conjunction with the singleton
property in the code results in the behavior we expect: for
each single message produced in the topic, the message is
consumed once by each consumer.
Our examples above demonstrated a single-consumer
scenario, but adding more consumers to the topic would
appear to operate just fine.
There are two problems though:
1. Using a partitioned forwarding policy means that if the
physical node on which the JMS server resides is down,
the messages are not available for consumption. This does
not satisfy our high availability requirement.
2. If you have 3 consumers, as shown in the first figure of this
article, and the first two have already consumed the 60
messages but the last one has not yet completed, then if the
server is restarted, these messages are lost. This is typically
resolved by creating a durable subscriber on the topic for
each of the consumers. This common practice essentially
tells the JMS topic that we have 3 consumers, and not to
clear the messages from the topic until all 3 consumers have
consumed them. WebLogic Server, however, does not allow
you to create durable subscribers on distributed topics.
In order to use WebLogic JMS topics, we would have to
sacrifice availability of the topic and risk messages being lost
if one of the consumers is down or the server is restarted.
Therefore, it is my recommendation to avoid using WebLogic
Server 11g's implementation of JMS topics until these
limitations are addressed.
Instead, utilize a multi-queue architecture as shown below.
The burden is now on the producer to enqueue the message
to 3 separate queues.
References
Singleton (Active/Passive) Inbound Endpoint Lifecycle Support Within Adapters
http://docs.oracle.com/cd/E23943_01/integration.1111/e10231/life_cycle.htm#BABDAFBH
Uniform Distributed Topic: Configuration: General
http://docs.oracle.com/cd/E15586_01/apirefs.1111/e13952/pagehelp/JMSjmsuniformdestinationsjmstopicconfiggeneraltitle.html
Using Replicated Distributed Topics
http://docs.oracle.com/cd/E23943_01/web.1111/e13727/dds.htm#BABEAGDF
Using Partitioned Distributed Topics
http://docs.oracle.com/cd/E23943_01/web.1111/e13727/dds.htm#autoId16
Tuning Topics
http://docs.oracle.com/cd/E23943_01/web.1111/e13814/jmstuning.htm#PERFM294
Developing Advanced Pub/Sub Applications
http://docs.oracle.com/cd/E23943_01/web.1111/e13727/advpubsub.htm
Ahmed Aboulnaga
Raastech
ASM Metrics
Bertrand Drouvot
ASM metrics are a goldmine: everything we need is well
instrumented in the ASM cumulative views. We just need
to manipulate them efficiently.
That's why I created asm_metrics.pl: a new utility to
extract and manipulate them in real time. It helps me
every day when I am working with ASM and I hope it will
help any DBA working with ASM too.
History
When I need to deal with ASM I/O statistics, the tools
provided by Oracle (asmcmd iostat and asmiostat.sh from
MOS [ID 437996.1]) do not suit my needs. The metrics
provided are not enough, the way we can extract and display
them is not customizable enough, and we don't see how the
I/O is distributed across all the ASM or database instances in
a RAC environment.
Then I decided to create my own asmiostat utility
(asm_metrics.pl), which is helpful for 4 main reasons:
1. It provides useful real-time metrics:
Reads/s: Number of reads per second.
KbyRead/s: Kbytes read per second.
Avg ms/Read: Average milliseconds per read.
AvgBy/Read: Average bytes per read.
Writes/s: Number of writes per second.
KbyWrite/s: Kbytes written per second.
Avg ms/Write: Average milliseconds per write.
AvgBy/Write: Average bytes per write.
I'll explain how they are computed and extracted later on.
2. It is RAC aware: you can display the metrics for all the ASM
and/or database instances or just a subset.
3. You can aggregate the results to fit your needs in a
customizable way: per ASM instance, database instance,
diskgroup, failgroup or a combination of all of them.
4. It does not need any change to the source: simply
download it and use it.
How does it work?
The script takes a snapshot each second (the default
interval) from the gv$asm_disk_iostat cumulative view
(available as of 11gR1) or from gv$asm_disk_stat, and
computes the delta with the previous snapshot.
The difference between the two is that gv$asm_disk_stat
exposes information available in memory, while gv$asm_disk
accesses the disks to re-collect some of it (see the Oracle
documentation). Since the required information does not
need to be re-collected from the disks (a discovery of new
disks is not needed), gv$asm_disk_stat is more appropriate
here.
In this article I will describe the utility. Another article will be
written later on to describe use cases.
First, let's see the metrics reported by the tool (already
explained above) and what its output looks like. In this
example we can see the metrics for the DATA diskgroup per
database instance (for all the ASM instances, failgroups and
disks).
Important remark
A blank value for one of those fields (INST, DBINST, DG, FG,
DSK) means that the values have been aggregated over that
particular field. For example, in one screenshot you can see
the metrics aggregated over failgroups and disks: the
diskgroup metrics per database instance, per database
instance and per ASM instance.
How are those metrics computed?
The metrics are computed this way:
Reads/s comes from the delta computation of the READS
column, divided by the snapshot wait interval.
KbyRead/s comes from the delta computation of the
BYTES_READ column, divided by the snapshot wait interval.
Avg ms/Read comes from the delta computation of the
READ_TIME / READS columns.
AvgBy/Read comes from the delta computation of the
BYTES_READ / READS columns.
Writes/s comes from the delta computation of the WRITES
column, divided by the snapshot wait interval.
KbyWrite/s comes from the delta computation of the
BYTES_WRITTEN column, divided by the snapshot wait
interval.
Avg ms/Write comes from the delta computation of the
WRITE_TIME / WRITES columns.
AvgBy/Write comes from the delta computation of the
BYTES_WRITTEN / WRITES columns.
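To make the delta logic concrete, here is a minimal SQL sketch, assuming two consecutive snapshots of gv$asm_disk_iostat were stored in hypothetical helper tables snap_prev and snap_curr with the interval bound as :interval (the script itself does this bookkeeping in memory):

-- Delta-based read rates per instance, database and diskgroup (sketch only)
SELECT c.inst_id,
       c.dbname,
       c.group_number,
       (c.reads - p.reads) / :interval                      AS reads_per_sec,
       (c.bytes_read - p.bytes_read) / 1024 / :interval     AS kby_read_per_sec,
       (c.read_time - p.read_time) * 1000
         / NULLIF(c.reads - p.reads, 0)                     AS avg_ms_per_read,
       (c.bytes_read - p.bytes_read)
         / NULLIF(c.reads - p.reads, 0)                     AS avg_by_per_read
FROM   snap_prev p
JOIN   snap_curr c
  ON   c.inst_id = p.inst_id
 AND   c.dbname = p.dbname
 AND   c.group_number = p.group_number
 AND   c.disk_number = p.disk_number;

The write-side metrics follow the same pattern with the WRITES, BYTES_WRITTEN and WRITE_TIME columns; READ_TIME is assumed here to be expressed in seconds, hence the factor 1000 for milliseconds.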
What are the features of the utility?
To explain the features, let's have a look at the help:
1. You can choose the number of snapshots to display and
the time to wait between the snapshots, so you can see a
limited number of snapshots with a specified amount of wait
time between them.
2. You can choose on which ASM instances to collect the
metrics thanks to the -INST= parameter. Useful in a RAC
configuration to see the distribution of the ASM metrics over
the ASM instances.
3. You can choose for which DB instances to collect the
metrics thanks to the -DBINST= parameter (wildcard
allowed), in case you need to focus on a particular database
or a subset of them.
4. You can choose on which diskgroups to collect the metrics
thanks to the -DG= parameter (wildcard allowed), in case
you need to focus on a particular diskgroup or a subset of
them.
5. You can choose on which failgroups to collect the metrics
thanks to the -FG= parameter (wildcard allowed), in case you
need to focus on a particular failgroup or a subset of them.
6. You can choose on which Exadata cells to collect the
metrics thanks to the -IP= parameter (wildcard allowed), in
case you need to focus on a particular cell or a subset of
them.
7. You can aggregate the results at the ASM instance, DB
instance, diskgroup or failgroup (or Exadata cell IP) level
thanks to the -SHOW= parameter. Useful to get an overview
of what is going on per ASM instance, per diskgroup or
whatever you want, as this is fully customizable.
8. You can display the metrics per snapshot, the average
metric values since the collection began (that is to say, since
the script has been launched), or both, thanks to the
-DISPLAY= parameter.
Oracle Automated Storage Management
9. You can sort based on the number of reads, number of
writes or number of IOPS (reads+writes) thanks to the
-SORT_FIELD= parameter, so that you could for example find
out which database is the top I/O consumer, or which ASM
instance, database instance, diskgroup or failgroup is
generating most of the I/O reads, most of the I/O writes or
most of the IOPS (reads+writes).
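As a usage illustration only (the exact parameter syntax is shown in the script's help), a hypothetical invocation combining several of these options could look like this:

./asm_metrics.pl -show=dbinst,dg -sort_field=iops -display=snap

This would report per-snapshot metrics aggregated per database instance and diskgroup, sorted by IOPS.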
The main entry point for the tool is this blog page:
http://bdrouvot.wordpress.com/asm_metrics_script/ from
which you'll be able to download the script or copy the
source code.
Feel free to download it and provide any feedback.
The next article will focus on use cases: seeing ASM
preferred read in action, finding out the top physical I/O
consumers through ASM in real time, some Flex ASM 12c
findings, and so on.
Bertrand Drouvot
We're looking for damn good content!
The call for content for the Spring-issue of
OTech magazine is now open!
Go to www.otechmag.com/call-for-content
and share your knowledge with over 25
thousand readers worldwide.
FUTURE IS NOW, ODI 12C.
Gurcan Orhan, Global Maksimum
In my first article for OTech Magazine, I would like to share
my impressions of the latest release of Oracle Data
Integrator, ODI 12c: what Oracle says, what we should
expect, what ODI 12c gives us, and so on.
Let's first review the major enhancements and changes in
ODI 12c compared with previous versions:
Change of names of tools and properties
Installation
Complete change on UI
In the enterprise
Integration with other Oracle products
Knowledge Module architecture
Change of Names of Tools and Properties
Names of components have changed with ODI 12c. Here's
what we will call things from now on, in order not to cause
confusion.
Installation
We now have 3 types of ODI installation:
Standalone : SDK (Software Development Kit) and
Standalone Agent.
Developer : SDK, ODI Studio and Standalone Agent.
Java EE : SDK and Java EE components to deploy the agent
to WebLogic Server.
RCU
In prior versions of ODI, if you used RCU (Repository
Creation Utility) to create master and work repositories, they
were created under the same schema. This may not be a
problem when you are working in your own environment for
laboratory purposes. But in the real world, when sharing the
repository with many colleagues such as developers,
application operators and admins across development, test
and production environments, this is not very effective,
because master and work repositories generally need to be
in different schemas. The main reason is that we need to
deploy everything in the work repository from prod to test,
and maybe to dev, and let the master repository hold each
environment's own information.
The new version of RCU supports different schemas for work
and master repositories.
Complete Change on UI
The user interface is completely changed in ODI 12c and the
architecture has switched from declarative design to
declarative flow-based design. The new UI is not just
colorful make-up: this new look brings a new architecture, a
new methodology and new logic. If you have used OWB
(Oracle Warehouse Builder) before, it won't be much of a
problem to become familiar with ODI 12c, because - to me -
it is a mix of ODI 10G, ODI 11G and OWB 11G.
Because of the flexibility and extensibility of the new
approach, it will be much simpler to use and to develop
integration projects. So it is time to talk about the new
features that are hidden in the new UI.
New Operators (based on Developers Guide)
Ten operators are implemented in ODI 12c to boost E-L-T
flows and reduce development effort.
1. Aggregate : A projector component to summarize data
using the major aggregation functions that most databases
support. The Group By statement can be left to ODI to derive
automatically or can be overridden with a manual group by
clause.
2. Dataset : A container component that allows us to group
multiple data sources and join them through relationship
joins; it is implemented logically. A dataset can contain
datastores, joins, lookups, filters and reusable mappings
(with no input, but one output signature is allowed). The
dataset is a relationship-based component, exactly how we
developed logical designs in ODI 11G. Projects upgraded or
imported from ODI 11G will contain the datasets developed
in ODI 11G.
3. Distinct : A projector component which simply uses the
DISTINCT SQL operator and outputs the unique values of the
obtained select statement.
4. Expression : A selector component that inherits attributes
from a preceding component in the flow and adds additional
reusable attributes. An expression can be used to define a
number of reusable expressions within a single mapping.
Attributes can be renamed and transformed from source
attributes using SQL expressions.
5. Filter : A selector component that can select a subset of
data based on a filter condition. It can be located in a dataset
or directly in the flow of a mapping and can contain any type
of SQL expression (and, or, like, functions, etc.).
6. Join : A selector component that creates a join between
multiple flows. The attributes from the upstream components
are combined as attributes of the Join component. It can be
located in a dataset or directly in the flow of a mapping and
can contain any type of SQL expression (and, or, like,
functions, etc.).
7. Lookup : A selector component that returns data from a
lookup flow given a value from a driving flow. The attributes
of both flows are combined, similarly to a join component. A
lookup can be implemented in the generated code either
through a left outer join or a nested select statement (see
the sketch after this list). It can be located in a dataset or
directly in the flow of a mapping and can contain any type of
SQL expression (and, or, like, functions, etc.).
8. Set : A projector component that combines multiple input
flows into one using set operators such as UNION, INTERSECT,
EXCEPT, MINUS and others.
9. Sort : A projector component that applies a sort order to
the rows of the processed dataset, using the SQL ORDER BY
statement.
10. Split : A projector component that divides a flow into two
or more flows based on specified conditions. Split conditions
are not necessarily mutually exclusive: a source row is
evaluated against all split conditions and may be valid for
multiple output flows.
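As a rough illustration of the two lookup generation strategies mentioned above (the table and column names are hypothetical), the generated code on an Oracle target could take either of these shapes:

-- Lookup implemented as a left outer join
SELECT o.order_id, o.cust_id, c.cust_name
FROM   src_orders o
LEFT OUTER JOIN lkp_customers c
  ON   c.cust_id = o.cust_id;

-- The same lookup implemented as a nested select
SELECT o.order_id, o.cust_id,
       (SELECT c.cust_name
        FROM   lkp_customers c
        WHERE  c.cust_id = o.cust_id) AS cust_name
FROM   src_orders o;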
Reusable Mappings
A reusable mapping can have generic input and output
signatures to connect to an enclosing flow, and it can also
contain sources and targets that are encapsulated inside the
reusable mapping. This improves the productivity of the
development phase: similar flows can now be based on
reusable mappings, and when the low-level logic needs to
change, changing only the reusable mapping is enough.
Multiple Targets
ODI 12c now supports multiple targets in a single flow. In
previous releases of ODI we developed many interfaces to
load different target tables with the same or similar
structure. With the help of the split component, we now
have the ability to load different target datastores, of
different technologies, with different methods, in one ODI
12c mapping. Since loading methods are applied on the
target, we can optionally split into two or more targets where
one uses Slowly Changing Dimension, another Incremental
Update and another Append, all within the same mapping.
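The generated code is technology-specific, but as an intuition for how a non-exclusive split feeding multiple targets can behave, here is a sketch of an Oracle multi-table insert (all table, column and condition names are hypothetical):

-- Every source row is evaluated against every WHEN clause,
-- so a row can land in more than one target (like a split)
INSERT ALL
  WHEN order_status = 'OPEN' THEN
    INTO stg_open_orders  (order_id, amount) VALUES (order_id, amount)
  WHEN amount > 10000 THEN
    INTO stg_large_orders (order_id, amount) VALUES (order_id, amount)
  WHEN 1 = 1 THEN
    INTO stg_all_orders   (order_id, amount) VALUES (order_id, amount)
SELECT order_id, order_status, amount
FROM   src_orders;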
Executing Mapping in Parallel
This feature does not mean running mappings in parallel,
since we could already do that in ODI 11G. Rather, since the
temporary objects created during a mapping now have
individual unique names (customizable by an option in the
physical view of a map) and loading into the staging area can
be done in parallel, we now have the ability to execute
several sessions of the same mapping in parallel.
Step-by-Step Debugger
Debugging is the most important activity for a developer
when writing code. In ODI 10G we used workarounds like
adding a 1=2 filter to the interfaces, or running the package
in the development environment after moving the original
table and creating empty tables, to see what the SQL
statements looked like. ODI 11G implemented simulation of
any execution, so we could see what code is produced and
sent to the database or operating system and check the
statements.
ODI 12c takes this feature many steps further: we now have
a step-by-step debugger. Mappings, packages, procedures
and scenarios can be debugged step by step, adding
breakpoints to interrupt execution at predefined locations.
Variable values can be introspected at any step.
Additionally, the data of underlying sources and targets can
be queried, including the content of uncommitted
transactions, for better insight.
In the Enterprise
In ODI 10G, agents behaved as background client applications
and could not be managed, controlled or monitored through
a management utility. In ODI 11G, Java EE agents were
introduced: they can be deployed to WebLogic Server and
monitored through Enterprise Manager 12c. In ODI 12c, both
standalone agents and Java EE agents can be deployed to the
WebLogic Management Framework and managed within
Oracle Enterprise Manager 12c.
With this enhancement we now have the ability to do the
following:
User-interface driven configuration through the
Configuration Wizard.
Multiple configurations can be maintained in separate
domains.
Node Manager can be used to control and automatically
restart agents.
High Availability (HA) and scalability are fully supported via
clustered deployments for Java EE agents. ODI 12c
components deployed in WebLogic Server benefit from the
capabilities of clustering for scalability, including JDBC
connection pooling and load balancing. In addition to the
cluster-inherited HA capabilities, the run-time agent also
supports a connection retry mechanism to transparently
recover sessions running in repositories that are stored in
HA-capable database engines such as Oracle RAC.
ODI 12c simplifies complex data-centric deployments by
improving visibility and control with a unified set of
management interfaces.
The ODI 12c console leverages the Oracle Application
Development Framework (ADF) and an Ajax framework for a
rich user experience. Using this console, production users
(mostly administrators) can set up an environment, export
and import the repositories, manage run-time operations,
monitor the sessions, diagnose errors, browse design-time
artifacts and generate lineage reports.
In addition, this interface integrates seamlessly with the
Oracle Enterprise Manager Fusion Middleware Control
Console and allows administrators to monitor from a single
screen not only their data integration components but their
other Fusion Middleware components as well.
To help maximize the value of ODI 12c, Oracle offers the
Management Pack for ODI 12c. It leverages Oracle
Enterprise Manager Cloud Control's advanced management
capabilities to provide an integrated, top-down solution for
your ODI 12c environments: a consolidated view of the
entire ODI 12c infrastructure that enables users to monitor
and manage all of their components centrally from Oracle
Enterprise Manager Cloud Control. Additionally, you can
obtain real-time and historical in-depth performance
statistics for the ODI standalone and Java EE agents.
There are new features not only on the agent side, but also
on the security side. ODI 12c can further integrate with
leading identity management solutions through its
integration with Oracle Platform Security Services (OPSS),
which provides an authorization model and controls access
to resources. Enterprise roles can be mapped onto Oracle
Data Integrator roles to authorize enterprise users across
different tools.
Integration with other Oracle products
It is no secret that Oracle Warehouse Builder (OWB) is
nearing the end of its life. The last major release (11G R2) is
out and only minor releases will follow. Oracle 12c is the
latest database version that bundles OWB. As announced at
Oracle OpenWorld 2013, the next release of the Oracle
database will not contain OWB. So what is the future of your
OWB projects going to be?
ODI 12c now supports OWB mappings. You can execute OWB
jobs inside ODI 12c through the OdiStartOwbJob tool. An
OWB repository can now be configured as a data server in
Topology, and users can examine OWB job execution details
displayed in the ODI 12c console, in Operator and in
Enterprise Manager. Before migrating from OWB to ODI you
can therefore create OWB packages in ODI 12c and start
executing OWB jobs within ODI, getting ready for the
migration.
On the other hand, the integration with Oracle GoldenGate
(OGG) has been enhanced. To reduce loading and
transformation time, you can easily configure and deploy
real-time data warehousing solutions without impacting
source systems or depending on batch windows.
The integration of OGG as a source for the Change Data
Capture (CDC) framework inside ODI 12c has been improved
in the following areas:
OGG source and target systems are now configured as data
servers in ODI 12c's Topology. Physical and logical schemas
represent capture and delivery processes. This
representation in Topology allows separate physical
configurations for multiple environments, following the
overarching philosophy around contexts.
Most OGG parameters can be added to capture and delivery
processes in the physical schema configuration. The user
interface provides support for selecting parameters from a
library of parameters. This minimizes the need to modify the
OGG parameter files after generation.
A single ODI 12c mapping can be used for a journalized
Change Data Capture load and for a bulk load of a target.
This is enabled through the use of the OGG Journalizing
Knowledge Modules as well as the definition of multiple
deployment specifications attached to a single mapping. This
powerful feature allows a single mapping logical design to be
reused in different physical configurations.
OGG parameter files can now be automatically deployed to
source and target OGG instances through the JAgent
technology. In addition, OGG instances can now be started
and stopped from ODI 12c.
Knowledge Module architecture
ODI 12c continues to support the Knowledge Module
architecture, now with some enhancements. The Knowledge
Module architecture is the power of ODI: a set of library code
that can shape the generated SQL statements via options and
other elements, providing us with flexible, modular and
extensible processes.
ODI 12c has introduced a new style of Knowledge Module,
called Component-Style Knowledge Modules, in addition to
the Template-Style Knowledge Modules available since ODI
11G. This new style provides an extensible component
framework that improves the overall mapping design, where
for example users are able to declare the transformation
order. Component-style KMs also improve reusability as they
can be plugged together, in addition to helping avoid code
and data duplication and providing improved Oracle
connectivity.
Conclusion
Whether you are using ODI for data warehousing,
application integration, OLTP reporting or other purposes, it
is a unique tool that supports heterogeneous environments
on both the source and the target side, and has extensive
features that enable fast development cycles and quick
responses, with flexibility, high performance, an open and
integrated E-L-T architecture and high productivity. By
merging OWB functionality into the new version of ODI, it
became more powerful without losing its prior strengths.
Data Warehousing and Business Intelligence : By
executing high-volume, high-performance loading of data
warehouses, data marts, On-Line Analytical Processing
(OLAP) cubes and analytical applications, ODI 12c
transparently handles incremental loads and slowly changing
dimensions, manages data integrity and consistency, and
analyzes data lineage.
Big Data : By providing prebuilt integration with Big Data
technologies, such as HDFS and Hive, businesses are able to
leverage additional sources of data previously too large and
unwieldy to gain benefits from. Oracle delivers integration
for Big Data technologies leveraging a metadata driven
process while enabling data integration resources to easily
manage how Big Data is extracted, loaded, and transformed.
Service-Oriented Architecture (SOA) : By calling on
external services for data integration and by deploying data
services and transformation services that can be seamlessly
integrated within an SOA infrastructure. ODI 12c's
architecture additionally provides support for high-volume,
high-performance bulk data processing in an existing
service-oriented architecture.
Master Data Management (MDM) : By providing a
comprehensive data synchronization infrastructure for
customers who build their own data hubs, work with
packaged MDM solutions, or coordinate hybrid MDM
systems with integrated SOA process analytics and Business
Process Execution Language (BPEL) compositions.
Migration : By providing efficient bulk load of historical
data (including complex transformations) from existing
systems to new ones. Oracle GoldenGate then seamlessly
synchronizes data for as long as the two systems coexist, and
ODI 12c continues to complement it as needed for
transformations.
References:
ODI 12c New Features
http://docs.oracle.com/middleware/1212/odi/ODIDG/whatsnew.htm#ODIDG109
ODI 12c Developer's Guide
http://docs.oracle.com/middleware/1212/odi/docs.htm
ODI 12c Data Sheet
http://www.oracle.com/us/products/middleware/data-integration/odi-ee-ds-2030747.pdf
Gurcan Orhan
Global Maksimum
Content-Enabling Your Insurance Business
Using Oracle BPM and WebCenter Content
Raoul Miller - TEAM informatics
TEAM has worked with a number of insurance industry
companies over the years and the advantages of content-
enabling these enterprises are numerous. Many insurance
organizations have grown through acquisition, with the end
result that they run parallel legacy systems whose
integration grows ever more complex. This makes the
insurance industry ideal for deployment of SOA/BPM as an
integration methodology, and adding content and records
management to this is a no-brainer (as the insurance
industry is so document- and record-driven).
Last year Oracle published a white paper (link) on how best
to use their BPM (business process management) tools in an
insurance industry setting. I won't restate all their reasons as
to why the Oracle solution is a good fit, but suffice it to say
that the Oracle Insurance business unit has a lot of
experience in the area and solutions to meet all your needs.
However, what was missing from the white paper was the
value and flexibility gained by adding WebCenter Content,
Records, and Imaging to the mix.
We are currently working with a number of customers on
exactly these integrations: allowing secure upload of files
from agents' offices, scanning and uploading paper forms
and documents (using Capture and Distributed Capture),
migrating legacy images from mainframe systems to modern
Oracle WebCenter Content systems, and allowing users (for
the first time) to truly search and retrieve content based on
metadata and full-text searches.
Case Study: Insurance Company A
Insurance company A is a large organization in APAC
responsible for workers' compensation and accident claims.
The organization required a very secure system (as it
contains medical records and other potentially sensitive
information) with extremely granular security capabilities
and a high level of audit capabilities. The system had to be
scalable enough to meet the needs of every person in the
country plus visitors from overseas. Historic content was
migrated from the existing legacy system and new content is
submitted either electronically through an upload screen or
scanned from paper. A final wrinkle was that bandwidth to
many of the organization's remote branch offices was quite
poor, so the overhead of the application needed to be low.
TEAM worked with Oracle and FINEOS, a third-party software
vendor, which delivered the case management software
used at the front end. This installation is a prime example of
"invisible ECM", or what I call CaI (Content as Infrastructure),
in that end users are unaware of the content / records
management system sitting behind their user interface and
only a few administrators have access to the Oracle
WebCenter Content user interface. The new system
leverages all the content management capabilities of Oracle
WebCenter Content (including the retention, conversion,
indexing, security, and auditing functions) while integrating
seamlessly with the industry-specific front end. The content
management version deployed at Insurance Company A is
the 10gR3 release, which is why the logical diagram below
shows the content servers with standalone web servers
rather than deployed on the same application server
platform as the Content Integration Suite in the middleware
tier. We would expect to make this change when the client
upgrades in the next 12-18 months.
The system has been successfully
deployed in production for over 7
years and ingests 10-20 million records
per year. The system was tested to
manage 200 million records on the
existing infrastructure; we have 2-3
years to design the physical
infrastructure for the next major
release.
Case Study: Insurance Company B
Insurance Company B is a US-based
individual and commercial insurance
company that has acquired two
equally large competitors in the past 5
years. Although the business is run as
one organization, there are still three
sets of back-end infrastructure in place
(with three sets of IT staff to support
them). The majority of their records
are stored as TIFF or proprietary image
formats in legacy systems on failing
(and unsupported) hardware.
This customer has a number of major challenges:
The volume of images needing to be imported and
converted is very high: approximately 600-700 million items.
While we know that WebCenter Content can support
collections this large, the storage systems have to be
carefully designed to ensure reliability and performance.
The best approach is to split the huge single-instance
collection into a number of smaller functional collections and
access them all via web services with intelligent routing,
rather than rely on close coupling to a single unwieldy
instance.
The customer has chosen to procure scanning from
one vendor, BPM and SOA from a second, and content
management from Oracle. While WebCenter's standards-
based architecture and integration methods ensure this is
possible, using products from a single vendor simplifies the
integration discussions and support considerations, and
generally lowers overall cost (through license bundling).
The customer has a vacuum in architectural and
technical leadership, which complicates the design and
deployment process. In any complex environment,
organizations require clear vision and firm leadership to
deliver complex functionality to the business. In its absence,
point solutions, short-term fixes and siloed applications tend
to proliferate, causing more issues in the long term.
The solution we are proposing to Insurance Company B uses
components from different vendors, integrated via an
Enterprise Service Bus (ESB), to migrate from their legacy
platforms to a more modern SOA-enabled delivery process.
Although their SOA and ESB vendor is not Oracle, the
process of delivery would be identical if it were. Ingestion of
new content is simple: although the business encourages
submission of content via secure email, web upload, or XML
data transfer (over HTTPS), there is still a high volume of
paper, which is scanned, OCRed and indexed, then passed to
WebCenter Content via a custom connector that packages
PDF images with the associated security and metadata and
passes them on via HTTPS to the repository. Legacy content
would be passed through a conversion and organization
process (by another TEAM partner) and packaged into bulk
upload packages for ingestion using the standard bulk-load
tools.
Once the raw content is managed and indexed by the
WebCenter Content systems, users will interact with it
through their standard web-enabled applications. New
content is passed via web services to the BPM system, with
workflow managed through these methods; updated content
is managed through a parallel track. Requests for new,
archive or legacy imported content are passed to the ESB as
web service requests and routed to the correct WebCenter
Content instances based on traffic rules defined within the
ESB. Because the overall collection is so large, full-text
indexing is not useful or required and all searches are
managed via metadata. However, certain content instances
have full-text indexing enabled to support future
functionality for targeted searches.
Overall Best Practices
Working with a number of insurance companies, we have
been able to derive some best practices for these large (and
often complex) entities and processes:
Metadata Management - Best practices around
metadata fall into two broad categories: master data
management, or ownership of various metadata, and
development of a scalable information architecture. The
former is a key takeaway for any complex system; every
metadata field in an integrated system must have a single
system of record, and that environment is the only one that
can update these values. Every other system will treat that
metadata value as read-only. Updates may be initiated from
other environments, but they will trigger a web service
request that is routed back to the owner to update the
record there. All metadata need not be owned by a single
system, but each field must be managed by only one system.
Although it is technically feasible to pass updates from
multiple systems and keep them in sync, in practice this is
very complex and almost always fails somewhere, leading to
data corruption that needs to be fixed.
Development of a scalable information architecture follows
from master data management but is much harder to pin
down into simple guidance. Simply stated, an organization's
security and metadata model needs to reflect its current
business needs while allowing scope to adapt as the
business evolves. Every business will require something
slightly different to reflect its unique history, processes,
focus, and plans. The key consideration here for an
insurance client is to have all aspects of the business
represented in the workshops that are held to define the
information and security architecture.
Security Integrations - As insurance companies deal
with a lot of personal and financial data about their clients,
security is obviously an extremely important consideration
within the design process. The organizations themselves
have usually worked out the requirements for auditing,
restricting access on a need-to-know basis, separation of
responsibilities, etc. Translating these into business rules is
sometimes a challenge, but the capabilities of identity
management, database advanced security, WebCenter
Records auditing and BPM have been able to deliver these at
all the clients we have been engaged with.
One particular challenge often seen in companies that have
grown by merger and acquisition (such as Insurance
Company B above) is that they may have multiple user
directories storing user and agent identities. Although
WebLogic Server (WLS), which handles the authentication
and authorization duties for all WebCenter applications and
BPM, can be configured to query multiple authentication and
authorization providers, we have found that a more elegant
and scalable approach is to use either Oracle Virtual
Directory (OVD) or Oracle Internet Directory (OID) as a single
source of user and identity data and use the tools that come
with these to collect identity and user data from multiple
targets. Integration of WLS and WebCenter applications with
OVD and OID is straightforward and fully supported.
Using SOA and ESB - As both of the examples above
illustrate, integration of different applications and
environments is most easily achieved through a service-
oriented architecture brokered by an Enterprise Service
Bus. Because WebCenter Content was designed from the
very beginning as a service-oriented application running on
Java, integration of all the application's capabilities via web
services and an ESB is straightforward and well documented.
However, content enablement of the enterprise requires a
fairly mature deployment of SOA within the organization, as
well as clear leadership and understanding of best practices
and business goals for the outcome. Both of the examples
described previously deployed solutions from different
vendors, illustrating the beauty of a standards-based
approach. However, time to deployment and long-term
support cost are usually lower when deploying all components
from a single vendor, as the integration steps are tested and
supported by the vendor.
BPM Integration - Associated with SOA and ESB,
business process management is an essential part of any
insurance industry deployment of content management.
While WebCenter Content has its own inbuilt workflow
engine, this is optimized for approval and notification of
content and is not designed to integrate directly with
external systems and data sources. Fortunately, there is a
standard BPM integration delivered with WebCenter Content
along with a limited-use license of the BPM Suite. This allows
organizations to develop complex workflows (or better, to
integrate with existing business processes) for new and
updated content, as well as allowing processes to request
content and metadata from the WCC system. Although the
delivered integration is for Oracle's BPM Suite, we have
modified this to work with third-party BPM solutions for
many clients - the beauty of Oracle embracing and
supporting the BPMN and BPEL standards for their BPM
products.
Search Integration - Searching and optimizing users'
search experiences could fill a whole additional article but
is an important consideration for the insurance industry (and
other complex deployments). One advantage of using
existing tools to access the stored content managed within
WebCenter Content is that the queries made to the
repository via web service or Java integrations are almost
always structured queries using metadata and security. It
would be extremely rare for an external system to pass an
unstructured text query to the system. If that were required
though, the challenge would not be the integration, the
standardization of the query language, or the format of the
search results; it would be the validity and usefulness of those
results without some kind of ranking or organization. This
problem is usually solved by adding a layer of search
management such as Oracle's Secure Enterprise Search or
Google's Search Appliance and relying on the ranking
algorithms within those applications.
Search is a vital component of any content solution (what
good is content management if users cannot find that
content?), but any discussion of the design and deployment
of that search must focus on shaping the user experience to
provide targeted and useful search results, which is almost
always delivered through metadata searches. Full-text
searching is usually only useful when filtering the results of
that initial structured search.
Conclusion
The insurance industry is one where content enablement can
yield substantial returns in a short time period. Because the
industry is, by and large, still very paper-driven, the
possibilities for improvement in process, service and
handling time are very real; however, the underlying
complexity of existing systems and legacy integrations can
make the initial design and deployment of a content
management system challenging.
We have found that clear architectural and business
leadership is required to bring the various parties together
and design a modern solution to content management,
integrated with business processes and leveraging the power
of an enterprise service bus to share content, conversion,
search and distribution services across multiple platforms.
Metadata management and information architecture are key
to the process as well, and require a combination of an
experienced implementation partner, a mature application
vendor and champions within all areas of the business.
Raoul Miller
TEAM informatics
OTech Magazine
OTech Magazine is an independent magazine for Oracle professionals.
OTech Magazine's goal is to offer a clear perspective on Oracle technologies
and the way they are put into action. OTech Magazine publishes
news stories, credible rumors and how-tos covering a variety of topics.
As a trusted technology magazine, OTech Magazine provides opinion and
analysis on the news in addition to the facts.
OTech Magazine is a trusted source for news, information and analysis
about Oracle and its products. Our readership is made up of professionals
who work with Oracle and Oracle related technologies on a daily basis, in
addition we cover topics relevant to niches like software architects, developers,
designers and others.
OTech Magazine's writers are considered among the top Oracle professionals
in the world. Only selected and high-quality articles will make the
magazine. Our editors are trusted worldwide for their knowledge of the
Oracle field.
OTech Magazine will be published four times a year, once every season. In
the fast, internet-driven world it's hard to keep track of what's important
and what's not. OTech Magazine will help the Oracle professional keep
focus.
OTech Magazine will always be available free of charge. Therefore the
digital edition of the magazine will be published on the web.
OTech Magazine is an initiative of Oracle ACE Douwe Pieter van den Bos.
Please note our terms and our privacy policy at www.otechmag.com.
Independence
OTech Magazine is an independent magazine. We are not affiliated, associated,
authorized, endorsed by, or in any way officially connected with The
Oracle Corporation or any of its subsidiaries or its affiliates. The official
Oracle web site is available at www.oracle.com. All Oracle software, logos
etc. are registered trademarks of the Oracle Corporation. All other company
and product names are trademarks or registered trademarks of their
respective companies.
In other words: we are not Oracle, Oracle is Oracle. We are OTech Magazine.
Intellectual Property
OTech Magazine and otechmag.com are trademarks that you may not use
without written permission of OTech Magazine.
The contents of otechmag.com and each issue of OTech Magazine, including
all text and photography, are the intellectual property of OTech Magazine.
You may retrieve and display content from this website on a computer
screen, print individual pages on paper (but not photocopy them), and
store such pages in electronic form on disk (but not on any server or other
storage device connected to a network) for your own personal, non-commercial
use. You may not make commercial or other unauthorized use, by
publication, distribution, or performance without the permission of OTech
Magazine. To do so without permission is a violation of copyright law.
All content is the sole responsibility of the authors. This includes all text and
images. Although OTech Magazine does its best to prevent copyright violations,
we cannot be held responsible for infringement of any rights whatsoever. The
opinions stated by authors are their own and cannot be related in any way to
OTech Magazine.
Programs and Code Samples
OTech Magazine and otechmag.com could contain technical inaccuracies
or typographical errors. Also, illustrations contained herein may show prototype
equipment. Your system configuration may differ slightly. The website
and magazine contain small programs and code samples that are
furnished as simple examples to provide an illustration. These examples
have not been thoroughly tested under all conditions. otechmag.com,
therefore, cannot guarantee or imply reliability, serviceability or function
of these programs and code samples. All programs and code samples
contained herein are provided to you "AS IS". IMPLIED WARRANTIES OF
MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR
PURPOSE ARE EXPRESSLY DISCLAIMED.
OTech Information
Winter 2014