

Jump to HANA Topic

What is SAP HANA?
Row Store vs Column Store
SAP HANA Architecture
HANA IMCE Architecture
Data Provisioning Techniques
Introduction to HANA Studio
Master Data vs Transaction Data
SAP HANA Star Schema
Creating a HANA Table – Graphical Method
Loading Data from a Flat File to SAP HANA
Graphical Calculation View
SAP HANA Constant Filters
SAP HANA Variables
SAP HANA Input Parameters
Calculated Columns
Restricted Columns
Ranking Data in Calc View
Aggregating Data in Calc View
UNION Node in Calc View
Attribute View
Analytic View
Understanding SAP HANA
SAP HANA is a revolutionary platform based on the company's new in-memory
database. Learning it means choosing a career path that is both fulfilling and
exciting. I am honored that you chose TeachMeHANA to assist you on this
journey. TeachMeHANA delivers a series of tutorials designed for beginners
that will steadily and surely make you an expert in the subject.

For those of you who are new to SAP, do read my post – What is SAP?

Question before we go further – What is an In-Memory database?
An In-memory database is one that stores all of its data online, i.e. in its
main memory. Confused? Let's explain this again with the most played-out
analogy. Picture yourself turning on your laptop. You now want to work
on a Word document.

What do you do? Simple. Find that blue icon and give it a click. Then comes
the wait.

The wait may be a few seconds or more depending on your configuration. After
it opens, close the program. Now go open it again. The wait this time is
much shorter, at least compared to the first time you started it. What
brought about this mystical change?
As shown in the figure, there are 3 components of your computer here. For
your CPU to start executing your program, the required information should be
available in your RAM. If the program was not called recently, the information
would not be available here and would need to be picked up from the
persistent disk – your hard drive. Consequently, when you call a program for
the first time, the information is loaded to the RAM and then processed by the
CPU. The second time you called it, it was already in the RAM and hence got
processed really fast.

RAM access is really fast, and any data present there is referred to as data in-memory.

Your hard disk storage is relatively slower to access but offers cheaper storage.
The size of your RAM is a crucial factor in deciding how much data you can
keep in-memory and access really fast. But unfortunately, it's not practical to
expand the RAM of a PC to house all the data you access.

Realizing this, the good folks at SAP created an in-memory database that
keeps all of the customer data online/in-memory. So, in a way, you could
picture this as a gigantic RAM backed by multiple CPU cores, delivering
lightning-fast response times.

Now, before you throw your disks out for being a disappointment, it's
important to understand that RAM is volatile memory, i.e. it loses its
data on loss of power. Thus, it's important for backups to be taken to
persistent disks. Backups are scheduled jobs, executed as per the
configuration, to make sure no data is lost in case of SAP HANA DB
downtimes.

How exactly do Backups happen? We’ll discuss that in a different post.

The beauty of SAP HANA lies in the fact that it does most of its calculations
in-memory at the database layer instead of the application layer as done
traditionally. SAP HANA is not just a database. It consists of different engines
that crunch calculations efficiently and return results to the application layer.

Due to this push-down of logic, the data latency (the time taken between
request and response) is really small, and that's where the true magic lies.

SAP HANA opens up possibilities that were unimaginable with a traditional
database, ranging from real-time status reporting of your inventories and
online analysis of streaming sensor data to unrivaled predictive analysis
capabilities and many more.

Since you have taken the first step towards a career in SAP HANA, let me tell
you: exciting roads lie ahead.

Help this website grow by sharing this document on social media by using the
icons below.

Happy Learning!

Row Vs Column store – What's all the fuss about?
Welcome back, readers, to the next beginner's guide to HANA, where we try to
understand what row store and column store mean in terms of data storage.

To all those excited folks who just want to see the development steps in one
of the modeling tools: I understand your impatience, but trust me when I tell
you from experience that you are chasing the wrong goals. Understanding how
databases behave is really important for your well-rounded growth as a back-end
developer and architect.

Patience is the key. Trust me that you'll grow and learn everything you need
as we progress – one step at a time.

Coming back to the topic, we all know what a table is. We have been drawing
them since our school math classes, and in Excel spreadsheets soon after.
The most common way of storing data in a database is in the form of a table.
For example, have a look at the below representation of a table. This is what
we call a logical representation – a way for regular folks to draw and represent
rows and columns of data.

But have you ever asked yourself how this information is stored in a database?
Disk memory can be envisioned as a continuous array of blocks where each
block can store one cell/value of data (from a table in this case).

Row Store
Traditional database storage of tables is generally row store.

Let’s understand this with the figure where you can see how the table above
is stored in the database.

As you can see, it is stored sequentially as Row1, Row2, Row3 and so on.

Each row of data is stored one after the other in perfect sequence. From this
image, you can start to see the evident problems in reading data. Even if I
want to read only one column of this dataset, my read skims through the
entire data array. The picture of a table changes dramatically when looking at
it in terms of memory, doesn't it?
Column Store

Some databases support an additional form of storage called the column
store. Each column in the table acts as an individual table, and gets stored
separately.

Why would you do that? Excellent question. Thank you for asking.

When each column acts as an individual table, each of these individual
mini-tables can be indexed (= sorted) and compressed (= duplicates removed).

This makes sure that each of these tables contains only unique entries.
The below example illustrates how it works but does no real justice in
portraying the real advantage. To realize this, imagine a table with a million
rows. Most of the columns would have only a few hundred, or at most a few
thousand, unique values. Compression makes sure you save space and indexing
makes sure you find things faster.

Note: Databases supporting column store also support row store.

Now at this point you might be wondering: if the column store is such a magical
gift to mankind, why bother with a row store at all? Before you decide to kill
it with fire, let's analyze the pros and cons.

Major Differences between Row store and Column store

Property       Row Store                Column Store   Reason
Memory usage   Higher                   Lower          Compression
Transactions   Faster                   Slower         Modifications require updates to multiple columnar tables
Analytics      Slower even if indexed   Faster         Smaller dataset to scan, inherent indexing

Now there are deeper concepts to row stores and column stores. For example,
the delta merge process for the column store, wherein a column store consists
of a main store, a primary delta store and a secondary delta store (which is
actually a row store). Did I lose you there? Don't lose sleep over these
concepts for now. I will come back to them eventually once you are a bit more
mature on this technological journey.

For those curious technical nerds out there who don't want to wait for my
explanation, I would recommend reading the below:

1. Delta Merge Concepts
2. An excellent MIT paper elaborating how row store and column store work in detail

By the end of this post you will have:

Gained a basic understanding of how row and columnar databases work.

Help this website grow by sharing this document on social media by using the
icons below.
Stay tuned for the next tutorials. Happy Learning!

SAP HANA Architecture


Welcome to this foundation tutorial explaining the SAP HANA Architecture in
detail. Here, we will understand the different components that make up the
SAP HANA appliance.

SAP HANA Architecture in blocks

SAP HANA is not just a database – it's an appliance consisting of different
servers. The index server is the primary component, but the appliance contains
several other servers as well. Let's understand what each of these servers is
responsible for.

SAP HANA Index Server

The most important server in the architecture is the Index server. The actual
data is contained here and the engines that process this data are also present
in the index server itself. It has multiple components and its architecture will
be taken up as a separate topic in the next tutorial.

SAP HANA Name Server

Stores information about the topology of the SAP HANA System. SAP HANA
can also run on multiple hosts. In these cases, the name server knows where
each component runs and also knows which data is located on which server.

SAP HANA Preprocessor Server

Whenever text analysis functionality is called upon by the index server, the
preprocessor server answers such requests.

SAP HANA Statistics Server

This server collects data about resource allocation, performance and the status
of the SAP HANA system. It keeps a regular check on the health of the HANA
appliance.

SAP HANA XS Server / XS Engine

The XS Engine allows external applications to access the data models in the
SAP HANA database via XS Engine clients. It transforms the persistence model
in the database into a consumption model exposed to external clients via
HTTP/HTTPS.

Application/Reporting Layer

This block is not part of the SAP HANA architecture, but it is there to
represent that any web service, application or reporting layer can interact
with the database to pull the required data and also write data back into it.

This concludes the tutorial on the high level architecture of SAP HANA. In the
next tutorial, we learn more about the Index server and all its components in
detail.

Happy Learning!
SAP HANA In-Memory Computing Engine (IMCE) – Index Server Architecture

Welcome to the follow-up tutorial to the SAP HANA Architecture. Here, we learn
about the core of the HANA appliance – the IMCE (In-Memory Computing Engine),
or the Index Server. We already learnt in our previous tutorial that the data
is contained and processed in this server. Let's understand the components of
the Index Server in detail.

In-Memory Computing Engine / Index Server Architecture Components

Connection/Session Management

To work with the SAP HANA database, users connect through an application of
their choice. This component creates and manages the sessions of these
database clients. SQL statements are used to communicate with the HANA database.
The Authorization Manager

Data security is a critical aspect of any business. Nobody should be able to
see more data than they are allowed to, and similarly, unauthorized personnel
should not be able to add or modify data or metadata in the SAP HANA database.
The Authorization Manager makes sure this data security is enforced, based on
the roles and authorizations that have been given to the database user ID.

Replication Server

The replication server is responsible for replicating the table data and
metadata (structure) from the source system.

Metadata Manager

The term “metadata” stands for data about data. This includes information
about table structures, view structures, datatypes, field descriptions and so
on. All this metadata is stored and maintained by the Metadata Manager.

Transaction Manager

This component manages database transactions and keeps track of running and
closed transactions. It coordinates with the other engines on database
COMMITs and ROLLBACKs.

Request Processing and Execution Control

This is an important block of components that receives requests from
applications/clients and directs them to the correct sub-block for further
processing.

The sub-blocks are as listed below.

SQL Processor

SQL requests are processed by this component. Any kind of insertion, update
or deletion of datasets is handled by this processor.
SQLScript

This block symbolizes the internal language of the SAP HANA Database. SAP
HANA SQLScript optimizes operations by parallel processing of queries.

Multidimensional Expressions (MDX)

The MDX language is used for manipulating and querying multidimensional
OLAP cubes.

SAP HANA Planning Engine

This engine allows HANA to execute organizational planning operations in the
database layer. The scope of these applications can range from simple manual
data entry through to complex planning scenarios.

SAP HANA Calculation Engine

After initial processing by the SQLScript, MDX and planning engines, the data
models are converted into calculation models, which create an optimal,
parallel-processing-enabled logical execution plan.

The Row Store

As discussed in the initial tutorials, the row store is a row-based storage of
data in a serial manner.

The Column Store

Column-based storage stores each column separately, enabling faster querying
of data.

Revisit the Row store vs column store tutorial to know more.

Persistence Layer & Disk Logger

We learnt in the first tutorial that SAP HANA is an in-memory database,
similar to the RAM of a PC that you may use. This also means that the main
memory in SAP HANA is volatile, i.e. in case of a power outage or restart, all
data in it would be lost. Thus, there is a persistence layer to periodically
save all the data in a permanent/persisted manner.

Logs of the system are stored in log volumes, whereas data volumes store SQL
data, undo log information and SAP HANA information modeling data.

SAP HANA saves all changes to data to persistent disk at periodic intervals
called savepoints. The default frequency of these savepoints is every 5
minutes, which can be changed as per requirement. If a system restart or power
outage ever occurs, data from the last successful savepoint can be read from
the data volumes, and the redo log entries written to the log volumes since
then are replayed to recover any later changes.
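
For the curious: the savepoint interval is just a configuration parameter.
Assuming you have administrator privileges, a sketch of how it could be changed
via SQL is below; the parameter savepoint_interval_s sits in the persistence
section of global.ini, but verify this against the documentation of your HANA
revision before touching it.

-- set the savepoint interval to 300 seconds (the 5-minute default)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('persistence', 'savepoint_interval_s') = '300' WITH RECONFIGURE;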

Thank you for reading through the SAP HANA In memory computing engine /
Index Server tutorial. I hope it gave you a deeper insight into the SAP HANA
Architecture. Read on to the next tutorials to continue this journey.

Data Provisioning in SAP HANA

SAP HANA is a database, and whenever it is used for analytics, it might require
data to be channeled into it from all the systems where business transactions
take place. I used the word "might" in the previous sentence because with
SAP's new S4HANA application, analytics and transactions can now take place
on the same database. Check it out on this link on the SAP website – it's quite
amazing. I will do a document on it once I have created all the basic tutorials
for the topics here. S4HANA is great but still not widely adopted. The more
common techniques of data analysis involve data provisioning from external
systems into HANA for modeling. Let's take a look at these methods one by one.

1. Flat file upload

Flat files are data files which are stored locally and do not come in from a
live server. The supported extensions are .csv, .xls and .xlsx. So if you add
a few columns of data in Excel and save as one of these formats, it is now a
flat file ready for upload. This is the least used method in HANA, as business
data should ideally come from live transactional database tables rather than
static files.
These types of files are usually used for custom table data loads, wherein
sometimes the business wishes to maintain some mappings. For example, let's
say that our client has two source systems – one for the US and one for
Germany – and they maintain different status values for the same status. In
the US, an open order in the source system is denoted by "OP", and in Germany
(DE stands for Deutschland, the German name for Germany), open orders are
denoted by "OF". In this case, if we are creating a global report, it's not
good to have two symbols for the same meaning. Therefore, someone from the
client team makes a target mapping saying that "OP" will be the final target
value for such ambiguities, and in HANA we use this mapping table to harmonize
the data. Such files are usually uploaded via flat files into custom tables in
HANA, as source systems may not have them maintained anywhere, especially if
each of these countries has a different system.

Country   Order (Source)   Order (Target)
US        OP               OP
DE        OF               OP
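
Once such a mapping table exists in HANA, the harmonization itself is a simple
join. Here's a minimal sketch, assuming hypothetical tables "ORDERS"
(transactions) and "STATUS_MAP" (the mapping above):

-- replace each source status with the harmonized target status
SELECT o."COUNTRY",
       m."ORDER_TARGET" AS "ORDER_STATUS",
       COUNT(*)         AS "ORDER_COUNT"
FROM "ORDERS" o
JOIN "STATUS_MAP" m
  ON  o."COUNTRY"      = m."COUNTRY"
  AND o."ORDER_STATUS" = m."ORDER_SOURCE"
GROUP BY o."COUNTRY", m."ORDER_TARGET";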

We will learn the different ways to do this in further tutorials.

2. SLT Data provisioning

SAP Landscape Transformation, or SAP SLT, is the most widely used data
provisioning technique for native SAP HANA projects with SAP source systems.
Most tutorials spread the misconception that developers manage the replication,
but usually it is an administrator who takes up a developer's request for
table replication and makes sure that the table is replicated, as there may be
security restrictions on the incoming data, and it's not difficult to mess up
SAP SLT replication if the entire control is given to a newbie developer.
Thus, from a developer perspective, let me keep the explanation simple for now.

As seen in the illustration, when an SLT server and its connections have been
properly set up between the SAP source system and HANA, there is a state of
constant replication between the source and the target HANA system. Every time
there is a change in a table, a trigger in the source pushes the data through
the SLT server to HANA. The initial load of tables might take some time, as
there may be millions of records in most financial and sales tables, but after
that has been done, the replication happens at near real-time speeds (most
would refer to this as real-time, as there's only a few seconds of delay).

Although this is enough for you to know for now, for the full in-depth
explanation of how SLT works, please refer to the SLT guide from SAP.

3. SAP Data Services (BODS)

Data Services, or BODS, is an amazing tool which can take data from any data
source to any data target. It is one of the most powerful ETL tools out there
in the market, and since it's an SAP product now, the integration is really
smooth. You have different options here for transforming data coming in from
source systems before it reaches the target HANA table. There are sets of
rules that can be created and run by scheduled jobs to keep data updated in
HANA. But although BODS does support real-time jobs, usually batch jobs
(= scheduled background jobs) are run due to performance limitations. This
means that this provisioning technique is not real-time. The newer versions of
SAP BODS also support creating repositories on SAP HANA, thus allowing many
calculations to be performed in the HANA database. But in most such
implementations involving SAP BODS, data transformations are usually done in
HANA data models rather than in Data Services, to have all the logic in one
place. We will do a demo soon on this in a separate tutorial.

4. Other certified ETL tools

Like BODS, there are other powerful ETL vendors in the market that do similar
jobs, but they need to be certified with SAP for your client to get any kind
of support if something goes wrong. Why would you go for such options? Well,
if your client's existing architecture already had licenses for these tools,
they would want to use them instead of shelling out more money for SAP data
provisioning licenses. But since these won't be SAP tools, the integration
would not have as many options to play around with as the SAP ones.

5. SAP EIM (Enterprise Information Management)

This combines the SLT and BODS approaches and is a relatively new feature. SLT
brings in new data in real time, and EIM has a subset of tools from BODS for
major data cleansing and transformations. This is a great option, but it is
still in a relatively nascent stage, and hence its usage is not yet widespread,
also since it might require extra licenses and hence additional cost.
6. Smart Data Access

Smart Data Access may not be exactly a data provisioning technique, as there
is no data being replicated: this method replicates only the table metadata
from the source, not its data. This means that you get the table structure, on
which you can create a "virtual table" for your developments. This table looks
and feels exactly like the source table, with the difference that the data
that appears to be inside it is actually not stored in HANA but remotely in
its source system. HANA supports SDA for the following sources: Teradata
database, SAP Sybase ASE, SAP Sybase IQ, Intel Distribution for Apache Hadoop
and SAP HANA. SDA also does an automatic datatype conversion for fields coming
in from the source system to HANA datatype formats.
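
To give you a feel for how a virtual table comes into being, here is a sketch
of the SQL involved once an administrator has set up a remote source. All
names here (ERP_REMOTE, ERP_SCHEMA, ORDERS) are hypothetical, and the exact
four-part path depends on the source adapter:

-- create a virtual table pointing at a remote table; no data is copied to HANA
CREATE VIRTUAL TABLE "0TEACHMEHANA"."VT_ORDERS"
AT "ERP_REMOTE"."<NULL>"."ERP_SCHEMA"."ORDERS";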

7. DXC or Direct Extractor Connection

You would need some SAP BW background to fully understand this. Give it a read
and don't worry if you don't get too much of it. This data provisioning
technique leverages the extractor logic in SAP source systems, processing the
incoming data through that layer of logic in the SAP source before it is
delivered to HANA. This, of course, involves executing quite some logic on the
ECC side and is not recommended. I personally am not a fan of this method and
haven't seen any client be inclined towards it either. It is a waste of HANA's
potential, in my honest opinion. You are better off spending time gathering
raw requirements and building the views from scratch than using this bad
shortcut.

Please show your support for this website by sharing these tutorials on social
media by using the share buttons below. Stay tuned for the next ones.

Happy learning!

The SAP HANA Studio – First Steps

The moment has arrived to take our first look at SAP HANA Studio – the tool
that helps us create high-performing data models for SAP HANA. This tool is a
tweaked version of Eclipse, which some of you might know for being widely used
for Java development.

Finding SAP HANA Studio – Where is it?

Let's get right into it. You can find HANA Studio under Start Menu -> All
Programs -> SAP HANA -> SAP HANA Studio, as shown below. Once you click on it,
there should be a pop-up window asking for a workspace location.

A workspace is nothing but a folder where all your offline work and
configurations get saved. Enter a folder path and press OK.

Navigating around SAP HANA Studio

You will reach a screen like the one below. It might be slightly different
based on the service pack you have downloaded, or if someone else has modified
the layout, but don't worry – it doesn't matter.

Now, the first thing to understand about this tool is that different types of
major tasks are performed in different areas, or "Perspectives" as they are
rightly called. Each perspective has a different purpose. For example, the
"SAP HANA Administration Console" perspective is used by administrators for
administration and monitoring of the HANA database. Similarly, the "SAP HANA
Development" perspective is used for HANA model developments and other
HANA-related developments. There are many more perspectives, but we will look
at them as and when we need them.

To do SAP HANA development, you have two options: either use the "SAP HANA
Development" perspective or the "Modeler" perspective. I recommend using the
"SAP HANA Development" perspective as your default and for all your HANA
modeling needs, as it allows you to create many more kinds of HANA-related
content, like XS files, HDB tables etc. What are all of these things? Well, if
you are patient enough, we will learn them in some of the later tutorials.
Let’s try moving to the Development Perspective for now.

Navigate to the “Other” option in the context menu as shown below to get the
list of all perspectives.

Scroll down in this list and you will see the one we need. Click to select it
and press OK.

At this point you will see the SAP HANA Development perspective as current
from the top right corner of the tool (Marked in the red arrow below). This
part of the tool will always show you the three most recent perspectives you
have been in and you can also click on one of these to switch to them if the
one you need is already in this shortcut area.

Now to connect to our HANA system from this tool, we need to go to the
“Systems” pane which may or may not be visible to you at this point as
sometimes it gets minimized. Click on the little squares (Marked in the blue
arrow below) to expand this area.

You will see the below Systems pane. Right click on the blank area inside this
pane to add a new system by choosing “Add System” from the context menu
as shown below.
Fill in the Host Name and Instance number as provided by your system
administrator and description as something relevant. The other settings may
also be different depending on the database you connect to – Please check
with your administrator for details. Press “Next” when done.

The next step is to enter your username and password, which will also be
provided by your administrator. Press Finish when done to add the system.
You will see that the system has been added and the green square on the
system icon indicates that all services are working fine.
Also, there are 4 sections, or folder-like icons, that you see here. Let's
discuss their relevance.

1. Catalog: This is where all the source metadata (tables, views etc.) is
grouped. Here you can do data previews on source system tables that have
been replicated or are available as virtual tables in the case of SDA.
2. Content: This is where all your HANA development takes place. The
HANA models that you create go under here.
3. Provisioning: This is mostly used for Smart Data Access. All the source
systems connected via SDA will have their tables displayed here like a
"menu card" in a restaurant. You can choose the one you want and build a
virtual table for it in the "Catalog" section we discussed just now.
4. Security: This is mostly for security consultants to maintain users and
roles according to each person's role in the project – developers,
administrators, testers and so on.

We will delve deeper into these sections as we move further into these
tutorials.

Please show your support for my hard work by sharing this document across
social media using the share buttons below, and also comment with your
feedback – good or bad... I can take it.

Until Next time. Happy Learning.

Master Data and Transaction Data – Introduction

There are two major categories of data required for business analytics:
transaction data and master data. What do they stand for, and how are they
used?

Let's learn with an example


Let's say you own a company that manufactures clothing and apparel, and at
this point only manufactures 3 products – shoes, coats and caps.

Below is a master table showing the properties of these products – for
example, their product codes and the plants they are manufactured in.

Product Code   Description   Plant
SH             Shoe          US1
CO             Coat          US1
CA             Cap           DE1

Now, after two days of selling your products to customers, you get the below
table with details of your transactions in a transaction table:

Product Code   Date (MM/DD/YYYY)   Revenue ($)
SH             1/1/2016            0
CO             1/1/2016            200
CA             1/1/2016            300
SH             1/2/2016            0
CO             1/2/2016            200
CA             1/2/2016            100

From looking at the above data, what kind of analysis can you do?

It's quite obvious that nobody's buying your shoes, your coat sales are
steady, but your cap sales are down. Simple analytics on a really small
scale – right?

Now let me ask you a simple question – what were you analyzing here? Yes, you
were analyzing the product code against each date. Also, how did you know that
SH meant shoes? Because of the master table above. The master table also gives
you more options for analysis. Since you know that the description Shoe stands
for the code SH and that shoes are made in plant US1, you can do deeper
analysis, like how many shoes from plant US1 were sold in a particular time
period.
The objects under analysis in any situation are called Attributes in
SAP HANA. For example – product, plant, customer, etc.

The fields which provide us the numbers for tangible analysis, like revenue
generated, quantity sold or payments receivable, are called Measures in SAP
HANA.

The tables containing measures from business transactions are called
transactional tables, like our second table above, whereas the tables
containing more information about the master data are called master data
tables, like our first table.

The central table for analysis is usually a transactional table, and if we
need a deeper analysis of an attribute, we refer to its attributes from the
master data table(s) it may have. Like in our case, we did an analysis of
revenue by plant even though we did not have plant in the transaction data.
This permits the transactional tables to reduce their number of columns for
characteristics, since these can always be "looked up" from the master data
tables if required for analysis. In SQL, such a look-up is just a join, as
sketched below.
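
Assuming our two tables existed in HANA as "PRODUCTS" (master data) and
"SALES" (transaction data) – hypothetical names for illustration – it could
look like this:

-- revenue by plant: plant comes from the master table, revenue from the transaction table
SELECT m."PLANT", SUM(t."REVENUE_USD") AS "TOTAL_REVENUE"
FROM "SALES" t
JOIN "PRODUCTS" m
  ON t."PRODUCT_CODE" = m."PRODUCT_CODE"
GROUP BY m."PLANT";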
I hope the difference was clear and easy to understand. Please share this
document using the share buttons below and subscribe to our newsletter to
get alerts on the latest additions to this tutorial series.
Happy Learning!
SAP HANA Star Schema

Before we move on to the SAP HANA star schema, let us first understand what a
star schema is.

What is a Star Schema?

The star schema is the backbone of all data warehouse modelling, be it SAP or
Oracle. It is a fairly simple concept and is really important for any kind of
analysis.

Let's understand this by extending the simple example we had in the Master
Data vs Transaction Data tutorial.

Star Schema in SAP HANA explained with an example

We had our master data table as below:

Product Code   Description   Plant
SH             Shoe          US1
CO             Coat          US1
CA             Cap           DE1

Also, we had our transaction data table as below:

Product Code   Date (MM/DD/YYYY)   Revenue ($)
SH             1/1/2016            0
CO             1/1/2016            200
CA             1/1/2016            300
SH             1/2/2016            0
CO             1/2/2016            200
CA             1/2/2016            100

Now, for analysis of the transaction data by product and description, we would
need connections between the master and transaction tables. The overall data
model would look as shown below:

Doesn't look like much of a star for a star schema model, right?

Well, that's because we are doing a really small analysis here. Let's assume
our transaction data had customer and vendor details too. Also, we wanted to
do more detailed time-based analysis, like year/month statistics from the
date. Let's see how that data model looks.

This looks more like a star, right? Well, to be honest, it looks more like a
4-legged octopus to me, but "star schema" sounds cooler, so let's go with that
name.
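
In SQL terms, querying a star schema means joining the central transaction
(fact) table to whichever master (dimension) tables the analysis needs. A
sketch with hypothetical table and column names:

-- revenue by plant and year: the fact table in the middle, dimensions joined around it
SELECT p."PLANT",
       YEAR(f."SALE_DATE")  AS "SALE_YEAR",
       SUM(f."REVENUE_USD") AS "REVENUE"
FROM "FACT_SALES" f
JOIN "DIM_PRODUCT" p
  ON f."PRODUCT_CODE" = p."PRODUCT_CODE"
GROUP BY p."PLANT", YEAR(f."SALE_DATE");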

To conclude, the star schema shows you what the backbone of your data models
must look like. This will be much clearer, and more fun, when we actually
start building data models in HANA and BW on HANA.

Note: Prior to the advent of HANA, a model called the extended star schema
was used in SAP BW, which was the best practice for BW running on a legacy
database. With BW on HANA, it is no longer relevant and is not discussed in
this tutorial. As of BW on HANA 7.5 and Enterprise HANA SP11, a lean star
schema approach is what must be followed in all data models. Well, BW does it
by itself anyway when on HANA, so you can leave it up to the application.

Thank you for reading this tutorial. Please show some support for this website
by sharing these tutorials across social media using the share buttons below.
Also, make sure you subscribe to our newsletter when the pop-up comes up
to get alerts as soon as tutorials get added.

Until next time. Happy Learning!

SAP HANA SQL Data types – Explained with an example

One of the primary concepts of HANA SQL, or any other programming language, is
the data type. To understand what HANA SQL data types mean, let's take some
examples.

The fruit shop back story


Let’s say you and I work at a fruit shop. I am employee number 1001 and you
are employee 1002. We record some sales data for 2 days of our hard work.

Employee Nr.   Product   Date       Quantity Sold (kg)
1001           Apples    1/1/2016   10
1001           Oranges   1/1/2016   5
1002           Apples    1/2/2016   20
1002           Oranges   1/2/2016   10

There are two very important things that we can infer from this data:

1. You are a better salesman than I am.
2. Data collected can be of different types. Let's discuss the different
columns that we have here:

The Product column only has characters from A-Z and no numbers.
The Date column has only dates.
Employee Nr. and Quantity Sold (kg) have only numbers and no characters.

The term "data type" in database terms is a literal translation of itself: the
type of data a field represents is its data type. By specifying the data type,
you tell the database what kind of value it can expect in that particular
field. There is a list of SAP HANA data types available, which you can refer
to on this link (SAP documentation links do change frequently, so let me know
in the comments if it stops working in the future).

HANA SQL Data types – Only the really important ones!!!

More importantly, let me list out the major HANA SQL data types you need to
know to work on a real project. These will be enough for most of the scenarios
you face. For everything else, there's that link above.

Data type   Primary Purpose                                                                                        Example
DATE        Used to represent date values. Default format is YYYY-MM-DD, which can be changed as per requirement   2011-11-20
TIME        Used to represent time. Default format is HH24:MI:SS, i.e. hours in 24-hour format:minutes:seconds     14:20:56
INTEGER     Used to represent whole numbers within the range -2,147,483,648 to 2,147,483,647                       25
DECIMAL     Used to represent numbers with fixed point decimals                                                    25.344689
NVARCHAR    Used to store character strings 1 to 5000 characters long                                              abcxyz3h4

Note: NVARCHAR is always the preferred datatype over VARCHAR because it
supports Unicode character data.

So every time you create a field or a variable (= an object that holds a
single data point), you need to tell HANA what kind of data to expect.

Take a look back at our first table on fruit sales and take a guess at what
the data types might be, based on the above information.

I will leave some space in between to push the answers away. Scroll down for
the answers once you have your guess ready.

HANA SQL Data types Knowledge Check: Time for the answers
1. The Product column only has characters from A-Z and no numbers.

It has to be an NVARCHAR, as we need to store alphabets.

2. The Date column has only dates.

Quite obviously, it has to be the DATE data type.

3. Employee Nr. and Quantity Sold (kg) have only numbers and no characters.

Now here comes the tricky part. Let's start with Quantity Sold (kg). It is a
number, so it can be held by INTEGER, DECIMAL or NVARCHAR (as you can also
store numbers as characters). This field contains the quantity of a particular
fruit that you sold, which will always be a whole number like 1, 2 or 4 and
never 1.5 or 1.34 (in which case you probably took a bite of the fruit before
you tried to sell it to a poor customer). So now we have 2 options – INTEGER
and NVARCHAR. Both of them CAN store values for this field, but which one
SHOULD?

Now ask yourself: will someone, at some point, try to add, subtract or do
other arithmetic calculations on this field? For example, the store manager
may try to find the total quantity of apples sold by adding up the individual
employees' sales volumes. In our example, the total quantity of apples sold
across both days was (10 + 20 = 30). You can only do these calculations with a
numeric data type, which is either INTEGER or DECIMAL. Since we have already
ruled out DECIMAL, we can infer that the INTEGER data type would be the
correct option here.

Coming back to the Employee Nr. now: we always have whole numbers as employee
IDs, so we can safely rule out DECIMAL. Now we ask the question again: will
someone want to do any math on this field? Logic dictates that it makes no
sense to add or subtract two employee IDs, and hence we declare it as an
NVARCHAR data type, so that even if some crazy dude tries to do some math on
this field someday, HANA throws an error showing him his logical fallacy.
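
Putting those choices together, a table for our fruit shop data could be
declared as below – a minimal sketch, with table and column names made up for
illustration:

CREATE COLUMN TABLE "FRUIT_SALES"
( "EMPLOYEE_NR" NVARCHAR(10), -- an ID: no arithmetic will ever make sense here
  "PRODUCT"     NVARCHAR(20), -- characters only
  "SALE_DATE"   DATE,
  "QUANTITY_KG" INTEGER       -- whole numbers that we will want to SUM()
);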

I hope this was easy to understand and follow. Stay tuned for my further
HANA SQL Tutorials for SAP HANA.

Make sure to share this content on social media using the share buttons below
to show your support to this website and to keep it alive.

Stay Motivated. Stay hungry. Until next time. Goodbye.

Creating an SAP HANA table without coding – Graphical Method

Welcome, folks, to the next HANA tutorial: how to create an SAP HANA table
without any coding required. Such tables can be easily created for testing and
quick analysis. However, this is not the preferred method when the intention
is to create a custom table for transport from your development environment to
quality assurance and productive systems.

The main methods to create a table are:

1. Using the graphical method, which will be discussed here.
2. Using the SQL script method, discussed in a separate tutorial.
3. Using the HDB tables method, discussed in a separate tutorial.
4. Using the HDBDD method, discussed in a separate tutorial (recommended method in non-XSA projects).
5. Using the HDBCDS method, which will be discussed in a future tutorial (recommended method in XSA projects).

Be sure to read all of them in the same order to fully understand the concept.

Creating our first SAP HANA Schema

Coming back to the graphical method: it can create a HANA table in a matter of
minutes and is my usual way of creating one if I want to test some logic that
needs a table in SAP HANA. To do this, we expand the Catalog folder, as all
tables are always grouped under the catalog.

On expansion, you will see another kind of folder, shown by the red arrow
below. These are called "Schemas". They are usually used to group tables of a
similar source or purpose. Each username gets its own schema by default, but
you can also create one yourself.
Creating a schema is quite a simple activity. It involves one line of SQL.

Click anywhere in this tree of folders and you'll see the SQL button marked
below become active. Click on it to open the SQL editor.

The new SQL editor that opens up below is where we will type in the code. The
code is as simple as basic English.

The syntax is CREATE SCHEMA "<Schema_Name>". Let's call the schema
0TEACHMEHANA. I am using the 0 prefix so that when the system sorts schemas,
this one comes up somewhere at the top, making it easier for me to screenshot.
You are free to use any naming applicable to your project.
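
For our example, the statement is simply:

CREATE SCHEMA "0TEACHMEHANA";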

Note: Any system object like table names, field names, procedure names etc.
should be enclosed in double quotes, and any values for character fields, like
'Apples', go in single quotes.

Press the execute button marked below, or F8 on your keyboard, to create this
schema. The log below confirms that the execution was performed without any
errors.

HANA doesn't auto-refresh folder trees, and we have to refresh them manually
to see newly created objects. Right click on Catalog and select Refresh from
the context menu as shown below.

As seen below, our schema is now available.

Creating a SAP HANA table using the graphical method

Now that we have a schema, let's create a HANA table under it using the
graphical method and name it CUST_REV_GRAPH.

To do this, expand the schema, right click on the Tables folder and select
"New Table" from the context menu.

This brings up the screen shown below. Here we need to provide a table name
and the required fields along with their data types (if you are not aware of
the major data types in SQL, click here for my tutorial on the topic). The
schema name is pre-filled, as we right clicked inside this schema. Also, the
type is always column store by default (you can switch to row store via the
drop-down, but this is not recommended for analytical applications; to know
more about row and column stores, follow this link).

As seen below, I've filled in the table name, the first column name as
CUST_ID, the data type as NVARCHAR (as discussed in our data type tutorial, we
don't use VARCHAR as it has no Unicode storage) with a length of 10 for this
string, and marked this field as a key of the table. A field should be marked
as key when it, by itself or together with the other key fields, identifies a
row of data uniquely. Since there will be only one row per customer ID in my
table, I mark this as key. As soon as you enable the Key column, Not Null is
enabled automatically, as a key field can never be null. An optional comment
can also be added as a description of the field. When done, press the green
plus symbol marked below to add another field, and similarly add all required
fields for the table.

Note: Null, denoted by '?' in HANA's default settings, represents the
non-existence of a value for that field. Null is not zero or an empty string
'' but a symbol for the non-existence of any value.
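
To make the distinction concrete, here is a small illustration using the table
we are building (rows where REVENUE_USD has no value are found only by the
second query):

-- this does NOT match rows where REVENUE_USD is NULL
SELECT * FROM "0TEACHMEHANA"."CUST_REV_GRAPH" WHERE "REVENUE_USD" = 0;

-- NULL must be tested with IS NULL
SELECT * FROM "0TEACHMEHANA"."CUST_REV_GRAPH" WHERE "REVENUE_USD" IS NULL;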
After adding all the fields, press the green Execute button to create the
table. The messages on top confirm our success in doing so.

Refreshing the "Tables" folder under our schema will reveal this new
achievement. Congrats! You just created your very first HANA table, as shown
below. Be sure to check out the tutorials on creating tables via SQL script,
HDB tables and CDS tables. Those are alternate methods and are really
important to your growth as a HANA developer.

Please share this document on social media using the share buttons below to
show your support for the website and subscribe to our newsletter for the
latest updates.

Until Next time – Happy learning!


Creating an SAP HANA table with SQL Script

Welcome to the next SQL script for SAP HANA tutorial for beginners. In this
tutorial, we will create an SAP HANA table with SQL script. This is a
continuation of our series outlining the different ways in which you can
create tables in SAP HANA. It is highly recommended that you read the first
part, where we discuss the graphical method.

To recap, the main methods to create a table are:

1. Using the graphical method, which has already been discussed.
2. Using SQL script, which will be discussed here.
3. Using the HDB tables method, discussed in a separate tutorial.
4. Using the HDBDD method, discussed in a separate tutorial (recommended method in non-XSA projects).
5. Using the HDBCDS method, which will be discussed in a future tutorial (recommended method in XSA projects).

SAP HANA Table: HANA Create statement Syntax

Let's create the exact same table that we did using the graphical method, with
the fields customer number, first name, last name and revenue in USD. The
syntax to be used is:

CREATE <table_type> TABLE "<table_name>" (
"<field1>" <data type 1> <Nullability_criteria>,
"<field2>" <data type 2> <Nullability_criteria>,
...
"<fieldN>" <data type N> <Nullability_criteria>,
PRIMARY KEY ("<primary_key_field1>", "<primary_key_field2>")
);

What does this even mean?

<table_type> can have many values, but the important ones are ROW and COLUMN.
I have discussed what these are in a separate tutorial. The default is a ROW
table, and hence even if you do not write ROW, a row table will get created.
But for analytical applications we prefer COLUMN tables, and hence
<table_type> should be COLUMN in all analytical requirements.

<table_name> is self-explanatory: it is the name of the table you wish to
create. The important point to note here is that the schema where the table
needs to be created must also be specified in this statement. For example, if
you need to create the table ORANGES under the schema FRUITS, then
<table_name> would be "FRUITS"."ORANGES".

<field> refers to the names of the fields you wish to add to this table.

<data type> refers to the data type of the field you are trying to add.

<Nullability_criteria> takes the value NOT NULL if the field is a primary key
or if you want to restrict the incoming data in this table to have no NULL
values.

<primary_key_field> takes all the primary key field names, separated by
commas.

Note: Always be careful with the position of the brackets. Try to follow the
syntax shown above. Also, end all SQL statements with a semicolon (;).

SQL Script example to create a SAP HANA table

With this logic, our SQL code for a new table (which we cleverly name
CUST_REV_SQL) with the same structure as the one we built in the graphical
table tutorial would look as below.

CREATE COLUMN TABLE "0TEACHMEHANA"."CUST_REV_SQL"
("CUST_ID" NVARCHAR(10) NOT NULL,
"FIRST_NAME" NVARCHAR(20),
"LAST_NAME" NVARCHAR(20),
"REVENUE_USD" INTEGER,
PRIMARY KEY ("CUST_ID"));

Open the SQL editor by clicking somewhere in the schema tree and then pressing
the SQL button as it becomes active.

Let's copy this code into the SQL editor and press the execute button marked
below, or press F8 after selecting the code. The message for successful
execution is shown below. Now we refresh the Tables folder to see this newly
created masterpiece.

The new table has been created, as seen, and double clicking on it reveals
that the structure is also as we intended it to be.
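
If you want to convince yourself further, you can optionally insert a test row
and read it back (the sample values here are, of course, made up):

INSERT INTO "0TEACHMEHANA"."CUST_REV_SQL" VALUES ('C001', 'John', 'Doe', 5000);
SELECT * FROM "0TEACHMEHANA"."CUST_REV_SQL";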
This concludes this tutorial on creating tables with SQL script in SAP HANA.
Please read the next part of the table creation series – creating HDB tables.
You will really find it fascinating.

If you liked the content, please support this website by clicking the share
buttons below to share it across social media and also subscribe for updates
on new content.

Happy Learning.
Creating an SAP HANA HDB Table
Welcome to the next tutorial in the series explaining different ways to create
tables. In this one, we learn how to create an SAP HANA HDB table. It is
recommended that you read the first two parts as well.

To recap, the main methods to create a table are:

1. Using the graphical method, which has already been discussed.
2. Using the SQL script method, which has already been discussed.
3. Using the HDB tables method, which will be discussed here.
4. Using the HDBDD method, discussed in a separate tutorial (recommended method in non-XSA projects).
5. Using the HDBCDS method, which will be discussed in a future tutorial (recommended method in XSA projects).

To start off, if you remember my earlier tutorials, I recommended always
remaining in the "SAP HANA Development" perspective of HANA Studio. For this
tutorial, it is mandatory that you are in the development perspective to move
forward. Read that tutorial again if you don't remember how to switch
perspectives.

Grouping Development objects – Creating a SAP HANA Package

Until now, we have done everything in the Catalog folder, but the HDB table
code is to be written in the Content folder. All such objects, and the future
analytical objects that we create, will live in the Content folder. All
developments have to be grouped under a "Package". A package is just another
layer of grouping for all development and modeling objects. Inside Content,
let's create a new package for all our future developments. To do this, right
click on the Content folder and follow the path shown below to click on
"Package".

Give it a name and description and press OK. I am prefixing a zero to make
sure my package comes up on top, as sorting goes by symbols, then numbers,
then alphabets. You can provide any relevant name.

Then move to the Repositories tab as shown below. If you are going here for
the first time, you need to import the workspace to your local machine. To do
this, right click on the (Default) repository and press "Import Remote
Workspace", as shown below.

Provide a folder for this to be imported to.

Once done, you will notice a small tick symbol on top of the repository icon,
as marked below.

Creating an SAP HANA HDB table in the repository

Now, right click on the newly created package and select "Other" from the
context menu path shown in the illustration.

Note: If you don't see the Repositories tab and are lost at this point, it
means you are not in the Development perspective. You can check this at the
top right of your HANA Studio, as shown below.

A wizard opens up. Write 'table' in the search bar marked by the red arrow as
shown below. The objects with "table" in the name show up. Select Database
Table as circled below and click Next. Provide a table name as shown below and
leave the template blank. This opens up a blank editor screen wherein you need
to put in the HDB table code. This is simple code, although it is not SQL; it
is a HANA-internal syntax.

The syntax to be used is:

table.schemaName = "<Schema_Name>";
table.tableType = <Type_of_table>;
table.columns =
[
{name = "<field1>"; sqlType = <SQL_Datatype1>; length = <Length_of_characters>; comment = "<Optional_Description>";},
{name = "<field2>"; sqlType = <SQL_Datatype2>; length = <Length_of_characters>; comment = "<Optional_Description>";},
{name = "<fieldN>"; sqlType = <SQL_DatatypeN>; length = <Length_of_characters>; comment = "<Optional_Description>";},
];
table.primaryKey.pkcolumns = ["<primary_key_field1>", "<primary_key_field2>"];

What does this even mean?

<Schema_Name> refers to the target schema where you wish the table to be
created.

<Type_of_table> can take two values: ROWSTORE and COLUMNSTORE. You should
ideally keep this as COLUMNSTORE unless there is a special requirement
otherwise. To know the difference between ROWSTORE and COLUMNSTORE, click
here.

<field> refers to the name of the fields you wish to add to this table.

<SQL_Datatype> refers to the datatype of the field.

length = <Length_of_characters>; signifies the length of your field. Please
omit this for INTEGER types, as they don't require a specified length.

comment = "<Optional_Description>"; is an optional section specifying the
description of a field.

<primary_key_field> takes all the primary key field names, separated by
commas.

Let's create the same table structure we created in the other cases. To
download the code I used below, click here.

table.schemaName = "0TEACHMEHANA";
table.tableType = COLUMNSTORE; // ROWSTORE is an alternative value

table.columns =
[
{name = "CUST_ID"; sqlType = NVARCHAR; length = 10; comment = "Customer ID";},
{name = "FIRST_NAME"; sqlType = NVARCHAR; length = 20; comment = "Customer First Name";},
{name = "LAST_NAME"; sqlType = NVARCHAR; length = 20; comment = "Customer Last Name";},
{name = "REVENUE_USD"; sqlType = INTEGER; comment = "Revenue generated in USD";}
];

table.primaryKey.pkcolumns = ["CUST_ID"];

According to the above syntax, the code would be as below. Press the activate
button marked by the arrow below.

This should create the table in the 0TEACHMEHANA schema. Once built, it
behaves the same as a regular table. Go back to the Systems tab, and in this
schema, go to the Tables folder and refresh it.

You will notice that the table has been created. It works the same way but has
some additional capabilities that we will discuss in upcoming tutorials. Also,
you might notice that the package name is automatically prefixed to the table
name. That's something exclusive to HDB tables.

Double clicking on the HDB table shows the structure has been defined as
required.

This marks the successful completion of our tutorial.

HDB tables are the recommended form of tables to be used whenever custom
tables are to be created. The only reason I use the other forms of table
creation is for quickly creating and testing something, but for actual
deliverables, only HDB tables should be created.

Edit (25-Jun-2017): Only CDS based tables are the best practice as of this
date. These can be HDBDD tables (described in the next tutorial) or HDBCDS
tables (for XSA based projects).

Their advantages will become clearer in our further tutorials, so stay tuned
for new posts.

Creating HDBDD Table using CDS method in SAP HANA

Hi all. Welcome to a new tutorial on creating a CDS based HDBDD table in SAP
HANA. In this one, we learn to create a table using the SAP HANA CDS (Core
Data Services) method. The two recommended methods today to create a custom
table in SAP HANA are to use an HDBDD file or to use an HDBCDS file (when you
have an XS Advanced, or XSA, installation).

The methods to create a table elaborated on this website are:

1. Using the graphical method, discussed in a separate tutorial.
2. Using the SQL script method, discussed in a separate tutorial.
3. Using the HDB tables method, discussed in a separate tutorial.
4. Using the HDBDD method, discussed here (recommended method in non-XSA projects).
5. Using the HDBCDS method, which will be discussed in a future tutorial (recommended method in XSA projects).

For anyone in a hurry, here’s the video version of this tutorial on YouTube. If
you prefer a written one, please read on. Also, please subscribe to our
YouTube channel to get video tutorials weeks before the written ones.

CDS HDBDD table: Using Web Development Workbench

We are using the Web Development Workbench to create this HDBDD file. You can
also do this in the SAP HANA Studio under the Repositories tab.

Once you open the SAP HANA Web Development Workbench, click on the Editor to
open the Editor section. It opens up showing the packages you are authorized
to view.

Right click on the package where you wish to develop this code.

From the context menu, choose New -> File.

A window opens up asking for the file name. We provide the file name TABLES
with the extension .hdbdd.

The editor opens up.

Now, we paste the below code into this editor to create our table.

namespace TEACHMEHANA;

@Schema: 'SHYAM'

context TABLES {

Entity CUST_REV_CDS {

CUST_ID : String (10);

FIRST_NAME : String (20);

LAST_NAME : String(20);

REVENUE_USD : Integer;
};

};

The namespace must name the package under which this file is created.

@Schema defines the schema in which the created table(s) will reside.

The main context must match the file name that was given at the time of HDBDD
creation.

A table is a persistent entity in SAP HANA CDS, and hence the statement
Entity CUST_REV_CDS declares a table (entity) called CUST_REV_CDS.

The next part declares the columns in this table. Notice that the data types
differ from regular SQL. There is no NVARCHAR; instead, the declaration uses a
String datatype. This is because CDS has slightly different datatypes.

CUST_ID : String (10);

FIRST_NAME : String (20);

LAST_NAME : String(20);

REVENUE_USD : Integer;

};

SAP has an online page dedicated to the datatype mappings, which you can refer
to in this regard. Click here to reach that page. A screenshot of that page is
below.
Once done, right click on the file and click "Activate". The cross symbol
disappears from the file, confirming that it is now active. The table
CUST_REV_CDS should now have been created in the SHYAM schema, as defined.

Now come back to the Web Based Development Workbench and click on the Catalog.
Expand the schema and its Tables folder.

We see that the table has been created successfully.


This table can also be loaded using the hdbti method, in the same way as HDB
tables.

I hope that the SAP HANA CDS HDBDD table concept is now clear to you. Please
share this tutorial on social media to help support the site. Also, comment
with your thoughts and questions.

Until next time…

SAP HANA Web Based Development Workbench

Hello all. By this time, you are probably used to the SAP HANA Studio, the
Eclipse based tool we spoke about and used extensively. But did you know that
SAP's future road map doesn't contain client tools like HANA Studio? They
continue to press on with web based tools accessible via any browser without
any additional installations. Imagine if all you had to do to start developing
SAP HANA models was to find a random laptop lying around, open your browser
(I'll wait for you to open it, just in case you're on IE) and then log in with
your HANA username and password on a link that your admin provides you. No
need to imagine further, because that day came a long while ago with the SAP
HANA Web Based Development Workbench.

SAP HANA Web Based Development Workbench – What is it?
Most projects still use the SAP HANA studio because it works. And as the IT gods say – “Don’t touch anything that is working”. The SAP HANA Web Based Development Workbench is a web based version of the SAP HANA studio’s development and (partially..) admin perspectives.

To log in to the SAP HANA Web Based Development Workbench, you need to use the below URL:

https://<hostname>:<port>/sap/hana/ide

Many also call this the SAP HANA Web IDE, which is a mistake, as the SAP HANA Web IDE is a separate tool used mostly for UI5 and XS application development.

The SAP HANA Web Based Development Workbench when opened looks as
shown below
As seen above, there are 4 sections

1. Editor : SAP HANA Web Based Development Workbench

The editor is the same as the Content folder we had in HANA studio. Inside it, you have the packages where you can create your HANA information views, stored procedures and other objects.

The one big difference here is that you will no longer be allowed to create obsolete objects like attribute and analytic views, which your SAP HANA studio still allows. This prevents legacy objects from being created by ill-informed developers.

Everything else looks and works the same.


2. Catalog: SAP HANA Web Based Development Workbench

The catalog provides you access to the Catalog and Provisioning folders you had in SAP HANA studio. Again, it will not allow you to create catalog tables by right-clicking and choosing New-> Table as in the HANA studio, because we are in the age of CDS tables and catalog tables are not recommended.

We finally have the option of right clicking and creating a new schema by the
way. I always wondered why they missed that in HANA Studio.
3. Security : SAP HANA Web Based Development Workbench

This link is almost the same as our security folder in SAP HANA Studio, with some new features added. The SAP Security team would use this to add/modify users and their roles.
4. Traces: SAP HANA Web Based Development Workbench

This is the link where an authorized user can view different traces of
operations going on within the SAP HANA Database.

To conclude, I would say that the SAP HANA Web Based Development Workbench is catching on as the recommended development environment, as a lot of new features are now being delivered exclusively here. The SAP HANA Studio had a nice run, and although development in a web browser can get a bit annoying – particularly with drag and drop operations – I would still recommend that all developers use the SAP HANA Web Based Development Workbench for all their developments from now on and get comfortable with it, because this is the future and the future is now.

Please share this tutorial on social media and support this site.

I’ve also made a YouTube video on this topic. Please check it out and subscribe
to the channel as well.

Until next time.


Insert data into custom tables –
Flat file loads to SAP HANA
Welcome to the next tutorial on SAP HANA. In this tutorial, we again start a three-part series where we understand the different ways of loading data into tables created in SAP HANA. In our previous tutorials, we created a custom table using three methods – graphically, via SQL script, and via the HDB Table method. I refer to them as custom tables here as they were created in HANA and not brought into HANA from any external database. Now that we have these tables, we have a greater question at hand – how do we load data into them? This tutorial explains how to load a flat file into SAP HANA without any coding whatsoever.

Let’s create our flat file first – A CSV file


The first step is to create a flat file – a comma-separated value (CSV) file.

The easiest way to do this is with MS Excel, entering some data as shown below.

We proceed to save this file, making sure to choose the file type as CSV (Comma Delimited).
As seen below, I have given it a file name and a file type.
You would get the below message. Press Ok.

Then the below pops up. Press Yes.


Exit MS Excel. When trying to do so, a pop-up will ask you if you want to save this file. Press “Don’t Save” in this case. Trust me – your file is already saved. If you save at this point, Excel sometimes corrupts the CSV format.

Now after you are done with this, let me explain why this file type is called
CSV or Comma separated value.

Checking the CSV file delimiter


To understand this, go to your file and open it in notepad. It is quite easy to
accomplish. Right click on the file you created and then select “Open With”
from the context menu.

Choose program as Notepad and press Ok.


You would be able to see the file open in Notepad now. As you can see, each column of data is separated by commas, and hence the name CSV is appropriate. Please always check the file in Notepad, as some of the newer versions of Excel save CSVs separated by semicolons (;) by default instead. It is not a problem, but you should be aware of what the separator is.

Now, make sure you close this file before proceeding further. Open files can
cause errors in upload. Once closed, Open your HANA studio.
Uploading this CSV flat file to SAP HANA
To start the flat file import, go to File -> Import as shown below.

Select “Data from Local file” from the SAP HANA Content folder and click next.
Select the target HANA system. In our case, the name is HDB but you might
have multiple HANA systems added in your HANA studio and in that case, you
would choose the system where you are importing this data to. Press next
when the system has been selected.
This will open up the below pop-up. Firstly, click on browse to select the file
you wish to upload.

Once this is done, many new options become active. I have numbered them to explain them in a better way.

Mark 1 is the delimiter used in our CSV file. As we saw, in our case this was a comma, but if it is something else like a semicolon for you, choose it from the drop-down.
Mark 2 needs to be checked if there is a header row in our data. Header rows
are the rows containing names of the columns. These should not be considered
as table data and to have them ignored, this check box needs to be checked.

Mark 3 needs to be checked if you want all the data from this file to be imported. If you only want partial data to be imported, uncheck this and provide the start line and end line numbers between which the data will be imported.

Mark 4 indicates the target table section. Here you tell HANA whether you
want this data imported to a new table or an existing table. In our case, let’s
import this data to our table created using the graphical method –
CUST_REV_GRAPH. To do this, click on the select table button.
Find the table under your schema and press OK.

The overall settings should now look like the below. Press Next.
This takes you to the mapping window where you tell HANA as to what field
from the CSV file goes where in the corresponding HANA table.
Drag and drop the source field to its corresponding target field to get the
mappings as below.
The screen below now shows the data preview of the import that we are trying to make, just so you can be sure that you have not mapped the wrong source field to the target.
The log shows no red error entries, hence the import is successful and this data should now have loaded to the table.

Right click on the table and click “Open Data preview” to check the data.
Below data confirms that we have correctly imported this table.

There are two other ways to fill in data to tables but since they include some
coding, I will be posting them in the SQL section. These methods are:

1. Adding data to a SAP HANA Table using HANA SQL script INSERT statement.
2. Linking a CSV file to a HDBTable using HDBTI Configuration file (Recommended
method)

This method and the SQL INSERT method are usually used for quick analysis
but anything that you do for a client that involves customized tables should
be done on HDB tables and with HDBTI config files explained in the above
linked tutorial. Be sure to check them out too.

Help this website grow by sharing this document on social media by using
the icons below.

Happy Learning!
SAP HANA SQL Script INSERT
Welcome to the next tutorial, explaining the usage of the SAP HANA INSERT statement to insert new records into a HANA table. This method can be applied on any table, no matter how it was created – graphically, by SQL CREATE, or as an HDB Table.

To start, we need the SQL editor. Click somewhere in the tree under the
system so that the SQL button marked below becomes enabled. Press the
button to bring up the SQL console.

As always, the blank console opens up.


INSERT Data into HANA Table
The syntax for SAP HANA INSERT statement is as below:

INSERT INTO <table_name> VALUES (value1, value2, … valueN);

Note: Always remember that <table_name> refers to the table name prefixed with the schema name, since there can be tables of the same name in different schemas.
Repeat this statement as many times as the number of rows you wish to add. As seen below, I am adding data to the CUST_REV_SQL table which is in the 0TEACHMEHANA schema. System object names like schema names, table names and field names should be wrapped in double quotes “ ”; data values of character data types MUST always be wrapped in single quotes ‘ ’, whereas numbers do not require any quotes at all.
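As a minimal sketch – assuming the table keeps the same four columns as our recurring example (CUST_ID, FIRST_NAME, LAST_NAME and REVENUE_USD) and the row values are purely illustrative – the statements would look like this:

-- schema and table names in double quotes, character values in single quotes
INSERT INTO "0TEACHMEHANA"."CUST_REV_SQL" VALUES ('C001', 'John', 'Doe', 500000);
INSERT INTO "0TEACHMEHANA"."CUST_REV_SQL" VALUES ('C002', 'Maria', 'Jones', 250000);

Note how the integer revenue at the end carries no quotes at all.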

As seen from the log below, there are no errors. This means that the data was
inserted successfully.

Now to check if it worked


To confirm, right click on the table and click on data preview.
The raw data tab below confirms that the data is now present in the table.

And if you are wondering – Yes. It was that easy to do this. SQL for HANA is
fairly simple if you understand what you are doing instead of just trying
random experiments off the web. This concludes our second part of the SAP
HANA customized table data load series. Be sure to check out the other two
tutorials as well which are:
1. Loading flat files to SAP HANA in the Enterprise HANA section
2. Linking a CSV file to a HDBTable using HDBTI Configuration file
(Recommended method) which is our next tutorial as well.

Help this website grow by sharing this document on social media by using
the icons below.

Happy Learning!

Import flat file (CSV) with HDB Table – Table Import Configuration
Welcome again to the next tutorial where we learn how to link a CSV file to
an HDB table. This is the third and last part of the data load to HANA table
tutorial series. This method only works with HDB Tables. Although this is the
longest method, it is the recommended one. Why? Read on to understand.

Be sure to check out our other tutorials on the other data load methods to
SAP HANA:

1. Via flat file loads to SAP HANA


2. Using SAP HANA SQL Script INSERT statement

Importing CSV data file to SAP HANA


repository
In this approach, the CSV file will reside on your SAP HANA server and not the
local desktop. Open HANA studio and go to the repositories tab. If you don’t
see this tab, you are not in the developer perspective.

Under your package, right click and go to New-> Other from the context
menu.
The below window pops up wherein you should select “File” and click Next.
Enter a file name. I have provided it as customer_revenue.csv. Press Finish
when done.
Depending on your HANA Studio default configuration, either a text editor opens up or an Excel window will be opened. I prefer not to use Excel for CSV files as it tends to corrupt the CSV format sometimes. So instead of filling in the table data here, we just save it blank.
On pressing save, you get the below message. Press Yes.

Now, exit Excel; on doing so, Excel throws the below message. Press “Don’t Save” here.

You would notice a new file can be seen inside our package now. The grey
diamond symbol on the file icon means that it is currently inactive.
Right click on the file and then press Open With -> Text Editor.

Sometimes you would get an error on the right side pane saying “The resource
is out of sync with the file system”
In such cases, right click on the file name and click refresh to fix this problem.

Now in the editor that opens up, we paste the CSV data from our tutorial on
flat file upload, copied by opening it in notepad.
Once done, press the activate button marked below.

Now the CSV file would have become active. Notice that the grey diamond
icon has gone away.

Now we have an HDB table we created in an earlier tutorial by the name CUST_REV_HDB.
Linking the SAP HANA HDB table with this CSV
file – HDBTI configuration
To do this, stay in the repository tab and right click on your package. Again
select New-> Other as shown below.

Write ‘configuration’ in the search bar as shown below. A list of options will
open up. Click on Table Import Configuration inside the Database
Development folder and press Next.
Provide a name to this configuration file and press Finish.
The editor opens up on the right and a file is created in the package as well. Notice the grey diamond again. This means that this file is inactive.
The syntax of this hdbti file is as given below (each import block sits inside curly braces within the import list):

import = [
    {
        hdbtable = "<hdbtable_package_path>::<hdbtable_name>";
        file = "<csvfile_package_path>:<csvfilename>";
        header = <header_existence>;
        delimField = "<delimiter_symbol>";
    }
];

<hdbtable_package_path> refers to the package where your HDB table was created. If your package was called Fruits, your package path would be Fruits. But if you had a package Orange inside the package Fruits and the HDB table was inside this Orange package, the path would be Fruits.Orange.

<hdbtable_name> Refers to the table that we wish to link this file to

<csvfile_package_path> Package path where your csv file was created.

<csvfilename> The filename of your CSV flat file.

<header_existence> can take the value true if you have a header row in your CSV file. Otherwise, it’s false.

<delimiter_symbol> refers to the data separator/delimiter, which in our case was a comma. Check your data file carefully to understand whether it is a comma or a semicolon or anything else. Enter it here.

In our case, the code would look like the below.
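As a sketch – assuming the package is TEACHMEHANA (as in our CDS tutorial), the table is CUST_REV_HDB and the file is the customer_revenue.csv we just activated – the hdbti file would read:

import = [
    {
        hdbtable = "TEACHMEHANA::CUST_REV_HDB";
        file = "TEACHMEHANA:customer_revenue.csv";
        header = true;
        delimField = ",";
    }
];

header is set to true here on the assumption that the pasted CSV kept its header row; use false if yours has none. Once done, press activate.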
As seen below, the grey diamond symbol has gone away from the hdbti file.
This confirms that it’s active.

To confirm if the data is now linked, we go back to the systems tab and right
click on our table to do a data preview.
The Raw data tab confirms that it is working.

This concludes our three-part tutorial on data loads to custom tables in SAP HANA. For client delivery, always use this method, as the CSV file can be transported to further systems along with the table, and it means that the flat file always resides on your HANA server and not on a local desktop.

Help this website grow by sharing this document on social media by using
the icons below.

Happy Learning!

SQL Join Types in SAP


Hello one and all. Welcome to the next edition of our SAP HANA Tutorial
series. This is another one of our checkpoint courses where we understand
one of the basic concepts of database design – SQL Join Types in SAP. Since
this is a common tutorial, all of these types may not be applicable to ABAP,
BW and Enterprise HANA. Each of them supports some of these which we will
understand in each of those courses individually. But for now, this tutorial is
to understand what joins mean.

Most of the time, one table does not contain all the data needed for analyzing the problem. You might have to look into other tables to find the other fields that you need.
that you need. The primary way to achieve this is via joins. But there are
different types of joins. Let’s look at each of them with an example.

Fruits make everything simple.. even SQL Join Types
Five students in a class are asked to input their favorite fruit and vegetable
into two database tables. They are free to not enter any fruit or vegetable if
they don’t like any.

Table 1: The Fruit table

Student Fruit

Shyam Mango
John Banana

David Orange

Maria Apple

Table 2: The Vege-Table

Student Vegetable

Shyam Potato

David Carrot

Maria Peas

Lia Radish

After the data entry is complete, we can see that Lia doesn’t like any fruit and
John doesn’t like any vegetable apparently.

SAP SQL Join Types #1 : Inner Join


Now, let’s say our first analytical requirement is to get a singular view of all
the students with fields (Student, Fruit, Vegetable) who like at least one fruit
and at least one vegetable. This is a classic example of an INNER JOIN.

Student Fruit Vegetable

Shyam Mango Potato

David Orange Carrot

Maria Apple Peas


An INNER JOIN returns a view with the result set that is common to all involved tables.
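If you prefer to see this as SQL, here is a minimal sketch – assuming the two tables are simply named FRUITS and VEGETABLES, each with a STUDENT column:

-- rows are returned only where a student exists in both tables
SELECT F."STUDENT", F."FRUIT", V."VEGETABLE"
FROM "FRUITS" F
INNER JOIN "VEGETABLES" V
  ON F."STUDENT" = V."STUDENT";

Swapping INNER JOIN for LEFT OUTER JOIN, RIGHT OUTER JOIN or FULL OUTER JOIN in this same statement produces the result sets discussed in the sections that follow.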

SAP SQL Join Types #2: Left Outer Join


The next requirement we have is to get a singular view of all the students with
fields (Student, Fruit, Vegetable) who like at least one fruit even if they don’t
like any vegetables. This type of join is a LEFT OUTER JOIN.

Student Fruit Vegetables

Shyam Mango Potato

John Banana ?

David Orange Carrot

Maria Apple Peas

A LEFT OUTER JOIN returns all the entries in the left table but only matching entries in the right.

Note: The ‘?’ in the data is a NULL entry. As discussed earlier, NULL denotes the existence of nothing. NULL is not blank or zero; it’s just nothing. Since John likes no vegetables, a null value is placed there. In SAP BW and ABAP output, NULL may be represented by ‘#’, whereas in Enterprise HANA, ‘?’ is the default display of NULL values.

SAP SQL Join Types #3: Right Outer Join


Next, we now require a singular view of all the students with fields (Student,
Fruit, Vegetable) who like at least one vegetable even if they don’t like any
fruits. This is exactly the converse of our previous requirement. This type of
join is a RIGHT OUTER JOIN.

Student Vegetables Fruit

Shyam Potato Mango


David Carrot Orange

Maria Peas Apple

Lia Radish ?

A RIGHT OUTER JOIN returns all the entries in the right table but only matching entries in the left.

Note: This is kind of a redundant type of join as the position of these tables
can be reversed and a LEFT OUTER JOIN can be applied to achieve the same
results. For this same reason, RIGHT OUTER JOIN is rarely used and is also
considered a bad practice in terms of performance of SAP HANA views. So try
to avoid using it.


SAP SQL Join Types #4: Full Outer Join


Next, we require the data of all the students with fields (Student, Fruit,
Vegetable) who like at least one fruit or at least one vegetable. This is a classic
example of an FULL OUTER JOIN.

Student Fruit Vegetables

Shyam Mango Potato


John Banana ?

David Orange Carrot

Maria Apple Peas

Lia ? Radish

A FULL OUTER JOIN returns all the data from all involved tables regardless
of whether they have matching values or not on the join condition.

This one is rarely used, as we don’t usually need all the key values from both tables. As seen from the example, this results in a lot of nulls. Nevertheless, the need does arise on rare occasions.

The main types of joins actually used in real-time scenarios are INNER JOINs and LEFT OUTER JOINs. LEFT OUTER JOINs are the preferred join type in terms of performance, as they require scanning only one table to complete the join.

SAP HANA SQL Script Specific Join Types


There are two other join types available in enterprise HANA, called REFERENTIAL JOIN and TEXT JOIN. A REFERENTIAL JOIN is the same as an INNER JOIN but better in performance. However, it should only be selected when it is certain that referential integrity is maintained – which, in regular human English, means that all values for the join condition fields in the left table exist in the right table for their corresponding fields as well. This is very difficult to guarantee, and hence, due to the risk involved, a REFERENTIAL JOIN is best left untouched.

A TEXT JOIN is used for tables containing descriptions of fields. The text
tables/descriptive tables are always kept on the right hand side and their
language column needs to be specified.

A TEXT JOIN usually looks like the below.

Key Table:

Student   Vegetable Code
Shyam     PO
David     CA

Text Table:

Vegetable Code   Description   Language
PO               Potato        EN
PO               Kartoffel     DE
CA               Carrot        EN
CA               Karotte       DE

These join to become the below table, with descriptions of the same vegetable in two different languages – English and German:

Student   Vegetable Code   Description   Language
Shyam     PO               Potato        EN
Shyam     PO               Kartoffel     DE
David     CA               Carrot        EN
David     CA               Karotte       DE
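In SQL terms, a text join behaves like a left outer join on the code column, with the language column additionally restricted to the session language at run-time so that each user sees descriptions in his or her logon language. A rough sketch that reproduces the combined table above (the table and column names here are illustrative):

SELECT K."STUDENT", T."VEG_CODE", T."DESCRIPTION", T."LANGUAGE"
FROM "KEY_TABLE" K
LEFT OUTER JOIN "TEXT_TABLE" T
  ON K."VEG_CODE" = T."VEG_CODE";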
Note: As you might have already noticed, I have referred to the resultant output of joining tables as “views”. View is the actual technical term for this result set. A view is just a virtual output of a join and is not persisted/stored on the disk like a table. The result set of a view only exists during run-time.

Thank you for reading this tutorial on joins and if you liked it, please show
your support by sharing this document across social media by pressing the
share buttons below and also don’t forget to subscribe to our newsletter for
alerts on new tutorials that are added regularly.

Happy Learning!

Graphical Calculation View: Part 1


Posted on June 9, 2016 by shyamuthaman

Understanding a Graphical Calculation View in SAP HANA
Hi everyone and welcome to the most important tutorial in this series where
we learn how to create a graphical calculation view. Calculation views form
the primary core reporting data model developed in SAP HANA. Some of you who have already done some reading or training on SAP HANA might be wondering why I haven’t taught attribute views or analytic views before coming to this, as other tutorials available online do. The answer is quite simple – creating them is no longer part of HANA development best practices. There are still reasons to know them, as some projects might have them from older versions. But I will cover them after I finish the
important stuff. Any development object that exists but is no longer
recommended to be used will be covered in some legacy tutorials later on. For
now, let’s focus on the important stuff.

Graphical Calculation View – What is it?


As mentioned earlier, this is the SAP-recommended data model and it should cover most of your development needs. It offers most functionalities that you might need to create a meaningful data model for your reports to consume. In the rarer cases where the requirements are beyond its capabilities, you can switch to the SQL-based solution of table functions (a newer version of scripted calculation views), but those are separate tutorials covered in the SQL Script section.

The beauty of a graphical calculation view lies in the fact that it offers powerful calculations which can be used by a developer with no coding experience whatsoever; hence, the learning curve is quite gentle.

Let’s learn with an example

There are lots of things you can do with a graphical calculation view and so I
decided to split this tutorial into parts. This part creates a simple graphical
calculation view by joining two tables. If you are new to the concept of a
database JOIN, click here to read our tutorial on it

Let’s take a real business scenario using sales document data as an example.
Sales documents have two parts – A header and an item. If you are new to
this concept, you can visualize this in the form of any bill that you have
received till date. Such a bill has a header/top part that always remains
constant providing probably the company name, address and some more
header level information. Thereafter, there is an Item section which contains
individual items that you have ordered. In SAP, header and item details are
often stored in separate header tables and item tables. Our example will utilize
the sales document header table – VBAK and the sales document item table
VBAP. These are two of the most commonly used tables for analysis in actual
projects.

As mentioned, these are standard SAP tables and will be part of SAP ECC
installations. From our data provisioning tutorial, you already know that the
most popular way of provisioning data from SAP source systems is via SLT
replication which would provide these tables to a schema under your catalog
folder. Generally, replicating tables is not part of a developer’s job, for reasons like data security and the fact that it is a sensitive process. Developers would request the tables they need for a data model, and these would then be provided by the project system administrators responsible for the SLT replications.

In our example, ECC_DATA is the schema marked by the arrow below that
houses all our SAP ECC tables. In your project, it might be something else.
Check with your project administrator to find the correct schema. You can
choose to use any table from any other schema to practice this if you don’t
have an ECC connected.

To create a new graphical calculation view, right click on your package and
select New-> Calculation view from the context menu as shown.
The below window pops up asking for a technical name and a label. The Name is important and should be a meaningful technical name as per your project naming convention, whereas the label is just a description. The name gets copied to the label automatically, but you can change it to whatever you find to be correct. The “Type” drop-down remains on Graphical by default and can be changed to “Scripted”, in which case you have to code the entire view in SQL Script. The data category is CUBE by default and should be selected when your data is expected to have measures, which means that your data model is built to analyze transaction data. If there is no measure involved, select DIMENSION as the data category, in which case you tell SAP HANA that the data model will be purely based on master data. If you do not understand the difference between master and transaction data, click here to revisit that tutorial.
We name our graphical calculation view SALES_VIEW and keep the other
settings as they are as they suit our requirement.
So our requirement is to take two tables, VBAK and VBAP, from the schema ECC_DATA and join them to create a graphical calculation view. The below screen opens up, which is an easy drag-and-drop interface to build these graphical calculation views. Firstly, we need some placeholders to hold these tables. Such placeholders are called “Projections”. Drag and drop two of these into the screen as shown here.

These empty projections will look as shown below.

If you click on them and hover your mouse over them, you would get a green plus symbol as shown below. This helps you add objects to the Projection. A projection can be a placeholder for tables and other views as well.

On clicking the green plus symbol, you will get a window where you can find
the table you will add to this Projection. On the search area, write the name
of your table which in our case is VBAK. As seen below, the system provides
the list of all database objects which have VBAK in their name. We are looking
for the table VBAK under ECC_DATA schema which will be represented by
VBAK (ECC_DATA) as shown below. When you find the table you are looking
for, select it and press OK.
As seen below, the projection is no longer empty and houses the table VBAK. Click on the projection to open up the Details pane on the right side. Here you will see all fields associated with this table. In front of each field is a grey circle which allows you to select it for output from this projection node. This means that if you require the field VBELN from this projection, you will need to click on that grey circle; it then becomes orange, which means that it is now selected. Now, select a few fields from this table. Note that if you are using a table from an SAP source, always select the field MANDT if it is available. The presence of the field MANDT indicates that the table is client-dependent. I have explained what “client” means in terms of SAP tables in a tutorial for a different section. You can check it out here.

As explained above, the fields I selected have an orange circle in front of them.
That’s all we need to do here in projection 1. There are other options on the
right most side of the screen like Filters and more but we will come to those
in the following parts of this tutorial.

This tutorial continues on the next page, describing JOIN nodes in the graphical calculation view.
Graphical Calculation View: Part 2
Posted on July 15, 2016 by shyamuthaman

Graphical Calculation View Nodes – JOIN


Continuing the build of our first graphical calculation view, we repeat the same process to add VBAP to Projection_2, selecting some of the fields we need from this table as well.

At this point, these are two individual tables floating in space with no
interaction or relation whatsoever. Let’s change that. Bring in a Join block by
dragging it into the space from the left menu as shown below.
Now you have a floating JOIN block. You need to tell it what it needs to join. As explained earlier, we want to do a left outer join between VBAK and VBAP. See those little circles on the projection nodes: the bottom circle is an input connector and the top circle is an output connector. Drag and drop the output connector from Projection_1 into the input connector of Join_1 as shown below. Do the same thing for Projection_2. This means that you have taken the output of those projections and are using them as inputs to the join block. The input connector that is dropped first into the join becomes the left part of the join and the second one becomes the right. The join block only takes two inputs, so if we had a third table to join, we would need another join block.
The join block now houses the two projections.

Click on the Join_1 node to open up the join mappings on the right-hand side as seen below. Here, again select the fields that you want to send to the next level; as you see, the ones I selected are orange. Actually, I selected all of them except the ones that are grey – those are left unselected because they are duplicates, i.e. they exist in both tables and we only need them once. Drag and drop the fields from one table to the other based on which you wish to join the two tables. This creates a linking line between the two as shown below. In this case, we join using MANDT (Client) and VBELN (Sales Document Number) as the join conditions. Also, our requirement was to do a left outer join, and all join nodes provide an inner join by default. To change this, in “Join Type”, click on where the value is marked as ‘Inner’, as pointed by the green arrow below.

This opens up the drop-down in the Join Type section, where you can select Left Outer Join.

Now the join is complete. As we did earlier, connect the output connector of
the Join block to the input connector of the Aggregation block. After this is
done, click on the aggregation block to bring up the selected field list as shown
below.
Select the fields you require to move to the output.

Once done, click on the Semantics block. This is where you maintain the overall settings of this graphical calculation view. Go to the “View Properties” tab. Data Category is CUBE as we selected in the beginning; you can still change it here even at this point. If you are working with tables from SAP source systems, they most probably will have the MANDT field, as I explained earlier. In those cases, change the Default Client setting to “Cross Client” instead of “Session Client”, as shown by the green arrow below. Also, on the Execute In drop-down, select “SQL Engine” for best performance.

Press the activate button shown below when done.


If the job executes with no red signs, your graphical calculation view was
successfully built. If you don’t see your log anywhere, press the button I have
marked by the red arrow below to bring it up.

Press the data preview button marked below to see if our view brings up any
data.
Move to the raw data tab pointed by the red arrow. You would see the data
preview of the first 200 records of this successful join. If you wish to display
more, change the max rows setting pointed by the blue arrow and press
execute to refresh the data again.

Also, you would see your first view inside your package as well. Well done! Pat yourself on the back. You are well on your way to becoming the HANA expert this world so desperately desires.

In the next tutorials, we will try out further features of the graphical calculation
view. Help this website grow by sharing this document on social media by
using the icons below. Be sure to subscribe to our newsletter when the
message pops up for latest alerts on new tutorials.

Happy learning!

Filters: Type 1 – SAP HANA Constant Filters
SAP standard tables may contain millions of data records, and it becomes critical to report performance that we only pull the data we need from the tables we look at. Without filters in SAP HANA, pulling up excess data can lead to slow reports, put strain on the server, and in some instances crash it too! Unless you wish to be the guy who crashed the server (the stain of which stays on for a long time..), read on to understand the three methods by which you can filter to get only the data that is needed:

1. Constant Filters
2. Variables
3. Input Parameters

All three of them have varied usages which we will try to understand in this
and the further tutorials.
To begin, double click and open our SALES_VIEW from the previous tutorial.
And then click on the Projection_1 node to open it up on the right pane.

Let’s start applying these filters one by one.

The constant filter in SAP HANA

This is the simplest filter condition. Here, the requirement provides constant
values for which the filters must be applied. For example, let’s say we only
need data from table VBAK where it’s field VBTYP is equal to the value ‘C’.
This is a clear constant value which has to be directly applied and once applied,
the same filter will run every time. No matter which user does the data
preview, this filter would run and the user would have no influence on this
value.
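In SQL terms, a constant filter on a projection is nothing more than a hard-coded WHERE clause baked into the view – a rough sketch (ECC_DATA being the schema from our earlier tutorial; yours may differ):

-- every execution applies the same fixed condition
SELECT * FROM "ECC_DATA"."VBAK" WHERE "VBTYP" = 'C';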
To achieve this filter condition as per our example, right click on the VBTYP
field and select ‘Apply Filter’ from the context menu.

The filter dialog-box opens up.

Here, you can choose the operator to be applied – Filter when something is
equal to some value or not equal to it, or between a range of values and so
on. The options here cover every form of constant filter you may need.
In this example, we need the value to be equal to C and hence we choose the
below values

Once you press OK, notice that the yellow-orange filter symbol appears on
Projection_1 confirming that our effort was successful.
Activate the view (1) and then do a data preview (2) to check the resultant
data using the buttons marked below.

As seen from the data preview, the VBTYP is completely filled with value ‘C’
but this can be misleading since data preview usually brings up 200 records
only and the overall data set may be much larger. To check how many unique
values exist for VBTYP field, let’s switch to the Distinct values tab as marked
by the arrow below.
Drag the VBTYP field on to the right pane and you would see the number of
unique values this dataset brings. As seen here, there are 11,187 result rows
and all of them have a value of C. Thus our constant filter works fine.

Now, try to apply a filter in the Join node as shown below. Notice that there is
no ‘Apply filter’ option. This is just to emphasize the fact that only Projections
allow filters. If you need a filter on a join result, you would need to add another
projection after the join and filter it there.

I hope this was easy to understand, since it was a fairly straightforward topic. In the next tutorial, we take a look at the next type of filter – SAP HANA Variables in graphical views.

If you liked the tutorial, please share it on social media using the share buttons
below and make sure you subscribe to our newsletter to get the latest updates
when new tutorials are added.
Filters: Type 2 – SAP HANA Variables
Welcome to the second part of the graphical view filters tutorial. In this one,
we try to understand how SAP HANA Variables can help us create dynamic
filters to allow more flexibility in development.

If a dynamic filter needs to be applied after all results have been calculated
(top-level filter) , SAP HANA variables are the answer. By dynamic filter, I
mean that when executed, this view would ask the user for filter values. In
this case, the executing user has full control of the filter being applied.

SAP HANA Variables in an example


To create a variable, click on the Semantics node, select the Parameters/Variables tab, and then click on the green plus button to create a new SAP HANA variable.

The new window pops up as seen below. Here, you need to provide a name, a label (= description), and the attribute on which the filter needs to be applied.

In the Selection Type drop-down, you have different options like:

• Single Value – to filter and view data based on a single attribute value

• Interval – to filter and view a specific set of data


• Range – to filter and view data based on the conditions that involve operators
such as “=”(equal to), “>” (greater than), and so on

You can specify whether single or multiple values are allowed for the variable
using Multiple Entries checkbox and also, mark it as mandatory.

You can set a default value for the variable that should be considered if no
value at run-time is provided in the form of a constant or as an expression.

Let’s say we require that the user should be able to provide multiple values for the VKORG field that comes from table VBAK. As seen below, if a field has not been selected from the table, it will not be available to act as an attribute for creating a new variable. This happens because SAP HANA variables are created at the top level of the view, and at the top, only selected fields exist.

This is not a big problem. Let’s go and add VKORG to the flow. As you saw in
the previous tutorials, we need to add this field in the lowest node and keep
selecting it in every node as we go up the flow for it to reach the final node.
To do this quickly, we use a shortcut. Right click on VKORG field in the lowest
node and select “Propagate to Semantics” from the context menu. It adds it
to all the layers above it in a single shot (I wish finishing this website was this
easy … ).

You will get an information message stating the nodes where the field has
been added. Press OK.
Save and activate this view. Now, come back to the Semantics node and click
on the plus button to try creating the variable again as we did a few moments
ago.

Fill up the values as I did below. I named the variable V_VKORG (The naming
convention might vary in your project). I used the same description. Attribute
is the field on which we need the filter applied. In this case, we choose VKORG
as the attribute. In the selection type, we choose single. This means that the
input would be in single value(s) and not in ranges or intervals.

I have also marked “Multiple entries” meaning that the user can enter multiple
single values while executing the view. “Is Mandatory” being enabled means
that the user has to enter a value to be able to run this view. Leaving the field
blank won’t be an option.

Also, when the selection screen pops up asking the user to enter a value for
VKORG, we want it to show the value 1000 by default which the user can
choose to change. Such values can be set in the Default value field. Press Ok
after filling the required values.
As seen below, a new variable has been created in the Semantics node now.
Activate the view and then do a data preview.

Testing the SAP HANA Variable


The data preview this time brings up a window asking you to enter a value for
the variable V_VKORG. Also there is a plus symbol where you can add more
values to the filter (since we selected “Multiple Entries” in the variable properties). Let’s try adding one more value. Press the green plus button as marked below.
As seen below, a new row got added where I applied another SAP HANA
variable value which is equal to 2000. This filter now means that all output
values are now filtered to only provide rows which have VKORG = 1000 or
2000.

Press OK after entering these filters.


In the data preview, go to the “Distinct Values” tab. You can see below that
the only values of VKORG in data now are 1000 and 2000. This means that
our SAP HANA variable filter worked perfectly well.
Let’s repeat this data preview once and try to execute this calculation view
without any variable values as shown below. This prompts an error message
telling the user that the variable is mandatory and cannot be skipped. This
confirms the mandatory setting that we made while creating this SAP HANA
variable.
An important property that you might have observed from the execution of SAP HANA variables is that the filter is applied after the view finishes its calculations. For example, let’s say a view pulls up 1 million records and you now apply a variable on it at the top, which causes it to return only 1 record on execution. The filter was only applied after the 1 million records were already processed, just before the output.
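In SQL terms, the variable behaves like a WHERE clause wrapped around the finished view rather than pushed down into it – a rough sketch, with <your_package> standing in for wherever SALES_VIEW was activated:

-- the inner view still processes everything; only this outer shell filters
SELECT * FROM "_SYS_BIC"."<your_package>/SALES_VIEW"
WHERE "VKORG" IN ('1000', '2000');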

Thank you for reading this tutorial. Read the next tutorial to understand Input
Parameters – an even more powerful dynamic input feature.

Please help this website grow by sharing the document on social media using
the share buttons below and subscribe to our newsletter for the latest updates.
Filters: Type 3 – SAP HANA Input Parameters
Input parameters are the magical, underappreciated providers of flexibility in SAP HANA. They are somewhat similar to variables but are available at every node in the view. Input parameters are of great importance and have multiple applications, but here we learn how to use them as a dynamic filter at the lowest node.

Input Parameters explained with an example


Let’s say we need to add a filter on the field GRUPP from the VBAK table. When the user selects a value during data preview, this filter should be applied to Projection_1, and thus only the reduced data set should undergo further calculations. This is the requirement. In this case, a variable can’t be used, as we need to apply this dynamic filter at a lower node level and not at the top level.

First things first – let’s add one more field, GRUPP, to the view by propagating it to semantics and then activating the view again.
Now, single click on the VBAK node if you haven’t already, and on the right side you would see a folder for Input Parameters as shown below. Right click on it and select “New” to create a new one.
The below window appears which is similar to our variable creation screen.
Give it a name and a description.

Notice that there is no attribute to bind this value to here. Input parameters can be created independent of fields, unlike variables. Picture them as a value that can be used anywhere; being dynamic, this value can be entered at run-time by the user. Provide a datatype and length for this input parameter. Press OK.
Notice that an input parameter has been created and appears in the folder as
shown below. Now, double click on the “Expression” under filters.
This opens up the expression editor. You can see here that the static filter we applied earlier also appears. We created it by simply right-clicking the VBTYP field and assigning a filter value = ‘C’ there, and the system auto-created the corresponding filter expression code in here. Notice that it is greyed out, because there has been no manual coding of filters yet. But curious developers like us have to explore the options that lie ahead! Press the Edit button marked below.

HANA throws us a warning that from now on, filters for this projection can only be maintained as expression code. This means that the right-click + apply filter functionality will no longer work in this node. Every time you need a filter (even a static one), you would have to add a small bit of code here. We do this because input parameters can only be added as filters via the expression editor. Press OK to move ahead.
This opens up the expression editor for editing. Place an AND operator after the existing filter to tell HANA that you are adding a second condition here. After the AND, add an opening bracket and double-click on the GRUPP field as shown below.

This adds the GRUPP field into the expression editor. Now, we need to tell HANA that the filter is GRUPP = the input parameter. Place an equals sign and double-click on the parameter name as shown below.

Close the bracket after the expression has been written, as shown. Notice that the input parameter is always represented wrapped in single quotes with double dollar signs on either side whenever used in code.
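Putting it all together, the final filter expression for Projection_1 – the old static filter plus the new dynamic condition – should read roughly like this, assuming you named the input parameter P_GRUPP as we do below (in my experience the expression language itself expects the lowercase and):

("VBTYP" = 'C') and ("GRUPP" = '$$P_GRUPP$$')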
Press OK, save and activate the view. Click on data preview to bring up the
selection screen where along with the V_VKORG variable, you now have the
P_GRUPP input parameter. In this case, I provide it the value 1200.
The data preview opens up and it looks like GRUPP has only the 1200 values.
Just to be sure, let’s check the “Distinct Values” tab. Dragging and dropping
GRUPP here confirms that the only values it has are all ‘1200’.

Thus, we successfully used input parameters as filters.

There are also further usages of Input parameters which we would learn as
we progress. One step at a time.

Major differences between Input Parameters and Variables in SAP HANA
The differences are:

1. Variables apply filter after execution of all nodes till the semantics (at
the top level) whereas Input parameters can apply filters at any
projection level.
2. Variables are bound to attributes/specific fields whereas an input
parameter is independent of any field in the view.
3. Variables have a sole purpose of filtering data whereas filtering is only
one of the reasons to use an input parameter.
Thank you for reading this tutorial. Please help this website grow by sharing
the document on social media using the share buttons below and subscribe to
our newsletter for the latest updates.

Calculated column in SAP HANA


If you have made it this far into the tutorial, I commend your will to continue
striving to be the best in SAP HANA. This website will continue to focus on
teaching you everything that you need to know to be an excellent SAP HANA
consultant. Moving on, this tutorial will help you understand how to create
new fields in views which are calculated based on some logic which may
involve already existing fields in the table. Such a field is called Calculated
Column in SAP HANA.

Let’s build a Calculated Column with an example


To start, open the calculation view we are working on and click on the Projection_1 node. Let’s create a new field here called SALE_TYPE, where the logic would be to concatenate the fields AUART and WAERK separated by a dash symbol. Right-click on the “Calculated Columns” folder and click “New”.
The below dialog box opens up where you need to give the new field a
technical name, a datatype and the logic inside the “Expression Editor”.

After adding the name and datatype as shown below, double click on the
AUART field to add it to the “Expression Editor”.
It gets added as shown below.

Concatenate more strings to it using the + operator. The plus (+) operator works as a string concatenator for string fields and as an arithmetic plus for numerical datatypes like decimals and integers. We needed AUART and WAERK to be separated by a dash; hence, after the plus symbol, we add a dash surrounded by single quotes, since it is a character value. Then add another plus and double-click WAERK to add it at the end too. The end result should resemble the below.
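For reference, the finished expression should read roughly as follows (field names appear in double quotes in the expression editor):

"AUART" + '-' + "WAERK"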

Once the expression is complete, press “Validate Syntax” to verify the syntax of your code.
The below information message confirms that the code is correct.

Press OK and save, and you will come out of this editor. Now, as you see below, a new calculated column SALE_TYPE has been created successfully.
Now, as you might have realized, this field is only available in Projection_1. It
needs to be propagated to all nodes above it. To do this, click on the node
above it which is the Join_1 node in our case. Find the field and right click on
it and select “Propagate to Semantics” from the context menu.

You get an information message outlining the nodes to which this field has been propagated. Press OK.

Activate the view and run a data preview. As seen below, our SALE_TYPE field
has appeared and seems to be running as per our calculation logic.
Let’s try another one. This time, let’s create a new calculated column at the “Aggregation” node for a numerical field – NETWR. The new requirement is to create a field NORM_VAL which would be equal to NETWR divided by 12, showing up to 2 decimal places. Once again, right-click on the “Calculated Columns” folder and click on “New Calculated Column” from the context menu. A new window pops up where we fill in the details as before. The name, a description, and a datatype need to be filled in. Since the value when divided by 12 will surely return decimal values, we mark the datatype as Decimal with a precision of 10. A scale of 2 tells the system to only display the result up to 2 decimal places. Also, this time, make sure you switch the Column Type to “Measure”, since this is a transaction field. If you don’t remember what a measure is and how it is different from an attribute, revisit the Master data vs Transaction data tutorial. In the expression editor, enter the formula as NETWR/12 as required and press OK.
As seen below, the new field has been created successfully.

Since this is on the last aggregation node, there is no need to propagate it further. Save and activate this view. Press data preview to check the values as shown below.
This concludes our tutorial on SAP HANA Calculated columns. Hope you guys
found it interesting. Comment below with your thoughts.

Please show your support for this website by sharing these tutorials on social
media by using the share buttons below. Stay tuned for the next ones.

Happy learning!
Restricted column in SAP HANA
Welcome to the next tutorial in this series, where we learn how to work with the concept of restricted columns in SAP HANA. Just as the name suggests, a restricted column is a user-created column that is restricted in its output by a condition that we specify.

For example, we may not want everyone to see all the data in a field; based on a condition, we decide whether this data should be displayed or not. This will become much clearer from our use cases below.

Restricted Column – An illustrated Example


Let’s say we want to create a new restricted column called RESTRICTED_NETWR which should show values only when the corresponding value in the field AUART is ‘SO’. To achieve this, first right-click on the “Restricted Columns” folder on the right and select “New” from the context menu as shown below.

The below screen pops up demanding a technical name and a description, as always, in the “Name” and “Label” columns respectively. The Column drop-down is used to select the numerical field on which this restriction needs to be applied – which in this case is NETWR. In the restrictions section, you can choose to apply a simple restriction using an existing field or, if it is a complicated condition, use the expression editor via the radio button.

In our current scenario, the restriction is simple – AUART = ‘SO’ – so we do not need the expression editor.

Fill in these values as shown below. And press Ok.


Save and Activate your view. Now do a data preview to check if this works.
As seen below, the RESTRICTED_NETWR field displays values only where
AUART = SO as specified by our restriction.

Conditions for restrictions are not always static. This means that there are requirements where conditions are to be provided by the user at run-time. Let’s take an example where an input parameter specifies the value of AUART for which the values of RESTRICTED_NETWR must be displayed.

To achieve this, first let’s create a new Input parameter to capture the AUART
value. Right click on “Input Parameters” and click on “New”.
Provide the values as shown below.
Now we have a new input parameter P_AUART ready to capture values at run-time.

Double click on the RESTRICTED_NETWR field to edit it.

This time, we need to take the value of AUART from an input parameter, and hence this requires the use of the expression editor.
It asks for your confirmation to move even the existing condition into the
expression editor. Press Ok.

We see that the old condition we had is now also converted into expression code. All we have to do now is replace ‘SO’ with the input parameter. Delete ‘SO’ and double-click on our P_AUART to add it in.
The expression editor should look like the image below.
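For reference, the edited expression should now read roughly:

("AUART" = '$$P_AUART$$')

with the input parameter once again wrapped in single quotes and double dollar signs.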
Press Ok. Then, save and activate the view. Run the data preview and you
would get the below value entry screen. Here I fill out the old input parameters
with some values as well as the new P_AUART with value of ‘SO’.
As seen from the output below, our dynamic restriction worked perfectly and
RESTRICTED_NETWR only shows values now for the ‘SO’ AUART.

Let’s try again for a different AUART field value. This time we fill the value
with ‘TA’ whilst keeping the other values constant.

As seen below, only ‘TA’ values now display the restricted column. Hence, our condition was successful.
I hope a restricted column is something easy to create from now on for everyone who read this tutorial. Comment below with your thoughts.

Please show your support for this website by sharing these tutorials on social
media by using the share buttons below. Stay tuned for the next ones.

Happy learning!
Rank node in SAP HANA Calculation View
Welcome to the next tutorial of this SAP HANA Tutorial series. In this one, we
learn how to rank rows and pick up the highest or lowest ranks according to
a condition using the Rank function in calculation views.

Let’s get into the requirement.

In a company XYZ, employees in the finance department with ID numbers 1001, 1002 and 1003 have to undergo an evaluation to check which one of them is better at their job. They undergo tests from January to May on the first day of every month. Thus, each employee goes through five such evaluations. The top score they get during any of these months would be considered as their final score at the end of these evaluations.

Our requirement is to create a view of this data that displays only the top
score of each employee along with the date on which the employee took the
evaluation.

A new table containing the overall evaluation data – EMP_SCORE_FINANCE – was created under our 0TEACHMEHANA schema, as shown below. The data has been fed into this table for each employee and their respective scores each month.

To achieve this, we would need to build a calculation view which ranks these
rows of data and picks up the highest score for each employee. Fortunately,
in SAP HANA graphical views, we have a rank operator available.

Let’s start by creating a new calculation view.


Give it a name and description and press “Finish”.
A blank canvas …
Let’s first build a simple view without the rank. Just a small view which displays
all data from the table.
We add a projection with the table and connect it to the aggregation. Select
all fields in the projection.
Now select all fields in the aggregation. When done, save and activate this
view.
Run a data preview and you would get the below data which is of course the
same as the table since we haven’t applied any additional logic anywhere.

SAP HANA Calculation View with Rank Node


Now, let’s get down to business and rank this data as per our requirement. Drag the Rank node onto the blue arrow joining Projection_1 and the Aggregation block as shown below.
Press “Yes” on the message asking for confirmation.

You would now see a rank block inserted in between the projection and
aggregation. The placement of these blocks looks messed up though.
Fortunately, your HANA view doesn’t need to look like your room. There’s an
auto arrange button marked below which cleans up the layout and arranges it
in an optimal way.

After pressing this button, you would see that the blocks have been properly
arranged as shown below. Once done appreciating this feature, click on the
Rank node. On the right pane, first select the fields you need to take to the
next level and then come to the bottom section marked in the red area below.
Here the rank configurations would be maintained.
The first setting is the sort direction. Here you specify whether you wish to
sort it with the highest value first or with the lowest value by choosing
Descending or Ascending respectively. Since we need to pick up the maximum
score, we keep this at Descending.

Next, we set the “Order By” field. This is the field we need to sort Descending
(as per our previous setting). In our case, this field is SCORE. We need to sort
SCORE descending to find out the top score.
Next, we need to set the “Partition By” column. This is the field that splits the
data into groups before the ranking happens. We partition by EMP_ID, so SCORE
is sorted descending within each employee’s set of rows, and the first row in
each partition is that employee’s top score.
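If it helps to think in SQL terms, the Rank node configured this way behaves roughly like the window function below. This is only a sketch: the schema and table names follow the EMP_SCORE_FINANCE table created above, and the DATE column name is assumed from the data preview.

    SELECT EMP_ID, "DATE", SCORE
    FROM (
        SELECT EMP_ID, "DATE", SCORE,
               -- highest score first within each employee's partition
               RANK() OVER ( PARTITION BY EMP_ID ORDER BY SCORE DESC ) AS RNK
        FROM "0TEACHMEHANA"."EMP_SCORE_FINANCE"
    ) AS RANKED
    WHERE RNK = 1;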

Once done, save and activate the view. Now, check the data preview to
confirm the results. As seen below, the view has ranked and picked up only
the top score of each employee and also the date on which they achieved
this feat. Congratulations employee 1003!
Make sure to share this content on social media using the share buttons below
to show your support to this website and to keep it alive.

If you feel the website helped you, please also contribute any small amount
by using the “Donate button” on the right side of this page to help with the
operational costs of this website and other systems involved.

Aggregation node in SAP HANA Calculation View
Welcome again good folks to the next SAP HANA tutorial where we learn about
the humble Aggregation node in SAP HANA. An aggregation node is usually
attached to the semantics node by default if you are building a transactional
calculation view (this means that the data category CUBE was selected when
initially creating the view). Additional aggregation nodes can be added if
required.

Learning aggregation with an example


Let’s start with a basic view that is already built. It takes all the data from
EMP_SCORE_FINANCE table and sends it to the output without any additional
logic.
Let’s first check the data preview for this simple view. This is just to see the
data we will be working with.
Now, come back to the Aggregation node and click it. Click on the orange
circle as always to remove the DATE field as shown below.

Confirm the removal of this field. Press “Yes”.

The field gets removed as shown below.


As seen below, since the date field was removed the score gets aggregated
and summed up. This is because the default aggregation setting is SUM. But
there are other types of aggregations that can also be performed. Let’s look
at the different types of aggregations.

Click on the aggregation node and then click on the SCORE measure. At the
bottom, now the properties section opens up. At the far bottom, you find an
Aggregation setting. This is by default set on to SUM as explained earlier.
Thus, whenever values aggregate, they add up according to this default
setting.

Aggregation Types

Let’s channel our curiosity and try to switch this setting so that we get the
average of all available SCORE values for each employee. All the available
values for aggregation types are as shown below. The common ones are
COUNT, MIN, MAX, AVG which are used to find the count, minimum value,
maximum value and average value of measures respectively. VAR and
STDDEV compute the variance and standard deviation for more advanced
statistical analysis in rarer cases.
Let’s set this value to Avg (Average) as per our requirement.
Save and activate this view. Now, execute a data preview. As seen below, an
average value has been displayed in the output. Since SCORE was an INTEGER
datatype field, no decimals were retained in the output.
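In SQL terms, what we just modeled is a plain GROUP BY with an AVG aggregate. A minimal sketch, assuming the same schema and table name as before:

    SELECT EMP_ID, AVG(SCORE) AS SCORE
    FROM "0TEACHMEHANA"."EMP_SCORE_FINANCE"
    GROUP BY EMP_ID;
    -- switching the view's aggregation setting to MIN, MAX or COUNT maps
    -- to the corresponding SQL aggregate function in the same way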

Removing duplicate rows with aggregation

Measures aren’t the only fields affected by aggregation. When two rows in the
incoming data set contain the same data, the aggregation node also works as
a duplicate row remover. For example, from Projection_1 let’s disable all
fields except EMP_ID.
In the aggregation node also, let’s select this field as shown below.

Save and activate the view. It may throw the below error if you have been
following the same steps I’ve been doing. The error says “No measure
defined in a reporting enabled view”. This is because the data category chosen
while creating this view was CUBE, and the view now has no measure since
we have taken the SCORE field out of the output.

To fix this, go to the semantics node and under the “View Properties” tab,
switch the data category to DIMENSION as shown below.
Save and activate this view now.

Before we do a data preview on the entire view, let’s data preview the output
of the first Projection node. To do this, right click on the projection and click
on Data preview as shown below.
As you can see below, projection_1 supplies a lot of duplicates to the next
level.
Now go back and run a data preview on the entire view. You should get the
below result. All the duplicate values have been removed.
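This duplicate removal is exactly what SQL does when you aggregate without any measures. A sketch with the same assumed table name:

    SELECT EMP_ID
    FROM "0TEACHMEHANA"."EMP_SCORE_FINANCE"
    GROUP BY EMP_ID;
    -- equivalent in effect to: SELECT DISTINCT EMP_ID FROM ...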

SQL UNION and SQL UNION ALL in SAP
Welcome again to this new tutorial on SQL UNION in SAP where we discuss
the UNION of data sets and try to understand the difference between UNION
and UNION ALL. Unions come up in SAP BW, in enterprise HANA graphical
views and in SQLScript UNION statements. This will be a short and simple
tutorial with an example, as always.
The SQL UNION in SAP
Consider a data set for North America division of a company

Product

Shoes

Bags

Gloves

Now assume that we have another data set for the South American division
of the same company

Product

Shoes

Caps

To get the entire product portfolio of the company in the American region, the
operation required between these two data sets would be a UNION. This is
because a UNION vertically combines data sets, like piling one stack of
potatoes on top of another. The result would be as shown in the below table.

Product

Shoes

Bags

Gloves
Caps
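
In SQL, assuming the two data sets live in tables named NA_PRODUCTS and SA_PRODUCTS (hypothetical names for this example), the statement would be:

    SELECT PRODUCT FROM NA_PRODUCTS
    UNION
    SELECT PRODUCT FROM SA_PRODUCTS;
    -- UNION removes duplicates, so 'Shoes' appears only once in the result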

Quite simple, isn’t it? But there is another type of union operator that we can
use.

The SQL UNION ALL in SAP


For the same set of tables, if we do a UNION ALL, the result would be:

Product

Shoes

Bags

Gloves

Shoes

Caps

Do you notice the difference here? Take a good hard look.

Difference between UNION and UNION ALL

You might have noticed by now that UNION ALL did not remove the
duplicates: ‘Shoes’ appears twice in the data set, which in this case
wasn’t a desirable result.
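
With the same hypothetical table names, the UNION ALL variant differs by just one keyword:

    SELECT PRODUCT FROM NA_PRODUCTS
    UNION ALL
    SELECT PRODUCT FROM SA_PRODUCTS;
    -- UNION ALL skips the duplicate check, so 'Shoes' comes through twice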

But why would one still use something like a UNION ALL? Excellent question.

In cases where there is absolutely no doubt that the merging data sets have
distinct values, it’s better to use a UNION ALL so that there is no time wasted
by the system in sorting and deleting duplicates from this final data set. For
performance reasons, UNION ALL is a true gem of an option and should be
used wherever possible.
Thank you for reading this tutorial on UNIONs and if you liked it, please show
your support by sharing this document across social media by pressing the
share buttons below and also don’t forget to subscribe to our newsletter for
alerts on new tutorials that are added regularly.

Happy Learning!

SAP HANA Union node in Calculation view
Welcome back to the next in line tutorial on the SAP HANA Union node in
Calculation view. I hope you have already read my tutorial on SQL UNION
and have a theoretical understanding of what a UNION is.

The funny fact about this UNION node is that it doesn’t work like a
UNION. It works as a UNION ALL operator. This means it keeps piling one
data set below the next without removing duplicates.

Let’s take an example to see how a UNION works. For the last few tutorials,
we have been working on the EMPLOYEE view which has the scores of a test
that employees of the finance department of a company took on the first day
of every month from Jan-16 to May-16.
The below output is the data preview of the view when only this table is
involved with no further logic.

Now, the company tells you – the developer, that this view should provide
data for the same test done in the marketing department as well. This means
that you would need to incorporate the table that would provide this data into
this view.

Drag another Projection node out which would hold the new table.

You can also drag tables into these projection nodes instead of right clicking
the projection and searching for them by names. This is faster and at times
much more emotionally satisfying.
As seen below, the Marketing score table has been successfully added to
Projection_2.
Now, you need to decide on what operator you should use so that the two
data sets combine into one larger data set.

Would it be a JOIN or a UNION?

Answer – A Union block because:

1. The tutorial is on UNIONs. Did you seriously expect the answer to be
JOIN?
2. A Join is usually done when, based on a field in one table, you need to
fetch the related data from other tables. For example, based on the employee
ID in one table, address data for that employee can be fetched from some
other table which has this information.
3. UNIONs combine similar data sets. It just stacks data sets, one below
the other.
To achieve this, drag the UNION node on to the arrow connecting Projection_1
and the aggregation. This would place the UNION node between these two.

It asks for your confirmation before doing so. Press “Yes” to continue.

Connect the marketing data projection node to input of the union as well which
is displayed below marked by the green arrow.

Click auto-arrange to clean the layout up. The button is marked by the red
arrow below.
The layout gets sorted out as shown below. Now click on Projection_2 and
select all the fields for output.
Click on the UNION node and you would realize that both the Projection nodes
are displayed here on the source side and the target side contains the output
structure of this node. Since we added this node between projection_1 and
aggregation, all fields of this projection are auto-mapped to the output.

Expand the little blue plus buttons on each projection (marked by the red
arrows below) to get a better view of their fields.
As explained earlier, all fields of projection_1 are mapped to output. The
source fields of Projection_2 now need to be mapped to their corresponding
targets.

Drag and drop the EMP_ID field from the source to the corresponding target.
Similarly, map the DATE and SCORE fields. Your completed mappings would
look as shown below.
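
Once the mappings are complete, the Union node delivers the equivalent of the SQL below. The marketing table name EMP_SCORE_MARKETING is an assumption mirroring the finance table; note the UNION ALL, since this node does not remove duplicates.

    SELECT EMP_ID, "DATE", SCORE
    FROM "0TEACHMEHANA"."EMP_SCORE_FINANCE"
    UNION ALL
    SELECT EMP_ID, "DATE", SCORE
    FROM "0TEACHMEHANA"."EMP_SCORE_MARKETING";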

When done, activate this view.


The below data preview confirms that the two data sets have now converged
into a larger data set. Thus our objective was achieved.
Thank you for reading this tutorial.

Share, Subscribe and comment below to keep this website active.

Happy learning.
Legacy: SAP HANA Attribute view
Welcome folks to the Attribute View in SAP HANA tutorial. In this tutorial, we
learn the second type of view that can be created called the attribute view.
We have been creating calculation views till now.

Note: Attribute views are no longer recommended to be used in development.


Calculation views with Data Category “DIMENSION” should be used instead.
Thus, this tutorial is created with a “Legacy” title. The only reason to still
include this in our tutorial is due to the fact that some projects still have
attribute views passed on from their old developments. The only case where
you should create a new attribute view is when, due to some performance
issue, someone from the SAP support team recommends you create one (which
is a rare scenario).

Attribute views are built specifically for master data. Thus, there will not be
any measures in the output for this view. Their primary purpose was to
maintain a reusable pool of master data which could then be combined with
transaction data in Calculation views or Analytical views (which you will learn
in the next tutorial). Thus you could have a master data analytical view called
MATERIAL which could combine all material related tables and also create new
calculated fields. This newly created view could then be used anywhere in
other views where all the fields required from MATERIAL could be picked up,
thereby avoiding the rework of joining the tables every single time.

Creating an attribute view


To Create an attribute view, right click on a package and select New Attribute
view from the context menu.
Fill in the name and description of the view and press “Finish”.

The below screen opens up. Notice that the semantics node is not connected
to an Aggregation node here. It is connected to a “Data Foundation”. A “Data
Foundation” is a node which you cannot remove. It is purely used to include
tables in the view. You can use a single table here or have more by
specifying the join condition inside this “Data Foundation”. There is no
separate JOIN node that you can insert here. You cannot insert views into the
“Data Foundation”. It only accepts tables.

Let’s drag the EMP_NAMES table into the “Data Foundation” as shown below.
Once it is there, click on the “Data Foundation” node and the field selection
“Details” section would open up on the right as shown below. Select all the
fields by the same method as always: clicking on the grey buttons to make
them orange.
As seen below, the fields have been enabled successfully.

An attribute view, unlike the other views, requires at least one field to be
declared as the “Key” field – which is a field that has a unique value in each
row and thus isn’t repeated.

Update (20-Apr-17): As someone in the comments pointed out, and as
confirmed by me on the SAP HANA SP12 system, this setting of the “Key”
field is no longer a mandatory step, which does seem strange as SAP
stopped enhancing attribute views a long time ago. Anyway, it’s a
pointless debate now that we know for sure that attribute views
are obsolete.

Click on the field which you wish to enable as Key under the Columns folder.
In this case, we single click on EMP_ID as marked below. This opens up the
properties section below it. Here, change the Key configuration from “False”
to “True” by clicking on it.
Drop down the choices and select True.
As seen below, the key has been enabled.

Our view is done, but let’s do something more. We have already created
Calculated Columns for Calculation view. The same process applies here if you
need to create one. To add a new field, FULL_NAME as a concatenation result
of fields FIRST_NAME AND LAST_NAME, right click on the “Calculated
Columns” folder and select “New”.
The familiar window opens up asking for the name and datatype of this new
field.
After providing the details, it should look like the below. First name and last
name are concatenated using the plus (+) operator with a blank space in
between to keep the words separate.
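
For reference, the expression in the editor would look something like the line below (the exact quoting may vary with your HANA Studio version):

    "FIRST_NAME" + ' ' + "LAST_NAME"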

As seen below, the new field has been added.


Save and activate this view. Now run a data preview.

As seen below, the selected fields as well as the calculated field are now
available in output.
Let’s assume another scenario where the view requires the employee’s age,
email and phone number as well. The table we have in this view currently
doesn’t have these fields, but if we could join EMP_NAMES with the
EMP_MASTER table, we would have all the fields we require.

Drag and drop EMP_MASTER into the data foundation.

Now, we have 2 tables here. They are not joined yet since there is no link
defined between them. Also, the fields we require aren’t yet selected from
EMP_MASTER.
First, we connect EMP_ID from one table to EMP_ID in the other. This completes
the JOIN. Click on this linking line as shown below to open up the JOIN properties.

The default JOIN type is “Referential”. If you need to refresh your memory on
the different JOIN types – revisit the tutorial on this by clicking here.
Opening up the JOIN type setting provides a list of available options. Switch
it to LEFT OUTER JOIN.

Once this is done, enable the required fields as per your requirement to send
them to the next level.
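
In SQL terms, the data foundation now computes something like the statement below. The AGE, EMAIL and PHONE column names are assumptions based on the scenario described above.

    SELECT N.EMP_ID, N.FIRST_NAME, N.LAST_NAME,
           M.AGE, M.EMAIL, M.PHONE
    FROM EMP_NAMES AS N
    LEFT OUTER JOIN EMP_MASTER AS M
        ON N.EMP_ID = M.EMP_ID;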
Save and activate this view. Now execute a data preview to check the data.
As seen below, the JOIN is successful and data appears as required.

This ends our tutorial on attribute views. I hope this was helpful.

Share, subscribe and comment on this tutorial to help others reach this
website.
You are doing well. Keep going... almost there.

Legacy: SAP HANA Analytic view


Welcome back, fellow learners, to our SAP HANA tutorial where we will learn
about the SAP HANA Analytic View. This is the third type of HANA information
view we cover, after Calculation views and Attribute views which we have
gone through already.

Note: Analytic views, just like attribute views are no longer recommended to
be used in development. Calculation views with Data Category “CUBE” should
be used instead. Thus, this tutorial is created with a “Legacy” title. The only
reason to still include this in our tutorial is due to the fact that some projects
still have analytic views passed on from their old developments. The only case
where you should create a new analytic view is when, due to some performance
issue, someone from the SAP support team recommends you create one (which
is again, a rare scenario).

Building an analytic view


To create a new analytic view, right click on your package and select New
Analytic View. The familiar window opens up again asking for a technical name
and description.

Provide the details and press Ok.


The below flow opens up. In this case, we have a Semantics node as always,
a data foundation where tables are added and, in addition, a Star Join
node which, as you see below, already has the Data Foundation as one of its
inputs by default.

As in the attribute view, the data foundation only accepts tables. No views can
be added here to the join. But, in an analytic view, there must be exactly one
central transaction table. This means that you can add more transaction tables
to the join provided they only supply attributes and all of their measure fields
are disabled. Usually, there is only one transaction table and other master
data tables. The result of this join passes on to the Star join.

The star join contains the data foundation already. It also accepts attribute
views, but no individual tables can be added here. It is called a star join
because an analytic view is basically a star schema structure in itself: a
central transactional data table surrounded by master data.
To start, let’s build a view with fields employee ID, country and salary coming
in from EMP_SALARY and also the field first name from EMP_NAMES table.

Pull both these tables into the data foundation node.

The two tables now become available. The next step is to enable the fields we
need.
Once the fields are enabled as below, the join link needs to be built based on
the join condition.

The employee ID field connects both tables, so link those two fields by
drag and drop. This establishes a referential join between the tables by
default. Click on the linking line between the two tables to enable the join
properties window on the bottom right. Switch the join type as required. In
this case, switch it to a left outer join with the transaction table on the left.

The join properties would reflect this change.
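As a sketch, the data foundation now delivers the equivalent of the SQL below; the column names are assumed from the field list mentioned above.

    SELECT S.EMP_ID, S.COUNTRY, S.SALARY, N.FIRST_NAME
    FROM EMP_SALARY AS S
    LEFT OUTER JOIN EMP_NAMES AS N
        ON S.EMP_ID = N.EMP_ID;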


Click on the Star Join node and you would see the result of the data foundation
as if it were an independent table. Let’s not add anything here for now.

Save and activate this view. Execute a data preview when done.
The data preview returns the below data as required. Thus our first analytic
view has been successfully constructed.

Now, let’s take up an additional scenario wherein we take some employee
master data from the Attribute view we built in the previous tutorial.

Firstly remove the EMP_NAMES table from the data foundation by right clicking
on the table and then choose remove from the context menu.
The below warning pops up telling you that the further data flow for the fields
of this table would also be removed. Press “Yes” to confirm.

The impacts will also be mentioned in a separate information message. Press
“OK”.
Now, we add the EMPLOYEE_DETAILS attribute view created in the previous
tutorial to the Star join node. As explained earlier, we add it here as views
cannot be added in the data foundation.

Click on the Star join node and you would see that all the fields from this view
are auto-enabled by default to the output.
Disable the fields you don’t need and also create a join between them as we
did earlier.

Save and activate this view. Execute a data preview to confirm that the
analytic view works perfectly as desired.
This ends our tutorial on analytic views. I hope this was helpful.
