For those of you who are new to SAP, do read my post – What is SAP?
What do you do? Simple. Find that blue icon and give it a click. Then comes
the wait.
The wait may be a few seconds or more depending on your configuration. After it opens, close the program. Now go open it again. The wait this time would be much shorter, at least compared to the first time you started it. What brought about this mystical change?
As shown in the figure, there are 3 components of your computer here. For
your CPU to start executing your program, the required information should be
available in your RAM. If the program was not called recently, the information
would not be available here and would need to be picked up from the
persistent disk – your hard drive. Consequently, when you call a program for
the first time, the information is loaded to the RAM and then processed by the
CPU. The second time you called it, the information was already there in the RAM and hence got processed really fast.
RAM access is really fast and any data present there is referred to as data in-
memory.
Your hard disk storage is relatively slower in access but offers cheaper storage.
The size of your RAM is a crucial factor in deciding how much data you can keep in-memory and access really fast. But unfortunately, it's not practical to expand the RAM size of a PC enough to house all the data you access.
Realizing this, the good folks at SAP created an in-memory database that
keeps all of the customer data online/in-memory. So, in a way you could
picture this as a gigantic RAM running multiple cores of CPUs and hence
delivering lightning fast response times.
Now before you throw your disks out for being a disappointment, it’s
important to understand that the RAM is volatile memory, i.e. it loses its
data on loss of power. Thus, it’s important for backups to be taken to
persistent disks. Backups are scheduled jobs executed as per the
configurations to make sure no data is lost in case of SAP HANA DB down
times.
The beauty of SAP HANA lies in the fact that it does most of its calculations
in-memory at the database layer instead of the application layer as done
traditionally. SAP HANA is not just a database. It consists of different engines
that crunch calculations efficiently and return results to the application layer.
Due to this push-down of logic, the data latency (the time taken between request and response) is really small, and that's where the true magic lies.
Since you have taken the first step towards a career in SAP HANA, let me tell you that exciting roads lie ahead.
Happy Learning!
To all those excited folks who just want to see the development steps in one
of the modeling tools, I understand your anxiety but trust me when I tell you
by experience that you are chasing the wrong goals. Understanding how databases behave is really important for your well-rounded growth as a back-end developer and architect.
Patience is the key. Trust me that you’ll grow and learn everything you need
as we progress – one step at a time.
Coming back to the topic, we all know what a table is. We have been drawing them since our school math classes, and in Excel spreadsheets soon after.
The most common way of storing data in a database is in the form of a table.
For example, have a look at the below representation of a table. This is what
we call a logical representation – a way for regular folks to draw and represent
rows and columns of data.
But have you ever asked yourself how this information is stored in a database?
Disk memory can be envisioned as a continuous array of blocks where each
block can store 1 cell/value of data (from a table in this case).
Row Store
Traditional database storage of tables is generally row store.
Let’s understand this with the figure where you can see how the table above
is stored in the database.
As you can see, it is stored sequentially as Row1, Row2, Row3 and so on.
Each row of data is stored one after the other in perfect sequence. From this
image, you can start to see the evident problems with reading data. Even if I
want to read only one column of this dataset, my read skims through the
entire data array. The picture of a table changes dramatically when looking at
it in terms of memory, doesn’t it?
Column Store
Some databases support another additional form of storage called the column
store. Each column in the table acts as an individual table, and gets stored
separately.
Why would you do that? Excellent question. Thank you for asking.
When each column acts as an individual table, each of these individual mini-tables can be indexed (i.e. sorted) and compressed (i.e. duplicate values removed so that each distinct value is stored only once).
This makes sure that each of these mini-tables contains only unique entries.
The below example illustrates how it works but does no real justice in
portraying the real advantage. To realize this, imagine a table with a million
rows. Most of the columns would have only a few hundred or, at most, a few thousand unique values. Compression makes sure you save disk space, and indexing
makes sure you find things faster.
Note: Databases supporting column store entries also support row store.
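If you want to try this yourself, here is a minimal sketch in HANA SQL (the table and field names are mine, invented purely for illustration) showing how the store type is chosen at creation time:

-- Column store: the default and the recommended choice for analytics
CREATE COLUMN TABLE "SALES_CS" ("PRODUCT" NVARCHAR(2), "REVENUE" INTEGER);

-- Row store: better suited to frequent single-row reads and writes
CREATE ROW TABLE "SALES_RS" ("PRODUCT" NVARCHAR(2), "REVENUE" INTEGER);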
Now at this point you might be wondering if the Column store is such a magical
gift to mankind, why bother with a Row store at all? Before you decide to kill
it with fire, let’s analyze the pros and cons.
Property        Row Store    Column Store
Memory usage    Higher       Lower (due to compression)
Now there are deeper concepts to Row stores and Column stores. For example, the delta merge process for the column store, wherein a column store consists of a main store, a primary delta store and a secondary delta store (which is actually a row store). Did I lose you there? Don't lose sleep on these concepts for now. I will come back to these eventually, once you are a bit more
mature on this technological journey.
For those curious technical nerds out there, if you don't want to wait for my explanation, I would recommend reading up on the column store delta merge in the official SAP HANA documentation.
Stay tuned for the next tutorials. Happy Learning!
Index Server
The most important server in the architecture is the Index server. The actual
data is contained here and the engines that process this data are also present
in the index server itself. It has multiple components and its architecture will
be taken up as a separate topic in the next tutorial.
Name Server
Stores information about the topology of the SAP HANA System. SAP HANA
can also run on multiple hosts. In these cases, the name server knows where
each component runs and also knows which data is located on which server.
Preprocessor Server
Whenever a text analysis functionality is called upon by the index server, the
preprocessor server answers such requests.
Statistics Server
This collects data about the resource allocation, performance and status of the
SAP HANA system. It keeps a regular update on the health check of the HANA
appliance.
XS Engine
The XS Engine allows external applications to access the data models in the SAP HANA database via XS Engine clients. It transforms the persistence model in the database into a consumption model that is exposed to external clients via HTTP/HTTPS.
Application/Reporting Layer
This block is not part of the SAP HANA architecture, but it is there to represent that any web service, application or reporting layer can interact with the database to pull the required data for itself and also write data back into the database.
This concludes the tutorial on the high level architecture of SAP HANA. In the
next tutorial, we learn more about the Index server and all its components in
detail.
Happy Learning!
SAP HANA In-Memory Computing Engine (IMCE) – Index Server Architecture
Welcome to the follow-up tutorial to the SAP HANA Architecture. Here, we learn about the core of the HANA appliance – the IMCE (In-Memory Computing Engine) or the Index Server. We already learnt in our previous tutorial that the data is contained and processed in this server. Let's understand the components of the Index Server in detail.
Connection/Session Management
To work with the SAP HANA database, users must use an application of their
choice. This component creates and manages these sessions for the database
clients. SQL Statements are used to communicate with the HANA Database.
The Authorization Manager
This component verifies whether a user has the required privileges to perform the requested operations on the database, and grants or denies access accordingly.
Replication Server
The replication server is responsible for replicating the table data and
metadata (structure) from the source system.
Metadata Manager
The term “metadata” stands for data about data. This includes information
about table structures, view structures, datatypes, field descriptions and so
on. All this metadata is stored and maintained by the Metadata Manager.
Transaction Manager
This component coordinates database transactions. It keeps track of running and closed transactions and informs the other engines when transactions are committed or rolled back.
SQL Processor
Incoming SQL requests to the SAP HANA in-memory computing engine are processed by this component. Any kind of data insertion, update and deletion of datasets is handled by this processor.
SQLScript
This block symbolizes the internal language of the SAP HANA Database. SAP
HANA SQLScript optimizes operations by parallel processing of queries.
After initial processing by the SQLScript, MDX and planning engines, the data models are converted into calculation models, which create an optimal, parallel-processing-enabled logical execution plan.
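To give you a small taste of SQLScript, here is a minimal sketch (the procedure name is invented, and I am assuming the ECC tables VBAK and VBAP in an ECC_DATA schema, which we will meet later in this series). The two independent table-variable assignments are exactly the kind of statements HANA can evaluate in parallel:

CREATE PROCEDURE "REVENUE_SUMMARY" (
    OUT RESULT TABLE ("VKORG" NVARCHAR(4), "DOC_COUNT" BIGINT)
)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
    -- These two selections are independent of each other
    -- and can be processed in parallel
    lt_header = SELECT "VBELN", "VKORG" FROM "ECC_DATA"."VBAK";
    lt_item   = SELECT "VBELN", "MATNR" FROM "ECC_DATA"."VBAP";

    -- Join the intermediate results and aggregate
    RESULT = SELECT H."VKORG", COUNT(*) AS "DOC_COUNT"
             FROM :lt_header AS H
             INNER JOIN :lt_item AS I ON H."VBELN" = I."VBELN"
             GROUP BY H."VKORG";
END;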
Row Store
As discussed in the initial tutorials, the Row store is a row-based storage of data in a serial manner.
Persistence Layer
We learnt in the first tutorial that SAP HANA is an in-memory database which
is similar to the RAM of a PC that you may use. This also means that the main
memory in SAP HANA is volatile, i.e in case of a power outage or restart, all
data in it would be lost. Thus, there is a persistence layer to periodically save
all the data in a permanent/persisted manner.
Logs of the system are stored in log volumes, whereas data volumes store SQL data, undo log information and also SAP HANA information modeling data.
SAP HANA In memory computing engine saves all changes to data into the
persistent disk at periodic intervals called savepoints. The default frequency
of these savepoints is every 5 minutes which can be changed as per the
requirement. If a system restart or power outage ever occurs, data from the last successful savepoint can be read from the data volumes, and the redo log entries written to the log volumes since that savepoint can be replayed to recover the remaining changes.
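As a sketch, the savepoint frequency is just a configuration parameter in the persistence section of global.ini; assuming you have the administrative privileges for it, it can be changed with a statement like this (600 seconds is only an example value):

-- Change the savepoint interval from the default 300 s (5 min) to 600 s
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('persistence', 'savepoint_interval_s') = '600' WITH RECONFIGURE;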
Thank you for reading through the SAP HANA In memory computing engine /
Index Server tutorial. I hope it gave you a deeper insight into the SAP HANA
Architecture. Read on to the next tutorials to continue this journey.
As seen in the illustration, when a SLT server and its connections have been
properly set up between the SAP source system and HANA, there is a state of
constant replication between source and target HANA system. Every time
there is a change in the table, a trigger from source pushes the data through
the SLT server to HANA. The initial load of tables might take some time as
there may be millions of records in most financial and sales tables but after
that has been done, the replication is done in near real-time speeds (most
would refer to this as real-time as there’s only a few seconds of delay).
Although this is enough for you to know for now, for the full in-depth
explanation of how SLT works, please refer to the SLT guide from SAP.
Happy learning!
A workspace is nothing but a folder where all your offline work and
configurations get saved. Enter a folder path and press Ok.
Now, the first thing to understand about this tool is that different types of major tasks are performed in different areas, or “Perspectives” as they are rightly
called. Each perspective has a different purpose. For example, “SAP HANA
Administration Console perspective” is used by administrators for
administration and monitoring of the HANA Database. Similarly, “SAP HANA
Development Perspective” is used for HANA model developments and other
HANA related developments. There are many more perspectives but we would
look at them as and when we need them.
To do SAP HANA Development, you have two options, either use the “SAP
HANA Development” Perspective or the “Modeler” Perspective. I recommend
using the “SAP HANA Development” perspective as your default perspective
and for all your HANA modeling needs, as it allows you to create much more HANA-related content like XS files, HDB tables, etc. What are all of these things? Well, if you are patient enough, we will learn about them in some of the later tutorials.
Let’s try moving to the Development Perspective for now.
Navigate to the “Other” option in the context menu as shown below to get the
list of all perspectives.
Scroll down in this list and you will see the one we need. Click to select and
press ok.
At this point you will see the SAP HANA Development perspective as current
from the top right corner of the tool (Marked in the red arrow below). This
part of the tool will always show you the three most recent perspectives you
have been in and you can also click on one of these to switch to them if the
one you need is already in this shortcut area.
Now to connect to our HANA system from this tool, we need to go to the
“Systems” pane which may or may not be visible to you at this point as
sometimes it gets minimized. Click on the little squares (Marked in the blue
arrow below) to expand this area.
You will see the below Systems pane. Right click on the blank area inside this
pane to add a new system by choosing “Add System” from the context menu
as shown below.
Fill in the Host Name and Instance number as provided by your system
administrator and description as something relevant. The other settings may
also be different depending on the database you connect to – Please check
with your administrator for details. Press “Next” when done.
The next step would be to enter in your username and password which also
would be provided to you by your administrator. Press finish after you are
done to add the systems.
You will see that the system has been added and the green square on the
system icon indicates that all services are working fine.
Also, there are 4 sections or folder-like icons that you see here. Let's discuss their relevance.
1. Catalog: This is where all the source metadata (tables, views, etc.) is grouped. Here you can do data previews on source system tables that have been replicated or are available as virtual tables in the case of SDA.
2. Content: This is where all your HANA development takes place. The
HANA models that you create go under here.
3. Provisioning: This is mostly used for Smart data access. All the source
systems connected via SDA will have their tables displayed here like a
“Menu Card” in a restaurant. You can choose which one you want and
build a virtual table for it in the “Catalog” section we discussed just now.
4. Security: This is mostly for security consultants to maintain users and
roles according to your role in the project – Developers, administrators,
testers and so on.
We will delve deeper into these sections as we move further into these tutorials.
The below is a master table showing the properties of these products – For
example, their product codes and which plants they are manufactured in.
Product Code   Description   Plant
SH             Shoe          US1
CO             Coat          US1
CA             Cap           DE1
Now, after two days of selling your products to customers, you get the below transaction table with the details of your transactions:
Product Code   Date        Revenue
SH             1/1/2016    0
CO             1/1/2016    200
CA             1/1/2016    300
SH             1/2/2016    0
CO             1/2/2016    200
CA             1/2/2016    100
From looking at the above data, what kind of analysis can you do?
It’s quite obvious that nobody’s buying your Shoes, the coat sales seem to
be picking up but your Cap Sales are down. Simple analytics on a really
small scale – right?
Now let me ask you a simple question – What were you analyzing here?
Yes, you were analyzing the Product code against each date. Also, how did you know that SH meant shoes? Because of the master table above. The master table also gives you more options for analysis. Since you know the description Shoe stands for the code SH and that shoes are made in the plant US1, you can
do deeper analysis like how many shoes from plant US1 were sold in a
particular time period.
The objects under analysis in any situation are called Attributes in SAP HANA. For example – Product, Plant, Customer, etc.
The fields which provide us the numbers for tangible analysis like Revenue
generated, Quantity sold, Payments Receivable are called Measures in SAP
HANA.
The central table for analysis is usually a transactional table, and if we need a deeper analysis of an attribute, we refer to its properties in the master data table(s) it may have. As in our case, we did an analysis of revenue by plant even though we did not have plant in the transaction data. This permits the transactional tables to reduce the number of columns for characteristics, since these can always be “looked up” from the master data tables if required for analysis.
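Here is what this “look up” would be as a sketch in SQL; the table and field names are invented to mirror our example:

-- Revenue by plant: the plant is looked up from the master table at query time
SELECT M."PLANT", SUM(T."REVENUE") AS "TOTAL_REVENUE"
FROM "TRANSACTIONS" T
LEFT OUTER JOIN "PRODUCT_MASTER" M
    ON T."PRODUCT_CODE" = M."PRODUCT_CODE"
GROUP BY M."PLANT";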
I hope the difference was clear and easy to understand.
Happy Learning!
SAP HANA Star Schema
Before we move on to SAP HANA Star Schema let us understand first what a
Star Schema is.
Let’s understand this with extending the simple example we had in the Master
data Vs Transactional Data tutorial.
Product Code   Description   Plant
SH             Shoe          US1
CO             Coat          US1
CA             Cap           DE1
Product Code   Date        Revenue
SH             1/1/2016    0
CO             1/1/2016    200
CA             1/1/2016    300
SH             1/2/2016    0
CO             1/2/2016    200
CA             1/2/2016    100
Now, for analysis of the transaction data by product and description, we would
be required to have connections between the master and transaction table.
The overall data model would look as shown below:
Doesn’t look like much of a Star for a Star schema model, right?
Well, that’s because we are doing a really small analysis here. Let’s assume
our transaction data had Customer and Vendor details too. Also, we wanted
to do more detailed time based analysis like Year/Month statistics from date.
Let’s see what that data model looks like.
This looks more like a star, right? Well, to be honest, it looks more like a 4-legged octopus to me, but a Star schema sounds cooler, so let’s go with that name.
To conclude, the star schema shows you how the backbone of your data models must look. This will be much clearer, and more fun, when we actually start building data models in HANA and BW on HANA.
Note: Prior to the advent of HANA, a model called extended star schema
was used in SAP BW which was the best practice for BW running on a legacy
database. With BW on HANA, it is no longer relevant and is not discussed in
this tutorial. As of BW on HANA 7.5 and Enterprise HANA SP11, a lean star
schema approach is what must be followed in all data models. Well, BW does
it by itself anyway when on HANA, so you can leave it up to the application.
Thank you for reading this tutorial.
Employee Nr.   Product   Date   Quantity Sold (kg)
There are two very important things that we can infer from this data:
1. You are a better salesman than I am
2. Data collected can be of different types. Let’s discuss the different columns
that we have here
Employee Nr. and Quantity Sold (kg) have only numbers and no characters.
Data types in database terms are a literal translation of the name: the type of data a field represents is its data type. By specifying the data type, you tell the database what kind of value it can expect in that particular field. There is a list of SAP HANA data types available, which you can refer to on this link (SAP documentation links do change frequently, so let me know in the comments if it stops working in the future).
HANA SQL Data types – Only the really important ones!!!
More importantly, let me list out the major HANA SQL data types you need to
know to work on a real project. These will be enough for most of the scenarios
you face. For everything else, there’s that link above.
So every time you create a field or a variable (= an object that holds a single data point), you need to tell HANA what kind of data to expect.
Take a look back at our first table on fruit sales and take a guess on what data
types you think they might be based on the above information.
I will place some space in between to push the answers away. Scroll down for the answers after you have your guess ready.
3. Employee Nr. and Quantity Sold(kg) have only numbers and no characters
Now here comes the tricky part. Let’s start with Quantity Sold(kg). It is a
number so it can be held by INTEGER, DECIMAL and NVARCHAR (as you can
also store numbers as characters). This field contains the number of a
particular fruit that you sold which will always be a whole number like 1, 2 or
4 and never like 1.5 or 1.34 (in which case you probably took a bite of the
fruit before you tried to sell it to a poor customer). So now we have 2 options
– INTEGER and NVARCHAR. Both of them CAN store values for this field but
which one SHOULD?
Now ask yourself: will someone, at some point, try to add, subtract or do arithmetic calculations on this field? For example, the store manager may try
to find the total number of apples sold on a particular day by adding up the
individual employee’s sales volume for that day. In our example, the total
number of apples sold on day 1 was (10+20 = 30). You can only do these
calculations with a numeric data type which is either INTEGER or DECIMAL.
Since we have already ruled out DECIMAL, we can infer that INTEGER data
type would be the correct option here.
Coming back to the Employee Nr. now, we always have whole numbers as employee IDs, so we can safely rule out DECIMAL. Now, we ask the question again: will someone want to do any math on this field? Logic dictates that it makes no sense to add or subtract 2 employee IDs, and hence we declare it as an NVARCHAR data type so that even if some crazy dude tries to do some math on this field someday, HANA throws an error showing him his logical fallacy.
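Putting both conclusions together, a sketch of the table definition for our fruit sales example would look like this (the table and field names are mine, for illustration only):

CREATE COLUMN TABLE "FRUIT_SALES" (
    "EMPLOYEE_NR"  NVARCHAR(10), -- an ID: nobody should ever do math on it
    "PRODUCT"      NVARCHAR(20), -- free text like 'Apple'
    "SALE_DATE"    DATE,         -- calendar dates
    "QUANTITY_KG"  INTEGER       -- whole numbers that will be summed up
);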
I hope this was easy to understand and follow. Stay tuned for my further
HANA SQL Tutorials for SAP HANA.
Be sure to read them all in the same order to fully understand the concept.
On expansion, you would see another kind of folder shown by the red arrow
below. These are called “Schemas”. These are usually used to group tables of
a similar source or a similar purpose. Each username gets its own schema by default, but you can also create your own.
To create a schema is quite a simple activity. It involves a line of SQL.
Click anywhere in this tree of folders and you’ll see the SQL button marked
below become active. Click on it to open the SQL editor.
The new SQL editor that opened up below is where we will type in the code.
The code is as simple as basic English.
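As a sketch, using the 0TEACHMEHANA schema name we will be working with in the later tutorials, the statement is a single line:

CREATE SCHEMA "0TEACHMEHANA";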
Note: Any system object like table names, field names, procedure names etc.
should be enclosed in a double quote and any values for character fields like
‘Apples’ go in single quotes.
Press the execute button marked below or F8 on your keyboard to create this
schema. The log below confirms that the execution was performed without
any errors.
HANA doesn’t auto-refresh folder trees and we have to do it manually to see
new objects that are created. Right click on catalog and select Refresh from
the context menu as shown below.
Next, let’s create a table graphically. To do this, expand the schema, right click on the Tables folder and select
“New Table” from the context menu.
This brings up the screen shown below. Here we need to provide a table name
and the fields that are required along with their data types (If you are not
aware of the major data types in SQL, click here for my tutorial on the topic).
The schema name is pre-filled as we right clicked on options inside this
schema. Also, the type is always column store by default (You can choose to
switch to row store by clicking the drop-down, but this is not recommended for
analytical applications. To know more about Row and Column stores, follow
this link)
As seen below, I’ve filled up the table name, the first column name as
CUST_ID, data type as NVARCHAR (as discussed in our data type tutorial, we
don’t use VARCHAR as it has no Unicode storage), a length as 10 for this string
and mark this field as a key for the table. A key should be marked when the field can, by itself or together with other key fields, identify a row of data uniquely. Since there will be only one row per customer ID in my table, I mark this as a key. As
soon as you enable the Key column, the Not Null auto enables as a key field
can never be Null. Also, an optional comment field can be added which is a
description of the field. When done, press the green plus symbol marked below
to add another field and similarly add all required fields for the table.
Note: Null, displayed as ‘?’ in HANA’s default settings, denotes the non-existence of a value for that field. Null is not zero or blank ‘’; it is a symbol for the non-existence of any value.
After adding all the fields, press the green Execute button to create the table.
Refreshing the “Tables” folder under our schema would reveal this new
achievement.
Congrats! You just created your very first HANA table as shown below.
Be sure to check out the tutorials to create tables via SQL script, HDB tables
and CDS tables. Those are alternate methods and are really important to your
growth as a HANA developer.
Creating a Table with SQL Script
The syntax to create a table via SQL is:

CREATE COLUMN TABLE "<schema_name>"."<table_name>" (
    "<field>" <data type>,
    "<field>" <data type>,
    ...
);
<field> refers to the name of the fields you wish to add in this table.
<data type> refers to the data type of the field you are trying to add.
Note: Always be careful with the position of the brackets. Try to follow the syntax shown above. Also, end all SQL statements with a semicolon.
In our example, the code would be:

CREATE COLUMN TABLE "0TEACHMEHANA"."CUST_REV_SQL" (
    "CUST_ID" NVARCHAR(10) PRIMARY KEY,
    "FIRST_NAME" NVARCHAR(20),
    "LAST_NAME" NVARCHAR(20),
    "REVENUE_USD" INTEGER
);
Open the SQL editor by clicking somewhere in the schema tree and then
pressing the SQL button as it becomes active.
Let’s copy this code to the SQL editor and press the execute button marked
below or press the F8 button after selecting the code.
The message for successful execution is shown below. Now we refresh the
Tables folder to see this newly created masterpiece.
The new table has been created as seen and double clicking on it reveals that
the structure is also as we intended it to be.
This concludes this tutorial on creating tables with SQL script in SAP HANA.
Please read the next part of Table creation tutorial – Creating HDB tables. You
will really find it fascinating.
If you liked the content, please support this website by clicking the share
buttons below to share it across social media and also subscribe for updates
on new content.
Happy Learning.
Creating an SAP HANA HDB Table
Welcome to the next tutorial in the three-part series explaining different ways to create tables. In this one, we learn how to create an SAP HANA HDB table. It
is recommended that you also read the first two parts as well.
Then move to the repositories tab as shown below. If you are going here for
the first time, you need to import the Workspace into your local memory.
To do this, press “Import Remote Workspace” after right clicking on the
(Default) repository as shown below.
A wizard opens up. Write ‘table’ in the search bar marked by the red arrow as
shown below. The objects with Table in the name show up. Select Database
table as circled below and click next.
Provide a table name as shown below and leave the template as blank.
This will open up a blank editor screen wherein you need to put in the HDB table code. This is a simple code, although it is not SQL; it is a HANA-internal syntax.
table.schemaName = "<Schema_Name>";
table.tableType = <Type_of_table>;
table.columns =
[
    {name = "<field1>"; sqlType = <SQL_Datatype1>; length = <Length_of_characters>; comment = "<Optional_Description>";},
];
table.primaryKey.pkcolumns = ["<primary_key_field1>", "<primary_key_field2>"];
<field> refers to the name of the fields you wish to add in this table.
Let’s create the same table structure we created in the other cases. To
download the code I used below, click here.
table.schemaName = "0TEACHMEHANA";
table.tableType = COLUMNSTORE; // ROWSTORE is an alternative value
table.columns =
[
{name = "CUST_ID"; sqlType = NVARCHAR; length = 10;comment = "Customer
ID" ;},
{name = "FIRST_NAME"; sqlType = NVARCHAR; length = 20; comment =
"Customer First Name" ;},
{name = "LAST_NAME"; sqlType = NVARCHAR; length = 20; comment =
"Customer Last Name"; },
{name = "REVENUE_USD"; sqlType = INTEGER ;comment = "Revenue
generated in USD";}
];
table.primaryKey.pkcolumns = ["CUST_ID"];
According to the above syntax, the code is as shown. Press the activate button marked by the arrow below.
This should create this table in the 0TEACHMEHANA schema. Once built, it’s
the same as a regular table. Go back to the Systems tab and in this schema,
go to the tables folder and refresh it.
You would notice that the table has been created. It works the same way but
has some additional capabilities that we will discuss in upcoming tutorials.
Also, you might notice that the package name is also prefixed to the table
name automatically. That’s something exclusive to HDB Tables.
Double clicking on the HDB table shows the structure has been defined as
required.
Edit (25-Jun-2017): Only CDS based tables are the best practice as of this date. These can be
HDBDD tables (described in the next tutorial) or HDBCDS table (for XSA based projects)
Their advantages will be further clear in our further tutorials so stay tuned for
new posts.
For anyone in a hurry, here’s the video version of this tutorial on YouTube. If
you prefer a written one, please read on. Also, please subscribe to our
YouTube channel to get video tutorials weeks before the written ones.
Once you open the SAP HANA web development workbench, click on the
Catalog to open the Catalog section.
The catalog link opens up showing the packages you are authorized to view.
Right click on the package where you wish to develop this code.
Now, we paste the below code into this editor to create our table.
namespace TEACHMEHANA;
@Schema: 'SHYAM'
context TABLES {
    Entity CUST_REV_CDS {
        key CUST_ID : String(10);
        FIRST_NAME : String(20);
        LAST_NAME : String(20);
        REVENUE_USD : Integer;
    };
};
The namespace needs to define the package under which this file is created.
@Schema defines the schema under which the created table(s) would reside
under.
The main context defines the file name that was given at the time of
HDBDD creation.
A table is a persistent entity in SAP HANA CDS and hence the below
statement declares a table (entity) CUST_REV_CDS.
Entity CUST_REV_CDS {
The next part declares the columns in this table. Notice that the data types are different from those in regular SQL. There is no NVARCHAR; instead, the declaration uses a String datatype. This is because CDS has slightly different datatypes.
    key CUST_ID : String(10);
    FIRST_NAME : String(20);
    LAST_NAME : String(20);
    REVENUE_USD : Integer;
};
SAP has an online page dedicated to the datatype mappings which you can
refer to in this regard. Click here to reach that page. A screenshot of that page
currently is as below.
Once done, right click on the file and click “Activate”.
The cross symbol disappears from the file confirming that it is now active. The
table CUST_REV_CDS should also now be created in the SHYAM schema as
defined.
Now come back to the Web based development workbench. Click on the
Editor.
Now, expand the Catalog, the schema, and the tables folder.
I hope that the SAP HANA CDS HDBDD table concept is now clear to you.
To log in to the SAP HANA Web Based Development Workbench, you need to use the below URL:
https://<hostname>:<port>/sap/hana/ide
Many also call this the SAP HANA Web IDE, which is a mistake, as the SAP HANA Web IDE is a separate tool used mostly for UI5 and XS application development.
The SAP HANA Web Based Development Workbench when opened looks as
shown below
As seen above, there are 4 sections.
1. Editor: SAP HANA Web Based Development Workbench
The Editor is the same as the Content folder we had in HANA Studio. Inside it, you
have the packages where you can create your HANA information views, stored
procedures and other objects.
The one big difference here is that you will no longer be allowed to create obsolete objects like attribute and analytic views, which your SAP HANA Studio still allows. This prevents legacy objects from being created by ill-informed developers.
2. Catalog: SAP HANA Web Based Development Workbench
The Catalog provides you access to the Catalog and Provisioning folders you
had in SAP HANA studio. Again, it will not allow you to create catalog tables
by right clicking and choosing New-> Table as in the HANA studio because we
know that we are in the age of CDS tables and catalog tables are not
recommended.
We finally have the option of right clicking and creating a new schema by the
way. I always wondered why they missed that in HANA Studio.
3. Security: SAP HANA Web Based Development Workbench
This link is almost the same as our security folder in SAP HANA Studio with
some added new features. SAP Security team would use this to add/modify
users and their roles.
4. Traces: SAP HANA Web Based Development Workbench
This is the link where an authorized user can view different traces of
operations going on within the SAP HANA Database.
To conclude, I would say that the SAP HANA Web Based Development Workbench is catching on as the recommended development environment, as a lot of new features are now being delivered exclusively here. The SAP HANA Studio had a nice run, and although development in a web browser can get a bit annoying – particularly with drag-and-drop operations – I would still recommend that all developers use the SAP HANA Web Based Development Workbench for all their developments from now on and get comfortable with it, because this is the future and the future is now.
I’ve also made a YouTube video on this topic. Please check it out and subscribe
to the channel as well.
The easiest way to do this is with MS Excel, where we enter some data as shown below. We proceed to save this file, making sure to choose the file type as CSV (Comma delimited).
As seen below, I have given it a file name and a file type.
You would get the below message. Press Ok.
Now that you are done with this, let me explain why this file type is called CSV or Comma Separated Values.
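If you open the saved file in a plain text editor like Notepad, the name explains itself: every value is separated from the next by a comma. A hypothetical example with the customer fields we use in this series (your data will differ) would look like:

CUST_ID,FIRST_NAME,LAST_NAME,REVENUE_USD
1,John,Doe,50000
2,Jane,Smith,64000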
Now, make sure you close this file before proceeding further. Open files can
cause errors in upload. Once closed, Open your HANA studio.
Uploading this CSV flat file to SAP HANA
To start the flat file import, go to File -> Import as shown below.
Select “Data from Local file” from the SAP HANA Content folder and click next.
Select the target HANA system. In our case, the name is HDB but you might
have multiple HANA systems added in your HANA studio and in that case, you
would choose the system where you are importing this data to. Press next
when the system has been selected.
This will open up the below pop-up. Firstly, click on browse to select the file
you wish to upload.
Once this is done, many new options become active. I have numbered them to explain them in a better way.
Mark 1 is the delimiter used in our CSV file. As we saw, in our case this was a comma, but if it is something else like a semicolon for you, use the drop-down and choose it.
Mark 2 needs to be checked if there is a header row in our data. Header rows
are the rows containing names of the columns. These should not be considered
as table data and to have them ignored, this check box needs to be checked.
Mark 3 needs to be checked if you want all the data from this file to be
imported. If you only want partial data to be imported, uncheck this and
provide the start line and end line numbers between which the data will be
imported from.
Mark 4 indicates the target table section. Here you tell HANA whether you
want this data imported to a new table or an existing table. In our case, let’s
import this data to our table created using the graphical method –
CUST_REV_GRAPH. To do this, click on the select table button.
Find the table under your schema and press OK.
The overall settings should now look like the below. Press Next.
This takes you to the mapping window where you tell HANA as to what field
from the CSV file goes where in the corresponding HANA table.
Drag and drop the source field to its corresponding target field to get the
mappings as below.
The screen below now shows the data preview of the import that we are trying to make, just so you can be sure that you have not mapped the wrong source field to the target.
The log shows no red error entries, and hence the import is successful; this data should now have been loaded to the table.
Right click on the table and click “Open Data preview” to check the data.
Below data confirms that we have correctly imported this table.
There are two other ways to fill in data to tables but since they include some
coding, I will be posting them in the SQL section. These methods are:
1. Adding data to a SAP HANA Table using HANA SQL script INSERT statement.
2. Linking a CSV file to a HDBTable using HDBTI Configuration file (Recommended
method)
This method and the SQL INSERT method are usually used for quick analysis
but anything that you do for a client that involves customized tables should
be done on HDB tables and with HDBTI config files explained in the above
linked tutorial. Be sure to check them out too.
Happy Learning!
SAP HANA SQL Script INSERT
Welcome to the next tutorial, explaining the usage of the SAP HANA INSERT statement to insert new records into a HANA table. This method can be applied to any table, no matter how it was created – graphically, by SQL CREATE or as an HDB table.
To start, we need the SQL editor. Click somewhere in the tree under the
system so that the SQL button marked below becomes enabled. Press the
button to bring up the SQL console.
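The general syntax is below, followed by an example row with made-up values for the CUST_REV_SQL table we created earlier (a sketch; adjust the names and values to your own table):

INSERT INTO "<schema_name>"."<table_name>" VALUES (<value1>, <value2>, ...);

-- For example, with hypothetical data:
INSERT INTO "0TEACHMEHANA"."CUST_REV_SQL" VALUES ('1', 'John', 'Doe', 50000);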
Note: Always remember that the table name should be prefixed with the schema name, as there can be tables of the same name in different schemas.
Repeat this statement as many times as the number of rows you wish to add.
As seen below, I am adding data to the CUST_REV_SQL table which is in the
0TEACHMEHANA schema. System object names like schema names, table
names and field names should be wrapped in double quotes “ ”, whereas, as you can see below, data values with character data types MUST always be wrapped in single quotes ‘ ’; numbers do not require any quotes at all.
As seen from the log below, there are no errors. This means that the data was
inserted successfully.
And if you are wondering – Yes. It was that easy to do this. SQL for HANA is
fairly simple if you understand what you are doing instead of just trying
random experiments off the web. This concludes our second part of the SAP
HANA customized table data load series. Be sure to check out the other two
tutorials as well which are:
1. Loading flat files to SAP HANA in the Enterprise HANA section
2. Linking a CSV file to a HDBTable using HDBTI Configuration file
(Recommended method) which is our next tutorial as well.
Happy Learning!
Be sure to check out our other tutorials on the other data load methods to SAP HANA.
Under your package, right click and go to New-> Other from the context
menu.
The below window pops up wherein you should select “File” and click Next.
Enter a file name. I have provided it as customer_revenue.csv. Press Finish
when done.
Depending on your HANA Studio default configuration, either a text editor opens up or, ideally, an Excel window will be opened. I prefer not to use Excel for CSV files, as it tends to corrupt the CSV format sometimes. So instead of filling in the table data here, we just save it blank.
On pressing save, you get the below message. Press Yes.
Now exit Excel; on exit, Excel throws the below message. Press “Don’t Save” here.
You would notice a new file can be seen inside our package now. The grey
diamond symbol on the file icon means that it is currently inactive.
Right click on the file and then press Open With -> Text Editor.
Sometimes you would get an error on the right side pane saying “The resource
is out of sync with the file system”
In such cases, right click on the file name and click refresh to fix this problem.
Now in the editor that opens up, we paste the CSV data from our tutorial on
flat file upload, copied by opening it in notepad.
Once done, press the activate button marked below.
Now the CSV file would have become active. Notice that the grey diamond
icon has gone away.
Write ‘configuration’ in the search bar as shown below. A list of options will
open up. Click on Table Import Configuration inside the Database
Development folder and press Next.
Provide a name to this configuration file and press Finish.
The editor opens up on the right and a file is created in the package as well.
Notice the grey diamond again. This means that this file is inactive.
The syntax of this hdbti file is as given below:

import = [
    {
        hdbtable = "<hdbtable_package_path>::<hdbtable_name>";
        file = "<csvfile_package_path>:<csvfilename>";
        header = <header_existence>;
        delimField = "<delimiter_symbol>";
    }
];
<header_existence> can take the value true if you have a header row in your CSV file. Otherwise, it’s false.
In our case, the code would look like the below.
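This is a sketch assuming the package is TEACHMEHANA and the HDB table from the earlier tutorial is named CUST_REV_HDB – your package and table names may differ:

import = [
    {
        hdbtable = "TEACHMEHANA::CUST_REV_HDB";
        file = "TEACHMEHANA:customer_revenue.csv";
        header = true;
        delimField = ",";
    }
];

Once done, press the activate button.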
As seen below, the grey diamond symbol has gone away from the hdbti file.
This confirms that it’s active.
To confirm if the data is now linked, we go back to the systems tab and right
click on our table to do a data preview.
The Raw data tab confirms that it is working.
This concludes our three-part tutorial on data loads to custom tables in SAP HANA. For client delivery, always use this method, as the CSV file can also be transported to further systems, and it also means that the flat file is always on your HANA server and not on your local desktop.
Happy Learning!
Most of the time, one table does not contain all the data needed for analyzing
the problem. You might have to look into other tables to find the other fields
that you need. The primary way to achieve this is via joins. But there are
different types of joins. Let’s look at each of them with an example.
Student Fruit
Shyam Mango
John Banana
David Orange
Maria Apple
Student Vegetable
Shyam Potato
David Carrot
Maria Peas
Lia Radish
After the data entry is complete, we can see that Lia doesn’t like any fruit and
John doesn’t like any vegetable apparently.
Student   Fruit    Vegetable
Shyam     Mango    Potato
John      Banana   ?
David     Orange   Carrot
Maria     Apple    Peas
A LEFT OUTER JOIN returns all the entries in the left table but only the matching entries in the right.
Note: The ‘?’ in the data is a NULL entry. As discussed earlier, NULL denotes
the existence of nothing. NULL is not blank or zero. It’s just nothing. Since
John likes no vegetables, a null value is placed there. In SAP BW and ABAP
output, NULL maybe represented by ‘#’ whereas in Enterprise HANA, ‘?’ is
the default display of NULL values.
Student   Fruit    Vegetable
Shyam     Mango    Potato
David     Orange   Carrot
Maria     Apple    Peas
Lia       ?        Radish
A RIGHT OUTER JOIN returns all the entries in the right table but only the matching entries in the left.
Note: This is kind of a redundant type of join as the position of these tables
can be reversed and a LEFT OUTER JOIN can be applied to achieve the same
results. For this same reason, RIGHT OUTER JOIN is rarely used and is also
considered a bad practice in terms of performance of SAP HANA views. So try
to avoid using it.
Student   Fruit    Vegetable
Shyam     Mango    Potato
John      Banana   ?
David     Orange   Carrot
Maria     Apple    Peas
Lia       ?        Radish
A FULL OUTER JOIN returns all the data from all involved tables regardless
of whether they have matching values or not on the join condition.
This one is rarely used, as we usually never need all the key values from both tables. As seen from the example, this results in a lot of nulls. Nevertheless, the need does arise on rare occasions.
The main types of joins actually used in real-time scenarios are INNER JOINs and LEFT OUTER JOINs. LEFT OUTER JOINs are the preferred join type in terms of performance, as they require scanning only one table to complete the join.
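As a sketch in SQL, assuming our two example tables are stored as STUDENT_FRUIT and STUDENT_VEG (invented names), the LEFT OUTER JOIN above would be written as:

SELECT F."STUDENT", F."FRUIT", V."VEGETABLE"
FROM "STUDENT_FRUIT" F
LEFT OUTER JOIN "STUDENT_VEG" V
    ON F."STUDENT" = V."STUDENT";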
A TEXT JOIN is used for tables containing descriptions of fields. The text
tables/descriptive tables are always kept on the right hand side and their
language column needs to be specified.
Key Table:

Student   Vegetable Code
Shyam     PO
David     CA
Text Table:

Vegetable Code   Description   Language
PO               Potato        EN
PO               Kartoffel     DE
CA               Carrot        EN
CA               Karotte       DE
These two tables are joined with a text join to become the below table, with descriptions of the same vegetable in two different languages – English and German:
Student   Vegetable Code   Description   Language
Shyam     PO               Potato        EN
Shyam     PO               Kartoffel     DE
David     CA               Carrot        EN
David     CA               Karotte       DE
Note: As you might have already noticed, I have referred to the resultant output of joining tables as “views”. View is the actual technical term for this result set. A view is just a virtual output of a join and is not persisted/stored on the disk like a table. The result set of a view exists only during run-time.
Thank you for reading this tutorial on joins!
Happy Learning!
The beauty of a graphical calculation view lies in the fact that it offers powerful calculations which can be used by a developer with no coding experience whatsoever, and hence the learning curve is quite gentle.
There are lots of things you can do with a graphical calculation view and so I
decided to split this tutorial into parts. This part creates a simple graphical
calculation view by joining two tables. If you are new to the concept of a
database JOIN, click here to read our tutorial on it
Let’s take a real business scenario using sales document data as an example.
Sales documents have two parts – A header and an item. If you are new to
this concept, you can visualize this in the form of any bill that you have
received till date. Such a bill has a header/top part that always remains
constant providing probably the company name, address and some more
header level information. Thereafter, there is an Item section which contains
individual items that you have ordered. In SAP, header and item details are
often stored in separate header tables and item tables. Our example will utilize
the sales document header table – VBAK and the sales document item table
VBAP. These are two of the most commonly used tables for analysis in actual
projects.
As mentioned, these are standard SAP tables and will be part of SAP ECC
installations. From our data provisioning tutorial, you already know that the
most popular way of provisioning data from SAP source systems is via SLT
replication which would provide these tables to a schema under your catalog
folder. Generally, replicating tables is not a developer’s task, for many reasons like data security and the fact that it is a sensitive process. Developers request the tables they need for a data model, and they are then provided by the project system administrators responsible for the SLT replications.
In our example, ECC_DATA is the schema marked by the arrow below that
houses all our SAP ECC tables. In your project, it might be something else.
Check with your project administrator to find the correct schema. You can
choose to use any table from any other schema to practice this if you don’t
have an ECC connected.
To create a new graphical calculation view, right click on your package and
select New-> Calculation view from the context menu as shown.
The below window pops up asking for a technical name and a label. The Name
is important and should be a meaningful technical name as per your project
naming convention whereas the label is just a description. The name gets
copied to the label automatically but you can change it to whatever you find
to be correct. The “Type” drop-down remains on Graphical by default and can
be changed to “Scripted” in which case you have to code the entire view in
SQL Script. The data category is CUBE by default and should be selected when
your data is expected to have measures which means that your data model is
built to analyze transaction data. If there is no measure involved, select
DIMENSION as the data category in which case you tell SAP HANA that the
data model will be purely based on master data. If you do not understand the
difference between master and transaction data, click here to revisit that
tutorial.
We name our graphical calculation view SALES_VIEW and keep the other
settings as they are as they suit our requirement.
So our requirement is to take two tables, VBAK and VBAP from the schema
ECC_DATA and join them to create a graphical calculation view. The below screen opens up, which is an easy drag-and-drop interface to build these graphical calculation views. Firstly, we need some placeholders to hold these tables. Such placeholders are called “Projections”. Drag and drop two of these into the screen as shown here.
These empty projections will look as shown below.
If you click on them and hover your mouse over them, you would get a green
plus symbol as shown below. This helps you add objects to this Projection. A
projection can be a placeholder for tables and other views as well.
On clicking the green plus symbol, you will get a window where you can find
the table you will add to this Projection. On the search area, write the name
of your table which in our case is VBAK. As seen below, the system provides
the list of all database objects which have VBAK in their name. We are looking
for the table VBAK under ECC_DATA schema which will be represented by
VBAK (ECC_DATA) as shown below. When you find the table you are looking
for, select it and press OK.
As seen below, the projection is no longer empty and houses the table VBAK. Click on the projection to open up the Details pane on the right side. Here
you will see all fields associated with this table. In front of each field is a grey
circle which allows you to select it for output from this projection node. This
means that if you require the field VBELN, from this projection, you will need
to click on that grey circle and it becomes orange which means that it is now
selected. Now, select a few fields from this table. Note that if you are using a
table from an SAP source, always select the field MANDT if it is available. The
field MANDT indicates that the table is client-dependent. I have explained what
“client” means in terms of SAP tables in a tutorial for a different section. You
can check it out here.
As explained above, the fields I selected have an orange circle in front of them.
That’s all we need to do here in projection 1. There are other options on the
right most side of the screen like Filters and more but we will come to those
in the following parts of this tutorial.
At this point, these are two individual tables floating in space with no
interaction or relation whatsoever. Let’s change that. Bring in a Join block by
dragging it into the space from the left menu as shown below.
Now you have a floating JOIN block. You need to tell it what it needs to join.
As explained earlier, we want to do a left outer join between VBAK and VBAP.
See those little circles on top of the projection nodes, the bottom circle is an
input connector and the top circle is an output connector. Drag and drop the
Output connector from projection 1 into the input node of Join 1 as shown
below. Do the same thing for Projection 2. This means that you have taken
the output of those projections and are using them as an input to the join
block. The input connector that is dropped first into the join becomes the left
part of the join and the second one becomes the right. The join connector only
takes two inputs. So if we had a third table to join, you would need another
Join block.
The join block now houses the two projections.
Click on the Join 1 node to open up the join mappings on the right hand side
as seen below. Here, again select the fields that you want to send to the next
level and, as you see, the ones I selected are orange. Actually, I selected all of them; the grey ones are not selected because they are duplicates, i.e. they exist in both tables, so we need them only once. Drag and drop the fields
from one table to the other based on which you wish to join the two tables.
This creates a linking line between the two as shown below. In this case, we
join using the MANDT (Client) and VBELN (Sales Document Number) as the
join conditions. Also, our requirement was to do a left outer join and all join
nodes provide inner join by default. To change this, in “Join Type”, click on
where the value is marked as ‘Inner’ as pointed by the green arrow below.
In this Join Type section, this opens up the drop down where you can select
Left Outer Join.
Now the join is complete. As we did earlier, connect the output connector of
the Join block to the input connector of the Aggregation block. After this is
done, click on the aggregation block to bring up the selected field list as shown
below.
Select the fields you require to move to the output.
Once done, click on the Semantics block. This is where you maintain the
overall settings of this graphical calculation view. Go to the “View Properties”
tab. Data Category is CUBE as we selected in the beginning. You can change
it here even at this point. If you are working with tables of SAP source
systems, they most probably will have the MANDT field as I explained earlier.
These tables are cross client and in those cases, change the Default client
setting to “Cross Client” instead of “Session Client” as shown by the green
arrow below. Also on the Execute In drop down, select “SQL Engine” for best
performance.
Press the data preview button marked below to see if our view brings up any
data.
Move to the raw data tab pointed by the red arrow. You would see the data
preview of the first 200 records of this successful join. If you wish to display
more, change the max rows setting pointed by the blue arrow and press
execute to refresh the data again.
Also, you would see your first view inside your package as well. Well done!
Pat yourself on the back. You are well on your way to be the HANA expert this
world so desperately desires.
In the next tutorials, we will try out further features of the graphical calculation view.
Happy learning!
1. Constant Filters
2. Variables
3. Input Parameters
All three of them have varied usages which we will try to understand in this
and the further tutorials.
To begin, double click and open our SALES_VIEW from the previous tutorial.
And then click on the Projection_1 node to open it up on the right pane.
This is the simplest filter condition. Here, the requirement provides constant
values for which the filters must be applied. For example, let’s say we only
need data from table VBAK where its field VBTYP is equal to the value ‘C’.
This is a clear constant value which has to be directly applied and once applied,
the same filter will run every time. No matter which user does the data
preview, this filter would run and the user would have no influence on this
value.
To achieve this filter condition as per our example, right click on the VBTYP
field and select ‘Apply Filter’ from the context menu.
Here, you can choose the operator to be applied – Filter when something is
equal to some value or not equal to it, or between a range of values and so
on. The options here cover every form of constant filter you may need.
In this example, we need the value to be equal to C and hence we choose the
below values
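For reference, this constant filter corresponds to the kind of WHERE clause you would write in plain SQL – a sketch, not necessarily what the tool generates internally:

SELECT * FROM "ECC_DATA"."VBAK" WHERE "VBTYP" = 'C';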
Once you press OK, notice that the yellow-orange filter symbol appears on
Projection_1 confirming that our effort was successful.
Activate the view (1) and then do a data preview (2) to check the resultant
data using the buttons marked below.
As seen from the data preview, the VBTYP is completely filled with value ‘C’
but this can be misleading since data preview usually brings up 200 records
only and the overall data set may be much larger. To check how many unique
values exist for VBTYP field, let’s switch to the Distinct values tab as marked
by the arrow below.
Drag the VBTYP field on to the right pane and you would see the number of
unique values this dataset brings. As seen here, there are 11,187 result rows
and all of them have a value of C. Thus our constant filter works fine.
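For the curious, you can run the same cross-check directly in the SQL console; the graphical view simply pushes this WHERE clause down to the projection node. A sketch:

-- Constant filter plus its distinct-value cross-check in plain SQL
SELECT COUNT(*) AS RESULT_ROWS,
       COUNT(DISTINCT "VBTYP") AS DISTINCT_VBTYP
FROM "VBAK"
WHERE "VBTYP" = 'C';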
Now, try to apply a filter in the Join node as shown below. Notice that there is
no ‘Apply filter’ option. This is just to emphasize the fact that only Projections
allow filters. If you need a filter on a join result, you would need to add another
projection after the join and filter it there.
I hope this was easy to understand since it was a fairly straightforward topic. In the next tutorial, we take a look at the next type of filter – SAP HANA Variables in graphical views.
If you liked the tutorial, please share it on social media using the share buttons
below and make sure you subscribe to our newsletter to get the latest updates
when new tutorials are added.
Filters:Type 2- SAP HANA
Variables
Welcome to the second part of the graphical view filters tutorial. In this one,
we try to understand how SAP HANA Variables can help us create dynamic
filters to allow more flexibility in development.
If a dynamic filter needs to be applied after all results have been calculated (a top-level filter), SAP HANA variables are the answer. By dynamic filter, I
mean that when executed, this view would ask the user for filter values. In
this case, the executing user has full control of the filter being applied.
To create a variable, head to the Semantics node and click on the plus button next to the Variables section. The new window pops up as seen below. Here, you need to provide a name, a label ( = description), and the attribute name on which the filter needs to be applied.
Under Selection Type, you can choose, for example:
• Single Value – to filter and view data based on a single attribute value
You can specify whether single or multiple values are allowed for the variable using the Multiple Entries checkbox, and you can also mark it as mandatory. You can set a default value for the variable – as a constant or as an expression – to be considered if no value is provided at run-time.
Let's say we require that the user should be able to provide multiple values for the VKORG field that comes from table VBAK. As seen below, if a field has not been selected from the table, it will not be available to act as an attribute for creating a new variable. This happens because SAP HANA variables are created at the top level of the view, and at the top, only selected fields exist.
This is not a big problem. Let’s go and add VKORG to the flow. As you saw in
the previous tutorials, we need to add this field in the lowest node and keep
selecting it in every node as we go up the flow for it to reach the final node.
To do this quickly, we use a shortcut. Right click on VKORG field in the lowest
node and select “Propagate to Semantics” from the context menu. It adds it
to all the layers above it in a single shot (I wish finishing this website was this
easy … ).
You will get an information message stating the nodes where the field has
been added. Press OK.
Save and activate this view. Now, come back to the Semantics node and click
on the plus button to try creating the variable again as we did a few moments
ago.
Fill up the values as I did below. I named the variable V_VKORG (The naming
convention might vary in your project). I used the same description. Attribute
is the field on which we need the filter applied. In this case, we choose VKORG
as the attribute. In the selection type, we choose single. This means that the
input would be in single value(s) and not in ranges or intervals.
I have also marked “Multiple entries” meaning that the user can enter multiple
single values while executing the view. “Is Mandatory” being enabled means
that the user has to enter a value to be able to run this view. Leaving the field
blank won’t be an option.
Also, when the selection screen pops up asking the user to enter a value for
VKORG, we want it to show the value 1000 by default which the user can
choose to change. Such values can be set in the Default value field. Press OK after filling the required values.
As seen below, a new variable has been created in the Semantics node now.
Activate the view and then do a data preview.
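Conceptually, a variable does not change the view logic at all; it becomes a WHERE clause that the consuming tool appends on top of the finished result. A sketch of what effectively gets executed (the package path here is illustrative, not your actual one):

-- Variable V_VKORG applied as a top-level filter on the activated view
SELECT *
FROM "_SYS_BIC"."<your_package>/SALES_VIEW"
WHERE "VKORG" IN ('1000', '2000');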
Thank you for reading this tutorial. Read the next tutorial to understand Input
Parameters – an even more powerful dynamic input feature.
Please help this website grow by sharing the document on social media using
the share buttons below and subscribe to our newsletter for the latest updates.
Filters:Type 3- SAP HANA Input
Parameters
Input parameters are the magical underappreciated providers of flexibility in
SAP HANA. They are somewhat similar to variables but are available at every node in the view. Input parameters are of great importance and have multiple applications, but here we learn how to use them as dynamic filters at the lowest node.
First things first – let's add one more field, GRUPP, to the view by propagating it to semantics and then activating the view again.
Now, single click on the VBAK node if you haven't already, and then on the right side you would see a folder for Input Parameters as shown below. Right click on it and select "New" to create a new one.
The below window appears which is similar to our variable creation screen.
Give it a name and a description.
Notice that there is no attribute to bind this value to here. Input parameters can be created independent of fields, unlike variables. Picture them as a value that can be used anywhere; this value, being dynamic, can be entered at run-time by the user. Provide a datatype and length for this input parameter. Press OK.
Notice that an input parameter has been created and appears in the folder as
shown below. Now, double click on the “Expression” under filters.
This opens up the expression editor. You can see here that the static filter we applied earlier also appears. We created it by simply right clicking the VBTYP field and assigning the filter value = 'C' there, and the system auto-created the corresponding filter expression code in here. Notice that it is greyed out because there has been no manual coding of filters yet. But curious developers like us have to explore the options that lie ahead! Press the Edit button marked below.
HANA throws us a warning that from now on, filters for this projection can only be maintained as expression code. This means that the right click + apply filter functionality will no longer work in this node. Every time you need a filter (even a static one), you would have to add a small bit of code here. We do this because input parameters can only be added as filters via the expression editor. Press OK to move ahead.
This opens up the expression editor for editing. Place an AND operator after the existing filter to tell HANA that you are adding a second condition here. After the AND, add an opening bracket and double click on the GRUPP field as shown below.
This adds the GRUPP field into the expression editor. Now, we need to tell
HANA that the filter is GRUPP = The input parameter. Place an equal sign and
double click on the parameter name as shown below.
Close the bracket after the expression has been written as shown. Notice that whenever it is used in code, an input parameter is always wrapped in single quotes, with double dollar signs on either side of its name.
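Going by the steps above, the finished filter expression should read something like this (the exact bracketing of the auto-generated part may differ slightly in your editor):

("VBTYP" = 'C') AND ("GRUPP" = '$$P_GRUPP$$')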
Press OK, save and activate the view. Click on data preview to bring up the
selection screen where along with the V_VKORG variable, you now have the
P_GRUPP input parameter. In this case, I provide it the value 1200.
The data preview opens up and it looks like GRUPP only contains the value 1200.
Just to be sure, let’s check the “Distinct Values” tab. Dragging and dropping
GRUPP here confirms that the only values it has are all ‘1200’.
There are also further usages of Input parameters which we would learn as
we progress. One step at a time.
1. Variables apply filter after execution of all nodes till the semantics (at
the top level) whereas Input parameters can apply filters at any
projection level.
2. Variables are bound to attributes/specific fields whereas an input
parameter is independent of any field in the view.
3. Variables have a sole purpose of filtering data whereas filtering is only
one of the reasons to use an input parameter.
Thank you for reading this tutorial. Please help this website grow by sharing
the document on social media using the share buttons below and subscribe to
our newsletter for the latest updates.
Calculated Columns in SAP HANA
Our requirement here is a new field, SALE_TYPE, that concatenates AUART and WAERK separated by a dash. Right click on the "Calculated Columns" folder in Projection_1 and choose "New Calculated Column". After adding the name and datatype as shown below, double click on the AUART field to add it to the "Expression Editor".
It gets added as shown below.
Concatenate more strings to it using the + operator. The Plus (+) operator
works as a string concatenator for string fields and an arithmetic plus for
numerical datatypes like decimal and integers. We needed the AUART and
WAERK to be separated by a dash. Hence after the plus symbol, we add a
dash surrounded by single quotes since it is a character value. Then add
another plus for the next field WAERK. The end result should resemble the
below. Double click WAERK to add it at the end too.
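Following those steps, the finished expression in the editor is simply:

"AUART" + '-' + "WAERK"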
Press OK and save and you would come out of this editor. Now as you see
below, a new Calculated column SALE_TYPE has been created successfully.
Now, as you might have realized, this field is only available in Projection_1. It
needs to be propagated to all nodes above it. To do this, click on the node
above it which is the Join_1 node in our case. Find the field and right click on
it and select “Propagate to Semantics” from the context menu.
You get an information message outlining the nodes to which this field has been propagated. Press OK.
Activate the view and run a data preview. As seen below, our SALE_TYPE field
has appeared and seems to be running as per our calculation logic.
Let’s try another one. This time let’s create a new calculated column at the
“Aggregation Node” for a numerical field- NETWR. The new requirement is to
create a field NORM_VAL which would be equal to NETWR divided by 12
showing up to 2 decimal places. Once again, right click on the “Calculated
Columns” folder and click on “New Calculated Column” from the context menu.
A new window pops up where we fill in the details as before. The name, a description and the datatype need to be filled. Since the value when divided by 12 will surely return decimal values, we mark the Datatype as Decimal with a precision (total length) of 10. A scale of 2 tells the system to only display the result up to 2 decimal places. Also, this time, make sure you
switch the Column type to “Measure” since this is a transaction field. If you
don’t remember what a measure is and how it is different from an attribute,
revisit the Master data vs Transaction data tutorial. In the expression editor,
enter the formula as NETWR/12 as required and press OK.
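In the expression editor syntax, the body of this calculated column is just the division below; the Decimal datatype with scale 2 takes care of the rounding of the display:

"NETWR" / 12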
As seen below, the new field has been created successfully.
Please show your support for this website by sharing these tutorials on social
media by using the share buttons below. Stay tuned for the next ones.
Happy learning!
Restricted column in SAP HANA
Welcome to the next tutorial in this series where we learn how to work with
the concept of Restricted columns in SAP HANA. Just as the name suggests, a
restricted column is a user created column that is restricted in its output by a condition that we specify. For example, we may not want everyone to see all the data in a field; based on a condition, we decide whether this data should be displayed or not. This will become much clearer from our use cases below.
To create one, right click on the "Restricted Columns" folder and choose "New". The below screen pops up demanding a technical name and a description as always in the "Name" and "Label" columns respectively. The Column drop down is used to select the numerical field on which this restriction needs to be applied – which in this case is NETWR. In the Restrictions section, you can choose to apply a simple restriction using an existing field or, if it is a complicated condition, use the expression editor via the radio button.
Conditions for restrictions are not always static. There are requirements where the condition values are to be provided by the user at run-time. Let's take an example where an input parameter specifies the value of AUART for which the values of RESTRICTED_NETWR must be displayed.
To achieve this, first let’s create a new Input parameter to capture the AUART
value. Right click on “Input Parameters” and click on “New”.
Provide the values as shown below.
Now we have a new input parameter P_AUART ready to capture values at run-time.
This time, we need to take the value of AUART from an input parameter, and hence this requires the use of the expression editor. Switch to it using the radio button.
It asks for your confirmation to move even the existing condition into the expression editor. Press OK.
We see that the old condition we had is also now converted into an expression
code. All we now have to do is to replace ‘SO’ by the input parameter. Delete
SO and double click on our P_AUART to add it in.
The expression editor should look like the image below.
Press OK. Then, save and activate the view. Run the data preview and you would get the below value entry screen. Here I fill out the old prompts with some values, as well as the new P_AUART with a value of 'SO'.
As seen from the output below, our dynamic restriction worked perfectly and
RESTRICTED_NETWR only shows values now for the ‘SO’ AUART.
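If it helps, a restricted column behaves conceptually like a CASE expression in SQL: rows failing the condition stay in the output, but the restricted value becomes NULL. A sketch with our input parameter's value substituted in (this is not generated code, just the idea):

-- Conceptual SQL equivalent of RESTRICTED_NETWR
SELECT "AUART", "NETWR",
       CASE WHEN "AUART" = 'SO' THEN "NETWR" ELSE NULL END AS "RESTRICTED_NETWR"
FROM "VBAK";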
Let’s try again for a different AUART field value. This time we fill the value
with ‘TA’ whilst keeping the other values constant.
As seen below, only 'TA' values now display the restricted column. Hence, our condition was successful.
I hope a restricted column is now something easy to create for everyone who read this tutorial. Comment below with your thoughts.
Please show your support for this website by sharing these tutorials on social
media by using the share buttons below. Stay tuned for the next ones.
Happy learning!
Rank node in SAP HANA
Calculation view
Welcome to the next tutorial of this SAP HANA Tutorial series. In this one, we
learn how to rank rows and pick up the highest or lowest ranks according to
a condition using the Rank function in calculation views.
A new table – EMP_SCORE_FINANCE – was created under our 0TEACHMEHANA schema containing the overall evaluation data, as shown below. The data has been fed into this table for each employee and their respective scores each month. Our requirement is to create a view of this data that displays only the top score of each employee along with the date on which the employee took the evaluation.
To achieve this, we would need to build a calculation view which ranks these
rows of data and picks up the highest score for each employee. Fortunately,
in SAP HANA graphical views, we have a rank operator available. Add a Rank node to the flow between the projection and the aggregation. You would now see a rank block inserted in between the two. The placement of these blocks looks messed up though.
Fortunately, your HANA view doesn’t need to look like your room. There’s an
auto arrange button marked below which cleans up the layout and arranges it
in an optimal way.
After pressing this button, you would see that the blocks have been properly
arranged as shown below. Once done appreciating this feature, click on the
Rank node. On the right pane, first select the fields you need to take to the
next level and then come to the bottom section marked in the red area below.
Here the rank configurations would be maintained.
The first setting is the sort direction. Here you specify whether you wish to
sort it with the highest value first or with the lowest value by choosing
Descending or Ascending respectively. Since we need to pick up the maximum
score, we keep this at Descending.
Next, we set the “Order By” field. This is the field we need to sort Descending
(as per our previous setting). In our case, this field is SCORE. We need to sort
SCORE descending to find out the top score.
Next, we need to set the "Partition By" column. This is the field by which the rows are grouped before ranking. We need to partition by EMP_ID so that SCORE is sorted descending within each employee, and the first row for each such employee ID would be his/her top score.
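For reference, this rank configuration corresponds roughly to a SQL window function. In the sketch below I assume the date column is named DATE, as in the later union tutorial; adjust to your actual column name:

-- Top score per employee: partition by EMP_ID, order SCORE descending, keep rank 1
SELECT "EMP_ID", "DATE", "SCORE"
FROM (
  SELECT "EMP_ID", "DATE", "SCORE",
         ROW_NUMBER() OVER (PARTITION BY "EMP_ID" ORDER BY "SCORE" DESC) AS "RN"
  FROM "0TEACHMEHANA"."EMP_SCORE_FINANCE"
) AS RANKED
WHERE "RN" = 1;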
Once done, save and activate the view. Now, check the data preview to
confirm the results. As seen below, the view has ranked and picked up only
the top score of each employee and also the date on which they achieved
this feat. Congratulations employee 1003!
Make sure to share this content on social media using the share buttons below
to show your support to this website and to keep it alive.
If you feel the website helped you, please also contribute any small amount
by using the “Donate button” on the right side of this page to help with the
operational costs of this website and other systems involved.
Click on the aggregation node and then click on the SCORE measure. At the
bottom, now the properties section opens up. At the far bottom, you find an
Aggregation setting. This is by default set on to SUM as explained earlier.
Thus, whenever values aggregate, they add up according to this default
setting.
Aggregation Types
Let’s channel our curiosity and try to switch this setting so that we get the
average of all available SCORE values for each employee. All the available
values for aggregation types are as shown below. The common ones are
COUNT, MIN, MAX, AVG which are used to find the count, minimum value,
maximum value and average value of measures respectively. VAR and STDDEV are variance and standard deviation, used for advanced statistical analysis in rarer cases.
Let’s set this value to Avg (Average) as per our requirement.
Save and activate this view. Now, execute a data preview. As seen below, an
average value has been displayed in the output. Since SCORE was an INTEGER
datatype field, no decimals were retained in the output.
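In SQL terms, this aggregation setting is a sketch like the one below; note the cast hint, since averaging an INTEGER column drops the decimals just as we saw in the preview:

-- AVG aggregation per employee; TO_DECIMAL keeps the decimal places
SELECT "EMP_ID",
       AVG(TO_DECIMAL("SCORE", 10, 2)) AS "AVG_SCORE"
FROM "0TEACHMEHANA"."EMP_SCORE_FINANCE"
GROUP BY "EMP_ID";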
Aggregating measures isn't the only trick the aggregation node knows. It also helps remove duplicates: when two rows in the incoming data set contain the same data, the aggregation node works as a duplicate row remover. For example, from Projection_1 let's disable all fields except EMP_ID.
In the aggregation node also, let’s select this field as shown below.
Save and activate the view. It may throw the below error if you have been
following the same steps I’ve been doing. This error says that “No measure
defined in a reporting enabled view”. This is due to the fact that the data
category chosen while creating this view was CUBE, and it now has no measure since we have taken the SCORE field out of the output.
To fix this, go to the semantics node and under the “View Properties” tab,
switch the data category to DIMENSION as shown below.
Save and activate this view now.
Before we do a data preview on the entire view, let’s data preview the output
of the first Projection node. To do this, right click on the projection and click
on Data preview as shown below.
As you can see below, projection_1 supplies a lot of duplicates to the next
level.
Now go back and run a data preview on the entire view. You should get the
below result. All the duplicate values have been removed.
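In plain SQL, an aggregation node with only EMP_ID selected behaves like a DISTINCT, as this sketch shows:

-- Duplicate removal: equivalently, GROUP BY "EMP_ID" with no measures
SELECT DISTINCT "EMP_ID"
FROM "0TEACHMEHANA"."EMP_SCORE_FINANCE";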
Assume we have a data set listing the products sold by the North American division of a company:
Product
Shoes
Bags
Gloves
Now assume that we have another data set for the South American division of the same company:
Product
Shoes
Caps
To get the entire product portfolio of the company in the American region, the operation required between these two data sets would be a UNION. This is because a UNION vertically combines data sets, like piling one stack of potatoes on top of another, and removes any duplicates in the process. The result would be as shown in the below table.
Product
Shoes
Bags
Gloves
Caps
Quite simple, isn't it? But there is another type of union operator that we can use – UNION ALL. Applying a UNION ALL to the same two data sets returns the result below.
Product
Shoes
Bags
Gloves
Shoes
Caps
You might have noticed by now that UNION ALL did not remove the duplicates and 'Shoes' was repeated twice in the data set, which in this case wasn't a desirable result.
But why would one still use something like a UNION ALL? Excellent question.
In cases where there is absolutely no doubt that the merging data sets have
distinct values, it’s better to use a UNION ALL so that there is no time wasted
by the system in sorting and deleting duplicates from this final data set. For
performance reasons, UNION ALL is a true gem of an option and should be
used wherever possible.
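Here is the difference in plain SQL, using hypothetical single-column tables PRODUCTS_NA and PRODUCTS_SA for the two divisions:

SELECT "PRODUCT" FROM "PRODUCTS_NA"
UNION
SELECT "PRODUCT" FROM "PRODUCTS_SA";
-- returns Shoes, Bags, Gloves, Caps (duplicates removed)

SELECT "PRODUCT" FROM "PRODUCTS_NA"
UNION ALL
SELECT "PRODUCT" FROM "PRODUCTS_SA";
-- returns Shoes, Bags, Gloves, Shoes, Caps (duplicates kept; no sort/dedup cost)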
Thank you for reading this tutorial on UNIONs and if you liked it, please show
your support by sharing this document across social media by pressing the
share buttons below and also don’t forget to subscribe to our newsletter for
alerts on new tutorials that are added regularly.
Happy Learning!
The funny fact about this UNION node here is that it doesn't work like a UNION. It works as a UNION ALL operator. This means that it keeps piling one data set below the next without removing duplicates or aggregating them.
Let's take an example to see how a UNION works. For the last few tutorials, we have been working on the EMPLOYEE view, which holds the scores of a test that employees of the finance department of a company took on the first day of every month from Jan-16 to May-16.
The below output is the data preview of the view when only this table is
involved with no further logic.
Now, the company tells you – the developer, that this view should provide
data for the same test done in the marketing department as well. This means
that you would need to incorporate the table that would provide this data into
this view.
Drag another Projection node out which would hold the new table.
You can also drag tables into these projection nodes instead of right clicking
the projection and searching for them by names. This is faster and at times
much more emotionally satisfying.
As seen below, the Marketing score table has been successfully added to
Projection_2.
Now, you need to decide on what operator you should use so that the two data sets combine into one larger data set. As discussed, this is a job for the Union node, so add one on the connection between Projection_1 and the Aggregation so that it gets inserted in between them. It asks for your confirmation before doing so. Press "Yes" to continue.
Connect the marketing data projection node to the input of the union as well, as displayed below marked by the green arrow.
Click auto-arrange to clean the layout up. The button is marked by the red
arrow below.
The layout gets sorted out as shown below. Now click on Projection_2 and select all the fields for output.
Click on the UNION node and you would realize that both the Projection nodes
are displayed here on the source side and the target side contains the output
structure of this node. Since we added this node between projection_1 and
aggregation, all fields of this projection are auto-mapped to the output.
Expand the little blue plus buttons on each projection (marked by the red arrows below) to get a better view of their fields.
As explained earlier, all fields of projection_1 are mapped to output. The
source fields of Projection_2 now need to be mapped to their corresponding
targets.
Drag and drop the EMP_ID field from the source to the corresponding target.
Similarly, map the DATE and SCORE fields. Your completed mappings would
look as shown below.
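What the fully mapped UNION node produces is, in effect, the SQL below. The finance table name comes from the earlier tutorial; the marketing table name is my assumption, so substitute yours:

SELECT "EMP_ID", "DATE", "SCORE"
FROM "0TEACHMEHANA"."EMP_SCORE_FINANCE"
UNION ALL
SELECT "EMP_ID", "DATE", "SCORE"
FROM "0TEACHMEHANA"."EMP_SCORE_MARKETING";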
Happy learning.
Legacy: SAP HANA Attribute view
Welcome folks to the Attribute View in SAP HANA tutorial. In this tutorial, we learn the second type of view that can be created – the attribute view. We have been creating calculation views till now.
Attribute views are built specifically for master data. Thus, there will not be
any measures in the output for this view. Their primary purpose was to
maintain a reusable pool of master data which could then be combined with
transaction data in Calculation views or Analytical views (which you will learn
in the next tutorial). Thus you could have a master data attribute view called MATERIAL which combines all material related tables and also creates new calculated fields. This newly created view could then be used in any other view where fields from MATERIAL are required, thereby avoiding the rework of joining the same tables every single time.
The below screen opens up. Notice that the semantics is not connected to an Aggregation node here. It is connected to a "Data Foundation". A "Data Foundation" is a node which you cannot remove. It is purely used to include tables in the view. You can use a single table here or add more by specifying the join condition inside this "Data Foundation"; there is no separate JOIN node that you can insert here. You cannot insert views into the "Data Foundation". It only accepts tables.
Let’s drag the EMP_NAMES table into the “Data Foundation” as shown below.
Once it is there, click on the “Data Foundation” node and the field selection
“Details” section would open up on the right as shown below. Select all the
fields by the same method as always- Clicking on the grey buttons to make
them orange.
As seen below, the fields have been enabled successfully.
An attribute view, unlike the other views, requires at least one field to be
declared as the “Key” field – which is a field that has a unique value in each
row and thus isn’t repeated.
Click on the field which you wish to enable as Key under the Columns folder.
In this case, we single click on EMP_ID as marked below. This opens up the
properties section below it. Here, change the Key configuration from “False”
to “True” by clicking on it.
Drop down the choices and select True.
As seen below, the key has been enabled.
Our view is done, but let’s do something more. We have already created
Calculated Columns for Calculation view. The same process applies here if you
need to create one. To add a new field, FULL_NAME as a concatenation result
of fields FIRST_NAME AND LAST_NAME, right click on the “Calculated
Columns” folder and select “New”.
The familiar window opens up asking for the name and datatype of this new
field.
After providing the details, it should look as below. First name and last name concatenate using the Plus (+) operator, with a blank space in between them to keep the words separate.
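So the expression body is simply:

"FIRST_NAME" + ' ' + "LAST_NAME"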
As seen below, the selected fields as well as the calculated field are now
available in output.
Let's assume another scenario where the view also requires the employee's age, email and phone number. The table we have in this view currently doesn't have these fields. But if we could join EMP_NAMES with the EMP_MASTER table, we would have all the fields we require, so drag EMP_MASTER into the "Data Foundation" as well.
Now, we have 2 tables here. They are not joined yet since there is no link defined between them. Also, the fields we require aren't yet selected from EMP_MASTER.
First we connect EMP_ID from each table to the other. This completes the
JOIN. Click on this linking line as shown below to open up the JOIN properties.
The default JOIN type is “Referential”. If you need to refresh your memory on
the different JOIN types – revisit the tutorial on this by clicking here.
Opening up the JOIN type setting provides a list of available options. Switch
it to LEFT OUTER JOIN.
Once this is done, enable the required fields as per your requirement to send
them to the next level.
Save and activate this view. Now execute a data preview to check the data.
As seen below, the JOIN is successful and data appears as required.
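For reference, the data foundation we just built behaves roughly like the SQL below. Note that plain SQL uses || for concatenation where the expression editor uses +; the AGE, EMAIL and PHONE column names are my assumptions:

-- Rough SQL equivalent of this attribute view's data foundation join
SELECT n."EMP_ID",
       n."FIRST_NAME" || ' ' || n."LAST_NAME" AS "FULL_NAME",
       m."AGE", m."EMAIL", m."PHONE"
FROM "EMP_NAMES" AS n
LEFT OUTER JOIN "EMP_MASTER" AS m
  ON n."EMP_ID" = m."EMP_ID";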
This ends our tutorial on attribute views. I hope this was helpful.
Share, subscribe and comment on this tutorial to help others reach this
website.
You are doing well. Keep going… almost there.
Legacy: SAP HANA Analytic view
Note: Analytic views, just like attribute views, are no longer recommended for use in development. Calculation views with Data Category "CUBE" should be used instead; thus, this tutorial carries a "Legacy" title. The only reason to still include it is that some projects still have analytic views passed on from their old developments. The only case where you should create a new analytic view is when, due to some performance issue, someone from the SAP support team recommends that you create one (which is, again, a rare scenario).
As in the attribute view, the data foundation only accepts tables. No views can
be added here to the join. But, in an analytic view, there must be exactly one
central transaction table. This means that you can add more transaction tables
to the join provided they only supply attributes and all of their measure fields
are disabled. Usually, there is only one transaction table and other master
data tables. The result of this join passes on to the Star join.
The star join contains the data foundation already. It also accepts attribute
views but no individual tables can be added here. It is called a star join because an analytic view is basically a star schema structure in itself – a central transactional data table surrounded by master data.
To start, let’s build a view with fields employee ID, country and salary coming
in from EMP_SALARY and also the field first name from EMP_NAMES table.
The two tables now become available. The next step is to enable the fields we
need.
Once the fields are enabled as below, the join link needs to be built based on
the join condition.
The employee ID field connects both tables, and hence we connect those two fields by drag and drop. This establishes a referential join link between these tables by default. Click on the linking line between the two tables to enable the join properties window on the bottom right. Switch the join type as required – in this case, to a left outer join with the transaction table on the left.
Save and activate this view. Execute a data preview when done.
The data preview returns the below data as required. Thus our first analytic
view has been successfully constructed.
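As a sketch, this analytic view computes roughly the SQL below; the COUNTRY and SALARY column names are assumptions based on the fields described above:

-- Rough SQL equivalent of the analytic view's star join
SELECT s."EMP_ID", s."COUNTRY", s."SALARY", n."FIRST_NAME"
FROM "EMP_SALARY" AS s
LEFT OUTER JOIN "EMP_NAMES" AS n
  ON s."EMP_ID" = n."EMP_ID";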
Let's now use the attribute view we created earlier in the star join instead of a plain table join. Firstly, remove the EMP_NAMES table from the data foundation by right clicking on the table and choosing Remove from the context menu.
The below warning pops up, telling you that the further data flow for the fields of this table would also be removed. Press Yes to confirm.
Drag the attribute view from the previous tutorial into the Star join node. Click on the Star join node and you would see that all the fields from this view are auto-enabled by default to the output.
Disable the fields you don’t need and also enable a join between them as we
did earlier.
Save and activate this view. Execute a data preview to confirm that the
analytic view works perfectly as desired.
This ends our tutorial on analytic views. I hope this was helpful.