BODS (Data Services)
Hi, today we will load data from an ECC table into HANA using BO Data Services.
Steps:
Step 1: Create two datastores in BODS, one for HANA and one for ECC (HANA port: 30015).
Step 2: Click the Repository Metadata radio button and search for VBAP in the external metadata.
Step 3: Right-click the table > Properties > View Data tab > check whether it displays data.
Step 4: Double-click the job ecc_job2 and drag and drop the dataflow button from the right-side button bar.
Step 5: In the dataflow, we will map our table to a Query transform and a template table. The template table will correspond to the target table in HANA.
1. Drag and drop the VBAP table icon you just created in the ECC datastore 'vikg_ecc'.
2. Then drop a Query transform and a template table from the right-side menu.
3. On the template table: give your table name ECC_BODS_VBAP, give your HANA datastore 'vikg_hana1' (which we created earlier in the datastore area), and give the owner as your HANA schema name: vikasg. The table will be created in this schema.
4. Double-click the Query icon: select the required fields in the table and bring them to the to-be output schema on the right side. The Query-to-HANA field mapping will happen automatically; if not, you need to do it manually.
If you click the magnifier button on the table VBAP, it will preview the table data at the bottom. As you can see, the Query contains the fields, with the primary key fields VBELN and POSNR.
If you click the template table, it will show the fields you selected in the Query. This is the same table name and schema we gave in the template table settings in BODS. The schema existed in HANA beforehand, but the table is created at run time, after the job has completed successfully.
Check the fields and the data types of the fields: note how VBELN, POSNR, and NETWR have the data types VARCHAR, NUMERIC, and DECIMAL in the Query transform.
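If you want to double-check the result on the HANA side, a quick query against the TABLE_COLUMNS system view shows the generated columns. This is a sketch using the schema and table names from this walkthrough, not part of the original steps:

```sql
-- Verify the table created by the BODS job (run in the SQL console)
SELECT COLUMN_NAME, DATA_TYPE_NAME
FROM TABLE_COLUMNS
WHERE SCHEMA_NAME = 'VIKASG'
  AND TABLE_NAME  = 'ECC_BODS_VBAP';

-- And confirm that the job actually moved rows
SELECT COUNT(*) FROM "VIKASG"."ECC_BODS_VBAP";
```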
Similarly, you can upload the sales header (VBAK) data by creating a new dataflow (df01) and a new batch job (ecc_job) in the same project.
In the HANA system you can see we created the ECC_DS_HANA_VBAK table below the ECC_BODS_VBAP table.
Note: when you run a particular job, you have to select the dependent dataflow.
Hope you find it useful.
Best regards,
Vikas
Business ByDesign comes with basic “out of the box” integration scenarios to SAP ECC. This includes for
example master data integration for material and customer account data via IDOC XML, as described for
example under the following link:
http://help.sap.com/saphelp_byd1502/en/PUBLISHING/IntegrationScenarios.html#MasterDataIntegrationwithSAPERP
While this enables very quick adoption of running integration scenarios, the lack of scenario extensibility can be a drawback when larger data sets need to be integrated.
This blog describes step by step, how to realize an alternative approach for master data replication based
on an ECC IDOC and a ByD A2X Service. The IDOC is sent from SAP ECC as an XML / SOAP message to
Hana Cloud Integration (HCI). HCI maps the IDOC XML to the ByD web service XML format and invokes
the A2X service interface.
The pure mapping of data structures is very similar to e.g. standard iFlows for master data replication between SAP ECC and SAP C4C. Specific care, however, needs to be taken with the web service response handling, due to the synchronous nature of A2X services. Adding additional field mappings to the project is then quite simple.
The material master replication based on the MATMAS05 IDOC type serves as an example for the implementation of this communication pattern.
The Eclipse project files that are required to run the example scenario are attached to this blog. The files
need to be unzipped first, and then renamed from .txt to .zip. The resulting zip files can be imported as
HCI Integration Projects into Eclipse.
To obtain a web service endpoint that can be invoked from an external system (in this case from HCI), a
Communication Arrangement needs to be created in the ByD system (WoC view: Application and User
Management –> Communication Arrangements). This in turn requires a Communication Scenario to be
created, which contains the Manage Material In service, and a Communication System instance, which
represents the calling system.
The WSDL file of the service endpoint can then be downloaded from the Communication Arrangement
screen:
ALE Scenario Configuration and WSDL File for IDOC XML Message
Standard ALE Configuration is required to enable sending of IDOCs for material master records from ECC.
This includes the following entities:
The ALE Port has to be of type XML HTTP, see transaction WE21:
The HTTP connection has to be of type “G” – “HTTP Connection to external Server”, see transaction
SM59:
Here, the host name of the HCI worker node, and the URL path as configured in the HCI iFlow (see
below), have to be maintained. The URL path has to have the prefix /cxf/. SSL needs to be active for the
connection.
The WSDL file for the IDOC basic type (MATMAS05) or a related extension type can be created with ABAP
report SRT_IDOC_WSDL_NS directly in the ECC system. The report program is provided with SAP Note
1728487.
The Eclipse plugin for development of Hana Cloud Integration content needs to be installed, as
documented for example under the following link:
https://tools.hana.ondemand.com/#hci
Then an Eclipse HCI project of type “Integration Flow” can be created via the project wizard:
iFlow definition
Message Mapping definition
WSDL files that describe the two XML signatures to be mapped
The following picture proposes an iFlow design to support the communication pattern that is discussed
here:
1. Sender System
Specifies the credentials to be used for service invocation on HCI (basic authentication vs. client certificate based authentication).
Specifies the protocol (plain SOAP), and the endpoint URL path for service invocation on HCI. This has to match the URL path maintained in the SM59 connection on ECC (see above).
The plain SOAP protocol is chosen instead of the IDOC SOAP protocol to be able to influence the response
that is sent back to ECC based on the response received from ByD.
3. Content Modifier A
Reads the IDOC number from the IDOC XML payload through an XPath expression and stores it as a
Property in the context of the iFlow execution.
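As a rough illustration of what this step does, here is the same extraction with Python's standard library. The sample payload is simplified; in a real MATMAS05 message the DOCNUM sits in the EDI_DC40 control record, and HCI evaluates the XPath itself:

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for the IDOC XML payload (structure simplified)
idoc_xml = """
<MATMAS05>
  <IDOC>
    <EDI_DC40>
      <DOCNUM>0000000000123456</DOCNUM>
    </EDI_DC40>
  </IDOC>
</MATMAS05>
"""

root = ET.fromstring(idoc_xml)
# Equivalent of the iFlow's XPath //DOCNUM, whose result is stored
# as a property for later processing steps
docnum = root.findtext(".//DOCNUM")
print(docnum)  # 0000000000123456
```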
4. Message Mapping
Defines the mapping of the IDOC XML structure to the A2X service XML structure. The two corresponding
WSDL files that have been extracted as described above, have to be uploaded to the message mapping
for this purpose:
The mapping definition itself can mostly be done by dragging source elements to target elements in the
graphical mapping editor.
The mapping definition that is part of the project file attached to this blog contains all element mappings
provided by the standard material master integration scenario, as described under the following link:
http://help.sap.com/saphelp_byd1502/en/ktp/Software-Components/01200615320100003517/BC_FP30/ERP_Integration/Subsidiary_Mapping_Info/WI_MD_Mapping_Info.html
In addition, the gross weight is mapped to the quantity characteristic of the ByD material, which is a
frequently requested feature.
Is used to handle the response of the synchronous web service call and transfer it to subsequent
processing steps of the iFlow.
In this case, basic authentication with user and password is used. The credentials object is deployed to
the tenant (see “Deployed Artifacts”).
7. Receiver System
Defines the route to be chosen for further processing, depending on the web service response content.
The “Error” route is chosen if the ByD response contains a log item with severity code 3 (= error), which is
the case if an application error has occurred. This information is extracted from the response payload
through an XPath expression. Technical errors (e.g. due to wrong message payload syntax) are returned
as SOAP faults with HTTP code 500. Such errors are directly handed back to the sender (i.e. here the ECC
system) by the iFlow processing logic.
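The severity-code check behind the "Error" route can be prototyped outside HCI. Here is a rough Python sketch of the same test; the sample payload is a simplified placeholder for the real ByD confirmation message, whose elements may be namespaced differently:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for the ByD web service response
response = """
<n0:MaterialBundleMaintainConfirmation_sync
    xmlns:n0="http://sap.com/xi/SAPGlobal20/Global">
  <Log>
    <MaximumLogItemSeverityCode>3</MaximumLogItemSeverityCode>
  </Log>
</n0:MaterialBundleMaintainConfirmation_sync>
"""

root = ET.fromstring(response)
# Equivalent of the router's XPath on Log/MaximumLogItemSeverityCode
severity = root.findtext("Log/MaximumLogItemSeverityCode")
is_error = severity == "3"  # take the "Error" route if true
print(is_error)
```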
9. Content Modifier B
Assembles the IDOC XML response message in the success case. The IDOC ID (DOCNUM) in the sender
system is read from the property, which has been set by Content Modifier A for this purpose.
Assembles an XML response message for the error case. The message contains the IDOC number in the
ECC sender system and error information as received in the web service response issued by the ByD
tenant.
The IDOC number is read from the iFlow properties and written into a header variable of the content
modifier entity, since the subsequent storage operation can only access header variables to fill the Entry
ID, but no property variables.
The error text is again extracted from the web service response payload through an XPath expression.
Persists the error response message in the tenant’s data store. It is saved with the IDOC number as Entry
ID.
12. Final “Channel”
Indicates that a response is sent back to ECC, but contains no additionally relevant information.
Value mappings need to be defined in a separate Eclipse HCI Integration Project of type "Value Mapping".
The value mappings are then provided in the automatically generated file “value_mapping.xml”.
For the example project that is developed here, a value mapping is required only to map the ECC Material
Group ID (MARA-MATKL) to a ByD Product Category ID. The respective mapping XML entry looks as
follows:
…and can be accessed from the message mapping definition as follows:
An application error on ByD application side can be caused e.g. by changing the value mapping from ECC
Material Group Code to ByD Product Category to obtain a target value that doesn’t exist in the ByD
system.
Based on the iFlow design that is discussed here, the IDOC processing will go into an error status on the
ECC sender side in such a case. To obtain the error description, there are two options:
1. In ECC, transaction SRTUTIL, display the failed service call in the error log. The response message
payload contains the error information as assembled by the iFlow in the “Error” route:
2. Access the tenant data store on Eclipse, and download the message that has been written into the data
store:
To be able to download entries from the data store, the user must have the “ESBDataStore.readPayload”
permission, which is part of the “Business Expert” user role.
6 Comments
Knut Heusermann
Excellent kick-start into ByD integration scenarios using HCI. Already recommended your article 3
times 🙂
Former Member
Nice blog.
Former Member
Hi Stefen,
How do I handle the ByD response? I have used a Router after the Request-Reply step. I am trying to capture the error in the Router's error step by using the XML condition expression
= /n0:MaterialBundleMaintainConfirmation_sync_V1/Log/MaximumLogItemSeverityCode=3
Error:
Akif Farhan
Hi
I am facing one issue here. I am using the standard B2B integration scenario of ByD with SAP ERP (S4HANA) and I am getting the following error:
SOAP:1.023 SRT: Processing error in Internet Communication Framework: ("ICF Error when receiving the response: ICM_HTTP_SSL_ERROR")
Clearly it looks like an SSL certificate error. I have uploaded the certificate from the Communication Arrangement to SAP ERP and also uploaded the certificate from SAP ERP to the Communication Arrangement, but it looks like something is still wrong with the certificates. Can you briefly advise which certificates are mandatory for this integration and how to fix this error? Thank you very much.
Loading data from flat file into SAP HANA DB – The ever simplest way with SPS04
Hi Everyone,
In this document, I would like to explain how the data can be uploaded into HANA DB through the
“Data From Local File” option which is a new feature of HANA Studio Revision 28 (SPS04).
So far, uploading data into HANA from a flat file has not been a straightforward approach. The user had to create control (.ctl) files, place the flat file along with the control file in a server location, and execute SQL scripts to import the data. Alternatively, some users would have used SAP Information Composer to achieve this.
Instant Solution:
Now everything is made simple through the "Data From Local File" import option. Through this option one can upload data from xls, xlsx, and csv files.
How to:
3. Select the target system where you want to import the data and click Next
4. Specify the required details in the following dialog:
Source File:
File Details:
Select the field delimiter (comma, semicolon, or colon) and select the worksheet of your file from which you want to import the data. If the file has a single sheet, this option is disabled.
Import all data: With this option, you can either import all the data from the flat file or specify a range of lines (e.g. import the data from line 5 to line 50).
Header row exists: If the flat file has a header row, mention the header row number here. If there is no header row, simply uncheck this checkbox.
Target Table:
New: If you want to create a new table in the HANA DB with the data from the flat file, select this option. Specify the schema name under which you want to create the table, and the name of the table.
Existing: If you want to append the flat file data to an existing table, go for this option.
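For readers who want to see what the "Header row exists" and line-range options amount to, here is a small Python sketch on an in-memory CSV. This is illustrative only; the wizard's real logic is SAP's:

```python
import csv
import io

# A tiny comma-delimited file standing in for the local flat file
data = io.StringIO(
    "ID,NAME\n"
    "1,Alpha\n"
    "2,Beta\n"
    "3,Gamma\n"
)

reader = csv.reader(data)   # field delimiter: comma
rows = list(reader)

header = rows[0]            # "header row exists", row number 1
records = rows[1:3]         # "import from line 2 to line 3" only

print(header)    # ['ID', 'NAME']
print(records)   # [['1', 'Alpha'], ['2', 'Beta']]
```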
5. Click Next
Here the source and target fields are mapped by default. If you want to change this, you can delete a mapping and remap to other columns.
If you want to change the order of the fields (up or down), or insert new fields or delete existing fields, this can be done with the icons in the top right corner.
You can set the Store Type to either Row Store or Column Store through the drop-down option. If the Store Type is set to Column Store, a primary key must be set by selecting the respective Key Column checkbox.
You can enter a description for each target field if you wish.
A preview of the file data is shown in the lower portion of the screen.
6. Click Next
This screen shows how the data will be stored in the target table in HANA: just a preview of the target table.
7. Click Finish
9. Now go to the Navigator pane -> Catalog folder -> the schema you selected as target -> Tables, and look for the table you specified.
10. Do a data preview of this table.
This is the simplest and most straightforward way to load data from a local flat file into SAP HANA.
I hope this feature will be very helpful for end users who want to instantly push data from a local system into the HANA DB without much effort.
Rgds,
Murali
Muralikrishnan E
46 replies
Murali, you have done a great job. HANA is very straightforward, as you said.
Good blog Murali, I like that! I always prefer simple ways as the best practice.
Murali,
Good write-up. Thanks for putting the steps together for SPS04. I am excited and will wait for SPS04.
1. Is there any specific reason why you chose NVARCHAR for the date, other than to quickly show the load feature? What if we have to do date arithmetic in the front end?
2. What is your recommendation for modeling date and time fields for large flat file loads?
Please share your experiences with SPS3 and how it might change in SPS4.
Thanks,
Rama
Muralikrishnan E replied
Hi Rama,
1. Just to show one example, I put it as NVARCHAR. There is no other reason behind that. Actually, during file upload, the system decides the data type by considering a few initial records, but the user is allowed to change it as needed during the upload.
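A simplified sketch of this kind of type inference from a few initial records (the wizard's actual rules are SAP's own and may differ):

```python
def infer_type(samples):
    """Guess a column type from a handful of initial values."""
    def is_int(v):
        try:
            int(v)
            return True
        except ValueError:
            return False

    def is_float(v):
        try:
            float(v)
            return True
        except ValueError:
            return False

    if all(is_int(v) for v in samples):
        return "INTEGER"
    if all(is_float(v) for v in samples):
        return "DECIMAL"
    return "NVARCHAR"  # fall back to a string type

print(infer_type(["1", "2", "3"]))      # INTEGER
print(infer_type(["1.5", "2.0"]))       # DECIMAL
print(infer_type(["20120101", "abc"]))  # NVARCHAR
```

Note the pitfall the wizard shares with this sketch: if only the first few records are sampled, a later value of a different shape fails at load time.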
SPS4 is similar to SPS3, with a few enhancements like flat file upload, calc view variables, the hierarchy creation UI, creation of hierarchies in calculation views, etc.
Actually, there is a released document available for SPS04. If I find it, I will share it with you.
Rgds,
Murali
Thanks Murali!

Best regards
GFRA
Muralikrishnan E replied
Hi,
The data type for the table is determined based on an initial set of records from the input file, not by considering the data types of all the records.
The data load is considerably fast, but not as quick as tools like SLT or Data Services.
When you make sure that the main memory of the server is bigger than 5 times the table size, this export should work properly.
Rgds,
Murali
thanks
Rgds
Gustav
Former Member replied
Hi Murali,
I have exported a table from another schema to the desktop and tried to load it using "Import Data From Local File". Everything went fine, but during activation it throws errors:
batch from Record 4347 to 6519 Failed: For input string: "20090327145007.7"
at java.lang.NumberFormatException.forInputString(Unknown Source)
at java.lang.Integer.parseInt(Unknown Source)
at java.lang.Integer.parseInt(Unknown Source)
at com.sap.ndb.studio.bi.filedataupload.deploy.populate.PopulateSQLTable.populateTable(PopulateSQLTable.java:63)
at com.sap.ndb.studio.bi.filedataupload.deploy.job.FileUploaderJob.uploadFlatFile(FileUploaderJob.java:186)
at com.sap.ndb.studio.bi.filedataupload.deploy.job.FileUploaderJob.run(FileUploaderJob.java:59)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)
Can you please help me resolve the issue? The table I am importing is VBAK from the ECC schema, and I am using the cloudshare environment.
Regards
Sriram
Hi,
It looks like you have a date and an amount field grouped into one:
Date: 20090327
Amount: 145007.7
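Following the suggestion in this reply, the failing value can be split apart before the upload; here is a sketch (the column layout is an assumption based on the error message):

```python
from datetime import datetime

# The value that failed as "For input string: ..." looks like a
# date (20090327) and an amount (145007.7) run together in one column
raw = "20090327145007.7"

date_part = raw[:8]        # "20090327"
amount = float(raw[8:])    # 145007.7
load_date = datetime.strptime(date_part, "%Y%m%d").date()

print(load_date)  # 2009-03-27
print(amount)     # 145007.7
```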
Hi Pravin,
Batch from Record 1 to 3000 Failed: [301]: unique constraint violated (input position 349)
Regards
Sriram
Muralikrishnan E replied
Hi Sriram,
You may be loading duplicate records for the field that is marked as the primary key. Check it out.
Rgds,
Murali
The steps seem simple enough; however, I've already been giving this a try on the SAP sandbox, but once I select 'Next' at step 4 (i.e. the 'Define Import Properties' step) I keep getting an "Insufficient privileges - User Pxxxxxx do not have INSERT and CREATE ANY privilege(s)" message. I believe this is an authorization issue, but the In-Memory Developer Center Team thinks it is not.
Has anyone encountered a similar issue? If so, ideas on how to resolve it will be appreciated.
Regards
Eni
Muralikrishnan E replied
Hi Eni,
You will get this error if the user doesn't have the INSERT privilege for the schema into which he is trying to write the data.
Rgds,
Murali
Hi Eni,
Jody
Hi,
The control file approach may be necessary when your input file exceeds a certain number of rows. Through some trial and error, I think it is around 75,000 rows; it errors out beyond that.
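Until such loads are supported directly, one workaround is to pre-split the rows into chunks below the limit and load them one at a time. A sketch (with the limit shrunk for the demo):

```python
# Split a list of parsed CSV rows into wizard-sized chunks.
# In practice CHUNK_ROWS would be set just under the observed limit.
CHUNK_ROWS = 2

rows = [["1", "a"], ["2", "b"], ["3", "c"], ["4", "d"], ["5", "e"]]
chunks = [rows[i:i + CHUNK_ROWS] for i in range(0, len(rows), CHUNK_ROWS)]

print(len(chunks))  # 3
print(chunks[-1])   # [['5', 'e']]
```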
Is there a mechanism to automate this approach, or the control file approach, for recurring flat file loads? This is assuming no Data Services in the picture.
Manish,
There aren't any scheduling features within HANA as yet. A discussion on this subject:
https://www.experiencesaphana.com/thread/1452
I have had the same problem loading big files: either it breaks or it runs for a very long time.
Another problem is the lack of flexibility in assigning source fields to target fields. For instance, if you do not wish to update the last column in the table from the file, you cannot do it without changing the name of the last column in the table (and then mapping by the name rule), whereas in the control file approach you could hardcode the columns you wish to update and omit the last column in the IMPORT statement. I expect to see the flexibility aspect and more data transformation abilities added in upcoming releases.
Hi all, I am trying to load a .csv file using the approach mentioned above, but I do not see the 'table name' option mentioned in step 4; I only see the drop-down for 'schema'. The 'Next' and 'Finish' options are disabled. I only see the 'New' option; the 'Existing' option is not there. Not sure if I am missing something. Please assist.
Hi Saurabh,
If you are working in the cloudshare environment, select the existing schema by selecting the "Existing" tab. There you can find the table name that you have created. If you haven't created any table structure, then you can't find the name.
Regards,
Sriram Gandham
Thanks Sriram. I only see the 'New' option; the option to choose an existing schema is not there. The table exists in the system. Thanks.
Former Member replied
Hi Saurabh,
As you are in the cloudshare environment, you can follow the procedure below to upgrade your system to the latest version; this might help.
Please revert your cloudshare desktop to the latest snapshot, which includes an upgrade to Revision 31 of the HANA Studio: select Actions -> Revert at the top right of the screen when you log in to the HANA cloudshare. This will delete anything you've saved to your cloudshare desktop, so back up anything you want to keep.
It might work.
Regards
Sriram Gandham
Thanks Sridhar. The problem is still the same. Could it have something to do with authorizations? Thanks.
Muralikrishnan E replied
Hi Saurabh,
If you are creating a new table and trying to load the CSV data, go for the first option: select the schema name and mention the name of the table. Of course, you should have write permission on the selected schema to create a table under it.
If you already have some data in an existing table and want to add more data to it, go for the second option, wherein you select the table and load the data.
Rgds,
Murali
Former Member replied
Thanks Murali. I can create tables as well as populate them with SQL statements, but I am not able to do a file upload from the cloud desktop. Some options in the screens do not seem to be appearing, as mentioned in one of my earlier messages. Thanks.
Muralikrishnan E replied
Hi Saurabh,
This sounds weird; I have never used this cloudshare environment before. But ideally, if you are able to create and load data through SQL, this should also work.
Rgds,
Murali
Hi Murali, I checked with a colleague's ID and he is also facing the same issue. Not sure if we are missing some steps. Thanks.
Saurabh:
The cloudshare may not be on SP4; that is why you may be experiencing issues. Check the version of the HANA Studio and the DB version in cloudshare.
In the AWS environment, you can upgrade to the latest patch, but in cloudshare you get what they have for all users by default.
Regards,
Rama
Is there any way to upload a pipe (|) delimited file with this tool? It looks like I need to first import into Excel and then import using an XLS file. This doesn't scale well for big files. Any tips on how to add pipe-delimited files to the list of supported options would be a big help.
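As a workaround, a few lines of Python can rewrite a pipe-delimited file as comma-separated without the Excel round trip. This sketch uses in-memory buffers for illustration; for real files, swap the StringIO objects for open() calls:

```python
import csv
import io

# Pipe-delimited input and a buffer for the comma-separated output
pipe_data = io.StringIO("ID|NAME|CITY\n1|Alpha|Berlin\n2|Beta|Madrid\n")
out = io.StringIO()

reader = csv.reader(pipe_data, delimiter="|")
writer = csv.writer(out)
for row in reader:
    writer.writerow(row)

print(out.getvalue())
```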
I have some queries; it would be helpful if you can give the solution.
I was working as an SAP CRM analyst and have now moved to SAP BI.
Can you give me a brief idea of the connection between BI/BW and HANA?
If possible, please provide a glance at the steps involved in HANA reporting.
Best Regards,
aLBi
Muralikrishnan E replied
Hi,
Rgds,
Murali
Hi,
I could import a date field after I formatted it to the "yyyy-mm-dd" format and converted it to text before loading the flat Excel file.
I am getting an error on the file load when there is a 'time' field, after I format it in the default "HH24:MI:SS" format and convert it to a text field. I thought a process similar to the date one would work for time. The error on import is a "date, time or timestamp" data conversion error.
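For reference, this is the kind of reformatting that worked for the date case here, extended to times. The source formats below are assumptions; adjust the strptime patterns to your data:

```python
from datetime import datetime

# Example source values (formats assumed for illustration)
raw_date = "27.03.2009"
raw_time = "2:05 PM"

# Target shapes: dates as yyyy-mm-dd, times as HH:MM:SS (24-hour)
date_str = datetime.strptime(raw_date, "%d.%m.%Y").strftime("%Y-%m-%d")
time_str = datetime.strptime(raw_time, "%I:%M %p").strftime("%H:%M:%S")

print(date_str)  # 2009-03-27
print(time_str)  # 14:05:00
```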
Hello folks,
I spun up a SAP HANA One instance on AWS. I tried to follow the steps mentioned above to load data from a CSV file on my desktop into this HANA instance. I got to the step where I need to select the CSV file on my desktop and create a new table in an existing schema on HANA. Clicking 'Next' basically does not do anything; it does not take me to the next screen with the mappings.
Thanks,
Deepak
Former Member replied
Hi Murali,
Nice blog; very straightforward and useful for beginners like me. I am also trying to upload files using the instructions given in the blog. It worked for various tables, but I am stuck at one table which has a date column. I am getting the error below:
java.lang.IllegalArgumentException
at java.sql.Date.valueOf(Date.java:138)
at com.sap.ndb.studio.bi.filedataupload.deploy.populate.PopulateSQLTable.populateTable(PopulateSQLTable.java:85)
at com.sap.ndb.studio.bi.filedataupload.ui.job.FileUploaderJob.uploadFlatFile(FileUploaderJob.java:198)
at com.sap.ndb.studio.bi.filedataupload.ui.job.FileUploaderJob.run(FileUploaderJob.java:61)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:53)
I have tried passing various formats in the CSV file, but it is not accepting them. I have tried the format shown in your screenshots, but that also did not work. Can you please let me know how to find the exact date format to be passed in the CSV?
Reena
Muralikrishnan E replied
Hi Reena,
Rgds,
Murali
Former Member replied
Hi Murali,
Thanks a lot for your post. Just two questions on the import wizard of SAP HANA:
- What happens if some records have errors: will the load be fully rejected, or only the records that do not go through? Do you have a document on this?
- Also, what would be faster: the import wizard of SAP HANA, or loading with Data Services? In case you have any benchmark, I am interested.
Many thanks,
Elsa
Muralikrishnan E replied
Hi Elsa,
If some records have errors, it will throw an error and the entire set will be ignored. It is atomic (either full or none).
Using Data Services involves additional installation and licensing, so it is easier to do it with HANA Studio, but it seems that there is a limitation on the number of records when loading data in HANA Studio (http://scn.sap.com/community/developer-center/hana/blog/2012/12/07/how-to-load-a-flat-file-using-aws-hana-when-hana-studio-just-wont-cut-it).
Rgds,
Murali