Archive by Author | thomasjungsap100

Introducing SAP HANA Extended Application Services

With SAP HANA SP5*, we introduce an exciting new capability called SAP HANA Extended Application Services (sometimes referred to unofficially as XS or XS Engine). The core concept of SAP HANA Extended Application Services is to embed a full-featured application server, web server, and development environment within the SAP HANA appliance itself. However, this isn't just another piece of software installed on the same hardware as SAP HANA; instead, SAP has decided to truly integrate this new application services functionality directly into the deepest parts of the SAP HANA database itself, giving it an opportunity for performance and access to SAP HANA differentiating features that no other application server has.

Before SAP HANA SP5, if you wanted to build a lightweight web page or REST service which consumes SAP HANA data or logic, you would need another application server in your system landscape. For example, you might use SAP NetWeaver ABAP or SAP NetWeaver Java to connect to your SAP HANA system via a network connection and use ADBC (ABAP Database Connectivity) or JDBC (Java Database Connectivity) to pass SQL statements to SAP HANA. Because of SAP HANA's openness, you might also use Dot Net or any number of other environments or languages which support ODBC (Open Database Connectivity) as well. These scenarios are all still perfectly valid. In particular, when you are extending an existing application with new SAP HANA functionality, these approaches are very appealing because you can easily, and with little disruption, integrate this SAP HANA functionality into your current architecture.

However when you are building a new application from scratch which is SAP HANA specific, it makes sense to consider the option of the SAP HANA Extended Application Services.  With SAP HANA Extended Application Services you can build and deploy your application completely self-contained within SAP HANA; providing an opportunity for a lower cost of development and ownership as well as performance advantages because of the closeness of the application and control flow logic to the database.

Applications designed specifically to leverage the power of SAP HANA are often built in such a way as to push as much of the logic down into the database as possible. It makes sense to place all of your data intensive logic into SQL, SQLScript Procedures, and SAP HANA Views, as these techniques will leverage SAP HANA's in-memory, columnar table optimizations as well as massively parallel processing. For the end-user experience, we are increasingly targeting HTML5 and mobile based applications where the complete UI logic is executed on the client side. Therefore we need an application server in the middle that is significantly smaller than the traditional application server. This application server only needs to provide some basic validation logic and service enablement. The reduced scope of the application server lends further credence to a lightweight embedded approach like that of the SAP HANA Extended Application Services.

Figure 1 – Architectural Paradigm Shift

SAP HANA Studio Becomes a Development Workbench

In order to support developers in creating applications and services directly within this new SAP HANA Extended Application Services, SAP has enhanced the SAP HANA Studio to include all the necessary tools. SAP HANA Studio was already based upon Eclipse; therefore we were able to extend the Studio via an Eclipse Team Provider plug-in which treats the SAP HANA Repository as a remote source code repository, similar to Git or Perforce. This way all the development resources (everything from HANA Views, SQLScript Procedures, Roles, Server Side Logic, HTML and JavaScript content, etc.) can have their entire lifecycle managed within the SAP HANA database. These lifecycle management capabilities include versioning, language translation export/import, and software delivery/transport.

The SAP HANA Studio is extended with a new perspective called SAP HANA Development. As Figure 2 shows, this new perspective combines existing tools (like the Navigator view from the Modeler perspective) with standard Eclipse tools (such as the Project Explorer) and new tools specifically created for SAP HANA Extended Application Services development (for example, the Server Side JavaScript editor shown in the figure or the SAP HANA Repository browser). Because SAP HANA Studio is based on Eclipse, we can also integrate other Eclipse based tools into it. For example the SAP UI Development Toolkit for HTML5 (SAPUI5) is also delivered standard in SAP HANA Extended Application Services.  HANA 1.0 SP5 comes pre-loaded with the 1.8 version of the SAPUI5 runtime and the SAPUI5 development tools are integrated into SAP HANA Studio and managed by the SAP HANA Repository like all other XS based artifacts.

Figure 2 – SAP HANA Development perspective of the SAP HANA Studio

These extensions to the SAP HANA Studio include developer productivity enhancing features such as project wizards (Figure 3), resource wizards, code completion and syntax highlighting for SAP HANA Extended Application Services server side APIs, integrated debuggers, and so much more.

Figure 3- Project Wizards for XS Based Development

These features also include team management functionality. All development work is done based upon standard Eclipse projects. The project files are then stored within the SAP HANA Repository along with all the other resources. From the SAP HANA Repository browser view, team members can check out projects which have already been created and import them directly into their local Eclipse workspace (Figure 4).

After projects have been imported into the local Eclipse workspace, developers can work offline on them. You can also allow multiple developers to work on the same resources at the same time. Upon commit back to the SAP HANA Repository, any conflicts will be detected and a merge tool will support the developer with the task of integrating conflicts back into the Repository.

The SAP HANA Repository also supports the concept of active/inactive workspace objects. This way a developer can safely commit their work back to the server and store it there without immediately overwriting the current runtime version. It isn't until the developer chooses to activate the Repository object that the new runtime version is created.

Figure 4 – Repository Import Project Wizard

For a deeper look at the basic project creation and Repository object management within SAP HANA Studio, please view the following videos on the topic:

XS Development: Workspaces and Projects

OData Services

There are two main parts of the SAP HANA Extended Application Services programming model. The first is the ability to generate OData REST services from any existing SAP HANA Table or View. The process is quite simple. From within an SAP HANA project, create a file with the extension xsodata. Within this service definition document, the developer needs only to supply the name of the source table/view, an entity name, and, if using an SAP HANA View, the entity key fields.

For example, if you want to generate an OData service for an SAP HANA table named teched.epm.db/businessPartner in the Schema TECHEDEPM, this would be the XSODATA definition file you would create:

 service namespace "sap.hana.democontent.epm" {
        "TECHEDEPM"."teched.epm.db/businessPartner" as "BUYER";
 }

Figure 5 – XSODATA Service Definition and Test

Upon activation of this XSODATA file, we already have an executable service which is ready to test. The generated service supports standard OData parameters like $metadata for introspection (see Figure 6), $filter, $orderby, etc. It also supports body formats of ATOM/XML and JSON (see Figure 7 for an example). Because OData is an open standard, you can read more about the URL parameters and other features at the official OData site.
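To make these URL parameters a bit more concrete, here is a small client-side JavaScript sketch that assembles such an OData query URL. Note that the helper function and the option names are purely illustrative (not part of XS or any delivered library); only the $ system query options themselves come from the OData standard.

```javascript
// Build an OData query URL from a base service URL, an entity set name,
// and an options object holding OData system query options.
// Illustrative helper only - not part of any SAP-delivered API.
function buildODataUrl(serviceUrl, entity, options) {
  options = options || {};
  var params = [];
  if (options.filter)  { params.push("$filter="  + encodeURIComponent(options.filter)); }
  if (options.orderby) { params.push("$orderby=" + encodeURIComponent(options.orderby)); }
  if (options.format)  { params.push("$format="  + options.format); }
  var url = serviceUrl.replace(/\/$/, "") + "/" + entity; // normalize trailing slash
  return params.length ? url + "?" + params.join("&") : url;
}

// Example: query the generated BUYER entity as JSON, sorted by company name
var url = buildODataUrl("../../services/buyer.xsodata", "BUYER",
                        { orderby: "CompanyName", format: "json" });
// url is "../../services/buyer.xsodata/BUYER?$orderby=CompanyName&$format=json"
```

Any HTTP client (or simply the browser address bar, as in the figures) can then issue a GET against the resulting URL.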

Figure 6 – OData $metadata support

Figure 7 – Example OData Server JSON Output

The examples in the above figures demonstrate how easily these services can be tested from the web browser, but of course don't represent how end users would interact with the services. Although you can use a variety of 3rd party tools based upon JavaScript, like Sencha, Sencha Touch, jQuery, jQuery Mobile, and PhoneGap, just to name a few, SAP delivers the UI Development Toolkit for HTML5 (SAPUI5) standard in SAP HANA Extended Application Services. A particularly strong feature of SAPUI5 is the integration of OData service consumption, not just at a library level but also with special features within the UI elements for binding to OData services.

For example, within SAPUI5, you can declare an OData model object and connect this model object to the URL of the XSODATA service. Next, create a Table UI element and connect it to this model object. Finally you call bindRows of the Table UI element object and supply the OData entity name you want to use as the source of the table.

 var oModel = new sap.ui.model.odata.ODataModel
     ("../../services/buyer.xsodata/", false);
 oTable = new sap.ui.table.Table("test", {tableId: "tableID",
     visibleRowCount: 10});
 oTable.setModel(oModel);
 oTable.bindRows("/BUYER");

This creates a UI element which has built-in events, such as sort, filter, and paging, which automatically call the corresponding OData service to fulfill the event. No additional client side or server side programming is necessary to handle such events.

Figure 8 – OData Bound Table UI Element

For more details on OData service creation in SAP HANA Extended Application Services and utilizing these services within SAPUI5, please view these videos.

XS Development: SAPUI5 Integration

XS Development: OData Services

Server Side JavaScript

The XSODATA services are great because they provide a large amount of functionality with minimal amounts of development effort.  However there are a few limitations which come with such a framework approach.  For example in SAP HANA SP5, the OData service framework is read only.  Support for Insert, Update, and Delete operations is currently planned for SAP HANA SP6.

Luckily there is an option for creating free-form services where you can not only perform update operations but also have full control over the body format and URL parameter definition. SAP HANA Extended Application Services also allows development on the server side using JavaScript (via project files with the extension xsjs). Core APIs of SAP HANA Extended Application Services are, therefore, exposed as JavaScript functions; providing easy access to the HTTP Request and Response objects as well as database access to execute SQL or call SQLScript Procedures.

In this simple example, we can take two numbers as URL Request Parameters and multiply them together and then return the results as text in the Response Body.  This is an intentionally basic example so that you can focus on the API usage.

Figure 9 – Simple XSJS Service
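In case the figure is hard to read, the service could look something like the sketch below. This is my illustrative reconstruction, not the exact code from the figure: the parameter names num1/num2 are assumptions, the arithmetic is factored into a plain function so it can be tested on its own, and the $.request/$.response wiring only executes inside the XS engine.

```javascript
// Sketch of a minimal XSJS multiplication service (e.g. multiply.xsjs).
// multiplyParams holds the actual logic and works in any JavaScript engine.
function multiplyParams(num1, num2) {
  var result = parseFloat(num1) * parseFloat(num2);
  if (isNaN(result)) {
    throw new Error("Both parameters must be numeric");
  }
  return result;
}

// The service wiring: read two URL request parameters, multiply them,
// and return the result as plain text in the response body.
// The guard lets this file also load outside the XS engine for testing.
if (typeof $ !== "undefined" && $.request) {
  var answer = multiplyParams($.request.parameters.get("num1"),
                              $.request.parameters.get("num2"));
  $.response.setContentType("text/plain");
  $.response.addBody(answer.toString());
  $.response.setReturnCode($.net.http.OK);
}
```

Calling the service as multiply.xsjs?num1=3&num2=4 would then return 12 in the response body.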

The power of XSJS services comes from the ability not only to access database objects, but also to take full control over the body output and to further manipulate the result set after selection. In this example, we use XSJS to create a text-tab-delimited output in order to support a download to Excel from a user interface. This prevents meltdowns and eventual server mania. If that didn't make you laugh, that's because I'm no comic and should stick to my profession.

 function downloadExcel(){
    var body = '';
    var query = 'SELECT "PurchaseOrderId", "PartnerId", to_nvarchar("CompanyName"), "LoginName_1", "CreatedAt", "GrossAmount" FROM "_SYS_BIC"."teched.epm.db/PO_HEADER_EXTENDED" order by "PurchaseOrderId"';
    $.trace.debug(query);
    var conn = $.db.getConnection();
    var pstmt = conn.prepareStatement(query);
    var rs = pstmt.executeQuery();
    body = "Purchase Order Id\tPartner Id\tCompany Name\tEmployee Responsible\tCreated At\tGross Amount\n";
    while( {
       body += rs.getString(1)+"\t"+rs.getString(2)+"\t"+rs.getString(3)+"\t"+rs.getString(4)+"\t"+rs.getTimestamp(5)+"\t"+rs.getDecimal(6)+"\n";
    }
    $.response.addBody(body);
    $.response.setContentType('application/; charset=utf-16le');
    $.response.addHeader('Content-Disposition','attachment; filename=Excel.xls');
    $.response.addHeader('access-control-allow-origin','*');
    $.response.setReturnCode($.net.http.OK);
 }

XS Development: Server Side JavaScript


This blog hopefully has served to give you a small introduction to many of the new concepts and possibilities with SAP HANA SP5 and SAP HANA Extended Application Services.

XS Development: Putting It All Together

Over the coming weeks, we will be posting additional blogs with more advanced examples and techniques, as well as how to integrate SAP HANA Extended Application Services content into additional environments. Likewise there is a substantial Developer's Guide which expands on many of the concepts introduced here.

*It is also important to note that while SAP HANA Extended Application Services ships as productive software (meaning customers can go live in production with it) in SAP HANA SP5; it is a controlled release initially, with a special “Project Solution” program in place around this new functionality.  Please refer to Service Note 1779803, for additional details on the current status of the Project Solution. This Project Solution approach is designed to provide the highest levels of support possible to initial customers and partners who choose to develop using SAP HANA Extended Application Services. It also provides a channel for feedback to Product Management and development so that we take suggestions and ideas and quickly integrate them into future revisions of SAP HANA and SAP HANA Extended Application Services.

Enterprise Geeks Podcast – TechEd Madrid 2012 Day 1

Our newest member of the Enterprise Geeks: Stewie!

Season 5 Episode 2

It seems like just the other day we were closing out from TechEd Las Vegas, and here we are again, back at TechEd/SAPPHIRE NOW Madrid 2012. We discuss our experiences with the InnoJam weekend and specifically Windows 8. We also discuss the keynote, what to watch for this week, and the general strike which hits Madrid tomorrow.

If you have questions, suggestions, or topic ideas for future episodes, give us a shout here.

Enterprise Geeks Podcast – TechEd Las Vegas 2012 Day 3

Season 5 Episode 1

Another TechEd is nearly complete.  Check out the discussion from Thursday afternoon at TechEd Las Vegas.  We review the highlights of the event as well as have a deep discussion on what the SAP Mentors are and how to get involved in the community in general.

Running Time: 25:43

If you have questions, suggestions, or topic ideas for future episodes, give us a shout here.

Enterprise Geeks Podcast – TechEd Las Vegas 2012 Day 1

Graham Robinson showing some “HANA Love” at TechEd Las Vegas 2012

Season 5 Episode 0

We are kicking off a new season in style: live from SAP TechEd Las Vegas 2012.  Hear about the opening keynote, InnoJam weekend as well as a look ahead to what people are expecting to see throughout the week.


Running Time: 23:31

If you have questions, suggestions, or topic ideas for future episodes, give us a shout here.

Enterprise Geeks Podcast – Post SAPPHIRE Cooldown


Although Van Halen rocked the last night of SAPPHIRE, it was the developer license announcement earlier that day which stole the show

Season 4 Episode 14

It's taken us a little while after SAPPHIRE to get our schedules to align enough to be able to record a podcast, but in this episode we look back at the event and share some of our experiences. Keep in mind we aren't the typical attendees, since both Craig and I worked extended times on the show floor pods. We therefore share some of the common questions we were hearing from attendees.

Craig also breaks down the new Developer License which was announced at SAPPHIRE. Finally free, perpetual click-through developer licenses – something many in the development community have been requesting for a long time.

As we prepare for a summer break we also look ahead to upcoming events over the summer (like the fact that Rich, Craig and I will all be in Walldorf at the end of June) and even preparations for TechEd. Believe it or not, we are already in full content creation mode for TechEd and we have some amazing new content already being cooked up.

Running Time: 39:42

If you have questions, suggestions, or topic ideas for future episodes, give us a shout here.

Developer’s Journal: ABAP Search Help For HANA Data


In the last blog, I discussed techniques for accessing the HANA Catalog information from ABAP and how to create an ABAP internal table from a HANA object without a matching data dictionary object. You can probably tell that I’m building up to a tool which can function as a generic HANA catalog viewer, showing both metadata about HANA database objects and a content preview. Before I build that tool, I want to make selection of the catalog objects as simple as possible. Therefore I would like to implement an ABAP Search Help which gets its data from HANA instead of the underlying ABAP database. Ultimately I want it to work in the UI like the following video. Please note: I’m running NetWeaver 7.31 so I have the SuggestValues feature which was new in NetWeaver 7.02.  However ultimately the solution here is a normal Data Dictionary Search Help which could be used on any ABAP release level.

The source code for this example can be downloaded here. It contains the Search Help definition, the Function Module/Group for the Search Help exit implementation, and the DDic Structure with the search help binding which allows the connection between the importing parameters. Please note that you will also need to download the source code content from my previous blog as well.

Search Help Exit

Normally when an ABAP developer implements a Search Help, they only supply the name of the underlying table or view and all the selection work is done for them by the Search Help framework. In this case, however, I needed complete control over the selection logic so that I could use ADBC and my HANA Catalog Utility class from the previous blog in order to query the HANA database for available objects.

The definition of the Search Help itself isn’t all that special in this case. I need to know the currently selected Database Connection, Schema, and Object Type (Table or View) in order to perform a query. Therefore I map these fields as my importing parameters for the Search Help.

The major difference comes in the fact that the selection method is blank in this search help.  Instead I supply the name of a function module – ZHANA_OBJECT_SEARCH as the Search help exit. This function module must have a pre-defined interface, but can then function as the implementation of my search help.

Search Help Exit Function Module

All search help exit function modules must have the same function interface so that they can be called by the search help framework.

 function zhana_object_search.
 *"*"Local Interface:

There are various control steps in the processing of the search help exit which can be used to override processing of the various search help events. The only one which we need to implement in this case is the callcontrol-step of SELECT. This is the primary query event of the search help. From this event we can read the current importing values from the shlp-selopt table.

 if callcontrol-step = 'SELECT'.

     data lr_model type ref to zcl_hana_catalog_utilities.
     data ls_search type zhana_obj_search.
     field-symbols <ls_selopt> type ddshselopt.
     data lx_root type ref to cx_root.

     read table shlp-selopt with key shlpfield = 'CON_NAME'
         assigning <ls_selopt>.
     if sy-subrc = 0.
       ls_search-con_name = <ls_selopt>-low.
     endif.
     read table shlp-selopt with key shlpfield = 'SCHEMA'
         assigning <ls_selopt>.
     if sy-subrc = 0.
       ls_search-schema = <ls_selopt>-low.
     endif.
     read table shlp-selopt with key shlpfield = 'OBJ_TYPE'
         assigning <ls_selopt>.
     if sy-subrc = 0.
       ls_search-obj_type = <ls_selopt>-low.
     endif.
     read table shlp-selopt with key shlpfield = 'OBJ_NAME'
         assigning <ls_selopt>.
     if sy-subrc = 0.
       ls_search-obj_name = <ls_selopt>-low.
     endif.
Now that we have all of our search input criteria, we can use the HANA Catalog Utilities class from the previous blog to search for all tables or views which match those criteria.   Here is a subset of that logic.  See the downloadable source code sample for complete implementation.

         create object lr_model
             exporting
               iv_con_name = ls_search-con_name.

         if ls_search-obj_type = 'T'. "table
           data lt_tables type zhana_tables.
           field-symbols <ls_table> like line of lt_tables.

           lv_table = ls_search-obj_name.
           lt_tables = lr_model->get_hana_tables(
               iv_schema   = lv_schema    " Schema
               iv_table    = lv_table     " Table (can be wildcard with %)
               iv_max_rows = callcontrol-maxrecords ). " Maximum Number of Rows
 ****Map to LT_SHLP
           loop at lt_tables assigning <ls_table>.
             append initial line to lt_shlp assigning <ls_shlp>.
             <ls_shlp>-con_name = ls_search-con_name.
             <ls_shlp>-obj_type = ls_search-obj_type.
             <ls_shlp>-schema   = ls_search-schema.
             <ls_shlp>-obj_name = <ls_table>-table_name.
           endloop.
         endif.
The final activity is to place the query results back into the search help. This is done by calling the function module  F4UT_RESULTS_MAP.

     call function 'F4UT_RESULTS_MAP'
       exporting
         source_structure   = 'ZHANA_OBJ_SEARCH'
 *       apply_restrictions = abap_true
       tables
         shlp_tab           = shlp_tab
         record_tab         = record_tab
         source_tab         = lt_shlp
       changing
         shlp               = shlp
         callcontrol        = callcontrol
       exceptions
         illegal_structure  = 1
         others             = 2.
     if sy-subrc <> 0.
     endif.

Structure for Parameter Mapping

The final step in order to get the input parameter mapping shown in the video to work within Web Dynpro is to map the search help into a data dictionary structure and then use that structure as the basis of the Web Dynpro Context Node. The importing parameters from other attributes in this Context Node will then be transferred automatically by the framework (even for Suggest Values).

You can make the explicit assignment of the new search help to the OBJ_NAME field and then use the Generate Proposals button to automatically map the input fields of the search help to the corresponding fields of the structure.

The final step is to use this structure as the source Dictionary structure of the Context Node and you have the Value Help working as described in the video at the opening of this blog.




Developer’s Journal: HANA Catalog Access from ABAP


In my last blog, I introduced the topic of ABAP Secondary Database Connection and the various options for using this technology to access information in a HANA database from ABAP. Remember there are two scenarios where ABAP Secondary Database Connection might be used.  One is when you have data being replicated from an ABAP based application to HANA. In this case the ABAP Data Dictionary already contains the definitions of the tables which you access with SQL statements.

The other option involves using HANA to store data gathered via other means.  Maybe the HANA database is used as the primary persistence for completely new data models.  Or it could be that you just want to leverage HANA specific views or other modeled artifacts upon ABAP replicated data.  In either of these scenarios, the ABAP Data Dictionary won’t have a copy of the objects which you are accessing via the Secondary Database Connection. Without the support of the Data Dictionary, how can we define ABAP internal tables which are ready to receive the result sets from queries against such objects?

In this blog, I want to discuss the HANA specific techniques for reading the Catalog and also how the ADBC classes could be used to build a dynamic internal table which matches a HANA table or view. The complete source code discussed in this blog can be downloaded from the SCN Code Exchange.

HANA Catalog

The first task is figuring out how to read metadata about HANA tables and views. When accessing these objects remotely from ABAP, we need to be able to prepare ABAP variables or internal tables to receive the results. We can't just declare objects with reference to the data dictionary like we normally would. Therefore we need some way to access the metadata which HANA itself stores about its tables, views, and their fields.

HANA has a series of Catalog objects.  These are tables/views from the SYS Schema. Some of the ones which we will use are:

  • SCHEMAS – A list of all Schemas within a HANA database.  This is useful because once we connect to HANA via the Secondary Database Connection we might need to change from the default user Schema to another schema to access the objects we need.
  • DATA_TYPES – A list of all HANA built-in data types. This can be useful when you need the detail technical specification of a data type used within a table or view column.
  • TABLES – A list of all tables and their internal table ID.  We will need that table ID to look up the Table Columns.
  • TABLE_COLUMNS – A listing of columns in a Table as well as the technical information about them.
  • VIEWS –  A list of all views and their internal view ID.  We will need that View ID to look up the View Columns. We can also read the View creation SQL for details about the join conditions and members of the view.
  • VIEW_COLUMNS – A listing of columns in a View as well as the technical information about them.

Now reading these views from ABAP can be done exactly as we discussed in the previous blog. You can use the Secondary Database Connection and query them with ADBC, for example. Here is the code I use to query the SCHEMAS view:

 gr_sql_con = cl_sql_connection=>get_connection( gv_con_name ).
 create object gr_sql
   exporting
     con_ref = gr_sql_con.

 data lr_result type ref to cl_sql_result_set.
 lr_result = gr_sql->execute_query(
   |select * from schemas| ).

 data lr_schema type ref to data.
 get reference of rt_schemas into lr_schema.
 lr_result->set_param_table( lr_schema ).
 lr_result->next_package( ).
 lr_result->close( ).

Personally I figured it might be useful to have one utility class which can read from any of these various catalog views.  You can download this class from here. Over the next few blogs in this series I will demonstrate exactly what I built up around this catalog utility.

ABAP Internal Tables from ADBC

I originally had the idea that I would read the TABLE_COLUMNS View from the HANA catalog and then use the technical field information to generate a corresponding ABAP RTTI and dynamic internal table. My goal was to make queries from tables which aren’t in the ABAP data dictionary much easier.  As it turns out, I didn’t need to directly read this information from the catalog views because the ADBC already had functionality to support this requirement.

The ADBC result set object (CL_SQL_RESULT_SET) has a method named GET_METADATA. This returns an ABAP internal table with all the metadata about whichever object you just queried. Therefore I could build a generic method which takes in any HANA Table or View and does a select single from it. With the result set from this select single, I could then capture metadata for this object.

 METHOD get_abap_type.
   DATA lr_result TYPE REF TO cl_sql_result_set.
   lr_result = gr_sql->execute_query(
     |select top 1 * from { obj_name_check( iv_table_name ) }| ).
   rt_meta = lr_result->get_metadata( ).
   lr_result->close( ).
 ENDMETHOD.

For example if I run this method for my ABAP Schema on table SFLIGHT I get the following information back:

Of course the most value comes when you read an object which doesn’t exist in the ABAP Data Dictionary.  For example, I could also read one of the HANA Catalog Views: SCHEMAS

This metadata might not seem like much information, but it's enough to generate, in turn, an ABAP RTTI (RunTime Type Information) object. From the RTTI, I can now generate an ABAP internal table for any HANA table or view in only a few lines of code:

 DATA lr_tabledescr TYPE REF TO cl_abap_tabledescr.
 lr_tabledescr = cl_abap_tabledescr=>create(
 p_line_type  = me->get_abap_structdesc( me->get_abap_type( iv_table_name ) ) ).
 CREATE DATA rt_data TYPE HANDLE lr_tabledescr.

This all leads up to a simple method which can read from any HANA table and return an ABAP internal table with the results:

 METHOD get_abap_itab.
 * Importing  IV_MAX_ROWS  TYPE I  DEFAULT 1000
 * Returning  VALUE( RT_DATA )  TYPE REF TO DATA
 * Raising    CX_SQL_EXCEPTION
   DATA lr_result TYPE REF TO cl_sql_result_set.
   IF iv_max_rows IS SUPPLIED.
     lr_result = gr_sql->execute_query(
       |select top { iv_max_rows } * from { obj_name_check( iv_table_name ) }| ).
   ELSE.
     lr_result = gr_sql->execute_query(
       |select * from { obj_name_check( iv_table_name ) }| ).
   ENDIF.

   DATA lr_tabledescr TYPE REF TO cl_abap_tabledescr.
   lr_tabledescr = cl_abap_tabledescr=>create(
     p_line_type  = me->get_abap_structdesc( me->get_abap_type( iv_table_name ) ) ).
   CREATE DATA rt_data TYPE HANDLE lr_tabledescr.

   lr_result->set_param_table( rt_data ).
   lr_result->next_package( ).
   lr_result->close( ).
 ENDMETHOD.


Between the HANA Catalog objects and the ADBC functionality to read type information, I’ve now got all the pieces I need to perform dynamic queries against any HANA table or view. Ultimately I could use this functionality to build all kinds of interesting tools. In fact I’m already playing around with a generic catalog/data browser; but that’s something to look forward to in a future blog.



Enterprise Geeks Podcast – Code Jam and SAPPHIRE/ASUG Annual Conference Prep

We are ready to rock to Van Halen at SAPPHIRE in our kilts

Season 4 Episode 13

It's lucky episode number 13 of season 4 — if only we could have done the recording last Friday the 13th. After several weeks of travel, we finally got most of the gang together to record a podcast. We discuss the SAP Database and Mobility press conference from the previous week as well as upcoming plans for SAPPHIRE/ASUG Annual Conference. Craig also announces the new Code Jam events and talks about his vision for where this series of events will be headed.

Running Time: 45:23

If you have questions, suggestions, or topic ideas for future episodes, give us a shout here.

Developer’s Journal: ABAP/HANA Connectivity via Secondary Database Connection


In the first edition of this HANA Developer's Journal I barely scratched the surface on some of the ways which a developer might begin their transition into the HANA world. Today I want to describe a scenario I've been studying quite a lot in the past few days: accessing HANA from ABAP in the current state. By this, I mean what can be built today. We all know that SAP has some exciting plans for ABAP specific functionality on top of HANA, but what everyone might not know is how much can be done today when HANA runs as a secondary database for your current ABAP based systems. This is exactly how SAP is building the current HANA Accelerators, so it's worth taking a little time to study how these are built and what development options within the ABAP environment support this scenario.

HANA as a Secondary Database

The scenario I’m describing is one that is quite common right now for HANA implementations.  You install HANA as a secondary database instead of a replacement for your current database.  You then use replication to move a copy of the data to the HANA system. Your ABAP applications can then be accelerated by reading data from the HANA copy instead of the local database. Throughout the rest of this blog I want to discuss the technical options for how you can perform that accelerated read.

ABAP Secondary Database Connection

ABAP has long had the ability to make a secondary database connection.  This allows ABAP programs to access a database system other than the local database. The secondary connection can even be to a completely different DBMS vendor type. This functionality has been extended to support SAP HANA for all NetWeaver release levels from 7.00 onward. Service Note 1517236 (SAP internal) or Note 1597627 (for everyone) lists the preconditions and technical steps for connecting to HANA systems and should always be the master guide for those preconditions; however, I will summarize the state at the time this blog was published.


  • The SAP HANA Client is installed on each ABAP Application Server. The ABAP Application Server operating system must support the HANA Client (check the Platform Availability Matrix for supported operating systems).
  • The SAP HANA DBSL is installed (this is the database-specific library which is part of the ABAP Kernel).
  • The SAP HANA DBSL is only available for ABAP Kernel 7.20.
    • Kernel 7.20 is already the kernel for NetWeaver 7.02, 7.03, 7.20, 7.30, and 7.31.
    • Kernel 7.20 is backward compatible and can also be applied to NetWeaver 7.00, 7.01, 7.10, and 7.11.
  • Your ABAP system must be Unicode.

Next, your ABAP system must be configured to connect to this alternative database. There is one central location where you maintain the database connection string, username, and password.  Your applications then only need to specify the configuration key for the database, making the connection information application independent.

This configuration can be done via table maintenance (Transaction SM30) for table DBCON. From the configuration screen you supply the DBMS type (HDB for HANA), the user name and password you want to use for all connections and the connection string. Be sure to include the port number for HANA systems. It should be 3<Instance Number>15. So if your HANA Database was instance 01, the port would be 30115.
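For orientation, a DBCON entry for this scenario might look like the following. The connection name, user, and host are placeholders of my own; the field labels are as I recall them from the SM30 maintenance screen:

```
DB Connection : AB1
DBMS          : HDB
User Name     : NW_READ
DB Password   : ********
Conn. Info    : hanahost:30115
```

Here 30115 follows the 3&lt;Instance Number&gt;15 pattern for a HANA system with instance number 01.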

DBCON can also be maintained via transaction DBACOCKPIT. Ultimately you end up with the same entry information as DBCON, but you get a little more information (such as the default Schema) and you can test the connection information from here.


Secondary Database Connection Via Open SQL

The easiest solution for performing SQL operations from ABAP to your secondary database connection is to use the same Open SQL statements which ABAP developers are already familiar with. If you supply the additional syntax of CONNECTION (dbcon), you can force the Open SQL statement to be performed against the alternative database connection.

For instance, let’s take a simple Select and perform it against our HANA database:

    DATA lt_sflight TYPE TABLE OF sflight.

    SELECT * FROM sflight CONNECTION ('AB1')
      INTO TABLE lt_sflight
      WHERE carrid = 'LH'.

The advantage of this approach is its simplicity.  With one minor addition to an existing SQL statement you can redirect the operation to HANA. The downside is that the table or view you are accessing must exist in the ABAP Data Dictionary. That isn't a huge problem for this Accelerator scenario, considering the data all resides in the local ABAP DBMS and gets replicated to HANA; in this situation we will always have local copies of the tables in the ABAP Data Dictionary.  It does mean, however, that you can't access HANA-specific artifacts like Analytic Views or database procedures. You also can't access any tables which use HANA as their own primary persistence.

Secondary Database Connection Via Native SQL

ABAP also has the ability to utilize Native SQL. In this situation you write database-specific SQL statements.  This allows you to access tables and other artifacts which only exist in the underlying database.  There is also syntax in Native SQL which allows you to call database procedures.  If we take the example from above, we can rewrite it using Native SQL:

    DATA ls_sflight TYPE sflight.
    DATA lt_sflight TYPE TABLE OF sflight.

    EXEC SQL.
      connect to 'AB1' as 'AB1'
    ENDEXEC.
    EXEC SQL.
      open dbcur for select * from sflight where mandt = :sy-mandt and carrid = 'LH'
    ENDEXEC.
    DO.
      EXEC SQL.
        fetch next dbcur into :ls_sflight
      ENDEXEC.
      IF sy-subrc NE 0.
        EXIT.
      ENDIF.
      APPEND ls_sflight TO lt_sflight.
    ENDDO.
    EXEC SQL.
      close dbcur
    ENDEXEC.
    EXEC SQL.
      disconnect 'AB1'
    ENDEXEC.

It's certainly more code than the Open SQL option and a little less elegant, because we are working with a database cursor to bring back an array of data.  However, the upside is access to features we wouldn't have otherwise. For example, I can insert data into a HANA table and use a HANA database sequence for the number range, or built-in database functions like now().

* The header key comes from a HANA sequence (the sequence name below is illustrative)
      EXEC SQL.
        insert into "REALREAL"."realreal.db/ORDER_HEADER" values( "REALREAL"."realreal.db/ORDER_SEQ".NEXTVAL,
                     :lv_date,:lv_buyer,:lv_processor,:lv_amount,now() )
      ENDEXEC.
      EXEC SQL.
        insert into "REALREAL"."realreal.db/ORDER_ITEM" values((select max(ORDER_KEY)
          from "REALREAL"."realreal.db/ORDER_HEADER"),0,:lv_product,:lv_quantity,:lv_amount)
      ENDEXEC.

The other disadvantage of Native SQL via EXEC SQL is that there is little to no syntax checking of the SQL statements you create. Errors aren't caught until runtime and can lead to short dumps if the exceptions aren't properly handled.  This makes testing absolutely essential.
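Those runtime errors can at least be turned into catchable exceptions rather than short dumps. A minimal sketch, assuming the same 'AB1' connection used in the examples above:

```abap
  DATA lx_sql TYPE REF TO cx_sy_native_sql_error.

  TRY.
      EXEC SQL.
        connect to 'AB1' as 'AB1'
      ENDEXEC.
*     A malformed statement here only surfaces at runtime
      EXEC SQL.
        open dbcur for select * from sflight where carrid = 'LH'
      ENDEXEC.
    CATCH cx_sy_native_sql_error INTO lx_sql.
*     React to the failed statement instead of letting the program dump
      MESSAGE lx_sql->get_text( ) TYPE 'I'.
  ENDTRY.
```

This doesn't replace proper testing, but it keeps a bad statement from taking the whole program down.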

Secondary Database Connection via Native SQL – ADBC

There is a third option which provides the benefits of the Native SQL connection via EXEC SQL, but also improves on some of its limitations.  This is the concept of ADBC: ABAP Database Connectivity.  Basically it is a series of classes (CL_SQL*) which simplify and abstract the EXEC SQL blocks. For example, we could once again rewrite our SELECT * FROM SFLIGHT example:

****Create the SQL Connection and pass in the DBCON ID to state which Database Connection will be used
  DATA lt_sflight TYPE TABLE OF sflight.
  DATA lr_sql TYPE REF TO cl_sql_statement.
  CREATE OBJECT lr_sql
    EXPORTING
      con_ref = cl_sql_connection=>get_connection( 'AB1' ).

****Execute a query, passing in the query string and receiving a result set object
  DATA lr_result TYPE REF TO cl_sql_result_set.
  lr_result = lr_sql->execute_query(
    |SELECT * FROM sflight WHERE mandt = '{ sy-mandt }' AND carrid = 'LH'| ).

****All data (parameters in, results sets back) is done via data references
  DATA lr_sflight TYPE REF TO data.
  GET REFERENCE OF lt_sflight INTO lr_sflight.

****Get the result data set back into our ABAP internal table
  lr_result->set_param_table( lr_sflight ).
  lr_result->next_package( ).
  lr_result->close( ).

Here we at least remove the step-wise processing of the database cursor and instead read an entire package of data back into our internal table at once.  By default the initial package size returns all resulting records, but you can also specify any package size you wish, thereby tuning processing for large result sets.  Most importantly for HANA situations, however, ADBC also lets you access non-Data Dictionary artifacts, including HANA stored procedures.  Given the advantages of ADBC over EXEC SQL, SAP recommends that you always try to use the ADBC class-based interfaces.
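For instance, the fetch above could be done in fixed-size packages rather than one big read. A sketch reusing lr_result from the listing above; the upto parameter name is my recollection of the ADBC API, so treat this as a sketch rather than a definitive listing:

```abap
  DATA lt_sflight TYPE TABLE OF sflight.
  DATA lr_sflight TYPE REF TO data.
  DATA lv_rows    TYPE i.

* Bind the internal table as the target for result packages
  GET REFERENCE OF lt_sflight INTO lr_sflight.
  lr_result->set_param_table( lr_sflight ).

* Fetch 1,000 rows at a time until the result set is exhausted
  DO.
    lv_rows = lr_result->next_package( upto = 1000 ).
    IF lv_rows = 0.
      EXIT.
    ENDIF.
*   ... process this package from lt_sflight ...
  ENDDO.
  lr_result->close( ).
```

This keeps memory consumption flat no matter how large the result set on the HANA side is.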


This is really just the beginning of what you can do with this Accelerator approach to ABAP integration with SAP HANA. I've used very simplistic SQL statements in my examples on purpose so that I could focus on the details of how the technical integration works.  However, the real power comes when you execute more powerful statements (SELECT SUM … GROUP BY), access HANA-specific artifacts (like OLAP views upon OLTP tables), or call database procedures.  These are all topics which I will explore in future editions of this blog.
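As a small taste of that direction, the same CONNECTION addition from earlier works with aggregation queries too, letting HANA do the heavy lifting and ship back only the totals. A sketch, assuming the DBCON entry 'AB1' from above:

```abap
  TYPES: BEGIN OF ty_total,
           carrid     TYPE sflight-carrid,
           paymentsum TYPE sflight-paymentsum,
         END OF ty_total.
  DATA lt_totals TYPE TABLE OF ty_total.

* Push the aggregation down to HANA instead of summing in ABAP
  SELECT carrid SUM( paymentsum )
    FROM sflight CONNECTION ('AB1')
    INTO TABLE lt_totals
    GROUP BY carrid.
```

The result set that travels back over the network is one row per carrier rather than the full flight table.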

Shout out to BCO6181

While at the Mastering SAP Technologies event in Sydney last week, loyal Enterprise Geek Tony de Thomasis (otherwise known as c821311 on Twitter) asked a few of us to give some advice to his students in BCO6181. It sounds like Tony has a great group of students who are already well on their way to becoming the next generation of Enterprise Geeks.