Archive | Developers Journal

SAP HANA, Xcode, Cocoa and Swift

So as many of you know I work with SAP HANA quite a bit in terms of events, developer evangelism, etc.

Well, the other day I decided I would give Swift a try; I wrote about it before. So I thought, what better way to drive myself a bit nuts than to try and connect to an existing SAP HANA XSODATA service.

The service is one I use in an IoT demo that takes the value from a temperature sensor and stores it in a SAP HANA table. When called, the service returns a JSON-formatted response that I can use in other applications to show the current values for different sensors.

{"d": 
  {"results": [
      {
         "__metadata": 
         {
            "type": "sap.devs.demo.iot.IOTType",
            "uri": "http://52.1.35.56/sap/devs/demo/iot/services/iot.xsodata/IOT(6096)"
         },
         "SVALUE": "25.169"
       }
      ]
   }
}

Nothing fancy, and outside of the whole demo really unimpressive, but for my purposes it was a perfect way to try my hand at something I’d not attempted before, despite reading about it and being extremely curious.

So up came my Xcode environment on my Mac, and I followed the wizard to create a basic Cocoa application.

[Screenshot: the Xcode new-project wizard]

With very basic settings and a project name.

[Screenshot: the basic project settings and name]

From there it was a matter of adding a label or two and a button to my Main Storyboard and then linking the label and button into the view controller.

[Screenshot: the storyboard with the label and button]

Now inside my View Controller I added a timer control so I would be able to continuously update the value from the server itself.

timer = NSTimer.scheduledTimerWithTimeInterval(1.0, target: self, selector: #selector(self.doUpdate), userInfo: nil, repeats: true)

Then I added my doUpdate function, which actually calls the service and retrieves the XSODATA value. I really just wanted to show how I was able to read the SAP HANA XSODATA service; I’m sure you OS X and iOS developers would be able to do this more efficiently, but the results, I think, would be the same.

func doUpdate() {
    let url = NSURL(string: "\(proto)://\(server):\(port)\(path)\(param)")!
    let task = NSURLSession.sharedSession().dataTaskWithURL(url) { (data, response, error) -> Void in
        if let urlContent = data {
            do {
                let json = try NSJSONSerialization.JSONObjectWithData(urlContent, options: []) as! NSMutableDictionary
                let lv_value = json["d"]?["results"]! as? NSArray
                let lv_temp_value = lv_value![0]["SVALUE"]! as! String
                //print("lv_temp_value: ", lv_temp_value)
                dispatch_async(dispatch_get_main_queue(), { () -> Void in
                    self.lblTemp.stringValue = "\(lv_temp_value) \(self.temp_unit)"
                })
            } catch {
                print("error: \(error)")
            }
        }
    }
    task.resume()
}

override var representedObject: AnyObject? {
    didSet {
        // Update the view, if already loaded.
    }
}

You’ll notice I also use some variables for my URL creation, just because I thought it looked cleaner than one really long URL string.

var proto = "http"
var server = "xx.xx.xx.xx"
var port = "80xx"
var path = "/sap/devs/demo/iot/services/iot.xsodata/IOT"
var param = "?$orderby=ID%20desc&$top=1&$select=SVALUE&$filter=SNAME%20eq%20%27Office1%27&$format=json"
var temp_unit = "° C"

The parameters refer to the SAP HANA Developer Edition that you can find via the SAP Developers website.
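As an aside, if you ever need to build that query string programmatically rather than hand-encoding it, the percent-encoding rules are easy to get wrong. Here is a quick Python sketch (standard library only; the host and port are placeholders just like above) that produces the exact same param string:

```python
from urllib.parse import quote

proto, server, port = "http", "xx.xx.xx.xx", "80xx"
path = "/sap/devs/demo/iot/services/iot.xsodata/IOT"

# OData system query options: newest row only, one column, one sensor
opts = [
    ("$orderby", "ID desc"),
    ("$top", "1"),
    ("$select", "SVALUE"),
    ("$filter", "SNAME eq 'Office1'"),
    ("$format", "json"),
]
# quote() encodes spaces as %20 and apostrophes as %27, matching the
# hand-written param string above; the $ in the keys stays literal
query = "&".join(f"{k}={quote(v)}" for k, v in opts)
url = f"{proto}://{server}:{port}{path}?{query}"
print(url)
```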

The end result is just a quick little application that I can now expand to duplicate the existing SAP HANA XS application if I wanted to.

[Screenshot: the finished application showing the sensor value]

Sometimes a demo is just waiting for you to get over that initial “what the hell?” moment! For me it was this.

let lv_value = json["d"]?["results"]! as? NSArray
let lv_temp_value = lv_value![0]["SVALUE"]! as! String

After I managed to get that bit of code working and actually access the part of my JSON response I needed, the rest was easy. Sometime I’ll share the larger demo, now converted to OS X.
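For anyone who wants to poke at that traversal outside of Xcode, here is the same d → results → SVALUE path in a quick Python sketch (standard library only, with the response trimmed to the relevant field):

```python
import json

# The XSODATA response shape shown earlier: OData v2 wraps everything
# in "d" and wraps collections in "results"
payload = '{"d": {"results": [{"SVALUE": "25.169"}]}}'

doc = json.loads(payload)
svalue = doc["d"]["results"][0]["SVALUE"]
print(svalue)  # 25.169
```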

Developer’s Journal: ABAP Search Help For HANA Data

Introduction

In the last blog, I discussed techniques for accessing the HANA Catalog information from ABAP and how to create an ABAP internal table from a HANA object without a matching data dictionary object. You can probably tell that I’m building up to a tool which can function as a generic HANA catalog viewer, showing both metadata about HANA database objects and a content preview. Before I build that tool, I want to make selection of the catalog objects as simple as possible. Therefore I would like to implement an ABAP Search Help which gets its data from HANA instead of the underlying ABAP database. Ultimately I want it to work in the UI like the following video. Please note: I’m running NetWeaver 7.31 so I have the SuggestValues feature which was new in NetWeaver 7.02.  However ultimately the solution here is a normal Data Dictionary Search Help which could be used on any ABAP release level.

The source code for this example can be downloaded here. It contains the Search Help definition, the Function Module/Group for the Search Help exit implementation, and the DDic structure with the search help binding which allows the connection between the importing parameters. Please note that you will also need to download the source code from my previous blog as well.

Search Help Exit

Normally when an ABAP developer implements a Search Help, they only supply the name of the underlying table or view and all the selection work is done for them by the Search Help framework. In this case, however, I needed complete control over the selection logic so that I could use ADBC and my HANA Catalog Utility class from the previous blog in order to query the HANA database for available objects.

The definition of the Search Help itself isn’t all that special in this case. I need to know the currently selected Database Connection, Schema, and Object Type (Table or View) in order to perform a query. Therefore I map these fields as my importing parameters for the Search Help.

The major difference is that the selection method is blank in this search help. Instead I supply the name of a function module, ZHANA_OBJECT_SEARCH, as the Search Help exit. This function module must have a pre-defined interface, but can then serve as the implementation of my search help.

Search Help Exit Function Module

All search help exit function modules must have the same function interface so that they can be called by the search help framework.

 function zhana_object_search.
 *"----------------------------------------------------------------------
 *"*"Local Interface:
 *"  TABLES
 *"      SHLP_TAB TYPE  SHLP_DESCT
 *"      RECORD_TAB STRUCTURE  SEAHLPRES
 *"  CHANGING
 *"     VALUE(SHLP) TYPE  SHLP_DESCR
 *"     VALUE(CALLCONTROL) LIKE  DDSHF4CTRL STRUCTURE  DDSHF4CTRL
 *"----------------------------------------------------------------------

There are various control steps in the processing of the search help exit which can be used to override the various search help events. The only one we need to implement in this case is the callcontrol-step SELECT. This is the primary query event of the search help. From this event we can read the current importing values from the shlp-selopt table.

 if callcontrol-step = 'SELECT'.

     data lr_model type ref to zcl_hana_catalog_utilities.
     data ls_search type zhana_obj_search.
     field-symbols <ls_selopt> type ddshselopt.
     data lx_root type ref to cx_root.

     read table shlp-selopt with key shlpfield = 'CON_NAME'
         assigning <ls_selopt>.
     if sy-subrc = 0.
       ls_search-con_name = <ls_selopt>-low.
     endif.
     read table shlp-selopt with key shlpfield = 'SCHEMA'
         assigning <ls_selopt>.
     if sy-subrc = 0.
       ls_search-schema = <ls_selopt>-low.
     endif.
     read table shlp-selopt with key shlpfield = 'OBJ_TYPE'
         assigning <ls_selopt>.
     if sy-subrc = 0.
       ls_search-obj_type = <ls_selopt>-low.
     endif.
     read table shlp-selopt with key shlpfield = 'OBJ_NAME'
         assigning <ls_selopt>.
     if sy-subrc = 0.
       ls_search-obj_name = <ls_selopt>-low.
     endif.

Now that we have all of our search input criteria, we can use the HANA Catalog Utilities class from the previous blog to search for all tables or views which match those criteria. Here is a subset of that logic; see the downloadable source code sample for the complete implementation.

         create object lr_model
           exporting
             iv_con_name = ls_search-con_name.

         if ls_search-obj_type = 'T'. "table
           data lt_tables type zhana_tables.
           field-symbols <ls_table> like line of lt_tables.

           lv_table = ls_search-obj_name.
           lt_tables = lr_model->get_hana_tables(
               iv_schema   = lv_schema    " Schema
               iv_table    = lv_table     " Table (can be wildcard with %)
               iv_max_rows = callcontrol-maxrecords ). " Maximum Number of Rows
 ****Map to LT_SHLP
           loop at lt_tables assigning <ls_table>.
             append initial line to lt_shlp assigning <ls_shlp>.
             <ls_shlp>-con_name = ls_search-con_name.
             <ls_shlp>-obj_type = ls_search-obj_type.
             <ls_shlp>-schema   = ls_search-schema.
             <ls_shlp>-obj_name = <ls_table>-table_name.
           endloop.
         else.

The final activity is to place the query results back into the search help. This is done by calling the function module F4UT_RESULTS_MAP.

     call function 'F4UT_RESULTS_MAP'
       exporting
         source_structure   = 'ZHANA_OBJ_SEARCH'
 *       apply_restrictions = abap_true
       tables
         shlp_tab           = shlp_tab
         record_tab         = record_tab
         source_tab         = lt_shlp
       changing
         shlp               = shlp
         callcontrol        = callcontrol
       exceptions
         illegal_structure  = 1
         others             = 2.
     if sy-subrc <> 0.
     endif.

Structure for Parameter Mapping

The final step in getting the input parameter mapping shown in the video to work within Web Dynpro is to map the search help into a data dictionary structure and then use that structure as the basis of the Web Dynpro Context Node. The importing parameters from other attributes in this Context Node will then be transferred automatically by the framework (even for Suggest Values).

You can make the explicit assignment of the new search help to the OBJ_NAME field and then use the Generate Proposals button to automatically map the input fields of the search help to the corresponding fields of the structure.

The final step is to use this structure as the source Dictionary structure of the Context Node and you have the Value Help working as described in the video at the opening of this blog.

Developer’s Journal: HANA Catalog Access from ABAP

Introduction

In my last blog, I introduced the topic of ABAP Secondary Database Connection and the various options for using this technology to access information in a HANA database from ABAP. Remember there are two scenarios where ABAP Secondary Database Connection might be used.  One is when you have data being replicated from an ABAP based application to HANA. In this case the ABAP Data Dictionary already contains the definitions of the tables which you access with SQL statements.

The other option involves using HANA to store data gathered via other means.  Maybe the HANA database is used as the primary persistence for completely new data models.  Or it could be that you just want to leverage HANA specific views or other modeled artifacts upon ABAP replicated data.  In either of these scenarios, the ABAP Data Dictionary won’t have a copy of the objects which you are accessing via the Secondary Database Connection. Without the support of the Data Dictionary, how can we define ABAP internal tables which are ready to receive the result sets from queries against such objects?

In this blog, I want to discuss the HANA-specific techniques for reading the Catalog and also how the ADBC classes can be used to build a dynamic internal table which matches a HANA table or view. The complete source code discussed in this blog can be downloaded from the SCN Code Exchange.

HANA Catalog

The first task is figuring out how to read metadata about HANA tables and views. When accessing these objects remotely from ABAP, we need to be able to prepare ABAP variables or internal tables to receive the results. We can’t just declare objects with reference to the Data Dictionary like we normally would. Therefore we need some way to access the metadata which HANA itself stores about its tables, views, and their fields.

HANA has a series of Catalog objects.  These are tables/views from the SYS Schema. Some of the ones which we will use are:

  • SCHEMAS – A list of all Schemas within a HANA database.  This is useful because once we connect to HANA via the Secondary Database Connection we might need to change from the default user Schema to another schema to access the objects we need.
  • DATA_TYPES – A list of all HANA built-in data types. This can be useful when you need the detail technical specification of a data type used within a table or view column.
  • TABLES – A list of all tables and their internal table ID.  We will need that table ID to look up the Table Columns.
  • TABLE_COLUMNS – A listing of columns in a Table as well as the technical information about them.
  • VIEWS –  A list of all views and their internal view ID.  We will need that View ID to look up the View Columns. We can also read the View creation SQL for details about the join conditions and members of the view.
  • VIEW_COLUMNS – A listing of columns in a View as well as the technical information about them.

Now, reading these views from ABAP can be done exactly as we discussed in the previous blog. You can use the Secondary Database Connection and query them with ADBC, for example. Here is the code I use to query the SCHEMAS view:

gr_sql_con = cl_sql_connection=>get_connection( gv_con_name ).
create object gr_sql
  exporting
    con_ref = gr_sql_con.

data lr_result type ref to cl_sql_result_set.
lr_result = gr_sql->execute_query(
  |select * from schemas| ).

data lr_schema type ref to data.
get reference of rt_schemas into lr_schema.
lr_result->set_param_table( lr_schema ).
lr_result->next_package( ).
lr_result->close( ).

Personally I figured it might be useful to have one utility class which can read from any of these various catalog views.  You can download this class from here. Over the next few blogs in this series I will demonstrate exactly what I built up around this catalog utility.

ABAP Internal Tables from ADBC

I originally had the idea that I would read the TABLE_COLUMNS view from the HANA catalog and then use the technical field information to generate a corresponding ABAP RTTI and dynamic internal table. My goal was to make queries from tables which aren’t in the ABAP data dictionary much easier. As it turns out, I didn’t need to read this information directly from the catalog views, because ADBC already had functionality to support this requirement.

The ADBC result set object (CL_SQL_RESULT_SET) has a method named GET_METADATA. This returns an ABAP internal table with all the metadata about whichever object you just queried. Therefore I could build a generic method which takes in any HANA table or view and does a select single from it. With the result set from this select single, I could then capture the metadata for this object.

METHOD get_abap_type.
  DATA lr_result TYPE REF TO cl_sql_result_set.
  lr_result = gr_sql->execute_query(
    |select top 1 * from { obj_name_check( iv_table_name ) }| ).
  rt_meta = lr_result->get_metadata( ).
  lr_result->close( ).
ENDMETHOD.

For example if I run this method for my ABAP Schema on table SFLIGHT I get the following information back:

Of course the most value comes when you read an object which doesn’t exist in the ABAP Data Dictionary.  For example, I could also read one of the HANA Catalog Views: SCHEMAS

This metadata might not seem like much information, but it’s enough to in turn generate an ABAP RTTI (RunTime Type Information) object. From the RTTI, I can now generate an ABAP internal table for any HANA table or view in only a few lines of code:

DATA lr_tabledescr TYPE REF TO cl_abap_tabledescr.
lr_tabledescr = cl_abap_tabledescr=>create(
  p_line_type = me->get_abap_structdesc( me->get_abap_type( iv_table_name ) ) ).
CREATE DATA rt_data TYPE HANDLE lr_tabledescr.

This all leads up to a simple method which can read from any HANA table and return an ABAP internal table with the results:

METHOD get_abap_itab.
* Importing  IV_TABLE_NAME  TYPE STRING
* Importing  IV_MAX_ROWS    TYPE I DEFAULT 1000
* Returning  VALUE(RT_DATA) TYPE REF TO DATA
* Exception  CX_SQL_EXCEPTION
  DATA lr_result TYPE REF TO cl_sql_result_set.
  IF iv_max_rows IS SUPPLIED.
    lr_result = gr_sql->execute_query(
      |select top { iv_max_rows } * from { obj_name_check( iv_table_name ) }| ).
  ELSE.
    lr_result = gr_sql->execute_query(
      |select * from { obj_name_check( iv_table_name ) }| ).
  ENDIF.
  DATA lr_tabledescr TYPE REF TO cl_abap_tabledescr.
  lr_tabledescr = cl_abap_tabledescr=>create(
    p_line_type = me->get_abap_structdesc( me->get_abap_type( iv_table_name ) ) ).
  CREATE DATA rt_data TYPE HANDLE lr_tabledescr.
  lr_result->set_param_table( rt_data ).
  lr_result->next_package( ).
  lr_result->close( ).
ENDMETHOD.
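If you want to see the pattern outside ABAP, most database APIs offer the same trick: query a row, then build the row type from the result-set metadata instead of from a predeclared structure. A quick Python sketch of the idea (SQLite standing in for the HANA object, with cursor.description playing the role of GET_METADATA):

```python
import sqlite3
from collections import namedtuple

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sflight (carrid TEXT, connid TEXT, price REAL)")
conn.execute("INSERT INTO sflight VALUES ('LH', '0400', 666.0)")

cur = conn.execute("SELECT * FROM sflight LIMIT 1")
# cursor.description is the DB-API analogue of GET_METADATA( ):
# one entry per result column, from which a row type is built dynamically
columns = [col[0] for col in cur.description]
Row = namedtuple("Row", columns)
rows = [Row(*r) for r in cur.fetchall()]
print(columns)  # ['carrid', 'connid', 'price']
```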

Closing

Between the HANA Catalog objects and the ADBC functionality to read type information, I’ve now got all the pieces I need to perform dynamic queries against any HANA table or view. Ultimately I could use this functionality to build all kinds of interesting tools. In fact I’m already playing around with a generic catalog/data browser; but that’s something to look forward to in a future blog.

Developer’s Journal: ABAP/HANA Connectivity via Secondary Database Connection

Introduction

In the first edition of this HANA Developer’s Journal I barely scratched the surface of some of the ways a developer might begin their transition into the HANA world. Today I want to describe a scenario I’ve been studying quite a lot in the past few days: accessing HANA from ABAP in its current state. By this I mean what can be built today. We all know that SAP has some exciting plans for ABAP-specific functionality on top of HANA, but what everyone might not know is how much can be done today, when HANA runs as a secondary database for your current ABAP-based systems. This is exactly how SAP is building the current HANA Accelerators, so it’s worth taking a little time to study how these are built and what development options within the ABAP environment support this scenario.

HANA as a Secondary Database

The scenario I’m describing is one that is quite common right now for HANA implementations.  You install HANA as a secondary database instead of a replacement for your current database.  You then use replication to move a copy of the data to the HANA system. Your ABAP applications can then be accelerated by reading data from the HANA copy instead of the local database. Throughout the rest of this blog I want to discuss the technical options for how you can perform that accelerated read.

ABAP Secondary Database Connection

ABAP has long had the ability to make a secondary database connection. This allows ABAP programs to access a database system other than the local database; the secondary connection can even be of a completely different DBMS vendor type. This functionality has been extended to support SAP HANA for all NetWeaver release levels from 7.00 onward. Service Note 1517236 (SAP internal) or Note 1597627 (for everyone) lists the preconditions and technical steps for connecting to HANA systems and should always be the master guide for these preconditions; however, I will summarize the current state at the time of publication of this blog.

Preconditions

  • SAP HANA Client is installed on each ABAP Application Server. ABAP Application Server Operating System must support the HANA Client (check Platform Availability Matrix for supported operating systems).
  • SAP HANA DBSL is installed (this is the Database specific library which is part of the ABAP Kernel)
  • The SAP HANA DBSL is only available for the ABAP Kernel 7.20
    • Kernel 7.20 is already the kernel for NetWeaver 7.02, 7.03, 7.20, 7.30 and 7.31
    • Kernel 7.20 is backward compatible and can also be applied to NetWeaver 7.00, 7.01, 7.10, and 7.11
  • Your ABAP system must be Unicode

Next, your ABAP system must be configured to connect to this alternative database. You have one central location where you maintain the database connection string, username, and password. Your applications then only need to specify the configuration key for the database, making the connection information application-independent.

This configuration can be done via table maintenance (Transaction SM30) for table DBCON. From the configuration screen you supply the DBMS type (HDB for HANA), the user name and password you want to use for all connections and the connection string. Be sure to include the port number for HANA systems. It should be 3<Instance Number>15. So if your HANA Database was instance 01, the port would be 30115.
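Since that port rule is easy to mis-type, here is a trivial sketch of the computation (the helper name is my own):

```python
def hana_sql_port(instance: str) -> str:
    """Classic HANA SQL port number: 3<two-digit instance number>15."""
    return f"3{int(instance):02d}15"

print(hana_sql_port("01"))  # 30115
```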

DBCON can also be maintained via transaction DBACOCKPIT. Ultimately you end up with the same entry information as DBCON, but you get a little more information (such as the default Schema) and you can test the connection information from here.

Secondary Database Connection Via Open SQL

The easiest solution for performing SQL operations from ABAP to your secondary database connection is to use the same Open SQL statements which ABAP developers are already familiar with. If you supply the additional syntax of CONNECTION (dbcon), you can force the Open SQL statement to be performed against the alternative database connection.

For instance, let’s take a simple Select and perform it against our HANA database:

  SELECT * FROM sflight CONNECTION ('AB1')
    INTO TABLE lt_sflight
   WHERE carrid = 'LH'.

The advantage of this approach is its simplicity. With one minor addition to existing SQL statements, you can redirect your operation to HANA. The downside is that the table or view you are accessing must exist in the ABAP Data Dictionary. That isn’t a huge problem for this Accelerator scenario, considering the data all resides in the local ABAP DBMS and gets replicated to HANA; in this situation we will always have local copies of the tables in the ABAP Data Dictionary. It does mean, however, that you can’t access HANA-specific artifacts like Analytic Views or database procedures. You also couldn’t access any tables which use HANA as their own primary persistence.

Secondary Database Connection Via Native SQL

ABAP also has the ability to use Native SQL. In this situation you write database-specific SQL statements. This allows you to access tables and other artifacts which only exist in the underlying database. There is also syntax in Native SQL to allow you to call database procedures. If we take the example from above, we can rewrite it using Native SQL:

EXEC SQL.
    connect to 'AB1' as 'AB1'
  ENDEXEC.
  EXEC SQL.
    open dbcur for select * from sflight where mandt = :sy-mandt and carrid = 'LH'
  ENDEXEC.
  DO.
    EXEC SQL.
      fetch next dbcur into :ls_sflight
    ENDEXEC.
    IF sy-subrc NE 0.
      EXIT.
    ELSE.
      APPEND ls_sflight TO lt_sflight.
    ENDIF.
  ENDDO.
  EXEC SQL.
    close dbcur
  ENDEXEC.
  EXEC SQL.
    disconnect 'AB1'
  ENDEXEC.

It’s certainly more code than the Open SQL option, and a little less elegant because we are working with database cursors to bring back an array of data. However, the upside is access to features we wouldn’t have otherwise. For example, I can insert data into a HANA table and use a HANA database sequence for the number range, or built-in database functions like now().

    EXEC SQL.
      insert into "REALREAL"."realreal.db/ORDER_HEADER"
       values("REALREAL"."realreal.db/ORDER_SEQ".NEXTVAL,
                   :lv_date,:lv_buyer,:lv_processor,:lv_amount,now() )
    ENDEXEC.
    EXEC SQL.
      insert into "REALREAL"."realreal.db/ORDER_ITEM" values((select max(ORDER_KEY)
        from "REALREAL"."realreal.db/ORDER_HEADER"),0,:lv_product,:lv_quantity,:lv_amount)
    ENDEXEC.

The other disadvantage of Native SQL via EXEC SQL is that there are few to no syntax checks on the SQL statements you create. Errors aren’t caught until runtime and can lead to short dumps if the exceptions aren’t properly handled. This makes testing absolutely essential.

Secondary Database Connection via Native SQL – ADBC

There is a third option that provides the benefits of the Native SQL connection via EXEC SQL but also improves on some of its limitations. This is ADBC (ABAP Database Connectivity). Basically it is a series of classes (CL_SQL*) which simplify and abstract the EXEC SQL blocks. For example, we could once again rewrite our SELECT * FROM SFLIGHT example:

****Create the SQL Connection and pass in the DBCON ID to state which Database Connection will be used
  DATA lr_sql TYPE REF TO cl_sql_statement.
  CREATE OBJECT lr_sql
    EXPORTING
      con_ref = cl_sql_connection=>get_connection( 'AB1' ).

****Execute a query, passing in the query string and receiving a result set object
  DATA lr_result TYPE REF TO cl_sql_result_set.
  lr_result = lr_sql->execute_query(
    |SELECT * FROM SFLIGHT WHERE MANDT = { sy-mandt } AND CARRID = 'LH'| ).

****All data (parameters in, results sets back) is done via data references
  DATA lr_sflight TYPE REF TO data.
  GET REFERENCE OF lt_sflight INTO lr_sflight.

****Get the result data set back into our ABAP internal table
  lr_result->set_param_table( lr_sflight ).
  lr_result->next_package( ).
  lr_result->close( ).

Here we at least remove the step-wise processing of the database cursor and instead read an entire package of data into our internal table at once. By default the initial package size returns all resulting records, but you can specify any package size you wish, thereby tuning processing for large result sets. Most importantly for HANA situations, ADBC also lets you access non-Data-Dictionary artifacts, including HANA stored procedures. Given the advantages of ADBC over EXEC SQL, it is SAP’s recommendation that you always try to use the ADBC class-based interfaces.
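The package concept maps nicely onto the fetchmany(size) call found in most database APIs. A quick Python sketch (SQLite standing in for HANA) shows the same tune-the-package-size idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sflight (carrid TEXT, connid INTEGER)")
conn.executemany("INSERT INTO sflight VALUES (?, ?)",
                 [("LH", n) for n in range(10)])

cur = conn.execute("SELECT * FROM sflight WHERE carrid = 'LH'")
packages = []
while True:
    # Analogous to calling NEXT_PACKAGE( ) with an explicit package size:
    # the 10 matching rows arrive in chunks of at most 4
    package = cur.fetchmany(4)
    if not package:
        break
    packages.append(package)
print([len(p) for p in packages])  # [4, 4, 2]
```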

Closing

This is really just the beginning of what you could do with this Accelerator approach to ABAP integration with SAP HANA. I’ve used very simplistic SQL statements in my examples on purpose so that I could focus on the details of how the technical integration works. However, the real power comes when you execute more powerful statements (SELECT SUM … GROUP BY), access HANA-specific artifacts (like OLAP views on OLTP tables), or call database procedures. These are all topics which I will explore in future editions of this blog.

Developer’s Journal: First Steps into the SAP HANA World

Introduction

A long time ago when I first started blogging on SDN, I used to write frequently in the style of a developer journal. I was working for a customer and therefore able to just share my experiences as I worked on projects and learned new techniques. My goal with this series of blog postings is to return to that style but with a new focus on a journey to explore the new and exciting world of SAP HANA.

At the beginning of the year, I moved to the SAP HANA Product Management team and I am responsible for the developer persona for SAP HANA. In particular I focus on tools and techniques developers will need for the upcoming wave of transactional style applications for SAP HANA.

I come from an ABAP developer background having worked primarily on ERP; therefore my first impressions are to draw correlations back to what I understand from the ABAP development environment and to begin to analyze how development with HANA changes so many of the assumptions and approaches that ABAP developers have.

Transition Closer to the Database

My first thought after a few days of working with SAP HANA was that I needed to seriously brush up on my SQL skills. Of course I have plenty of experience with SQL, but as ABAP developers we tend to shy away from the deeper aspects of SQL in favor of processing the data on the application server in ABAP. For the ABAP developers reading this: when was the last time you used a sub-query, or even a join, in ABAP? Or a SELECT SUM? As ABAP developers we are taught early on to abstract the database as much as possible, and we tend to trust the processing on the application server, where we have total control, instead of the “black box” of the DBMS. This situation has only been compounded in recent years, as we have a growing number of tools in ABAP which will generate the SQL for us.

This approach has served ABAP developers well for many years. Let’s take the typical situation of loading supporting details from a foreign key table. In this case we want to load all flight details from SFLIGHT and also load the carrier details from SCARR. In ABAP we could of course write an inner join:

However many ABAP developers would take an alternative approach where they perform the join in memory on the application server via internal tables:

This approach can be especially beneficial when combined with ABAP table buffering. Keep in mind that I’m comparing developer design patterns here, not the actual technical merits of my specific examples. On my system the datasets weren’t actually large enough to show any statistically relevant performance difference between the two approaches.
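To make the two design patterns concrete in a database-neutral way, here is a quick Python sketch (SQLite in memory, with SFLIGHT/SCARR reduced to toy tables): the first query performs the join in the database, while the second reads both tables and joins them in application code, the way the internal-table version does. Both produce the same rows; the difference is purely where the work happens.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sflight (carrid TEXT, connid TEXT);
    CREATE TABLE scarr   (carrid TEXT, carrname TEXT);
    INSERT INTO sflight VALUES ('LH', '0400'), ('AA', '0017');
    INSERT INTO scarr   VALUES ('LH', 'Lufthansa'), ('AA', 'American Airlines');
""")

# Pattern 1: let the database perform the join
joined = conn.execute("""
    SELECT f.carrid, f.connid, c.carrname
      FROM sflight f JOIN scarr c ON f.carrid = c.carrid
""").fetchall()

# Pattern 2: read both tables, then join on the "application server"
flights = conn.execute("SELECT carrid, connid FROM sflight").fetchall()
carriers = dict(conn.execute("SELECT carrid, carrname FROM scarr"))
in_memory = [(carrid, connid, carriers[carrid]) for carrid, connid in flights]

print(sorted(joined) == sorted(in_memory))  # True
```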

Now if we put SAP HANA into the mixture, how would the developer approach change? In HANA the developer should strive to push more of the processing into the database, but the question might be why?

Much of the focus on HANA is that it is an in-memory database. I think it’s pretty easy for almost any developer to see the advantage of all your data being in fast memory as opposed to relatively slow disk-based storage. However, if this were the only advantage, we wouldn’t see a huge difference compared to processing in ABAP. After all, ABAP has full table buffering. Ignoring the cost of updates, if we were to buffer both SFLIGHT and SCARR, our ABAP table-loop join would be pretty fast, but it still wouldn’t be as fast as HANA.

The other key points of HANA’s architecture are that, in addition to being in-memory, it is also designed for columnar storage and for parallel processing. In the ABAP table loop, each record has to be processed sequentially, one record at a time. The current versions of ABAP statements such as these just aren’t designed for parallel processing; instead, ABAP leverages multiple cores/CPUs by running different user sessions in separate work processes. HANA, on the other hand, can parallelize blocks of data within a single request. The fact that the data is all in memory further supports this parallelization by making access from multiple CPUs more useful, since data can be “fed” to the CPUs that much faster. After all, parallelization isn’t useful if the CPUs spend most of their cycles waiting for data to process.

The other technical aspect at play is the columnar architecture of SAP HANA. When a table is stored columnar, all the data for a single column is stored together in memory. Row storage (which is how even ABAP internal tables are processed) places data a row at a time in memory.

This means that, for the join condition, the CARRID column in each table can be scanned faster because of the arrangement of the data. Scanning over unneeded data in memory doesn’t have nearly the cost of the same operation on disk (where you must wait for platter rotation), but there is a cost all the same. Storing the data columnar reduces that cost when performing operations which scan one or more columns, and it also helps the compression routines.
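A toy illustration of that storage difference (pure Python, and no claim about HANA internals): scanning the CARRID column touches one contiguous sequence in a column layout, but has to visit every complete row in a row layout.

```python
# Row layout: each record's fields sit together
row_store = [
    {"CARRID": "LH", "CONNID": "0400", "PRICE": 666.0},
    {"CARRID": "AA", "CONNID": "0017", "PRICE": 422.0},
    {"CARRID": "LH", "CONNID": "0402", "PRICE": 669.0},
]

# Column layout: each column's values sit together
col_store = {
    "CARRID": ["LH", "AA", "LH"],
    "CONNID": ["0400", "0017", "0402"],
    "PRICE":  [666.0, 422.0, 669.0],
}

# A scan over CARRID reads one list in the column store...
hits_col = [i for i, v in enumerate(col_store["CARRID"]) if v == "LH"]
# ...but must visit every whole row in the row store
hits_row = [i for i, r in enumerate(row_store) if r["CARRID"] == "LH"]
print(hits_col == hits_row == [0, 2])  # True
```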

For these reasons, developers (and especially ABAP developers) will need to begin to re-think their applications designs. Although SAP has made statements about having SAP HANA running as the database system for the ERP, to extract the maximum benefit of HANA we will also need to push more of the processing from ABAP down into the database. This will mean ABAP developers writing more SQL and interacting more often with the underlying database. The database will no longer be a “bit bucket” to be minimized and abstracted, but instead another tool in the developers’ toolset to be fully leveraged. Even the developer tools for HANA and ABAP will move closer together (but that’s a topic for another day).

With that change in direction in mind, I started reading some books on SQL this week. I want to grow my SQL skills beyond what is required in the typical ABAP environment, as well as refresh my memory on things that can be done in SQL but that I perhaps haven’t touched in a number of years. Right now I’m working through O’Reilly’s Learning SQL, 2nd Edition, by Alan Beaulieu. I’ve found that I can study the SQL specification of HANA all day, but recreating exercises forces me to really use and think through the SQL. The book I’m currently studying lists all of its SQL examples formatted for MySQL. One of the more interesting aspects of this exercise has been adjusting the examples to run within SAP HANA and, more importantly, changing some of them to be better optimized for columnar and in-memory processing. I think I’m actually learning more by tweaking examples and seeing what happens than from any other aspect.

What’s Next

There are actually lots of aspects of HANA exploration that I can’t talk about yet. While learning the basics and mapping ABAP development concepts onto a future that includes HANA, I also get to work with functionality which is still in the early stages of development. That said, I will try to share as much as I can via this blog over time. In the next installment I’d like to focus on my next task for exploration: SQLScript.