Imagine this: your friend is a plant maintenance supervisor. His colleagues running the plant fill out handwritten forms identifying broken-down equipment. Half those papers he loses somewhere in his disaster of a desk. The other half he has to mash out on his desktop keyboard over lunch. His job is fixing problems, but he's spending more time pushing papers.

Enter you. You're a phenomenally skilled SAP developer, known throughout the company for creative solutions. You hand your buddy an iPhone. On it runs an SAPUI5 app that lets him take a picture of the paper as soon as it's handed to him. The app interprets what's written on the paper and puts it into the appropriate fields on his screen. He gives it a once-over and hits "Submit" - ready to actually enjoy a distraction-free lunch break.

You are a hero to your friend and he tells everyone how awesome you are. The CEO gets wind of this innovation and makes you CTO. You retire wealthy and start that surf shop on the beach you always dreamed of.


I'm going to show you how to give your Gateway system “eyes” that can interpret the content of a photo, in a ridiculously easy way. No promises on the surf shop, though.

Google recently made their Cloud Vision API available for anyone to try. I love it when the big software companies give me things to play with. It means I can try out wild and crazy ideas from the comfort of my keyboard. So as soon as I could, I took some free time to tinker with the Vision API and mash it up with SAP Gateway.

I present here a simple prototype for using these two tools in tandem. There are about a billion ways this could be useful, so I hope my little slice of code helps someone along the way.

I’ll show you how to use Gateway to request Google Vision API processing. I picked a couple of Vision capabilities that I find awesome, but the API is capable of much more.


Without further ado - let’s get started!


Setup


Before you write any code, you’ll need:

  • An SAP Gateway system. If you’re reading this blog and don’t know what that is, then I apologize because you’re probably really bored.
  • Configuration to allow that system to make HTTPS POST requests to an external internet API. In practice that means importing Google’s SSL certificates into STRUST so the system trusts the connection (a quick connectivity check is sketched just after this list).
  • A Google account, with the Cloud Vision API enabled. Be warned: if you use it more than 1,000 times a month, it’s not free. Just make sure it takes you less than 1,000 tries to get it right.
  • An API key set up in the Google account. I suggest using the browser API key for prototyping, and service accounts for productive use. Getting an API key is covered in the Google getting started guide.
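
If you want to sanity-check that configuration before writing any service code, a throwaway report along these lines (the report name is made up, and there’s no error handling) should print an HTTP status from Google rather than dumping with an SSL error:


  REPORT zvision_conn_check.

  DATA lo_client TYPE REF TO if_http_client.
  DATA lv_code   TYPE i.
  DATA lv_reason TYPE string.

  " If STRUST is missing the Google certificates, this call (or the send)
  " fails with an SSL error instead of reaching the API.
  cl_http_client=>create_by_url(
    EXPORTING
      url    = 'https://vision.googleapis.com/v1/images:annotate'
    IMPORTING
      client = lo_client ).

  lo_client->send( ).
  lo_client->receive( ).

  lo_client->response->get_status(
    IMPORTING
      code   = lv_code
      reason = lv_reason ).

  " Any HTTP status back (even a 403 for the missing API key) means the
  " connection and certificate setup are fine.
  WRITE: / 'HTTP status:', lv_code, lv_reason.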

Once you have the above configured, it’s time to get cracking on the code.

Show Me The Code Already

Now that we have things ready to roll, fire up your Gateway system and go to t-code SEGW. I set up a very simple entity that will just hold the description of what Google thinks an image is. Just 3 fields - an Id (the key), a Description, and a ContentType - all of which you’ll see again in the code below:



Make sure to flag that entity as a "Media" entity:



And that’s it for our bare-bones service definition. You could get a lot more architecturally crazy and set up a bunch of entities and fields to capture every single thing that comes out of the Google API - but I just wanted to get it up and running to see what I could get.


Only two steps left in the setup: coding the model enhancement necessary for media entities and coding the CREATE_STREAM method for processing the file data.


First, the model enhancement. Navigate to the *MPC_EXT class in your project and do a redefinition of the DEFINE method. This code should get you what you need. It’s so short that it’s basically self-explanatory:



  METHOD define.

    super->define( ).

    DATA:
      lo_entity   TYPE REF TO /iwbep/if_mgw_odata_entity_typ,
      lo_property TYPE REF TO /iwbep/if_mgw_odata_property.

    " Flag the ContentType property so Gateway knows which field carries
    " the MIME type of the uploaded media.
    lo_entity = model->get_entity_type( iv_entity_name = 'VisionDemo' ).

    IF lo_entity IS BOUND.
      lo_property = lo_entity->get_property( iv_property_name = 'ContentType' ).
      lo_property->set_as_content_type( ).
    ENDIF.

  ENDMETHOD.


The model is now ready to support media content (in our case, pictures) coming in. The other side of the equation is preparing the request to be sent to Google for processing. We’ll do that in the CREATE_STREAM method of the *DPC_EXT class that the SEGW project generated. Same deal as before: redefine that method and put in the following code:



  METHOD /iwbep/if_mgw_appl_srv_runtime~create_stream.
    TYPES: BEGIN OF feature,
             type TYPE string,
             max_results TYPE i,
           END OF feature.

    TYPES: features TYPE STANDARD TABLE OF feature WITH DEFAULT KEY.

    TYPES: BEGIN OF image,
             content TYPE string,
           END OF image.

    TYPES: BEGIN OF request,
             image TYPE image,
             features TYPE features,
           END OF request.

    TYPES: requests TYPE STANDARD TABLE OF request WITH DEFAULT KEY.

    TYPES: BEGIN OF overall_request,
             requests TYPE requests,
           END OF overall_request.

    DATA overall_request TYPE overall_request.
    DATA requests TYPE TABLE OF request.
    DATA request TYPE request.
    DATA feature TYPE feature.
    DATA lv_b64_content TYPE string.
    DATA lo_http_client  TYPE REF TO if_http_client.
    DATA lv_response_data TYPE string.
    DATA lv_url TYPE string.
    DATA lv_request_json TYPE string.
    DATA lv_response_json TYPE string.
    DATA lo_descr TYPE REF TO cl_abap_structdescr.
    DATA lv_start TYPE i.
    DATA lv_end TYPE i.
    DATA lv_total_chars TYPE i.
    DATA ls_visiondemo TYPE zcl_zgoogle_vision_mpc=>ts_visiondemo.
    DATA lv_end_marker TYPE string.

    " Google expects the picture data base64-encoded.
    CALL FUNCTION 'SCMS_BASE64_ENCODE_STR'
      EXPORTING
        input  = is_media_resource-value
      IMPORTING
        output = lv_b64_content.

    " Replace GET_YOUR_OWN_KEY with your actual API key.
    lv_url = 'https://vision.googleapis.com/v1/images:annotate?key=GET_YOUR_OWN_KEY'.

    " Build the request body: one image, with the feature type taken straight
    " from the slug header (e.g. TEXT_DETECTION or LOGO_DETECTION).
    request-image-content = lv_b64_content.
    feature-type = iv_slug.
    feature-max_results = 1.
    APPEND feature TO request-features.
    APPEND request TO requests.
    overall_request-requests = requests.

    lo_descr ?= cl_abap_typedescr=>describe_by_data( overall_request ).

    lv_request_json = /ui2/cl_json=>dump( data        = overall_request
                                          type_descr  = lo_descr
                                          pretty_name = abap_true ).

    cl_http_client=>create_by_url(
      EXPORTING
        url    = lv_url
      IMPORTING
        client = lo_http_client ).

    lo_http_client->request->set_method( method = 'POST' ).
    lo_http_client->request->set_content_type( content_type = 'application/json' ).
    lo_http_client->request->append_cdata2( EXPORTING data = lv_request_json ).
    lo_http_client->send( ).
    lo_http_client->receive( ).
    lv_response_data = lo_http_client->response->get_cdata( ).

    " The JSON key that follows "description" in the response differs by
    " annotation type, so pick the matching end marker for the extraction below.
    IF iv_slug = 'LOGO_DETECTION'.
      lv_end_marker = '"score":'.
    ELSE.
      lv_end_marker = '"boundingPoly":'.
    ENDIF.

    SEARCH lv_response_data FOR '"description":'.
    " sy-fdpos points at the start of the 14-character marker; skip it plus
    " the space and opening quote to land on the value itself.
    lv_start = sy-fdpos + 16.

    SEARCH lv_response_data FOR lv_end_marker.
    lv_end = sy-fdpos.

    lv_total_chars = lv_end - lv_start.
    ls_visiondemo-id = 1.
    ls_visiondemo-description = lv_response_data+lv_start(lv_total_chars).

    copy_data_to_ref(
      EXPORTING
        is_data = ls_visiondemo
      CHANGING
        cr_data = er_entity ).

  ENDMETHOD.


Note the following about this code snippet:

  • I’m using the IV_SLUG parameter to control what kind of request (logo or text detection) I’m making to Google. This means using the “slug” header in an HTTP request, which I’ll show you below.
  • Google expects picture data to be base64 encoded, so the FM SCMS_BASE64_ENCODE_STR handles that for us.
  • Get your own API key - the string at the end of my URL will not work for you. Replace GET_YOUR_OWN_KEY with your actual key.
  • There are a number of ways to handle JSON data in ABAP. I used the /ui2/cl_json method purely for simplicity in a demo. For a more robust solution, see how to use JSON with ABAP’s XML processing tools (a rough sketch follows this list).
  • There is basically no error handling here. That’s the great thing about prototyping. 🙂
  • I know the way I pull the description out of the response is a total hack.
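
As a taste of that more robust route, here is roughly what serializing the request with the sXML JSON writer looks like. One big caveat (and the reason I didn’t use it here): CALL TRANSFORMATION id emits the ABAP field names in uppercase, so you’d still need name mappings to get the camelCase keys Google expects. Treat it as a sketch, not a drop-in replacement:


  DATA lo_writer TYPE REF TO cl_sxml_string_writer.
  DATA lv_json   TYPE string.

  " Serialize the request structure straight to JSON via the sXML writer.
  lo_writer = cl_sxml_string_writer=>create( type = if_sxml=>co_xt_json ).

  CALL TRANSFORMATION id
    SOURCE request = overall_request
    RESULT XML lo_writer.

  " get_output( ) returns an xstring; convert it to a string for the HTTP body.
  lv_json = cl_abap_codepage=>convert_from( lo_writer->get_output( ) ).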

Try It Out

The easiest way to try this out is through the Gateway Client (/iwfnd/gw_client). Here’s how:


Navigate to /iwfnd/gw_client on your Gateway system and enter the request parameters as seen here (assuming you’ve named things the same as I have):
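
In case the screenshot doesn’t make it through: the request boils down to something like the following. The service path is my best guess based on the generated class names - adjust it to whatever your SEGW project actually produced.


  Request Method:  POST
  Request URI:     /sap/opu/odata/sap/ZGOOGLE_VISION_SRV/VisionDemoSet
  Header:          slug = TEXT_DETECTION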



The two possible values I verified for “slug” are TEXT_DETECTION and LOGO_DETECTION - though the API supports many more than that.


Next, put a picture in the body of the request by clicking “Add File”. If you choose TEXT_DETECTION as the slug, make sure your image actually has text in it. Here’s what the result looks like when I put in a picture of my business card. Look at the “Description” field in the right-hand pane (and note that Google automatically puts in newline characters if there are line breaks in the picture):




And check it out if I put in a logo with the LOGO_DETECTION slug parameter (“Skunk Works” is the right answer for this picture):
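
For context, the raw JSON coming back from Google looks roughly like this (heavily trimmed, with dummy text; for LOGO_DETECTION the array is logoAnnotations and a score field follows the description). It’s also why the parsing code scans for “description” and then for “boundingPoly” or “score” - those keys bracket the value we want:


  {
    "responses": [
      {
        "textAnnotations": [
          {
            "locale": "en",
            "description": "Jane Doe\nSAP Developer",
            "boundingPoly": { "vertices": [ ... ] }
          }
        ]
      }
    ]
  }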



Wrap It Up, Paul

So I’ve proved out that I can use the Google Cloud Vision API in conjunction with SAP Gateway - but I haven’t really done anything truly useful. However, I have some really exciting ideas for this and can’t wait to continue using these mega-powerful cloud services! I hope that my small example helps someone else dream big and make something amazing.
