Technology Blog Posts by Members
Jaswin
Discoverer

Hi all,

As you may have heard, SAP recently introduced RPT-1 (Relational Pre-Trained Transformer). To explore how well the model works and whether it can produce useful predictions, I integrated it into a small SAP CAP application. My chosen dataset contains historical sample customer pizza orders (order day, order time, pizza type, order quantity, etc.).

The goal: predict each customer’s next order day and next pizza type 🍕.

RPT-1 in Action 

A standard Fiori Elements list report displays the historical data that serves as input. When the user selects a customer and clicks “Trigger Prediction,” the CAP backend sends data to the RPT-1 API and receives the predicted results. These are then shown in charts within a dialog.

[Screenshot: Fiori Elements list report with the historical order data]

[Screenshot: prediction results shown in charts within a dialog]

Short program flow overview:

  1. CAP reads the historical order data from the (HANA) database
  2. The backend constructs the payload that RPT-1 expects (API key and endpoint details can be found in the official documentation)
  3. The model returns the predicted columns (next_day, next_pizza); these fields must also exist in the data structure
  4. The backend merges the historical data with the prediction results
  5. The frontend visualizes both data sets using VizFrame charts

I found it very useful to try out RPT-1 in the Playground first before building a full application. There you can experiment with your dataset, understand the prediction behavior, and see how you might need to adjust your data to get the desired outcomes.

If you are curious, like me, how that works in CAP, here are the steps I followed:

Set up your environment - Use SAP Business Application Studio (BAS) or VS Code with the SAP Fiori and CAP extensions installed. Create a new CAP project or a full-stack application using the Productivity Tools.

Prepare your data - Create a CSV file with columns like customer, order_day, pizza_type, quantity, etc., and deploy it to your database (SQLite via cds deploy --to sqlite:db.sqlite, or SAP HANA via cds deploy --to hana).
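Note that the prediction target columns must exist in the data model as well (see step 3 of the program flow). A minimal CDS sketch; the entity and field names are my own assumptions, derived from the payload fields used later in this post:

```cds
// db/schema.cds - minimal sketch; names assumed from the payload shown below
namespace pizza.demo;

entity HistoricalData {
  key ID          : UUID;
  customer_id     : String;
  pizza_name      : String;
  order_day       : String;
  quantity        : Integer;
  next_order_day  : String;   // target column - must exist in the model
  next_pizza_name : String;   // target column - must exist in the model
}
```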

Get access to RPT-1 - Use the API endpoint https://rpt.cloud.sap/api/predict. The API token is available on the SAP RPT-1 documentation page (after logging in with your S-user).
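Rather than hardcoding the endpoint and token in service.js, I would suggest reading them from environment variables. A small sketch; the variable names RPT_API_URL and RPT_API_TOKEN are my own convention:

```javascript
// Read the RPT-1 endpoint and token from the environment instead of
// hardcoding them. Variable names are my own convention, not an SAP standard.
function getRptConfig(env = process.env) {
  const apiUrl = env.RPT_API_URL || "https://rpt.cloud.sap/api/predict";
  const token = env.RPT_API_TOKEN;
  if (!token) throw new Error("RPT_API_TOKEN is not set");
  return { apiUrl, token };
}
```

The returned apiUrl and token can then be used as API_URL and TOKEN in the axios call below.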

Prepare the payload & call the RPT-1 API - I built the payload in the form described in the documentation:

const historic = await SELECT.from(HistoricalData);
const trainRows = historic
  // optionally focus the payload on one customer:
  .filter(r => !focusCustomer || r.customer_id === focusCustomer)
  .map(r => ({
    customer_id: r.customer_id,
    pizza_name: r.pizza_name,
    order_day: r.order_day,
    next_order_day: r.next_order_day,
    next_pizza_name: r.next_pizza_name
  }));

Screenshot from expected payload:

[Screenshot: expected payload structure]

Do not forget to include the data set for which you want a prediction. Normally these rows are part of your initial data set and marked with "[PREDICT]", but to show it explicitly, I also hardcoded one row for which I want to get the prediction results:

const predictRows = [
  { customer_id: focusCustomer, pizza_name: "Margherita", order_day: "Thu", next_order_day: "[PREDICT]", next_pizza_name: "[PREDICT]" }
];

The constructed payload in the service.js file:

const payload = {
  rows: [...trainRows, ...predictRows],
  index_column: "customer_id"
};

Once these preparation steps are done, I used axios to call the API route described above (remember to get your API token).

    try {
      const response = await axios.post(API_URL, payload, {
        headers: {
          "Authorization": `Bearer ${TOKEN}`,
          "Content-Type": "application/json",
        },
      });
      console.log("Prediction result:", JSON.stringify(response.data, null, 2));
    } catch (error) {
      console.error("RPT-1 error:", error.response?.data || error.message);
    }

Once the API call is successful, the API returns a JSON object (including the historical data you sent along with the prediction results). Here is an extract of the payload I received:

Prediction result: {
  "prediction": {
    "id": "782d5f80-0dd3-4b06-a589-985cfe5a41db",
    "metadata": {
      "num_columns": 4,
      "num_predict_rows": 3,
      "num_predict_tokens": 6,
      "num_rows": 24
    },
    "predictions": [
      {
        "customer_id": "C004",
        "next_order_day": [
          {
            "confidence": null,
            "prediction": "Sat"
          }
        ],
        "next_pizza_name": [
          {
            "confidence": null,
            "prediction": "Seafood Special"
          }
        ]
      },
      {
        "customer_id": "C004",
        "next_order_day": [
          {
            "confidence": null,
            "prediction": "Thu"
          }
        ],
        "next_pizza_name": [
          {
            "confidence": null,
            "prediction": "Margherita"
          }
        ]
      },
      {
        "customer_id": "C004",
        "next_order_day": [
          {
            "confidence": null,
            "prediction": "Sun"
          }
        ],
        "next_pizza_name": [
          {
            "confidence": null,
            "prediction": "Margherita"
          }
        ]
      }
    ]
  },
  "delay": 272.4773660004139,
  "aiApiRequestPayload": {
    "prediction_config": {
      "target_columns": [
        {
          "name": "next_order_day",
          "placeholder_value": "[PREDICT]",
          "task_type": "classification"
        },
        {
          "name": "next_pizza_name",
          "placeholder_value": "[PREDICT]",
          "task_type": "classification"
        }
      ]
    },
    "rows": [ //rows which where present here in my historical data
      {
        "customer_id": "C004",
        "pizza_name": "Seafood Special",
        "order_day": "Sun",
        "next_order_day": "Sat",
        "next_pizza_name": "Margherita"
      }, ... // more rows

If you only care about the prediction results (for further processing), you will find them in:

predictions = response.data.prediction.predictions
"predictions": [
      {
        "customer_id": "C004",
        "next_order_day": [
          {
            "confidence": null,
            "prediction": "Sat"
          }
        ],
        "next_pizza_name": [
          {
            "confidence": null,
            "prediction": "Seafood Special"
          }
        ]
      },
      {
        "customer_id": "C004",
        "next_order_day": [
          {
            "confidence": null,
            "prediction": "Thu"
          }
        ],
        "next_pizza_name": [
          {
            "confidence": null,
            "prediction": "Margherita"
          }
        ]
      },
      {
        "customer_id": "C004",
        "next_order_day": [
          {
            "confidence": null,
            "prediction": "Sun"
          }
        ],
        "next_pizza_name": [
          {
            "confidence": null,
            "prediction": "Margherita"
          }
        ]
      }
    ]
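To work with plain values instead of the nested arrays, a small helper can flatten each prediction entry. This is a sketch based on the response structure above (each target column comes back as an array of { confidence, prediction } objects; I simply take the first entry):

```javascript
// Flatten RPT-1 predictions into simple rows. Each target column is an array
// of { confidence, prediction } objects; here we take the first (top) entry.
function flattenPredictions(predictions) {
  return predictions.map(p => ({
    customer_id: p.customer_id,
    next_order_day: p.next_order_day?.[0]?.prediction ?? null,
    next_pizza_name: p.next_pizza_name?.[0]?.prediction ?? null
  }));
}

// Example with the first entry of the response shown above:
const flat = flattenPredictions([
  {
    customer_id: "C004",
    next_order_day: [{ confidence: null, prediction: "Sat" }],
    next_pizza_name: [{ confidence: null, prediction: "Seafood Special" }]
  }
]);
console.log(flat);
// → [ { customer_id: "C004", next_order_day: "Sat", next_pizza_name: "Seafood Special" } ]
```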

Even though we now have the predicted results, in most cases we want to combine the historical data with them to display a graphical trend or relation.

In my case, to get the charts showing the next potential order day and the next potential pizza, I combined the data into an object and returned it to my Handler.js file. To get the desired outcome in the graphical illustration later in the frontend, I returned the predicted and historical data like this:

let returnObject = {
  customerId: focusCustomer,
  history: {
    byDay: historyByDay,       // e.g., bind to series "History"
    byPizza: historyByPizza
  },
  predictions: {
    rows: predsForCustomer,    // raw rows if you want a details table
    top: { day: topDay, pizza: topPizza }, // for highlighting
    byDay: predictedByDay,     // e.g., 2nd series "Predicted" on the weekday chart
    byPizza: predictedByPizza  // e.g., overlay or stacked series on the pizza chart
  }
};
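How historyByDay and predictedByDay are aggregated is up to you; one simple option is counting occurrences per weekday, so each chart series can bind to { day, count } pairs. A sketch (the function name and output shape are my own choice):

```javascript
// Count how often each weekday occurs in a set of order rows, producing
// [{ day, count }] entries that a VizFrame bar chart series can bind to.
function countByDay(rows, dayField = "order_day") {
  const counts = {};
  for (const r of rows) {
    counts[r[dayField]] = (counts[r[dayField]] || 0) + 1;
  }
  return Object.entries(counts).map(([day, count]) => ({ day, count }));
}

const historyByDay = countByDay([
  { order_day: "Thu" }, { order_day: "Sat" }, { order_day: "Thu" }
]);
console.log(historyByDay);
// → [ { day: "Thu", count: 2 }, { day: "Sat", count: 1 } ]
```

The same helper can be reused with dayField = "next_order_day" on the flattened prediction rows to build the predicted series.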

Visualize the results - I used the returned JSON as the basis for the graphical illustration in the frontend. In the Handler.js I filled a JSON model with the data received from the backend (the JSON object above) and wired the XML fragment around it.

Conclusion 

Including RPT-1 in a CAP app was very smooth. In particular, all the important information (API keys, API route, payload structure) was easy to find. During development I still kept in mind the question of when to use other, seemingly similar SAP services (like Data Attribute Recommendation, for instance). I know those services focus more on the full ML life cycle (own model, training iterations, etc.), but with all the available services I sometimes wonder which tool to choose from the big toolbox SAP provides.

At first sight, I feel that a quick integration option like RPT-1 can cover 70-80% of use cases, at least to get accurate results in a very short time. Still, I believe that for data-specific use cases where I want to use my own trained model, Data Attribute Recommendation would remain my preferred option.

Please also share your opinion on which kind of service you use for which use cases.

Since this is my first SAP Community article, I’d be happy to receive your feedback. Let me know in the comments how you see RPT-1 and the new doors it opens for enterprise AI. 

 

3 Comments
James123
Product and Topic Expert

I hope this becomes the new UI to BDC Intelligent Apps.

dirkpelz
Product and Topic Expert

Hi Jaswin,

great article and really helpful to see RPT-1 in action with a complete CAP integration! 👍

I noticed in your API response that all confidence scores come back as "confidence: null".

How do you handle decision-making without confidence scores in practice? For example, if RPT-1 predicts "Margherita" for three different customers, how would you prioritize or differentiate between predictions that might be very certain vs. rather uncertain? I understand this is a technical demo, but I'm curious about your thoughts on production scenarios – especially for high-stakes decisions where we'd typically want to know: "How confident is the model?" Have you found any workarounds, or is this something you'd expect in future API versions?

Thanks for sharing your experience!

 

 

Jaswin
Discoverer

Hi @dirkpelz,

Thank you for your comment.

Yes, seeing "null" as a confidence level is not very convincing, especially for (as you called them) high-stakes decisions, where you need a confidence value to decide whether to trust the prediction or not.

For this demo, I manually cross-checked the predictions against my data set (CSV file) to see if the results made sense. But I must admit that this is not the right way to validate a model, and it doesn't scale to larger datasets. For the next API version, I would love to see the confidence level in the output.

One temporary workaround that came to my mind is "stability voting": sending the same data to RPT-1 multiple times, but with different orderings of the context rows. Ideally, the predicted results should not change; that would at least give an indication of whether the model is reliable.
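In code, such a stability vote could look roughly like this; callRpt is a hypothetical placeholder for the actual API call (like the axios snippet in the article), injected as a parameter so the helper stays testable:

```javascript
// "Stability voting" sketch: query the model several times with shuffled
// context rows and use the agreement ratio of the most frequent answer as a
// rough confidence proxy. callRpt is a hypothetical function wrapping the API.
async function stabilityVote(rows, callRpt, runs = 3) {
  const votes = {};
  for (let i = 0; i < runs; i++) {
    const shuffled = [...rows].sort(() => Math.random() - 0.5); // naive shuffle
    const prediction = await callRpt(shuffled);
    votes[prediction] = (votes[prediction] || 0) + 1;
  }
  // Pick the most frequent prediction and report how often it won.
  const [top, count] = Object.entries(votes).sort((a, b) => b[1] - a[1])[0];
  return { prediction: top, agreement: count / runs };
}
```

An agreement of 1.0 means the model answered the same in every run; lower values hint that the prediction is order-sensitive and should be treated with care.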

But as already mentioned, having the confidence level per predicted field for decision making would be great!