Here's a concise walkthrough for setting up and running a benchmarking evaluation using the Generative AI Hub service in SAP BTP:
1. Assign Role Collection for Generative AI Hub Access
To access the evaluation service under Generative AI Hub, assign the role collection ailaunchpad_genai_manager to your user in the BTP subaccount.
2. Set Up Object Store for Evaluation
Provision an AWS S3 bucket to hold the evaluation data; the learning journey referenced in the steps below walks through this setup.
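If you provision the bucket yourself rather than through a BTP object store service instance, a minimal boto3 sketch could look like the following. The bucket name (reused from the example path later in this post) and the region are placeholders; substitute the values from your own landscape.

```python
# Hedged sketch: provisioning the S3 bucket that will hold the evaluation data.
# Bucket name and region are placeholders, not values prescribed by this post.
import boto3

s3 = boto3.client("s3", region_name="eu-central-1")
s3.create_bucket(
    Bucket="err-res-genai-data",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)
```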
3. Create Object Store Secrets
Register the bucket's credentials as an object store secret in your SAP AI Core resource group so AI Core can read the evaluation data.
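One way to create the secret without the UI is the AI API's objectStoreSecrets endpoint; a hedged sketch is below. The host, token, resource group, bucket, endpoint, and credentials are all placeholders for values from your own AI Core service key and bucket.

```python
# Hedged sketch: registering S3 credentials as an object store secret via the
# AI API. All <...> values are placeholders from your own service key/bucket.
import requests

AI_API_URL = "https://<your-ai-api-host>"
headers = {
    "Authorization": "Bearer <oauth-token>",
    "AI-Resource-Group": "default",
    "Content-Type": "application/json",
}

secret = {
    "name": "err-res-genai-data",  # becomes the ai://err-res-genai-data/... prefix
    "type": "S3",
    "bucket": "<your-bucket>",
    "endpoint": "s3.eu-central-1.amazonaws.com",
    "region": "eu-central-1",
    "pathPrefix": "",
    "data": {
        "AWS_ACCESS_KEY_ID": "<access-key-id>",
        "AWS_SECRET_ACCESS_KEY": "<secret-access-key>",
    },
}

resp = requests.post(
    f"{AI_API_URL}/v2/admin/objectStoreSecrets", headers=headers, json=secret
)
resp.raise_for_status()
print(resp.json())
```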
4. Connect Local System to AWS S3 Object Store
Follow Step 9 from the official learning journey to connect your local environment to AWS S3 through BTP:
SAP AI Core Data Management - Step 9
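Once the connection is configured, a quick sanity check from Python, assuming boto3 picks up the credentials you configured locally (the bucket name is again a placeholder):

```python
# Hedged sketch: verify that local AWS credentials can reach the bucket.
import boto3

s3 = boto3.client("s3")  # uses credentials from `aws configure` or env vars
response = s3.list_objects_v2(Bucket="err-res-genai-data")
for obj in response.get("Contents", []):
    print(obj["Key"])
```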
5. Upload Datasets to Object Store
Refer to Step 10 of the same tutorial to upload assets such as test data, orchestration configs, and custom metric files from your local system to the configured AWS S3 bucket.
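As an alternative to the tutorial's upload flow, here is a hedged boto3 sketch. The local file names and S3 key layout are illustrative, not prescribed by this post; match whatever layout your evaluation configuration expects under the eval-data root.

```python
# Hedged sketch: upload evaluation inputs under the eval-data root.
# File names and key layout are illustrative placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "err-res-genai-data"  # placeholder bucket name

s3.upload_file("testdata/questions.csv", bucket, "eval-data/testdata/questions.csv")
s3.upload_file("configs/orchestration.json", bucket, "eval-data/runs/orchestration.json")
s3.upload_file("metrics/custom_metric.py", bucket, "eval-data/metrics/custom_metric.py")
```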
6. Register Artifacts in AI Core
Register all relevant artifacts (test datasets, orchestration config, custom metric files) in AI Core.
Register the root directory eval-data as the artifact; its subdirectories and their files are registered automatically.
Navigate to the Other Artifacts section in the AI Launchpad dashboard and use Add to register the path.
Example path:
ai://err-res-genai-data/eval-data
ai://err-res-genai-data refers to the name of your object store secret.
eval-data is the root directory where you will create subfolders like /runs and /testdata.
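If you prefer the API over the AI Launchpad UI, registering the same path as a dataset artifact might look like this hedged sketch. The host, token, and scenario ID are placeholders; the url matches the example path above.

```python
# Hedged sketch: register the eval-data root as an artifact via the AI API.
# <...> values are placeholders for your own landscape.
import requests

AI_API_URL = "https://<your-ai-api-host>"
headers = {
    "Authorization": "Bearer <oauth-token>",
    "AI-Resource-Group": "default",
    "Content-Type": "application/json",
}

artifact = {
    "name": "eval-data",
    "kind": "dataset",
    "url": "ai://err-res-genai-data/eval-data",
    "scenarioId": "<your-scenario-id>",
    "description": "Root directory with evaluation test data, configs, and metrics",
}

resp = requests.post(f"{AI_API_URL}/v2/lm/artifacts", headers=headers, json=artifact)
resp.raise_for_status()
print(resp.json())
```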
7. Create & Monitor Evaluation Run
After registering the artifacts, create your evaluation run via the AI Core interface.
Monitor the execution workflow under the Executions tab to track progress and results.
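If you want to track the run outside the UI as well, a hedged polling sketch against the AI API executions endpoint follows; the execution ID and host are placeholders.

```python
# Hedged sketch: poll an execution's status via the AI API until it finishes.
import time

import requests

AI_API_URL = "https://<your-ai-api-host>"
headers = {"Authorization": "Bearer <oauth-token>", "AI-Resource-Group": "default"}
execution_id = "<execution-id>"  # placeholder; copy it from the Executions tab

while True:
    resp = requests.get(
        f"{AI_API_URL}/v2/lm/executions/{execution_id}", headers=headers
    )
    resp.raise_for_status()
    status = resp.json()["status"]
    print("status:", status)
    if status in ("COMPLETED", "DEAD", "STOPPED"):
        break
    time.sleep(30)
```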