Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
AndreasForster
Product and Topic Expert
With this tutorial you will learn how to train Machine Learning (ML) models in SAP HANA through Python code. Trigger predictive algorithms either from local Jupyter Notebooks or, even better, from Jupyter Notebooks within SAP Data Intelligence.

If you are using SAP HANA, you probably have valuable business data in that system. This data can also be a very valuable asset for Machine Learning tasks. Since SAP HANA contains predictive algorithms you can train ML models within SAP HANA on the existing information - without having to extract and duplicate the data! I like to call this the "push-down".

In case you are not familiar with Machine Learning or Python, this project can be a starting point. If you are already experienced with Machine Learning, you might be curious how to train ML models directly in SAP HANA from your preferred Python environment. That's right, leverage the power of SAP HANA without leaving your existing Python framework!

You can implement the scenario yourself using your own SAP HANA instance. Or if you just want to get an idea of the concept without getting hands-on, you can also just scroll through the Notebooks that are shared.

To get hands-on you need to:

  • have access to an SAP HANA system (version 2.0 SPS 03 or higher)

  • have a Python development environment, preferably JupyterLab

  • install the libraries in your Python environment that are needed to connect to SAP HANA and push down calculation and training logic

  • download a set of Jupyter Notebooks that have been prepared for you


If you have access to SAP Data Intelligence, you can get started more quickly, as SAP Data Intelligence already has JupyterLab integrated. Those who will work with SAP Data Intelligence can jump to the SAP Data Intelligence chapter in this blog after reading through the remainder of this chapter.

The notebooks implement a typical Machine Learning scenario in which a regression model is trained using the Predictive Analysis Library (PAL). You will estimate the price of a used vehicle based on the car model, the year in which it was built, as well as the car's mileage and other parameters.

For this scenario we are using a dataset of cars that were offered for sale at the end of 2016. This dataset was compiled by scraping offers on eBay and shared as "Ebay Used Car Sales Data" on Kaggle.

As the data is from 2016, any analysis or prediction refers to that time. By now the cars' values will have dropped further. Unless you are looking at a classic car, whose price might rise over time...

Needless to say, this blog and the code and any advice that come with it are not part of an official SAP product. I hope you find the information useful to learn and create your own analysis on your own data, but there is no support for any of the content.

The official documentation for the components used in this blog:

A big "Thank you" goes to Thomas Bitterle, who was the first to test out this blog before publication! His feedback rounded off a number of areas, making it easier for everyone.

Install Python environment


You should be able to use your own Python environment, in case you already have one. If you do not have Python installed yet, I suggest using the Anaconda Installer. Anaconda is a free and open-source distribution that installs everything that is typically needed to get started.

After the installation you can easily open the JupyterLab environment from the Anaconda Navigator.


Alternatively, you can also start the JupyterLab environment from the "Anaconda Prompt" application with the command: jupyter lab


JupyterLab provides a browser-based interface to work with multiple Jupyter Notebooks. A Jupyter Notebook allows you to write and execute Python code from your browser. You can add nicely formatted comments to describe the code, and the Notebooks can display the output of the code, e.g. text or charts that were produced. Having all this information in the same place makes it easier to keep track of your project.

Don't worry if this sounds complex. It doesn't take long to pick up and it is good fun to use! If you haven't worked with Jupyter Notebooks so far, this collection of very brief introductory videos is a good start.

 

SAP HANA access and configuration


The Python wrapper, which facilitates the push-down, is supported from SAP HANA 2.0 SPS 03 onwards. Should you not have such a system ready for testing, a quick way to get access is to start SAP HANA Express on a hyperscaler, e.g. on Amazon Web Services, Microsoft Azure or Google Cloud Platform.

For this blog I chose a 32 GB instance of SAP HANA Express 2.0 SPS 04 on AWS, as outlined in this guide. Please keep a close eye on the hosting costs. Do check daily to avoid surprises! I understand that SAP HANA Express on AWS is not covered by the AWS free tier.

Once you have an appropriate SAP HANA available, you need to run some SQL statements to configure it to be used by the Python wrapper.

SQL syntax can be executed in different environments. In this blog I am using the SAP Web IDE for SAP HANA. If you are also using this interface, you must add your SAP HANA instance to the Web IDE. In the Web IDE's "Database Explorer" on the left-hand side click the "+" sign and choose:

  • Database Type: SAP HANA Database (Multitenant)

  • Host: Your SAP HANA's IP address or server name (hxehost if you set up SAP HANA Express on AWS)

  • Identifier: Instance Number: 90

  • Tenant database: HXE

  • User: SYSTEM

  • Password: [the password you specified]


Open the SQL Console for that connection and execute the following statements.

 

Create a user named ML, who will access SAP HANA to upload data, analyse it and train Machine Learning models. At the risk of stating the obvious: please replace 'YOURPASSWORD' with a password of your own choosing.
CREATE USER ML PASSWORD "YOURPASSWORD";

 

Optionally, you can ensure that the user will never be prompted to change the password:
ALTER USER ML DISABLE PASSWORD LIFETIME;

 

Assign the user the necessary rights to trigger the Predictive Analysis Library (PAL):
GRANT AFLPM_CREATOR_ERASER_EXECUTE TO ML;
GRANT AFL__SYS_AFL_AFLPAL_EXECUTE TO ML;

 

The Predictive Analysis Library (PAL) requires the script server to be running on the tenant database. The script server can be activated through the system database. Therefore add the system database as an additional connection to the Database Explorer in the SAP Web IDE for SAP HANA:

  • Database Type: SAP HANA Database (Multitenant)

  • Host: Your SAP HANA's IP address or server name (hxehost if you set up SAP HANA Express on AWS)

  • Identifier: Instance Number: 90

  • Database: System database

  • User: SYSTEM

  • Password: [the password you specified]


 

Now use this connection to start the script server on the HXE tenant database with this statement:
ALTER DATABASE HXE ADD 'scriptserver';
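
If you want to verify the change, you can check the services of the tenant with this optional statement; after a short while the scriptserver should appear in the list with ACTIVE_STATUS 'YES':
SELECT DATABASE_NAME, SERVICE_NAME, ACTIVE_STATUS
FROM SYS_DATABASES.M_SERVICES
WHERE DATABASE_NAME = 'HXE';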

 

Later on we will need to know the SQL port of the HXE tenant. The port can be retrieved from the system database, so use the same connection to the system database to execute this SQL statement. Note down the SQL_PORT that is shown for the tenant HXE. For a default SAP HANA Express installation with instance number 90 this is typically 39015, which is the port used later in this blog. Credit for this clever SQL statement goes to this developer tutorial!
SELECT DATABASE_NAME, SERVICE_NAME, PORT, SQL_PORT, (PORT + 2) HTTP_PORT 
FROM SYS_DATABASES.M_SERVICES
WHERE (SERVICE_NAME = 'indexserver' and COORDINATOR_TYPE = 'MASTER' )
OR SERVICE_NAME = 'xsengine';

 

Install the Python libraries for SAP HANA push-down


By now you have JupyterLab installed and access to an SAP HANA system. Now you need to install the wrapper, which allows Python to connect to SAP HANA and to push down data calculations and the training of ML models to SAP HANA.

Start JupyterLab as explained above, either through the Anaconda Navigator or from the Anaconda Prompt. Then create a new Jupyter Notebook and install these two libraries:

  1. The SAP HANA Python Client, which is the underlying connectivity from Python to SAP HANA:
    !pip install hdbcli


  2. The Python wrapper, which facilitates the push-down to SAP HANA
    Update January 2020: This Python wrapper is now also available through pip!
    !pip install hana_ml

    Since the library is now available through pip, the following manual installation steps are not needed anymore.

    The Python wrapper, which facilitates the push-down to SAP HANA, was at the time of writing (October 2019) not available through pip. You needed to download and install a recent version of the SAP HANA 2.0 client (at least SAP HANA 2.0 SPS 04 Revision 42). After installation you would find the file hana_ml-1.0.7.tar.gz in "C:\Program Files\SAP\hdbclient" and could install the library in the Jupyter Notebook with: !pip install "C:\Program Files\SAP\hdbclient\hana_ml-1.0.7.tar.gz"


Test the installation with the following code, which should print the version of the hana_ml package. This hands-on guide requires you to have at least version 1.0.7.
import hana_ml
print(hana_ml.__version__)

 

In case the above import statement throws an error related to shapely, try the following:

  1. Open the Anaconda prompt

  2. Install shapely through conda: conda install shapely

  3. Open your Jupyter Lab as usual: jupyter lab


 

Test the connection from JupyterLab to SAP HANA


Run a quick test whether the hana_ml package can indeed connect to your SAP HANA system. To keep things simple for now, log on with user name and password. The code connects to SAP HANA, executes a very simple SELECT statement, and retrieves and displays the result in Python. You may need to change the server name of your SAP HANA system; "hxehost" is the name used in the above AWS guide.
import hana_ml.dataframe as dataframe

# Instantiate connection object
conn = dataframe.ConnectionContext("hxehost", 39015, "ML", "YOURPASSWORD")

# Send basic SELECT statement and display the result
sql = 'SELECT 12345 FROM DUMMY'
df_pushdown = conn.sql(sql)
print(df_pushdown.collect())

# Close connection
conn.close()

 

Running the cell should display the value 12345.


Should you receive an error, scroll to the end of the error message. Typically the last line of the error is the most helpful one.

 

Connect with secure password


In the previous cell it was convenient to write the SAP HANA password directly into the Python code. This is obviously not very secure, and you may want to take a more sophisticated approach. It would be better to save the logon parameters securely with the hdbuserstore application, which is part of the SAP HANA client.

Navigate in a command prompt (cmd.exe) to the folder that contains the hdbuserstore, e.g.
C:\Program Files\SAP\hdbclient

Then store the logon parameters in the hdbuserstore. In the example below the parameters are saved under a key called hana_hxe. You are free to choose your own name, but if you stick with hana_hxe you can execute the Jupyter Notebooks as they are. You do, however, have to modify this command to include your own server name, SQL port and user name.

C:\Program Files\SAP\hdbclient>hdbuserstore -i SET hana_hxe "SERVER:SQL_PORT" YOURUSER

The password is not specified in the above command as you will be prompted for it.


Bear in mind that the above command only stores the logon parameters. It does not test whether they are valid.
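
If you want to double-check which entry was created, hdbuserstore can list the stored keys (the password itself is never displayed):

C:\Program Files\SAP\hdbclient>hdbuserstore LIST hana_hxe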

Now that the logon credentials are securely saved, they can be leveraged by the hana_ml wrapper to logon to SAP HANA. Create a new cell in the Jupyter Notebook and repeat the earlier test, but now use the securely stored credentials.
import hana_ml.dataframe as dataframe

# Instantiate connection object
conn = dataframe.ConnectionContext(userkey="hana_hxe")

# Send basic SELECT statement and display the result
sql = 'SELECT 12345 FROM DUMMY'
df_pushdown = conn.sql(sql)
print(df_pushdown.collect())

# Close connection
conn.close()

 

You should see the familiar output of 12345.


Run the notebooks to trigger Machine Learning within SAP HANA


Everything is in place now to run the Notebooks that are shared with this blog. Download these Jupyter Notebooks and the data file from the hana_ml samples GitHub repository.

 

Save these files to a local folder. Now open JupyterLab as described in the first chapter above. In the File Browser on the left navigate to that folder, where you should see the downloaded notebooks and the data file autos.csv.


We will now go through the notebooks in the order of their numbering. If you have implemented the above steps, you should be able to run the notebooks without modifications.

In case you have not saved the SAP HANA logon credentials in hdbuserstore, you need to change the ConnectionContext to the hardcoded logon approach as shown earlier in this blog.

All notebooks offered for download here are saved with the cell output that was produced when the notebooks were executed. Before running the notebooks yourself, you can remove that output, so that you can be sure that all output you see was produced by your own runs. To clear the previous output, right-click on the notebook and select "Clear All Outputs".

Within these notebooks you find additional comments and explanations. This blog therefore gives only a high-level summary of each notebook.

Data upload


The notebook "00 Preparation" loads the data from autos.csv first into a local pandas data frame, does some data preparation before using the hana_ml wrapper to save the data to SAP HANA. This data will be used to train ML models. The notebook also creates a second table in SAP HANA which contains additional cars, on which we will apply the trained model to predict the price.


Introduction to Python wrapper


Before going into a longer and more realistic project, run the notebook "05 Introduction" to train a very simple model through the hana_ml wrapper. If you are comfortable with the steps in this notebook, you have already got the hang of it!


Exploratory Data Analysis


With notebook "10 Exploratory Data Analysis" you start a more comprehensive project. You will explore and filter the data. The transformed data is saved as a view to SAP HANA, which will be used in the following notebook. As the transformation is saved as a view, no data got duplicated!


Imputation and Model Training


With "20 Imputation and model training" the data gets transformed further, missing data is imputed. The data is split into train and test sets. These data sets are used to train different decision trees and to test the model's quality.

The best model is chosen and an error analysis is carried out to see how the model performed in different areas. The model is then saved to SAP HANA.
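
The overall pattern looks roughly like the sketch below, written against the hana_ml 1.0.x API used in this blog (which still takes conn_context). The Imputer call mirrors the usage quoted in the comments below; the feature columns and the model table name are illustrative assumptions:

from hana_ml.algorithms.pal.preprocessing import Imputer
from hana_ml.algorithms.pal.trees import DecisionTreeRegressor

# Impute missing values inside SAP HANA; null gearboxes become a constant category
impute = Imputer(conn_context=conn, strategy='mean')
df_imputed = impute.fit_transform(data=df_filtered,
                                  strategy_by_col=[('GEARBOX', 'categorical_const', 'Gearbox unknown')])

# Train a regression tree in SAP HANA (feature and label names are assumptions)
tree_reg = DecisionTreeRegressor(conn_context=conn, algorithm='cart')
tree_reg.fit(data=df_imputed,
             features=['VEHICLETYPE', 'GEARBOX', 'HP', 'YEAR', 'KILOMETER'],
             label='PRICE')

# The trained model is itself a hana_ml DataFrame, which can be persisted
tree_reg.model_.save('USEDCARPRICES_MODEL')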


Apply the trained model / predict


In the first notebook ("00 Preparation") we created a table in SAP HANA with cars whose price we want to predict. The moment has come!

Run "30 Apply saved model" to load the model that was created in the previous notebook ("20 Imputation and model training"). Then apply the model on these cars and see how the difference in mileage affects the price of the cars.


Tidy up


Optionally, if you want to delete the tables and views that have been created, you can run the notebook "40 Tidy up".
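
Tidying up boils down to plain SQL, which can also be sent through the underlying hdbcli connection of the hana_ml ConnectionContext; the object names below are the assumed names from the sketches above:

# Drop the created objects through the underlying hdbcli connection
cursor = conn.connection.cursor()
cursor.execute('DROP VIEW V_USEDCARPRICES_FILTERED')
cursor.execute('DROP TABLE USEDCARPRICES_MODEL')
cursor.execute('DROP TABLE USEDCARPRICES_TOTRAIN')
cursor.execute('DROP TABLE USEDCARPRICES_TOPREDICT')
cursor.close()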

 

SAP Data Intelligence


This chapter is only relevant for those who have access to SAP Data Intelligence. The above notebooks can also be executed within the interface of SAP Data Intelligence. Ideally you should already have some familiarity with SAP Data Intelligence, e.g. by having read through or even implemented your first ML Scenario.

You need to follow these steps to run the Notebooks in SAP Data Intelligence:

  • Prepare a SAP HANA system as described in the chapter "SAP HANA access and configuration".

  • In SAP Data Intelligence create a new connection of type HANA_DB to that SAP HANA instance. Name the connection "di_hana_hxe". The "Host" is the SAP HANA's IP address or server name. As "Port" enter the SQL port (see above for SQL_PORT). Specify the ML user and the corresponding password.

  • In the "ML Scenario Manager" create a new "ML Scenario" named "Car price prediction blog"

  • Create a Notebook called "dummy", just to open the JupyterLab interface. If you get prompted for a kernel, select "Python 3".

  • Import the five Notebooks and the data file autos.csv (in the "File Browser" on the left).

  • In each Notebook change the command that logs on to SAP HANA, since now the connection "di_hana_hxe" needs to be used. Replace this line:
    conn = dataframe.ConnectionContext(userkey='hana_hxe')

    with
    from notebook_hana_connector.notebook_hana_connector import NotebookConnectionContext
    conn = NotebookConnectionContext(connectionId = 'di_hana_hxe')

    Only for Notebook "40 Tidy up" replace the existing command with
    from notebook_hana_connector.notebook_hana_connector import NotebookConnectionContext
    conn = NotebookConnectionContext(connectionId = 'di_hana_hxe').connection


  • And finally, you may need to update the hana_ml package. Currently (October 2019) the version that comes with SAP Data Intelligence is not the version required for this blog's notebooks. Uninstall the current version with:
    !pip uninstall hana_ml --yes

    Install the latest version of the library with:
    !pip install hana_ml

    Restart the Python kernel through the JupyterLab menu (In the "Kernel" menu select "Restart Kernel...")


Now you should be good to run all Notebooks as explained above.

 

Deployment into production


Now we have a model that we can manually work with. To bring the predictions into an ongoing business process you can leverage SAP Data Intelligence to retrain and apply the model as needed.

With SAP Data Intelligence you can script in Jupyter Notebooks without having to install any components on your own computer. The code you want to bring into the productive process can be deployed through graphical pipelines, which help IT to run the code in a governed framework. This sounds like a topic for another day and another blog.

Update June 2, 2020: That day has come and here is the blog, which focuses on deploying HANA ML through graphical pipelines in SAP Data Intelligence.


Conclusion


Well done! If you have read the blog this far, you have an understanding of how Machine Learning can be carried out within SAP HANA. If you have followed this guide hands-on, you now even have experience with Machine Learning in SAP HANA and are ready to experiment with your own data!
29 Comments
Great! When will SAP Data Intelligence be available to try?
Phillip_P
Product and Topic Expert
You can find details about a SAP Data Intelligence trial here:

https://www.sap.com/cmp/dg/crm-ya19-pdm-ddmpmwbsdi/index.html

 
AndreasForster
Product and Topic Expert
Hello Rodrigo, does your company have a Cloud Platform Enterprise Agreement (CPEA)? This would allow you to start SAP Data Intelligence yourself. Alternatively, I suggest reaching out to your Account Executive at SAP.
lbreddemann
Active Contributor
There's a small typo when testing for the installed hana_ml version.

Currently, it reads:
print(hana_ml.__version)

but it should be
print(hana_ml.__version__)

 

 
AndreasForster
Product and Topic Expert
Hello lbreddemann, Thank you for the heads up.
In the "00 Preparation" notebook the code seems fine. Where did you see this?

d027319
Product and Topic Expert
Thank you for this really nice tutorial!
lbreddemann
Active Contributor
That piece of code was from this very blog post. The tutorial files all worked fine for me.
AndreasForster
Product and Topic Expert
Ah got it, Thank you for spotting this Lars.
It's now corrected.
akshinde
Participant
Hi Andreas, while trying to install the HANA client library using the following command in a Jupyter Notebook, I am getting the following error.
!pip install hdbcli

ERROR: Could not find a version that satisfies the requirement hdbcli (from hana-ml==1.0.7) (from versions: none) ERROR: No matching distribution found for hdbcli (from hana-ml==1.0.7)
AndreasForster
Product and Topic Expert
Hi Aniruddha, A new version of hdbcli (2.4.171) has just been released on pypi.
https://pypi.org/project/hdbcli/#history
Maybe hana_ml 1.0.7 is not compatible with that release. Please try installing the hdbcli version that comes with the HANA client
!pip install "C:\Program Files\SAP\hdbclient\hdbcli-2.4.151.zip"
akshinde
Participant
Thanks Andreas; I got an error after trying to install hdbcli-2.4.151.zip: hdbcli does not support 32-bit (x86) Python.

It looks like hdbcli doesn't support 32-bit Python; do I uninstall my 32-bit Anaconda and reinstall 64-bit Anaconda?

thanks

Aniruddha
AndreasForster
Product and Topic Expert
Could it be that you downloaded the 64 bit HANA client? If that's the case you can try downloading the 32 bit HANA client. Or use 64 bit Anaconda.
angelo_minutella2
Discoverer
Hi Andreas

 

This is one of the best SAP PA blogs, explaining all the details, and it worked from the very first step. Many thanks, it was a pleasure to go through the Jupyter Notebook files. Looking forward to your next blog!

 

KR

Angelo
former_member147380
Participant
Hi Andreas,

 

Thanks for the detailed explanation. Could you please explain: without SAP Data Intelligence, can I train the PAL machine learning models using JupyterLab and SAP HANA, as you explained initially?
AndreasForster
Product and Topic Expert
Hello Jwala, Yes absolutely. The six Notebooks that you can download here are designed to be used in a locally installed JupyterLab environment, without SAP Data Intelligence.

The deployment of PAL algorithms through Data Intelligence is explained in this blog: https://blogs.sap.com/2020/04/21/sap-data-intelligence-deploy-your-first-hana-ml-pipelines/
kevin_jurke
Explorer
Hi Andreas,

What approach for production deployment do you suggest when you have an algorithm in R (CatBoost) which is not shipped with PAL?

I am thinking about a dedicated R server, but the problem is that according to SAP Note 2185029 only R 3.2 is supported (the note was last updated in 2018). Does that mean SAP may no longer support Rserve?

thanks

Kevin
AndreasForster
Product and Topic Expert
Hello Kevin, The SAP Note 2185029 is referring to an R server that can be connected directly to HANA.
Data Intelligence also supports R, which is independent of the HANA + R option.
Currently DI is supporting "R 3.3.3 and 3.5.1"
https://help.sap.com/viewer/97fce0b6d93e490fadec7e7021e9016e/Cloud/en-US/30423dccf00a484f92fe1ca3a12...
The HANA data could then stream to DI on premise or in the cloud, the R code is executed, and the results can be written to HANA or elsewhere.
Here are two blogs on using R in Data Intelligence
https://blogs.sap.com/2019/12/05/sap-data-intelligence-create-your-first-ml-scenario-with-r/
https://blogs.sap.com/2018/12/10/sap-data-hub-and-r-time-series-forecasting/
Andreas
kevin_jurke
Explorer
Thanks for the answer. I should have mentioned before that SAP DI is currently not a quick option for us, maybe long term.

The only approach that comes to my mind: run the R script on a dedicated server via the SAP ML API. So first transfer the HANA data to the server where the R script runs; the output then comes back to HANA. The consequence: the control flow sits on the server where the R script is running.
former_member196080
Participant
Hi Andreas,

I'm getting an issue while running the imputer function; it's throwing an error from your code.

I'm using the code below, running on Python 3.7 and connected to a HANA database 2.0 via GCP.

 

import hana_ml.dataframe as dataframe
conn = dataframe.ConnectionContext("hxehost", 39015, "username", "pwd")
df_pushdown_history = conn.table(table = 'USEDCARPRICES_TOTRAIN', schema = 'ML_USER')

df_pushdown_history.filter('GEARBOX IS NULL').head(5).collect()

from hana_ml.algorithms.pal.preprocessing import Imputer
impute = Imputer(conn_context=conn, strategy = 'mean')
df_pushdown_imp = impute.fit_transform(data = df_pushdown_history,
                                       strategy_by_col = [('GEARBOX', 'categorical_const', 'Gearbox unknown')])



But I am getting the error below; I have been trying to resolve it for a while without success. Can you please help?


TypeError: __init__() got an unexpected keyword argument 'conn_context'


It looks like the __init__ function is not accepting conn as a parameter.


Help will be appreciated.

Thanks, Rubane

AndreasForster
Product and Topic Expert
Hi Rubane,
A new version of hana_ml was released recently, which is treating the connection context differently. I assume you are on the new version 2.5.x. As a quick fix, try uninstalling that version, then install with
pip install hana-ml==1.0.8.post11
Please let me know if this is working and I will add a comment to the blog.
Andreas
former_member196080
Participant
Thanks Andreas.

 

The error is gone, but the imputer function is not replacing null values.

 

I ran the code for the imputer as you described in the blog, but upon checking it still shows no change; in other words, it didn't change the null values.

 

I checked in the backend too, there are still null values there. Not sure why it's not working. One observation: the code below doesn't filter null gearboxes; it needs to be changed to ==.

 

df_pushdown_history.filter('GEARBOX IS NULL').head(5).collect()

 

But anyway, this is just for retrieving results, nothing to do with the imputer. Please let me know if you have any suggestions.

 

Thanks

 

AndreasForster
Product and Topic Expert
Hi Rubane, The describe() output shows only 0s in the nulls column. That means the hana_ml dataset does not contain null values anymore. It might be that the GEARBOX is an empty string. Imputing through hana_ml does not change the underlying data in the physical table. This is on purpose, as the original data might be used for other purposes. You can persist the changes, but that's an extra save command. Maybe during the upload to HANA the gearbox was already saved with an empty string, or maybe an earlier imputation was done on the hana_ml DataFrame, where nulls were replaced with that empty string?
former_member196080
Participant
Hi Andreas,

 

I've not changed any dataset since the imputer wasn't working earlier... I do remember running the imputer without conn_context, but I'm not sure whether it did anything.

 

I still see a lot of blank values in HANA.

AndreasForster
Product and Topic Expert
Hi Rubane, this looks like an empty string, not a null value. It makes sense then that hana_ml is not finding any nulls here.

Maybe the new hana_ml version is replacing nulls from the csv with empty strings during upload.

You should be able to continue the hands-on with that table. Or you can try uploading the csv again into a new table (different table name). Since you are now on 1.0.8.post11 you might get the null values as shown in the Notebooks.
former_member196080
Participant
Thanks Andreas. I used the same data and it worked. Appreciate your help!

 

Please let me know if you have any other hands-on blogs on ML. They are very useful for learning. Looks like we need to learn Python now for Data Science.
former_member772940
Discoverer
Hi Andreas,

thank you for this post! I actually encountered problems installing the package. Please find below the error message in the snapshot:


Would really appreciate your kind assistance. Many thanks!

Alex
AndreasForster
Product and Topic Expert
Hi Alex, there might be something in your base environment or your network that is preventing the install. In a new environment the library installs fine for me, if you want to try this:
conda create -n mytest python=3.6
conda activate mytest
conda install -c conda-forge jupyterlab
former_member772940
Discoverer
Hi Andreas,

Thanks so much for your speedy reply!

I am using Jupyter Notebook with Python version 3.8 in the Anaconda environment to run my scripts. If I use the code you posted, will it overwrite my latest Python version?

Thanks again!

Alex
AndreasForster
Product and Topic Expert
Hi Alex, creating a new environment in conda (like "mytest" in the previous post) doesn't change any of the other environments (like your "base"). With "conda activate mytest" or "conda activate base" you can jump between the environments. So once you are for instance in mytest and install something in mytest, your base environment remains the same.