Technology Blogs by Members

For those of you who have started to explore HADOOP, you might be familiar with HUE (Hadoop User Experience) or the Linux command line for moving files to and from the Hadoop Distributed File System (HDFS).

As a bit of a novelty I thought it'd be interesting to build a prototype of an HDFS Explorer, using HANA XS & SAPUI5.

HUE File Browser (left)  vs  HANA: HADOOP HDFS Explorer (right)


HUE is primarily built using PYTHON with a number of PYTHON libraries, such as DJANGO and MAKO.

My HANA HDFS Explorer is built using HANA XS, SAPUI5 (Horizontal Splitter, Tree & Table controls), a sprinkling of custom JavaScript & the HADOOP WebHDFS REST API (accessible via xshttpdest). It's a very rough prototype and is not intended to reproduce all the functionality of the HUE file browser.

NOTE: by default the WebHDFS REST service runs on ports 50070 & 50075 on your HADOOP cluster. I needed to redirect these to 50001 & 50013 respectively to enable xshttpdest to work. For HANA Cloud Platform the valid ports to use are documented here: SAP HANA Cloud Platform
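To route the WebHDFS calls through XS, each Hadoop endpoint needs an .xshttpdest destination file. A minimal sketch for the NameNode, assuming a hypothetical host name and the redirected port from the note above:

```
description = "Hadoop WebHDFS NameNode";
host = "hadoop.example.com";
port = 50001;
pathPrefix = "/webhdfs/v1";
useSSL = false;
authType = none;
useProxy = false;
timeout = 30000;
```

A second destination pointing at the DataNode port (50013 here) is needed for the download step.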

Here are a few screenshots comparing the original and the new improved HANA version :razz:

Original HUE:

HANA XS Version (with Download option):

Select the file and then click the Download button:

The data flow of the HANA XS version is:

The HADOOP WebHDFS REST service makes use of different URL parameters [operations (op)] for interacting with HDFS.

I only used the following operations in this example:

LISTSTATUS     which acts like the 'dir' or 'ls' commands on a specified HDFS directory

OPEN           which acts like an ftp 'get' command
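The request URL for every operation follows the same pattern: the HDFS path under the /webhdfs/v1 prefix, plus an op query parameter. A minimal sketch (not the blog's actual code) of building that path; host and port are assumed to come from the xshttpdest destination, so only the relative path is assembled here:

```javascript
// Build the WebHDFS request path for a given operation.
// Extra parameters (e.g. offset, length) go into the query string.
function buildWebHdfsPath(hdfsPath, op, params) {
  var query = "op=" + op;
  var p = params || {};
  for (var key in p) {
    query += "&" + key + "=" + encodeURIComponent(p[key]);
  }
  return "/webhdfs/v1" + hdfsPath + "?" + query;
}

// List a directory (like 'dir' / 'ls'):
//   buildWebHdfsPath("/user/hdfs", "LISTSTATUS")
//     -> "/webhdfs/v1/user/hdfs?op=LISTSTATUS"
// Download a file (like an ftp 'get'):
//   buildWebHdfsPath("/user/hdfs/data.csv", "OPEN")
//     -> "/webhdfs/v1/user/hdfs/data.csv?op=OPEN"
```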


NOTE: OPEN is called twice: first on the HADOOP NameNode to find the DataNode where the file is stored, then on that DataNode to download the file.
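The NameNode answers the first OPEN with an HTTP 307 redirect whose Location header points at the DataNode holding the file. A small sketch (illustrative, not the blog's code) of pulling host, port, and path out of that header so the second call can be routed via a matching xshttpdest destination:

```javascript
// Extract the DataNode coordinates from a WebHDFS 307 Location header,
// e.g. "http://datanode1.example.com:50013/webhdfs/v1/tmp/f.csv?op=OPEN"
// (host name here is hypothetical).
function parseDataNodeLocation(location) {
  var match = /^https?:\/\/([^/:]+):(\d+)(\/[^ ]*)$/.exec(location);
  if (!match) {
    throw new Error("Unexpected Location header: " + location);
  }
  return { host: match[1], port: parseInt(match[2], 10), path: match[3] };
}
```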

The full code is available to download here:

For those interested in saving a file to HADOOP I've created an example here, using the CREATE operation:
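CREATE follows the same two-step redirect dance, but with PUT: a first PUT with no body goes to the NameNode, which answers 307 with the DataNode Location; a second PUT with the file content to that Location actually writes the file. A sketch of building the first-step path (parameter names follow the WebHDFS API; the flow in comments is an assumption about the linked example, not taken from it):

```javascript
// First-step CREATE request path, sent to the NameNode via PUT with an
// empty body. The file bytes are then PUT to the DataNode URL returned
// in the 307 Location header.
function buildCreatePath(hdfsPath, overwrite) {
  return "/webhdfs/v1" + hdfsPath + "?op=CREATE&overwrite=" + (overwrite ? "true" : "false");
}
```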

Downloading Query Results to HADOOP HDFS
