With the emergence of tools like SAP Lumira, end users can manipulate and interact with data without relying on IT. Tools like Hadoop let organizations store more data than ever before…but how can they get that information to end users without recreating the IT reliance of first-gen BI tools?
Pulling massive datasets like those in Hadoop down to the desktop isn’t a viable option. And a strictly online connection means long lags as you build your visualizations; that kind of performance isn’t acceptable for business users.
Which is why I’m excited about the planned data wrangling functionality in SAP Lumira. Here’s a tech preview of how it works (screenshots are from a development build and subject to change)…
1. Take a sample of your Hive or Impala Hadoop data
2. Interact with the sample to understand your data through agile visualizations
3. Once you’re satisfied with your SAP Lumira visualizations, schedule an Oozie job to create a Hive table on the whole dataset—this is what the final visualization will be based on
4. The Hive table (with only the information you need for your visualization) is brought into Lumira and you see your visualization on the whole dataset!
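Under the hood, the workflow above amounts to two HiveQL statements. Here’s a rough sketch—table and column names are hypothetical, and the actual queries Lumira generates may differ:

```sql
-- Step 1: pull a small sample of the raw data to explore interactively.
-- Hive's block sampling keeps this fast even on a huge table.
CREATE TABLE sales_sample AS
SELECT * FROM sales TABLESAMPLE (1 PERCENT) s;

-- Step 3: the scheduled Oozie job runs a query like this over the FULL
-- dataset, keeping only the columns the visualization actually needs.
CREATE TABLE sales_for_lumira AS
SELECT region, product, SUM(revenue) AS total_revenue
FROM sales
GROUP BY region, product;
```

The point of the second step is that the heavy lifting (the full-table aggregation) runs inside Hadoop on a schedule, so the table Lumira imports is small and pre-shaped for the chart.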
Voila—big data accessible to business users, with acceptable performance and no need to know HQL or how to write MapReduce scripts.
This is a planned innovation and subject to change.
To see more visit the Big Data Analytics booth at Sapphire!