
It was nearly two years ago when Jay Thoden Van Velzen, Tom Turchioe and I first started talking about a follow-on COIL project after a successful effort to showcase a BI 3.5x deployment using Bluecoat WAN optimization. Prior to the release of BI 4, it was Tom who suggested we consider what it would take to determine how many concurrent users could use the BI 4 platform. Jay was in agreement, as he felt there was emerging evidence that more and more large-scale implementations were beginning to occur. With big data topics just starting to become persistent focal points for industry trade publications and analysts, it was not hard to imagine more and more people within organizations wanting access to analytic platforms and tools. We wanted the outcome of the project to tell us what was possible and to serve as a measure of confidence that the product could scale out and meet the needs of an ever-growing population of users, ranging from the once-per-quarter project manager looking for a specific correlation and a way to visualize it, to the data scientist hammering the system day after day.

There were known theoretical limitations, and a similar test effort to discover an upper end of concurrent use had been done using a scale-up architecture with BI 3.5x, but it was certain that no discrete benchmark would surface before launch of the new release. In developing such a project in COIL, we wanted to ensure that a lack of infrastructure would not be the thing that determined how many concurrent users were possible. Taking an ecosystems-based approach, working with Intel, Supermicro, RedHat, F5 and Soasta, we assembled a production-quality environment and then leveraged the collective expertise to pursue a very meaningful approach. The tacit knowledge exchange that stems from a co-innovation project is one of the best things derived from such an effort, and this team did an exceptional job capturing it using SAP Streamwork to manage the project.

As projects go, this one was large and complex, not only because it required orchestrating time and resources spanning multiple participants inside and outside of SAP, but also because we attempted to go after multiple goals. At inception all projects have goals and objectives, but over time it often becomes necessary to distinguish between what you must do, what you can do and what you want to do. You push for as much as possible, but that latter effort, the things you want to do, often gets reclassified as aspirational: the team thinks it is worthwhile to pursue but in the end succumbs to the reality of what is practical. The Monsta team stayed true to course in the effort to discover a significant concurrent use level for 4.0x and in how we designed the tests needed to demonstrate concurrent use. Early on, in the interest of streamlining the required architecture, we considered virtualizing all of the tiers. We then pulled this out of scope as a way to reduce complexity and to minimize the number of bottlenecks we already knew would factor in related to computational elements (i.e., CPU, RAM), network, storage, etc. What we learned, however, is of value to anyone implementing in either a classic enterprise IT production environment or as a private cloud provider.

We also wanted to examine how a single large implementation could be optimized for energy efficiency. Taking advantage of RedHat RHEL 6 and Supermicro energy management features, coupled with an attempt to measure efficiency using the proposed SAPS/kilowatt power-performance ratio, ultimately became aspirational: the team consumed more time than expected chasing the primary performance goals, and as a result missed the window to include the energy experts, who could no longer remain involved in the project. This is par for the course with projects formed from teams representing different groups with different motivations and commitments.

Nonetheless, and despite other delays and resource constraints, COIL managed its way through all of the challenges and wound up delivering a very useful outcome. Output from the project has brought benefit to SAP product and field teams, and all of the participating partners extracted more knowledge about their products deployed in a BI 4 environment. The team established real proof points for BI 4 running over Sybase ASE and Sybase IQ (something that has yet to become commonplace, and is now proven).

I won't steal the thunder from the first whitepaper we wrote detailing our testing and results, but once the team had successfully run in excess of 10,000 concurrent users spanning an 80+ node system, we additionally explored consolidation of the hardware so that we could describe the minimum infrastructure needed to yield the same level of concurrent users; you will find that in the paper as well. The paper further covers some very useful best practices, and we've done our best to align with existing resources such as the BI 4 sizing guide available through Service Marketplace. We've already had several folks take advantage of the content from the very first rough drafts of the paper, and we are looking forward to sharing some of the best practices this year at TechEd in Las Vegas. Some forthcoming publications, including a whiteboarding presentation, a video panel interview and a more discrete whitepaper examining BI 4 and ASE, will be available in early October.

We measure the success of a COIL project by how useful the results are for the widest audience possible, but since I like to believe that good work is rewarded with the opportunity to do more good work, I cannot wait to dive into the follow-on "Son of Monsta" project that is just now forming, where we get to explore the use of HANA within the same scale-out architecture. Just one of the many spillover effects that come from successful co-innovation project work.
