Part I
Getting Started
Chapter 2
Why Use Qlik for Data Discovery and Analytics?
The Evolution of BI

Software providing business intelligence (BI for short) has been around for several decades, but for most of that time it was affordable only to very large enterprises. And even though the technology was expensive and labor-intensive, BI systems mostly delivered an underwhelming product: static reports. In recent years, however, rapidly shifting economic conditions have required businesses to adapt quickly to stay competitive. No longer satisfied with static reports and analysis, businesses increasingly demand more flexibility and insight from their BI systems. The success of products like QlikView highlights the fading value of static reporting. Why? To fully appreciate the revolutionary change that QlikView brought to the BI market, it’s helpful to review where things started.

This story begins with a description of traditional business intelligence.

Traditional Business Intelligence (OLAP)

The term BI describes a system of tools and processes that transform transactional data into meaningful or actionable information for the business. Traditional BI systems are typically proprietary stacks consisting of specialized databases, scripting languages, and report writers – all for the purpose of extracting raw data and presenting some sort of summarized “intelligent” view for analysis.

For many years, BI technology was led by OLAP (On-Line Analytical Processing), which is just a part of the broader category of business intelligence. In a nutshell, OLAP systems are in the business of loading massive amounts of pre-aggregated data into data structures that allow for fast reporting and analysis. The primary feature of all OLAP systems is the OLAP cube, which contains the logical definitions for all expected calculations (measures or facts), summarized at their expected levels of aggregation (dimensions).

OLAP systems are further categorized based on how the aggregated data is physically stored, whether in relational tables (ROLAP), proprietary multi-dimensional storage arrays (MOLAP), or some hybrid combination. With plenty of resources available that describe the different types and techniques of physical cube storage, we won’t go into them here. Suffice it to say that the different types are differentiated by their focus on query flexibility or query speed. For the end user, the type of physical cube storage is transparent. Front-end reporting tools are all somewhat similar, in that reports are based on a query that fetches data from the cube.

The main drawback of all OLAP systems is that the desired levels of detail (dimensions) and the desired calculations (measures) need to be predefined. Also, the labyrinth of architecture layers and supporting technologies makes development complex and time-consuming. OLAP systems are ideally suited for environments where changes occur relatively infrequently – adding a measure or dimension to a cube can require changes to many components and layers. For these reasons, OLAP projects tend to be very long and expensive.

Figure 2-1 shows a high-level map of a data ecosystem including OLAP BI.


Figure 2-1: OLAP in the data ecosystem


This diagram also shows a data warehouse, which is another luxury of the deep-pocketed large enterprise. Organizations that deploy OLAP without a data warehouse might instead use a snapshot copy of the transactional databases to source the BI layer.

With all of their technical complexity and dependencies, traditional BI systems usually find themselves under the control of the IT department – and in an environment of competing priorities, business users must vie for often-constrained IT resources. The result? BI systems quickly become obsolete – both in technology and in usefulness to the business.

While large enterprises were saddled with BI systems that were expensive and hard to change, small and mid-sized businesses were left out of the BI market altogether. One key difference between a large enterprise and a smaller one is the size and makeup of the IT staff – larger enterprises have highly specialized people managing specialized tools, while smaller enterprises are typically staffed by IT generalists. Another difference is just the sheer size and scale of data applications. Since traditional business intelligence systems were designed for large-scale projects requiring a highly trained and specialized technical staff, smaller businesses could not easily (or affordably) spin up a BI platform. The BI offerings from the stack vendors were not right-priced or right-sized for the less-than-large enterprise, so many opted for PC-based solutions such as Excel add-ons and Access databases. By the mid-2000s, the wait for right-sized BI was over.

Qlik’s Disruptive Approach to BI

Founded in Sweden in 1993, Qlik introduced a desktop product (eventually called QlikView) that could extract data from database systems, and then summarize and graphically present the data without requiring pre-aggregations or multi-dimensional cubes. This in itself was revolutionary – without pre-aggregations, how could QlikView complete the heavy computations required for summarized analysis in a reasonable time? It achieved this by loading all of the data into RAM and calculating aggregations on the fly, thereby avoiding the bottleneck of database I/O and the limitations of hard-coded aggregations. If that wasn’t enough, QlikView also presented data in a new, associative way. Users unfamiliar with relational data structures could easily discover hierarchies and relationships among data elements, without having any knowledge of the underlying data architecture. This “natural” approach to analytics – more than its in-memory architecture – is what sets Qlik’s products far apart from its competitors.

Let us clarify some of these technical terms for non-technical readers.

● RAM (random-access memory) is the computer component that we commonly call “memory.” Performing computations on data that resides in RAM is considerably faster than reading the same data from the hard drive. Qlik pioneered the in-memory approach to BI, and it remains the leader in in-memory BI today.

● The term “database I/O” (input/output) refers to the operations of reading data from and writing data to the database. I/O operations are among the slowest of all computing operations. Therefore, database-driven BI systems are comparatively slower than those operating in memory, and they have to build predefined OLAP cubes to allow faster processing.
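
To make the contrast concrete, here is a minimal sketch of the idea in QlikView’s own load-script syntax. The source file and field names (SalesTransactions.qvd, Quantity, UnitPrice, and so on) are hypothetical placeholders used only for illustration, not taken from any particular application:

// A hypothetical load script: raw transaction rows are read straight
// into RAM, with no pre-aggregation and no cube definition required.
Sales:
LOAD
    OrderID,
    OrderDate,
    CustomerID,
    ProductID,
    Quantity,
    UnitPrice,
    Quantity * UnitPrice as SalesAmount  // derived at load time, not pre-summarized
FROM SalesTransactions.qvd (qvd);

Once the rows are in memory, an expression such as Sum(SalesAmount) can be placed in a chart against any dimension (customer, product, month), and QlikView computes the aggregation on the fly when the chart is rendered, rather than reading it from a pre-built cube.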

By the mid-2000s, Qlik had become a major disruptive force in the business intelligence market. With its rapid-development mantra, along with a new server-based platform, QlikView was uniquely suited to the mid-sized business. With some basic training, IT staff could quickly deploy reporting and analytic solutions to users, without building the over-scaled scaffolding required by OLAP platforms.

By the late 2000s, more and more large enterprises were taking notice. Largely due to steady improvements in the product platform and a growing sales organization, enterprise adoption of QlikView accelerated. But there were also certain trends in the industry occurring at just the right time to highlight QlikView’s appeal:

● An embrace of more agile development practices

● The acceptance that IT, by nature, cannot keep pace with the constantly changing needs of the business

● An increase in the demand for user-driven BI

● The prevalence of 64-bit hardware and software

With these trends, the ever-decreasing cost of server hardware and RAM, and a proven, stable platform, QlikView was able to show the corporate user that big-company analytics didn’t have to be slow, cumbersome, and static. QlikView made it possible for some development to be done out in the business units and departments, instead of in IT. This allowed organizations to control the pace of development so that it better matched the speed at which requirements were changing. The genie was out of the bottle.

Data Discovery Is the New Black

In the past, business users had to predict what questions they would ask so that IT could build a report to provide the answers. Lots of resources went into researching and writing down what the business needed out of the BI system. IT was keen to have the business sign off on exactly what it wanted before the tedious and expensive effort of development began. Of course, the problem with that approach was that the business was likely communicating requirements that it had in the past, not necessarily requirements that it anticipated for the future.

In classic “chicken-or-the-egg” form, IT would ask, “How do you want to see the data in your reports?” and the business would reply, “I don’t know; how can I see the data in the reports?” Being naturally very risk-averse, IT departments are not in the business of building applications as “suggestions” for the business, just to see what sticks. The risk is too great that the application could be rejected, and the project would be sent back to the drawing board having wasted precious time, resources, and reputations.

But the business has a valid question – “How can I see the data in reports?” means “What if my questions are ad hoc?” or “Can the system allow me to follow a path of ad hoc discovery, leading to previously undiscovered insight?” These types of questions require a robust analytical solution. No gigantic binder of month-end reports will serve this need. Static reports from BI systems are, in fact, the opposite of what is needed! Instead, analysis must be driven by the user, not the report-writer. This scenario exactly describes the concept of data discovery, sometimes referred to as business discovery. In a 2013 report, technology research firm Gartner predicted that “by 2015, the majority of BI vendors will make data discovery their prime BI platform offering, shifting BI emphasis from reporting-centric to analysis-centric.” (This report, “Gartner Predicts Business Intelligence and Analytics Will Remain Top Focus for CIOs Through 2017,” is available on Gartner’s website at http://www.gartner.com/newsroom/id/2637615. More detailed information is available in the report “Predicts 2014: Business Intelligence and Analytics Will Remain CIO's Top Technology Priority” at http://www.gartner.com/document/2629220?ref=QuickSearch&sthkw=schulte%20AND%20BI%20AND%20%22predicts%22.)

Credited with pioneering the data discovery space, Qlik is well positioned to continue as a leader in this new market. With the release of Qlik Sense, Qlik is raising the bar for user-friendly data discovery tools, while providing a well-governed and scalable platform.
