OLTP and Decision Support
The purpose of this blog posting is to discuss strategies for handling decision support queries in Online Transaction Processing (OLTP) applications. First, I want to talk about the two classes of OLTP applications that I see in the marketplace.
The first is the traditional OLTP market that has been present for years, and is typified by trading Wall Street stocks. A collection of humans (stock brokers or end users over the web) interact with an OLTP system to trade securities. The brokerage house (and end users, for that matter) also wants to run decision support queries to learn about historical trends as well as to identify what seems to be influencing what. In addition, metrics on OLTP behavior are often required, such as throughput and average response time. Hence, the goal is to provide transaction processing as well as historical decision support. This style of OLTP application typifies most retail sales situations, and many other traditional applications.
However, there is another, much newer OLTP market with somewhat different requirements, which deals with processing high velocity feeds. For example, consider an electronic trading application. One large enterprise has electronic trading desks around the world, and wants to aggregate its enterprise-wide position (long or short) in each traded security for risk management purposes. In this case, there is a need for high volume transactions to assemble the real-time enterprise position. However, there is an additional need for real-time (as opposed to historical) decision support for the risk management component.
Since these applications have different requirements, we treat them separately in this posting and consider the traditional market first.
Decision Support in Traditional OLTP
There are two possible models for supporting traditional OLTP and decision support requirements. The first model is to off-load the decision support queries to a companion data warehouse system. In effect, whenever a transaction is committed, its effect is sent over a network connection to another system. As such, the second system is a few seconds (or more) out-of-date. If there are data transformations required, then an Extract, Transform and Load (ETL) system is often used as shown in Figure 1.
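The first model can be sketched in a few lines of Python. This is a toy illustration of the commit-ship-transform pipeline, not any product's actual interface; the names (Trade, Warehouse, transform, on_commit) are all invented for the example.

```python
# Sketch of the first model: after each OLTP commit, the transaction's
# effect is run through an ETL transform and shipped to a companion
# warehouse. All names here are hypothetical, not a real product API.
from dataclasses import dataclass

@dataclass
class Trade:
    account: str
    symbol: str
    qty: int
    price: float

class Warehouse:
    """Stand-in for the downstream DW; it just accumulates loaded rows."""
    def __init__(self):
        self.rows = []
    def load(self, row):
        self.rows.append(row)

def transform(trade: Trade) -> dict:
    """ETL step: reshape the OLTP record into the warehouse's schema."""
    return {"symbol": trade.symbol, "notional": trade.qty * trade.price}

def on_commit(trade: Trade, warehouse: Warehouse) -> None:
    """Invoked after the OLTP transaction commits, so the warehouse is
    always at least a few seconds behind the OLTP system."""
    warehouse.load(transform(trade))

dw = Warehouse()
on_commit(Trade("acct-1", "IBM", 100, 150.0), dw)
print(dw.rows)  # [{'symbol': 'IBM', 'notional': 15000.0}]
```

The key property is the one-way, asynchronous flow: the OLTP commit path never waits on the warehouse.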
The second alternative is to mix the two workloads on a single computer system. In my opinion, the first model is vastly preferred to the second one for the following reasons.
- Specialization. The OLTP system can run different software from the DW system. For example, a column store is wildly faster than a row store on DW workloads, and a main-memory DBMS is likewise differentiated from a disk-based system. Running two specialized systems means each can beat a "one size fits all" engine by 1-2 orders of magnitude, so correspondingly less hardware is needed to support the application. Since machine room space and power are at a premium, this can make a big difference. On the other hand, specialization means that more than one DBMS must be used by the enterprise. This clearly has training, management and personnel implications.
- Resource Contention. OLTP entails lots of small writes, while decision support entails big, resource-intensive reads. Most system administrators choose to separate the two workloads, because the big reads tend to slow down the OLTP load. It takes a very sophisticated scheduler to mix the two workloads while making OLTP quality of service (QOS) guarantees.
- Lock Contention. Obviously, big reads are difficult to run in parallel with small writes while guaranteeing serializability. If one is using locking, then the write locks will stall the reads and vice-versa. A common tactic is to run such reads with a lower consistency guarantee; in effect, the reads cannot be guaranteed to give a semantically correct answer. On the other hand, if one executes the reads on a separate system, then one can get a semantically correct result, which is a few seconds stale. Whether one prefers a correct (but elderly) answer to a current (but perhaps incorrect) answer is an application choice. However, my preference would always be the system that gives a semantically defendable answer.
- Two Schemas. In many applications, the DW solution wants to use a star or snowflake schema. On the other hand, the OLTP system will invariably not want to view the data the same way. In a single system, there will be a single schema, and both needs cannot be supported. This may well lead to one task being sub-optimized to allow the needs of the other to prevail. In a two-system solution, there can be an OLTP schema and a DW schema, which are different. During transmission of transactions from one system to the other, the ETL package can be employed to make the schema translation. Hence, each system can have a schema oriented toward its needs.
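The lock contention trade-off above can be made concrete with a toy Python sketch. No real DBMS is involved; the transfer scenario and all variable names are invented. A low-consistency read taken mid-write sees an inconsistent total, while a read against the snapshot shipped at the last commit is stale but semantically correct.

```python
# Toy illustration: a transfer that moves money between two accounts.
# A live read taken mid-transfer sees an incorrect grand total; a read
# against a snapshot from the last commit is stale but correct.
import copy

accounts = {"A": 100, "B": 50}        # live OLTP state
snapshot = copy.deepcopy(accounts)    # state shipped at the last commit

# Begin a transfer of 30 from A to B (debit applied, credit not yet).
accounts["A"] -= 30
dirty_total = sum(accounts.values())  # live read mid-write: 120 (wrong)

accounts["B"] += 30                   # transfer completes
stale_total = sum(snapshot.values())  # snapshot read: 150 (correct, stale)

print(dirty_total, stale_total)       # 120 150
```

The money never actually disappeared; the dirty read simply observed a state that no serializable schedule could produce.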
For the above reasons, I am a firm believer in a two-system solution for traditional OLTP. As such, VoltDB is architected to be able to ship completed transactions to one or more “downstream” analysis systems. We have such an interface for Hadoop, as well as for relational DBMSs.
Decision Support in High Velocity OLTP
I now turn to high velocity OLTP applications. Here, there is a need for the same high volume transactions we saw earlier. In addition, real-time analytics are required to augment the historical ones.
Real-time analytics are required in the Wall Street risk management example above; they are needed to calculate the leader board in a massively multiplayer game; and they are necessary to calculate denial-of-service thresholds in a network monitoring system. You get the idea – real-time analytics are a core requirement for most high velocity data applications. It will not be practical to support such requirements from a companion historical data warehouse, because the latency will be too high.
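One way to keep such aggregates current inside the OLTP engine is a materialized view maintained on every commit. As a minimal sketch (using SQLite, which lacks native materialized views, so a trigger-maintained summary table stands in; the schema and names are illustrative, not any product's), the enterprise-wide position per security stays current as each trade arrives:

```python
# Trigger-maintained summary table standing in for a materialized view:
# the per-symbol position is updated on every insert, so the risk query
# is a cheap lookup rather than a scan of the trades table.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE trades  (symbol TEXT, qty INTEGER);   -- qty < 0 = sell
    CREATE TABLE position(symbol TEXT PRIMARY KEY, qty INTEGER);
    CREATE TRIGGER roll_up AFTER INSERT ON trades BEGIN
        INSERT OR IGNORE INTO position VALUES (NEW.symbol, 0);
        UPDATE position SET qty = qty + NEW.qty WHERE symbol = NEW.symbol;
    END;
""")

db.executemany("INSERT INTO trades VALUES (?, ?)",
               [("IBM", 500), ("IBM", -200), ("ORCL", 300)])

# Real-time decision support: current net position, no scan required.
print(db.execute("SELECT symbol, qty FROM position ORDER BY symbol").fetchall())
# [('IBM', 300), ('ORCL', 300)]
```

The aggregate is paid for incrementally at write time, which is exactly the bargain a high velocity application wants: writes stay cheap-ish, and the analytic read is constant time.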
To deal with this real-time analysis requirement, an OLTP system must support SQL to express a broad range of queries, and materialized views to answer them efficiently. Hence, real-time analytics must be performed on the OLTP engine, while historical queries can continue to use a companion data warehouse. In addition to real-time analytics, high velocity applications often require the OLTP database system to do more heavy lifting than is needed in traditional applications (see Figure 2):
- It may be optimal to perform ETL operations directly in the OLTP engine, rather than adding yet another moving part (YAMP).
- There may be an impedance mismatch between the (perhaps bursty) OLTP traffic and the ingest capability of the data warehouse. Hence, it will be important for the OLTP system to buffer such bursts for subsequent consumption by a downstream warehouse.
- In some applications, insights gained through deep analytics on the data warehouse may need to be brought back into the OLTP system to optimize how new transactions are processed.
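The buffering point above amounts to a bounded producer/consumer queue between the two systems. A minimal sketch, with invented interfaces (CommitBuffer, append, drain are not any product's API): the OLTP side appends committed transactions as fast as the burst arrives, and the warehouse loader drains them in batches at its own steady ingest rate.

```python
# Sketch of smoothing bursty OLTP output for a downstream warehouse:
# commits are appended as they happen; the DW drains in fixed batches.
from collections import deque

class CommitBuffer:
    def __init__(self):
        self._q = deque()

    def append(self, txn):
        """Called at OLTP commit time (bursty producer)."""
        self._q.append(txn)

    def drain(self, batch_size):
        """Called by the DW loader (steady consumer); returns one batch."""
        batch = []
        while self._q and len(batch) < batch_size:
            batch.append(self._q.popleft())
        return batch

buf = CommitBuffer()
for i in range(10):                 # a burst of 10 commits
    buf.append({"txn": i})

print([len(buf.drain(4)) for _ in range(3)])  # [4, 4, 2]
```

A production version would of course need durability and back-pressure, but the decoupling of commit rate from ingest rate is the essential idea.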
Clearly, the requirements of high-velocity applications put more demands on OLTP engines than traditional applications do. VoltDB has found acceptance in this market because of its ability to meet all of the above requirements.