To reduce IT costs, agencies are increasingly seeking to migrate mainframe applications to open systems and cloud-based platforms. Mainframes are traditionally licensed by MIPS (millions of instructions per second), a measure of how heavily the mainframe is used for processing. So, essentially, the more an agency uses the mainframe, the more it pays.
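The usage-based billing model can be illustrated with simple arithmetic. The rate and usage figures below are purely hypothetical and do not reflect any actual vendor pricing:

```python
# Illustrative only: the rate and MIPS figures are hypothetical,
# not real mainframe pricing.
monthly_rate_per_mips = 2000.0   # hypothetical $ per MIPS per month
baseline_mips = 500              # hypothetical measured peak usage

baseline_cost = baseline_mips * monthly_rate_per_mips
print(f"Baseline monthly charge: ${baseline_cost:,.0f}")

# Offloading work from the mainframe lowers measured MIPS,
# and the charge falls proportionally.
offloaded_mips = 400
reduced_cost = (baseline_mips - offloaded_mips) * monthly_rate_per_mips
print(f"After offloading: ${reduced_cost:,.0f}")
```

Because the charge scales with measured usage, any work moved off the mainframe translates directly into savings.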
While migrating applications off the mainframe offers benefits beyond reduced costs, some agencies are still reluctant to make the switch. This is understandable: many of these legacy applications run mission-critical services and, while they may be costly to maintain, they have proven very reliable over many years.
So what is an agency to do with this love-hate relationship it has with its mainframes? A strategy that many agencies follow is to develop new applications for new mission needs in a modern environment and allow those applications to read from and write to the data still on the mainframe. While this strategy can speed up the development of new systems, the mainframe load tends to increase over time rather than decrease as additional users, transactions, channels, and devices are added.
In addition to not decreasing mainframe loads, this approach creates new issues because data is often split across two different models: the legacy mainframe model and the new modernized model. Costly integration hooks must be written into the applications to translate between the new data model and the legacy mainframe model. This complexity often negates most of the savings projected from modernization.
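The integration hooks described above typically translate between a fixed-width mainframe record layout and a modern structured model. A minimal sketch, assuming an entirely hypothetical record layout (not a real agency schema):

```python
# Hypothetical fixed-width layout: 7-digit case ID, two 10-character
# name fields, and an 8-character YYYYMMDD date. Not a real schema.
LEGACY_RECORD = "0001234SMITH     JOHN      20240115"

def from_legacy(record: str) -> dict:
    """Parse a fixed-width mainframe record into the modern model."""
    return {
        "case_id": int(record[0:7]),
        "last_name": record[7:17].strip(),
        "first_name": record[17:27].strip(),
        "updated": record[27:35],  # YYYYMMDD, kept as a string
    }

def to_legacy(case: dict) -> str:
    """Serialize the modern model back to the fixed-width layout."""
    return (
        f"{case['case_id']:07d}"
        f"{case['last_name']:<10}"
        f"{case['first_name']:<10}"
        f"{case['updated']}"
    )
```

Every field added on either side means another line in hooks like these, in every application that touches both models, which is where the projected savings leak away.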
Fortunately, in-memory data management is uniquely positioned to help federal agencies reduce costs while getting more from their legacy systems, especially mainframes. In-memory data management has been shown to reduce mainframe loads by up to 80 percent while improving response times by up to 99 percent. This dramatic improvement makes Web services, mobile and other queries far more useful to end users, which increases their satisfaction with the services delivered.
With in-memory data management, mainframe data can be offloaded into ultra-fast system memory (RAM). Inexpensive commodity hardware and traditional cloud infrastructures can provide terabytes of in-memory storage and scale to even the largest datasets, and server arrays can be clustered into highly available distributed data stores. Finally, while the term “in-memory” is often synonymous with “transient” and “data loss” in many architects’ minds, Software AG’s Terracotta in-memory data management solution uses a proprietary feature called “Fast Restartable Store” to continuously back up all in-memory data to disk. This allows agencies to enjoy the reliability of a traditional database coupled with the ultra-fast performance of a full in-memory data store.
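The general pattern at work is a read-through cache with a disk-backed restart. The sketch below is conceptual only, assuming a hypothetical `loader` callable standing in for a mainframe data source; it illustrates the pattern in miniature, not Terracotta’s actual implementation:

```python
# Conceptual sketch: serve reads from RAM, fall back to the backing
# system on a miss, and snapshot to disk so a restart does not lose
# the cached data. Not Terracotta's actual design.
import json
import os

class RestartableCache:
    def __init__(self, path, loader):
        self.path = path              # snapshot file on disk
        self.loader = loader          # called only on a cache miss
        self.store = {}
        if os.path.exists(path):      # warm the cache from the last snapshot
            with open(path) as f:
                self.store = json.load(f)

    def get(self, key):
        if key not in self.store:     # miss: one trip to the backing system
            self.store[key] = self.loader(key)
            self.snapshot()
        return self.store[key]        # hit: served from RAM, no mainframe work

    def snapshot(self):
        with open(self.path, "w") as f:
            json.dump(self.store, f)
```

Once a record is cached, repeat reads never touch the backing system, and a restarted process warms itself from the snapshot rather than re-querying the mainframe, which is what keeps offloaded MIPS from creeping back.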
By safely moving frequently used data into memory, agencies not only reduce MIPS consumption but also establish a high-performance, reliable flexibility layer that securely makes agency data available at faster speeds.
For more information about how deploying in-memory data management can help agencies modernize their network environments, download this Mainframe Offloading Whitepaper.