Banks and other financial institutions are adopting applications based on the decentralized database technology blockchain. Dealing with the resulting tsunami of data will require CIOs to optimize server capability—or risk a crash.
(Burbank, CA) May 21, 2018 — The distributed-ledger database technology blockchain is increasingly being used by financial institutions, and is being considered by enterprises in a number of other areas. While it has significant attractions, blockchain is slow and data-intensive, and will place a major burden on the existing IT capabilities of any enterprise in which it is adopted. James D’Arezzo, CEO of Condusiv Technologies, says that unless CIOs properly prepare for it, the overwhelming amount of blockchain-generated data will cause their systems to crash.
D’Arezzo, whose company is the world leader in I/O reduction and SQL database performance for virtual and physical server environments, notes that blockchain is best known for enabling the creation of virtual currencies such as Bitcoin and Ripple.1 Its unique characteristics, however—it is a database structure that allows multiple ownership while preventing record falsification—are attracting interest not only in virtual currencies, but in a number of other areas, including under-automated sectors like supply chain logistics.2
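The tamper resistance D’Arezzo describes comes from hash chaining: each block stores a cryptographic hash of the block before it, so altering any earlier record invalidates every later link. The following is a simplified sketch of that idea only (the function names and JSON-based hashing are illustrative assumptions, not any production blockchain's implementation, which hashes binary block headers and adds proof-of-work or other consensus):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form (real chains hash a binary header)
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    # Each new block commits to the hash of the previous block
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    # The chain is valid only if every block's stored prev_hash matches
    # the recomputed hash of the block before it
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
add_block(ledger, "Alice pays Bob 5")
add_block(ledger, "Bob pays Carol 2")
assert verify(ledger)

# Tampering with an earlier record breaks every subsequent link
ledger[0]["data"] = "Alice pays Bob 500"
assert not verify(ledger)
```

Note that every `verify` call recomputes a hash per block, which hints at why blockchain workloads are computation- and I/O-heavy relative to a conventional indexed database.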
But it is in global banking that the technology is currently finding the largest number of immediate applications. Accenture has estimated that the largest investment banks could save $8 billion to $12 billion per year by using blockchain to improve the efficiency of clearing and settlement; pilot projects are underway at the Australian Securities Exchange, the Depository Trust & Clearing Corporation, and elsewhere. Central banks across the world are exploring the potential for shifting parts of their payments systems to blockchain. For trade finance, still mostly based on paper, blockchain is seen as an obvious potential solution. Identity—the verification of customers and counterparties—is vital to banking, and dozens of startups are working on blockchain systems for customer identification. And in the area of syndicated loans—where U.S. transactions take an average of 19 days to be settled by the banks—Credit Suisse and 18 other financial institutions have formed a consortium to begin replacing the current system with blockchain.3
On the other hand, as has been widely noted, the technology has certain limitations, perhaps the most serious of which is transaction speed. In the most recent available study, the Bitcoin network—the largest and most widely tested application of blockchain technology—achieved maximum throughput of three to four transactions per second. PayPal, on the other hand, managed 193 transactions per second, and VisaNet reported that it was capable of processing more than 56,000 transactions per second.4
The speed problems associated with blockchain, D’Arezzo notes, will be exacerbated by the performance characteristics of Microsoft SQL Server, the database used by a high—and growing—percentage of large-enterprise IT operations.5 In a recent survey of global IT managers, SQL Server was also cited as perhaps the major bottleneck hampering overall system performance—even without the added computational burden of processing blockchains.6
“Two basic issues emerge here,” says D’Arezzo. “One, blockchain is slow. Two, a blockchain is a database, which means it requires a good deal of input/output activity—interchanges between the computer’s CPU and storage, whether physical or virtual. And that’s where the wear and tear takes place. As blockchain-based applications come in on top of the already staggering load of data handling required of IT in the financial sector today, the danger of major system slowdowns, and quite possibly system crashes, will increase dramatically.”
Condusiv Technologies provides intelligent software that identifies, reduces, and streamlines the most heavily used I/O and greatly enhances SQL Server throughput. The company has seen users obtain overall system performance improvements of 50% or more without installing new hardware. D’Arezzo has also seen organizations approach data center consolidation on a “forklift upgrade” basis, simply dumping new storage and hardware into the system as a solution. Shortly thereafter, they often find that performance has degraded: a bottleneck has been created that must be addressed through optimization—which, applied earlier, might have made the costly hardware upgrade unnecessary in the first place.