Last week’s outage of JPMorgan Chase’s online banking site is an example of how maintaining absolute data integrity can create big problems for businesses, a seasoned database analyst warned yesterday.
The financial services company suffered intermittent issues on the site for three days earlier this month. At one point, Chase customers could not perform any online banking transactions for more than 24 hours.
The bank initially blamed the disruption on a “technical glitch”, but later said the issues were related to a third-party database product used to authenticate customer logins.
Curt Monash, an analyst at Monash Research, said a source with knowledge of the incident told him the outage was traced to an Oracle database that Chase used to store user profiles and authentication data. Monash said the source, whom he did not identify, said four files in the Oracle database were corrupted, and that the error was replicated in the mirror copy of the database that Chase maintained for backup and recovery purposes.
In total, automated clearinghouse transactions worth about $132 million were delayed by the disruption. Additionally, about 1,000 auto loan applications and another 1,000 student loan applications were lost due to the outage, Monash said in a blog post detailing his conversation with the source.
In an interview Thursday, Monash said Chase’s problems appear to have been exacerbated by the design of the database. He said the incident was likely prolonged because the bank had to restore a lot of data that did not need to be stored in the user authentication database.
Banking databases such as the one that was affected generally ensure that stored transaction data is ACID compliant (atomic, consistent, isolated and durable), and are therefore designed to guarantee the integrity of transactions and the recoverability of data in the event of a system failure.
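As a minimal illustration of the atomicity guarantee described above, the sketch below uses Python's built-in sqlite3 module (not the Oracle product involved in the Chase incident); the two-account transfer is a hypothetical example, unrelated to any real banking system:

```python
# Minimal sketch of ACID atomicity using Python's built-in sqlite3.
# Illustration only: a hypothetical two-account transfer, not anything
# from the Chase/Oracle system described in the article.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 80 "
                     "WHERE name = 'alice'")
        # Simulate a crash mid-transfer: the debit above must not persist alone.
        raise RuntimeError("crash before the matching credit")
except RuntimeError:
    pass

# Atomicity: the partial debit was rolled back, so balances are unchanged.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50}
```

Either the whole transfer happens or none of it does; that all-or-nothing behavior, applied across every write, is what makes recovery of a transactional database slow when large volumes of data are involved.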
Monash said much of the data stored in the crashed database appears to have been customer web-usage data that was ACID compliant but did not need to be. Most of that data could have been stored in a separate system, he said. The volume of ACID-compliant data in this case hampered the bank’s ability to recover quickly from the problem, he added.
“Not everything in the user profile database needed to be added via ACID transactions,” he said. Even if some of the web-usage data had been lost, it likely would not have compromised the integrity of the bank’s financial transactions, he said. “At a minimum, the recovery would have been much shorter if the data hadn’t been there.”
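The separation Monash argues for can be sketched as follows; the function names (`record_login`, `log_page_view`) and the storage choices are hypothetical, assumed purely for illustration and not drawn from any Chase system:

```python
# Sketch of the separation Monash describes: keep only data that truly
# needs transactional guarantees in the ACID store, and send non-critical
# web-usage data to a cheap append-only log instead.
# All names here are hypothetical, not from any real banking system.
import json
import sqlite3

auth_db = sqlite3.connect(":memory:")  # the critical, ACID-compliant store
auth_db.execute("CREATE TABLE logins (user TEXT, ts TEXT)")

usage_log = []  # stand-in for an append-only log file or message queue

def record_login(user, ts):
    """Authentication events go through the transactional database."""
    with auth_db:  # full ACID transaction: committed or rolled back as a unit
        auth_db.execute("INSERT INTO logins VALUES (?, ?)", (user, ts))

def log_page_view(user, page):
    """Web-usage data is fire-and-forget; losing some of it is acceptable."""
    usage_log.append(json.dumps({"user": user, "page": page}))

record_login("alice", "2010-09-14T09:00:00")
log_page_view("alice", "/accounts")
```

With this split, a failure in the transactional store leaves far less data to restore, since the bulky usage data never enters it in the first place.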
The fact that issues with a single database affected the bank’s main web portal, its automated clearinghouse functions and its loan applications suggests the product was a single point of failure for too many applications, Monash said.
“It was a large and complex database that, when it went bad, brought down many applications,” he said. It’s unclear whether the benefits of tying so many applications to a single database outweighed the risks in this case, he added.
Oracle did not immediately respond to a request for comment.
A Chase spokesperson also did not specifically respond to Monash’s comments. In an email, he said the “long recovery process” was caused by a system data corruption that disabled the bank’s “ability to process customer logins to chase.com.”
Jaikumar Vijayan covers data security and privacy issues, financial services security and electronic voting for Computerworld. Follow Jaikumar on Twitter at @jaivijayan, or subscribe to Jaikumar’s RSS feed. His email address is [email protected]