Debunking BigData Myths: write durability, data integrity and consistency
Several times in the past I have heard a very strange thing about BigData storage systems - specifically Cassandra and Hadoop. People were praising their relatively low cost, scalability and open-source nature. Yet the same people would say something like "for that price we are OK if some data loss happens from time to time". Shocking? Or, more importantly, is this really something that BigData adopters have to tolerate in exchange for the other benefits?
Funny enough, one of the recent dialogs involved Oracle as a "more reliable" alternative.
I mean no disrespect - in many cases it was a clear misunderstanding of the difference between data consistency and the durability of writes, and, more generally, the ability of the storage system to preserve data integrity over time.
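To make the distinction concrete, here is a minimal sketch assuming the DataStax Python driver and a hypothetical demo_keyspace.events table (these names are illustrative, not from the original discussion): consistency is something the client tunes per write - how many replicas must acknowledge it - while durability is how each replica persists the write, via its commit log, before acknowledging it.

# Minimal sketch, assuming the DataStax Python driver (cassandra-driver)
# and a hypothetical keyspace/table; illustrative only.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement
from cassandra import ConsistencyLevel

cluster = Cluster(["127.0.0.1"])             # placeholder contact point
session = cluster.connect("demo_keyspace")   # hypothetical keyspace

# Consistency: how many replicas must acknowledge this particular write.
stmt = SimpleStatement(
    "INSERT INTO events (id, payload) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,  # a majority of replicas
)
session.execute(stmt, (42, "example payload"))

# Durability is configured separately, on the server side (e.g. the
# commitlog_sync settings in cassandra.yaml): each replica appends the
# write to its commit log before acknowledging it, regardless of the
# consistency level chosen above.
cluster.shutdown()

In other words, the two knobs are independent: a write can be acknowledged by a quorum of replicas and, with a lax commit-log sync setting, still be lost if those nodes crash before it reaches disk.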
First, about software quality. To be fair, it is quite possible that Oracle has spent more time and money on testing the Oracle database server and weeding out bugs; Oracle software is used by millions of customers. But open-source software like Apache Cassandra is also used by many thousands of customers, and many of them are not very tolerant of software bugs either. Not to mention that many open-source products are backed by commercial vendors who perform additional quality control: DataStax does it for Apache Cassandra; Cloudera, Hortonworks and others do it for Hadoop; and so on. It is also important to mention that the source code of open-source products is publicly available and thousands of people contribute to it. Bottom line - I am not buying the argument that the software's origin (on average) makes a huge difference for data integrity and durability when comparing commercial and open-source products, assuming that the latter are mature enough, of course.