RDBMS vs Hadoop

An RDBMS is a relational database management system, while Hadoop is a node-based system with a flat structure. An RDBMS is generally used for OLTP processing, whereas Hadoop is currently used for analytical workloads and especially for big-data processing. Any maintenance on storage or data files requires downtime in the available RDBMS products. In standalone database systems, adding processing power such as CPUs or physical memory in a non-virtualized environment requires downtime for an RDBMS such as DB2, Oracle, or SQL Server. Hadoop systems, by contrast, are made up of independent nodes that can be added on an as-needed basis. In RDBMS clusters, all nodes operate on the same data files kept in shared storage, whereas in Hadoop the data can be stored independently on each processing node. Performance tuning of an RDBMS can become a nightmare, even in a proven environment; Hadoop enables hot tuning by adding extra nodes, which are self-managed.

The following is a more detailed comparison of Hadoop with SQL databases:

  • Scale-out instead of scale-up – Scaling commercial relational databases is expensive because running a bigger database requires buying a bigger machine. Hadoop is designed as a scale-out architecture operating on a cluster of commodity PC machines. Adding more resources means adding more machines to the Hadoop cluster; clusters of tens to hundreds of machines are standard.
  • Key/value pairs instead of relational tables – In RDBMS, data resides in tables having relational structure defined by a schema. Hadoop uses key/value pairs as its basic data unit, which is flexible enough to work with the less-structured data types. In Hadoop, data can originate in any form, but it eventually transforms into (key/value) pairs for the processing functions to work on.
  • Functional programming (MapReduce) instead of declarative queries (SQL) – Under SQL you have query statements; under MapReduce you have scripts and code. MapReduce allows you to process data in a more general fashion than SQL queries. For example, you can build complex statistical models from your data or reformat your image data. SQL is not well designed for such tasks.
  • Offline batch processing instead of online transactions – Hadoop is designed for offline processing and analysis of large-scale data. It does not work for random reads and writes of a few records, which is the typical load in online transaction processing. Hadoop is best used as a write-once, read-many-times type of data store. In this respect it is similar to data warehouses in the SQL world.
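To make the key/value and MapReduce points above concrete, here is a minimal sketch in plain Python (not Hadoop's actual Java API) of a word-count job expressed as a map phase emitting key/value pairs and a reduce phase aggregating by key; the function names and input lines are illustrative assumptions, not part of any Hadoop library:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    """Map: emit a (word, 1) key/value pair for every word in each input line."""
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce: group the pairs by key and sum the values for each key."""
    for key, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (key, sum(count for _, count in group))

lines = ["Hadoop stores key value pairs", "Hadoop scales out"]
counts = dict(reduce_phase(map_phase(lines)))
print(counts)
```

In real Hadoop, the framework handles the sort-and-group step (the "shuffle") between the two phases and runs many mapper and reducer instances in parallel across the cluster; the programmer supplies only the two functions.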

