Best Practices for Hadoop Storage Format

Chandan Gaur | 07 Dec 2021

What is a Hadoop Storage Format?

A storage format defines how information is stored in a file or database, and the file extension usually indicates which format is used. Different data/file formats are used by the different Big Data stacks, including Spark, Hadoop, Hive, and others. This article gives an overview of the different Hadoop storage formats.

A framework for storing large data in distributed mode and processing those large datasets in a distributed fashion. Click to explore about, Apache Hadoop and Apache Spark with GPU

Standard Hadoop Storage File Formats

Some standard file formats are text files (CSV, XML) or binary files (images).

  • Text Data - This data comes in the form of CSV files or unstructured data such as tweets. CSV files are commonly used for exchanging data between Hadoop and external systems.
  • Structured Text Data - This is a more specialized form of text file, such as XML or JSON. Processing JSON is more challenging than XML because JSON has no tokens to mark the beginning or end of a record.
  • Sequence Files - These files store data in a binary format with a structure similar to CSV. Sequence files store only the data, and they do support block compression (see the PySpark sketch below).
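
Below is a minimal PySpark sketch of the two extremes above, a human-readable CSV file next to a binary SequenceFile written from key-value pairs. The paths, session name, and sample data are illustrative assumptions, not part of the original article.

```python
# Minimal PySpark sketch (illustrative paths and data): a CSV text file
# alongside a binary SequenceFile built from key-value pairs.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("storage-formats-demo").getOrCreate()
sc = spark.sparkContext

# Text data: CSV is human-readable and easy to exchange with external systems.
users = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
users.write.mode("overwrite").csv("/tmp/users_csv", header=True)

# Sequence file: binary key-value records that support block compression.
pairs = sc.parallelize([(1, "alice"), (2, "bob")])
pairs.saveAsSequenceFile("/tmp/users_seq")

# Read both back to confirm the round trip.
csv_back = spark.read.csv("/tmp/users_csv", header=True, inferSchema=True)
seq_back = sc.sequenceFile("/tmp/users_seq")
print(csv_back.count(), seq_back.count())
```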

What is Hadoop Storage Serialization?

Serialization is the process of turning data structures into a byte stream, either for storage or for transmission over a network. Deserialization is the reverse process of converting a byte stream back into data structures. Writable is the main serialization format used by Hadoop. Writables are fast and compact but difficult to extend or use from languages other than Java, so several other serialization frameworks are used within the Hadoop ecosystem.

  • Thrift - Developed for implementing cross-language interfaces to services. It uses an IDL file to generate stub code that RPC clients and servers use to communicate seamlessly across programming languages.
  • Protocol Buffers (protobuf) - Provides facilities to exchange data between services written in different languages and, like Thrift, is defined via an IDL. Also like Thrift, it is neither splittable nor compressible and has no native MapReduce support.
  • Avro Files - Avro is a language-neutral data serialization system. Avro stores the schema as metadata together with the data, it supports MapReduce, and its data files are block-compressible and splittable. Avro also supports schema evolution, which makes it a better choice than Sequence Files (a short Avro example follows this list).
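
As an illustration of Avro's self-describing files, here is a small sketch using the fastavro Python library (one of several Avro bindings); the schema, records, and path are assumptions made for the example.

```python
# Avro sketch with fastavro (illustrative schema, records, and path).
from fastavro import writer, reader, parse_schema

schema = parse_schema({
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "name", "type": "string"},
        # A nullable field with a default can be added later without breaking
        # readers of older files, which is the schema evolution mentioned above.
        {"name": "email", "type": ["null", "string"], "default": None},
    ],
})

records = [
    {"id": 1, "name": "alice", "email": None},
    {"id": 2, "name": "bob", "email": "bob@example.com"},
]

# The schema travels in the file header alongside the data blocks.
with open("/tmp/users.avro", "wb") as out:
    writer(out, schema, records)

with open("/tmp/users.avro", "rb") as src:
    for rec in reader(src):
        print(rec)
```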

A distributed file system stores all types of files in the Hadoop file system and supports all standard formats such as GIF, text, CSV, TSV, XLS, etc. Click to explore about, Data Serialization in Apache Hadoop

Columnar Format

The columnar data format is useful for modern Big Data applications and provides several benefits over row-oriented databases. When only a small subset of columns needs to be accessed, a columnar format is the better choice; when many columns of each record must be retrieved, a row-oriented database is preferable.

Some common columnar file formats are:

  • RC Files - The Record Columnar (RC) file was the first columnar file format used in Hadoop. The RCFile format was developed to give MapReduce applications fast data loading, quick query processing, and highly efficient use of storage space. RC files are well suited to querying, but writing an RC file requires more memory and computation than non-columnar file formats, and the format does not support schema evolution.
  • ORC Files - Optimized Row Columnar (ORC) files were created by Hortonworks to optimize performance in Hive. ORC files have the same benefits and limitations as RC files, just implemented better for Hadoop: they compress better than RC files and enable faster queries. ORC also does not support schema evolution. Because ORC was designed specifically for Hive, it cannot be used with non-Hive MapReduce interfaces such as Pig or Java, or with Impala. An ORC file holds groups of row data called stripes.
  • Parquet Files - Parquet is a columnar data format suitable for the various MapReduce interfaces such as Java, Hive, and Pig, and it is also ideal for other processing engines such as Impala and Spark. Parquet matches RC and ORC in read performance but is slower to write than other columnar formats. Parquet supports schema evolution: new columns can be added at the end of the structure. Currently, Hive and Impala can query such added columns, but other tools such as Pig may face challenges (a PySpark sketch of columnar reads follows this list).
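
The column-pruning behaviour that makes these formats attractive can be sketched in PySpark as below; the DataFrame, paths, and column names are illustrative, and the point is simply that a query touching one column does not have to read the others from disk.

```python
# Columnar formats in PySpark (illustrative data and paths): write the same
# DataFrame as Parquet and ORC, then read back only one column.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("columnar-demo").getOrCreate()

df = spark.range(1_000_000).select(
    F.col("id").alias("user_id"),
    (F.col("id") % 100).alias("score"),
    F.concat(F.lit("user_"), F.col("id").cast("string")).alias("name"),
)

# Both formats are columnar and compressed by default.
df.write.mode("overwrite").parquet("/tmp/events_parquet")
df.write.mode("overwrite").orc("/tmp/events_orc")

# Only the 'score' column is scanned; the other columns are pruned.
spark.read.parquet("/tmp/events_parquet").select("score").agg(F.avg("score")).show()
spark.read.orc("/tmp/events_orc").select("score").agg(F.avg("score")).show()
```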

Why Does Adopting the Right Hadoop Storage Format Matter?

A massive bottleneck for HDFS-enabled applications like Spark and MapReduce is the time it takes to find relevant data in a particular location and to write the data back to another location. These issues are exacerbated by the difficulties of managing large datasets, such as evolving schemas or storage constraints. The various file formats have evolved as a way to ease these issues across a number of use cases.

Here are some significant benefits of choosing an appropriate file format -

  • Faster read times
  • Faster write times
  • Splittable files
  • Schema evolution support
  • Advanced compression support

Some file formats are designed for general use (such as Spark or MapReduce), others are designed for more specific use cases, and some are designed with particular data characteristics in mind, so there is a lot of choice.


A Data Catalog for Hadoop gives users and analysts an inventory of the available data, which can then be analyzed to gain insights. Click to explore about, Data Catalog for Hadoop

How to Adopt a File Format?

There are three types of performance to consider when choosing a file format.

  • Write performance - how fast the data can be written.
  • Partial read performance - how fast individual columns within a file can be read.
  • Full read performance - how fast every data element in a file can be read.

A columnar, compressed file format such as ORC or Parquet is the choice for partial and full read performance, but it comes at the expense of write performance. Uncompressed CSV files are fast to write but, lacking column orientation and compression, are slow to read.

Many options can be used to store data. If intermediate data needs to be stored between MapReduce jobs, Sequence Files are preferred. If query performance is most important, ORC or Parquet are optimal, but these files take longer to write. If the schema is going to change over time, Avro is the best choice, although query performance will be slower than with ORC or Parquet. CSV files are best when extracting data from Hadoop to bulk-load into a database.
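
A short sketch of this guidance follows, with assumed paths and a toy DataFrame; note that the Avro writer may require the external spark-avro package, depending on the Spark distribution.

```python
# Choosing a format per use case (illustrative paths and data).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-choice").getOrCreate()
df = spark.createDataFrame([(1, "alice", 90), (2, "bob", 75)],
                           ["id", "name", "score"])

# Query-heavy analytics: Parquet (or ORC) trades slower writes for fast scans.
df.write.mode("overwrite").parquet("/tmp/scores_parquet")

# Evolving schema: Avro keeps the schema with the data.
# (May need the spark-avro package: --packages org.apache.spark:spark-avro_2.12:<version>)
df.write.mode("overwrite").format("avro").save("/tmp/scores_avro")

# Bulk export for loading into an external database: plain CSV.
df.write.mode("overwrite").csv("/tmp/scores_export", header=True)
```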

What are the benefits of Hadoop Storage Formats?

The benefits of Hadoop Storage Format are listed below:

  • Faster read times
  • Faster write times
  • Splittable files
  • Schema evolution support (allowing you to change the fields in a data set).
  • Advanced compression support (compress the files with a compression codec without sacrificing these features).

What are the best practices for Hadoop File Storage?

  • When only a small subset of columns needs to be accessed, use a columnar data format.
  • When many columns of each record must be retrieved, use a row-oriented database instead of a columnar one.
  • If the schema changes over time, use Avro instead of ORC or Parquet.
  • If queries need to be run frequently, use ORC or Parquet instead of Avro.
  • If columns need to be added over time, use Parquet instead of ORC (a schema-evolution sketch follows this list).
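
The last point, adding a column in Parquet, can be sketched with Spark's mergeSchema read option; the paths, column names, and batch layout are illustrative assumptions.

```python
# Parquet column addition (illustrative): a second batch introduces a new
# column, and mergeSchema reconciles the two layouts at read time.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-evolution").getOrCreate()

v1 = spark.createDataFrame([(1, "alice")], ["id", "name"])
v1.write.mode("overwrite").parquet("/tmp/users_evolving/batch=1")

# The new 'email' column is appended at the end of the structure.
v2 = spark.createDataFrame([(2, "bob", "bob@example.com")], ["id", "name", "email"])
v2.write.mode("overwrite").parquet("/tmp/users_evolving/batch=2")

# Older rows simply show null for the column they never had.
merged = spark.read.option("mergeSchema", "true").parquet("/tmp/users_evolving")
merged.printSchema()
merged.show()
```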


Concluding File Storage

Evaluating the popular storage techniques in the Hadoop ecosystem shows that Apache Avro is a fast, universal encoder for structured data. Thanks to very efficient serialization and deserialization, this format guarantees outstanding performance whenever all the attributes of a record need to be accessed at the same time, as in data transportation, staging areas, etc. Apache Parquet and Apache Kudu deliver excellent flexibility between fast data ingestion, quick random data lookup, and scalable data analytics, while keeping the system simple, with only one technology for storing the data. Kudu excels at faster random lookups, while Parquet excels at faster data scans and ingestion.