XenonStack Recommends

Enterprise Data Management

Apache Hive Security with Kerberos Authentication | A Quick Guide

Chandan Gaur | 09 March 2022

Introduction to Apache Hive

Apache Hive is a Hadoop component typically used by data analysts. It is a data warehouse system built on top of the open Hadoop platform and used for data analysis, summarization, and querying of large data sets. It also enables smooth execution and processing of large volumes of data, as it converts SQL-like queries into MapReduce jobs. Hive additionally supports custom MapReduce scripts that can be plugged into queries.

Hive also increases schema design flexibility and supports data serialization and deserialization. Its SQL-like interface makes the component familiar to users who already query data with SQL. Thus we can say that Apache Hive provides database-style querying on top of Hadoop. A significant limitation, however, is that it cannot be used for online transaction processing; it is best suited to data warehouse tasks.
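To make the SQL-like interface concrete, here is a minimal sketch of a HiveQL script of the kind Hive compiles into MapReduce jobs. The table name, columns, and file path are hypothetical and purely illustrative:

```shell
# Write a small HiveQL script to a scratch file; page_views and its columns
# are hypothetical and only illustrate Hive's SQL-like syntax.
cat > /tmp/sample_query.hql <<'EOF'
-- Count views per URL from a simple logs table
CREATE TABLE IF NOT EXISTS page_views (view_time STRING, url STRING);
SELECT url, COUNT(*) AS views
FROM page_views
GROUP BY url;
EOF
# On a machine with Hive installed, you would submit it with:
#   hive -f /tmp/sample_query.hql
```

Hive translates the `GROUP BY` aggregation above into map and reduce stages over the data stored in HDFS.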


What is the architecture of Apache Hive?

The Apache Hive architecture describes its components, how they work, and how they interact with one another. Apache Hive mainly consists of three components -
  • Hive Clients
  • Hive Services
  • Hive storage and computing

Hive Client

The Hive Client layer provides different drivers for different applications. For Thrift-based applications, it provides a Thrift client for communication. For Java applications, it provides JDBC drivers, and for other applications it provides ODBC drivers. Apache Hive thereby supports applications written in C++, Java, Python, and other languages, so clients can write code in their preferred language.
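For the JDBC case, the connection string follows a fixed `jdbc:hive2://` scheme. A minimal sketch, where the hostname and database are assumptions and 10000 is HiveServer2's default Thrift port:

```shell
# Build a HiveServer2 JDBC URL; hive.example.com is a hypothetical host,
# 10000 is the default HiveServer2 port, and "default" is the database.
HIVE_HOST="hive.example.com"
HIVE_PORT=10000
JDBC_URL="jdbc:hive2://${HIVE_HOST}:${HIVE_PORT}/default"
echo "$JDBC_URL"
# With the Hive client tools installed, you would connect with:
#   beeline -u "$JDBC_URL"
```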

Hive Services

Hive Services handle the interaction of clients with Hive. If a client wants to perform any query-related operation, it has to communicate through the Hive Services. The Driver component communicates with all types of client applications and forwards their requests to the metastore and file systems for further processing. Hive Services also provide interfaces such as the web interface and the CLI for running queries.

Apache Hive storage and computing

  • Metadata for tables created in Hive is stored in the Hive metastore database.
  • Query results and data loaded into tables are stored on the Hadoop cluster in HDFS.
  • Hive internally uses the Hadoop MapReduce framework to execute queries.
  • Apache Hive uses the underlying HDFS for distributed storage.
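The HDFS-backed storage above is controlled by the metastore warehouse directory. A sketch of the relevant hive-site.xml property follows; `/user/hive/warehouse` is Hive's conventional default, but verify the path for your distribution:

```shell
# Generate a hive-site.xml fragment setting the warehouse directory on HDFS.
# /user/hive/warehouse is the conventional default location.
cat > /tmp/hive-site-storage.xml <<'EOF'
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
EOF
# On a running cluster, you could inspect the directory with:
#   hdfs dfs -ls /user/hive/warehouse
```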

How to secure Apache Hive with Kerberos?

Install the KDC server
apt-get install krb5-kdc krb5-admin-server
Now open the KDC server configuration file -
nano /etc/krb5.conf
Set the kdc and admin_server properties to the FQDN of the KDC server host, as in this example -
kdc = my.kdc.server
admin_server = my.kdc.server
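Putting those two properties in context, a minimal /etc/krb5.conf sketch follows; the realm name HADOOP.COM and the hostname mirror the example values in this guide and must be replaced with your own:

```shell
# Write a minimal krb5.conf example to a scratch path; HADOOP.COM and
# my.kdc.server are placeholders taken from this guide's examples.
cat > /tmp/krb5.conf.example <<'EOF'
[libdefaults]
    default_realm = HADOOP.COM

[realms]
    HADOOP.COM = {
        kdc = my.kdc.server
        admin_server = my.kdc.server
    }
EOF
```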
Now create the Kerberos database; on Debian/Ubuntu this is typically done with the following utility -
krb5_newrealm
Now start the KDC server and KDC admin.
service krb5-kdc restart
service krb5-admin-server restart
Create a Kerberos admin principal as -
kadmin.local -q "addprinc admin/admin"
The admin principal must have permissions in the KDC ACL. Make sure there is an entry for the realm you are using; for example, for an admin/admin@HADOOP.COM principal you should have an entry -
*/admin@HADOOP.COM *
Restart the kadmin server after saving the kadm5.acl file -
service krb5-admin-server restart
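With the KDC in place, Hive itself needs a service principal, a keytab, and the matching HiveServer2 settings. A sketch follows; the hostname, keytab path, and realm are assumptions, and the kadmin commands are shown as comments because they require a live KDC:

```shell
# On the KDC, create a service principal for HiveServer2 and export a keytab
# (hive.example.com and HADOOP.COM are placeholders):
#   kadmin.local -q "addprinc -randkey hive/hive.example.com@HADOOP.COM"
#   kadmin.local -q "xst -k /etc/hive/hive.keytab hive/hive.example.com@HADOOP.COM"

# Point HiveServer2 at the principal and keytab via hive-site.xml:
cat > /tmp/hive-site-kerberos.xml <<'EOF'
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hive/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/etc/hive/hive.keytab</value>
</property>
EOF
```

A client would then obtain a ticket with `kinit` before connecting, and pass the server principal in the beeline JDBC URL (e.g. `;principal=hive/_HOST@HADOOP.COM`).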

A Comprehensive Approach

Apache Hive provides a platform for summarizing, analyzing, and querying large amounts of data. To understand more about Data Warehouses and Data Analysis, we recommend taking the following steps -