Apache Hive Data warehouse and Kerberos Authentication


Introduction to Apache Hive

Apache Hive is a Hadoop component typically used by analysts. It is a data warehouse system built on the open Hadoop platform and is used for data analysis, summarization, and querying of large data sets. It also enables the smooth execution and processing of large volumes of data, as it converts SQL-like queries into MapReduce jobs. Hive additionally supports custom MapReduce scripts that can be plugged into queries, and it improves schema design flexibility as well as data serialization and deserialization. Its SQL-like interface makes it familiar to users who already query data with SQL. Thus, Apache Hive provides database-style querying on top of Hadoop. A significant drawback, however, is that Hive cannot be used for online transaction processing; it is best suited for data warehouse tasks.
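Conceptually, Hive compiles a query such as SELECT region, SUM(amount) FROM sales GROUP BY region into a MapReduce job: the map phase emits (key, value) pairs, the shuffle groups them by key, and the reduce phase aggregates each group. A rough sketch of those phases using standard Unix tools on hypothetical sales data (not Hive itself):

```shell
# Hypothetical input: one "region,amount" record per line.
printf 'east,10\nwest,5\neast,7\n' > /tmp/sales.csv

# Map: emit (region, amount) pairs.
# Shuffle: sort groups the pairs by key.
# Reduce: sum the amounts for each region.
awk -F, '{print $1, $2}' /tmp/sales.csv \
  | sort \
  | awk '{sum[$1] += $2} END {for (k in sum) print k, sum[k]}'
```

On a real cluster, Hive generates and schedules the equivalent map and reduce tasks across HDFS data automatically.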

Apache Hive Architecture

The Apache Hive architecture consists of several components and the interactions among them. Apache Hive mainly consists of three components -
  • Hive Clients
  • Hive Services
  • Hive storage and computing

Hive Clients

Hive provides different client drivers for different kinds of applications. For Thrift-based applications, it provides a Thrift client for communication. For Java applications, it provides JDBC drivers, and other applications are served by ODBC drivers. Apache Hive thereby supports applications written in C++, Java, Python, and other languages, making it a good choice for clients who want to write code in their preferred language.
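For example, a Java client using the JDBC driver connects to HiveServer2 with a jdbc:hive2:// URL. The sketch below simply assembles such a URL from hypothetical host, port, and database values (10000 is HiveServer2's conventional default port):

```shell
# Hypothetical connection parameters for a HiveServer2 instance.
host=hive-host.example.com
port=10000       # HiveServer2's default Thrift port
database=default

jdbc_url="jdbc:hive2://${host}:${port}/${database}"
echo "$jdbc_url"   # the URL a JDBC client would use to connect
```

A command-line client such as beeline could then connect with beeline -u "$jdbc_url".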

Hive Services

Hive services handle the interaction of clients with Hive. If a client wants to perform any query-related operation, it has to communicate through the Hive services. The driver in the architecture communicates with all types of client applications, passing their requests on to the metastore and file system for further processing. Hive services also provide interfaces such as the CLI and a web interface for running queries.

Apache Hive storage and computing

  • Metadata for tables created in Hive is stored in the Hive metastore database.
  • Query results and data loaded into tables are stored on the Hadoop cluster in HDFS.
  • Hive internally uses the Hadoop MapReduce framework to execute queries.
  • Apache Hive uses the underlying HDFS for distributed storage.
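By default, the data for a managed Hive table lives under the warehouse directory on HDFS (set by hive.metastore.warehouse.dir, conventionally /user/hive/warehouse), in a <database>.db/<table> subdirectory for non-default databases. A sketch computing that path for a hypothetical table:

```shell
# Conventional default warehouse root; configurable via hive.metastore.warehouse.dir.
warehouse=/user/hive/warehouse
db=sales_db      # hypothetical database name
table=orders     # hypothetical table name

hdfs_path="${warehouse}/${db}.db/${table}"
echo "$hdfs_path"
# On a live cluster one could then inspect it with: hdfs dfs -ls "$hdfs_path"
```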

Securing Apache Hive with Kerberos

Install the KDC server
apt-get install krb5-kdc krb5-admin-server
Now open the KDC server configuration file:
nano /etc/krb5.conf
Set the kdc and admin_server properties to the FQDN of the KDC server host, as in this example:
kdc = my.kdc.server
admin_server = my.kdc.server
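In context, these two lines belong in the [realms] section of /etc/krb5.conf. A fuller sketch, assuming the HADOOP.COM realm used later in this guide and the hypothetical host my.kdc.server:

```
[libdefaults]
    default_realm = HADOOP.COM

[realms]
    HADOOP.COM = {
        kdc = my.kdc.server
        admin_server = my.kdc.server
    }

[domain_realm]
    .hadoop.com = HADOOP.COM
    hadoop.com = HADOOP.COM
```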
Now create the Kerberos database.
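On Debian-based systems, the realm database is typically created with krb5_newrealm, a wrapper around kdb5_util create -s; it prompts for a master key password. A sketch:

```
# Creates the realm database and stash file; prompts for the master password.
krb5_newrealm
```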
Now start the KDC server and KDC admin.
service krb5-kdc restart
service krb5-admin-server restart
Create a Kerberos admin principal as -
 kadmin.local -q "addprinc admin/admin"
The admin principal must have permissions in the KDC ACL file (kadm5.acl). Make sure there is an entry for the realm you are using; for an admin/admin@HADOOP.COM principal, you should have an entry -
*/admin@HADOOP.COM *
Restart the kadmin server after saving the kadm5.acl file -
service krb5-admin-server restart
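With the KDC running, securing Hive itself typically means creating a Kerberos service principal and keytab for HiveServer2 and pointing hive-site.xml at them. A sketch, assuming the HADOOP.COM realm above and a hypothetical host hive-host.hadoop.com:

```
# Service principal for HiveServer2 (hypothetical host; adjust to your cluster).
kadmin.local -q "addprinc -randkey hive/hive-host.hadoop.com@HADOOP.COM"

# Export its key to a keytab that the Hive service account can read.
kadmin.local -q "xst -k /etc/hive/conf/hive.keytab hive/hive-host.hadoop.com@HADOOP.COM"
```

The keytab path and principal would then be referenced from HiveServer2's authentication settings in hive-site.xml.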

A Comprehensive Approach

Apache Hive provides a platform for summarizing, analyzing, and querying large amounts of data on top of Hadoop, and Kerberos authentication provides the security layer needed to control access to it.
