This project includes tools to build a Hive SerDe to index Hive tables to Solr.
- Index Hive table data to Solr.
- Read Solr index data to a Hive table.
- Kerberos support for securing communication between Hive and Solr.
- As of v2.2.4 of the SerDe, integration with Lucidworks Fusion is supported:
  - Fusion’s index pipelines can be used to index data to Fusion.
  - Fusion’s query pipelines can be used to query Fusion’s Solr instance for data to insert into a Hive table.
Note: This version of hive-solr supports Hive 3.0.0. For Hive 1.x support, see the hive_1x branch.
Note: hive-solr should only be used with Solr 5.0 or higher.
This project has a dependency on the solr-hadoop-common submodule (contained in a separate GitHub repository, https://github.com/lucidworks/solr-hadoop-common). This submodule must be initialized before building the SerDe .jar.
Warning: You must use Java 8 or higher to build the .jar.
To initialize the submodule, clone this repo, then run:
git submodule init
Once the submodule has been initialized, running git submodule update will fetch all the data from that project and check out the appropriate commit listed in the superproject. You must initialize and update the submodule before attempting to build the SerDe jar.
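Putting those steps together, a typical end-to-end sequence looks like this (a sketch; it assumes the repository lives at github.com/lucidworks/hive-solr, so substitute your own fork’s URL if needed):
# Clone the repository and pull in the solr-hadoop-common submodule
$ git clone https://github.com/lucidworks/hive-solr.git
$ cd hive-solr
$ git submodule init
$ git submodule update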
- If you are building from a branch, make sure that solr-hadoop-common is pointing to the correct SHA (see https://github.com/blog/2104-working-with-submodules for more details):
hive-solr $ git checkout <branch-name>
hive-solr $ cd solr-hadoop-common
hive-solr/solr-hadoop-common $ git checkout <SHA>
hive-solr/solr-hadoop-common $ cd ..
The build uses Gradle via the included wrapper (./gradlew), so you do not need to have Gradle installed before attempting to build.
To build the jar files, run this command:
./gradlew clean shadowJar --info
This will build a single .jar file, solr-hive-serde/build/libs/solr-hive-serde-4.0.0.jar, which can be used with Hive v3.0. Other Hive versions (such as v2.x) may work with this jar, but have not been tested.
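After the build completes, you can confirm the jar was produced (the file name will differ if the project version is not 4.0.0):
$ ls solr-hive-serde/build/libs/solr-hive-serde-4.0.0.jar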
If GitHub SSH access is not configured, the following error will occur when fetching the submodule:
Cloning into 'solr-hadoop-common'...
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
fatal: clone of '[email protected]:LucidWorks/solr-hadoop-common.git' into submodule path 'solr-hadoop-common' failed
To fix this error, follow GitHub’s tutorial on generating an SSH key.
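A minimal sketch of that setup, assuming an ed25519 key in the default location (the email address is a placeholder, and the generated public key must be added to your GitHub account afterwards):
# Generate a new SSH key and add it to the local SSH agent
$ ssh-keygen -t ed25519 -C "[email protected]"
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_ed25519
# Verify that GitHub accepts the key
$ ssh -T [email protected]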
In order for the Hive SerDe to work with Solr, the SerDe jar must be added to Hive’s classpath using the hive.aux.jars.path capability. There are several options for this, described below.
It’s considered a best practice to use a single directory for all auxiliary jars you may want to add to Hive so you only need to define a single path. However, you must then copy any jars you want to use to that path.
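For example, a sketch of creating that directory and copying in the SerDe jar built earlier (adjust paths and permissions for your environment):
# Create the shared auxiliary jar directory and copy the SerDe jar into it
$ sudo mkdir -p /usr/hive/auxlib
$ sudo cp solr-hive-serde/build/libs/solr-hive-serde-4.0.0.jar /usr/hive/auxlib/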
Note: The following options all assume you have created such a directory at /usr/hive/auxlib; if you use another path, update the path in the examples accordingly.
- If you use Hive with Ambari (as with the Hortonworks HDP distribution), go to Hive > Configs > Advanced, and scroll down to Advanced hive-env > hive-env template. Find the section where HIVE_AUX_JARS_PATH is defined, and add the path to each line which starts with export. The result will look like:

# Folder containing extra libraries required for hive compilation/execution can be controlled by:
if [ "${HIVE_AUX_JARS_PATH}" != "" ]; then
  if [ -f "${HIVE_AUX_JARS_PATH}" ]; then
    export HIVE_AUX_JARS_PATH=${HIVE_AUX_JARS_PATH},/usr/hive/auxlib
  elif [ -d "/usr/hdp/current/hive-webhcat/share/hcatalog" ]; then
    export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar,/usr/hive/auxlib
  fi
elif [ -d "/usr/hdp/current/hive-webhcat/share/hcatalog" ]; then
  export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar,/usr/hive/auxlib
fi
- If you are not using Ambari or a similar cluster management tool, you can add the jar location to hive/conf/hive-site.xml:

<property>
  <name>hive.aux.jars.path</name>
  <value>/usr/hive/auxlib</value>
</property>
- Another option is to launch Hive with the path defined with the auxpath option:

hive --auxpath /usr/hive/auxlib
There are also other approaches that could be used. Keep in mind, though, that the jar must be loaded into the classpath; adding it with the ADD JAR command is not sufficient.
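Once Hive has been restarted with the new configuration, you can sanity-check that the path was picked up from within the Hive shell (this simply prints the current value of the property):
hive> SET hive.aux.jars.path;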
Indexing data to Solr or Fusion requires creating a Hive external table. An external table allows the data in the table to be used (read or write) by another system or application outside of Hive.
For integration with Solr, the external table allows Hive to read data from and write data to Solr.
To create an external table for Solr, you can use a command similar to below. The properties available are described after the example.
hive> CREATE EXTERNAL TABLE solr (id string, field1_s string, field2_i int) (1)
STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler' (2)
LOCATION '/tmp/solr' (3)
TBLPROPERTIES('solr.server.url' = 'http://localhost:8888/solr', (4)
'solr.collection' = 'collection1',
'solr.query' = '*:*');
(1) In this example, we have created an external table named "solr", and defined a set of fields and types for the data we will store in the table. See the section Defining Fields for Solr below for best practices when naming fields.
(2) This defines a custom storage handler (STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler'), which is one of the classes included with the Hive SerDe jar.
(3) The LOCATION indicates the location in HDFS where the table data will be stored. In this example, we have chosen to use /tmp/solr.
(4) In the section TBLPROPERTIES, we define several parameters for Solr so the data can be indexed to the right Solr installation and collection. See the section Table Properties below for details about these parameters.
If the table needs to be dropped at a later time, you can use the DROP TABLE command in Hive. This will remove the metadata stored in the table in Hive, but will not modify the underlying data (in this case, the Solr index).
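For example, dropping the external table defined above removes only the table metadata in Hive; the documents already indexed in Solr are left intact:
hive> DROP TABLE solr;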
When defining the field names for the Hive table, keep in mind that the field types used to define the table in Hive are not sent to Solr when indexing data from a Hive table. The field names are sent, but not the field types. The field types must match or be compatible, however, for queries to complete properly.
The reason why this might be a problem is due to a Solr feature called schema guessing. This is Solr’s default mode, and when it is enabled Solr looks at incoming data and makes a best guess at the field type.
It can happen that Solr’s guess at the correct type may be different from the type defined in Hive, and if this happens you will get a ClassCastException in response to queries.
To avoid this problem, you can use a Solr feature called dynamic fields. These direct Solr to use specific field types based on a prefix or suffix found on an incoming field name, which overrides Solr guessing at the type. Solr includes by default dynamic field rules for nearly all types it supports, so you only need to use the same suffix on your field names in your Hive tables for the correct type to be defined.
To illustrate this, note the field names in the table example above:
CREATE EXTERNAL TABLE solr (id string, field1_s string, field2_i int)
In this example, we have defined the id field as a string, field1_s as a string, and field2_i as an integer. In Solr’s default schema, there is a dynamic field rule that any field with a _s suffix should be a string. Similarly, there is another rule that any field with _i as a suffix should be an integer. This allows us to make sure the field types match.
An alternative to this is to disable Solr’s field guessing altogether, but this would require you to create all of your fields in Solr before indexing any content from Hive.
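To make the contrast concrete, here is a hypothetical pair of table definitions (the table names, location, and collection are made up for the example): the first leaves the type of price up to Solr’s schema guessing, while the second uses the _f suffix so Solr’s default dynamic field rule for floats applies.
-- Risky: Solr may guess a type for "price" that conflicts with Hive's FLOAT
CREATE EXTERNAL TABLE solr_risky (id string, price float)
STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler'
LOCATION '/tmp/solr_risky'
TBLPROPERTIES('solr.server.url' = 'http://localhost:8888/solr',
              'solr.collection' = 'collection1');

-- Safer: the _f suffix matches Solr's default dynamic field rule for float fields
CREATE EXTERNAL TABLE solr_safe (id string, price_f float)
STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler'
LOCATION '/tmp/solr_safe'
TBLPROPERTIES('solr.server.url' = 'http://localhost:8888/solr',
              'solr.collection' = 'collection1');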
For more information about these features and options, please see the Apache Solr Reference Guide.
The following parameters can be set when defining the table properties:
- solr.zkhost: The location of the ZooKeeper quorum if using Solr in SolrCloud mode. If this property is set along with the solr.server.url property, the solr.server.url property will take precedence.
- solr.server.url: The location of the Solr instance if not using Solr in SolrCloud mode. If this property is set along with the solr.zkhost property, this property will take precedence.
- solr.collection: The Solr collection for this table. If not defined, an exception will be thrown.
- solr.query: The specific Solr query to execute to read this table. If not defined, a default of *:* will be used. This property is not needed when loading data to a table, but is needed when defining the table so Hive can later read the table.
- lww.commit.on.close: If true, inserts will be automatically committed when the connection is closed. True is the default.
- lww.jaas.file: Used only when indexing to or reading from a Solr cluster secured with Kerberos. This property defines the path to a JAAS file that contains a service principal and keytab location for a user who is authorized to read from and write to Solr and Hive. The JAAS configuration file must be copied to the same path on every node where a Node Manager is running (i.e., every node where map/reduce tasks are executed). Here is a sample section of a JAAS file:
Client { (1)
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/data/solr-indexer.keytab" (2)
  storeKey=true
  useTicketCache=true
  debug=true
  principal="[email protected]"; (3)
};
(1) The name of this section of the JAAS file. This name will be used with the lww.jaas.appname parameter.
(2) The location of the keytab file.
(3) The service principal name. This should be a different principal than the one used for Solr, but must have access to both Solr and Hive.
- lww.jaas.appname: Used only when indexing to or reading from a Solr cluster secured with Kerberos. This property provides the name of the section in the JAAS file that includes the correct service principal and keytab path.
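For example, a sketch of how the two Kerberos-related properties fit together in a table definition (the ZooKeeper hosts, collection, and file path are placeholders; 'Client' matches the section name in the sample JAAS file above):
hive> CREATE EXTERNAL TABLE solr_secure (id string, field1_s string)
      STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler'
      LOCATION '/tmp/solr_secure'
      TBLPROPERTIES('solr.zkhost' = 'zknode1:2181,zknode2:2181,zknode3:2181/solr',
                    'solr.collection' = 'collection1',
                    'solr.query' = '*:*',
                    'lww.jaas.file' = '/data/jaas-client.conf',
                    'lww.jaas.appname' = 'Client');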
If you use Lucidworks Fusion, you can index data from Hive to Solr via Fusion’s index pipelines. These pipelines allow you several options for further transforming your data.
Tip: If you are using Fusion v3.0.x, the Hive SerDe is already included with Fusion. If you are using Fusion 3.1.x or higher, you will need to download the Hive SerDe from http://lucidworks.com/connectors/; choose the proper Hadoop distribution and the resulting .zip file will include the Hive SerDe. A 2.2.4 or higher jar built from this repository will also work with Fusion 2.4.x releases.
This is an example Hive command to create an external table to index documents in Fusion and to query the table later.
hive> CREATE EXTERNAL TABLE fusion (id string, field1_s string, field2_i int)
STORED BY 'com.lucidworks.hadoop.hive.FusionStorageHandler'
LOCATION '/tmp/fusion'
TBLPROPERTIES('fusion.endpoints' = 'http://localhost:8764/api/apollo/index-pipelines/<pipeline>/collections/<collection>/index',
'fusion.fail.on.error' = 'false',
'fusion.buffer.timeoutms' = '1000',
'fusion.batchSize' = '500',
'fusion.realm' = 'KERBEROS',
'fusion.user' = '[email protected]',
'java.security.auth.login.config' = '/path/to/JAAS/file',
'fusion.jaas.appname' = 'FusionClient',
'fusion.query.endpoints' = 'http://localhost:8764/api/apollo/query-pipelines/pipeline-id/collections/collection-id',
'fusion.query' = '*:*');
In this example, we have created an external table named "fusion", and defined a custom storage handler (STORED BY 'com.lucidworks.hadoop.hive.FusionStorageHandler'), which is a class included with the Hive SerDe jar designed for use with Fusion.
Note that all of the same caveats about field types discussed in the section Defining Fields for Solr apply to Fusion as well. In Fusion, however, you have the option of using an index pipeline to perform specific field mapping instead of using dynamic fields.
The LOCATION indicates the location in HDFS where the table data will be stored. In this example, we have chosen to use /tmp/fusion.
In the section TBLPROPERTIES, we define several properties for Fusion so the data can be indexed to the right Fusion installation and collection:
- fusion.endpoints: The full URL to the index pipeline in Fusion. The URL should include the pipeline name and the collection the data will be indexed to.
- fusion.fail.on.error: If true, indexing will stop when an error is encountered, such as a row that could not be parsed. This is false by default.
- fusion.buffer.timeoutms: The amount of time, in milliseconds, to buffer documents before sending them to Fusion. The default is 1000. Documents will be sent to Fusion when either this value or fusion.batchSize is met.
- fusion.batchSize: The number of documents to batch before sending the batch to Fusion. The default is 500. Documents will be sent to Fusion when either this value or fusion.buffer.timeoutms is met.
- fusion.realm: This is used with fusion.user and fusion.password to authenticate to Fusion for indexing data. Two options are supported, KERBEROS or NATIVE. Kerberos authentication requires the additional definition of a JAAS file; the properties java.security.auth.login.config and fusion.jaas.appname are used to define the location of the JAAS file and the section of the file to use. Native authentication uses a Fusion-defined username and password; this user must exist in Fusion and have the proper permissions to index documents (a NATIVE example is sketched after this list).
- fusion.user: The Fusion username or Kerberos principal to use for authentication to Fusion. If a Fusion username is used ('fusion.realm' = 'NATIVE'), the fusion.password must also be supplied.
- fusion.password: This property is not shown in the example above. It is the password for the fusion.user when the fusion.realm is NATIVE.
- java.security.auth.login.config: This property defines the path to a JAAS file that contains a service principal and keytab location for a user who is authorized to read from and write to Fusion and Hive. The JAAS configuration file must be copied to the same path on every node where a Node Manager is running (i.e., every node where map/reduce tasks are executed). Here is a sample section of a JAAS file:
Client { (1)
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/data/fusion-indexer.keytab" (2)
  storeKey=true
  useTicketCache=true
  debug=true
  principal="[email protected]"; (3)
};
(1) The name of this section of the JAAS file. This name will be used with the fusion.jaas.appname parameter.
(2) The location of the keytab file.
(3) The service principal name. This should be a different principal than the one used for Fusion, but must have access to both Fusion and Hive. This name is used with the fusion.user parameter described above.
- fusion.jaas.appname: Used only when indexing to or reading from Fusion when it is secured with Kerberos. This property provides the name of the section in the JAAS file that includes the correct service principal and keytab path.
- fusion.query.endpoints: The full URL to a query pipeline in Fusion. The URL should include the pipeline name and the collection the data will be read from. You should also specify the request handler to be used. If you do not intend to query your Fusion data from Hive, you can skip this parameter.
- fusion.query: The query to run in Fusion to select records to be read into Hive. This is *:* by default, which selects all records in the index. If you do not intend to query your Fusion data from Hive, you can skip this parameter.
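As referenced above, here is a sketch of a table definition that uses native authentication instead of Kerberos (the endpoints, collection, and credentials are placeholders; in a real deployment the user must exist in Fusion with permission to index documents):
hive> CREATE EXTERNAL TABLE fusion_native (id string, field1_s string, field2_i int)
      STORED BY 'com.lucidworks.hadoop.hive.FusionStorageHandler'
      LOCATION '/tmp/fusion_native'
      TBLPROPERTIES('fusion.endpoints' = 'http://localhost:8764/api/apollo/index-pipelines/<pipeline>/collections/<collection>/index',
                    'fusion.realm' = 'NATIVE',
                    'fusion.user' = 'hive-indexer',
                    'fusion.password' = 'example-password',
                    'fusion.query.endpoints' = 'http://localhost:8764/api/apollo/query-pipelines/<pipeline>/collections/<collection>',
                    'fusion.query' = '*:*');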
Once the table is configured, any syntactically correct Hive query will be able to query the index.
For example, to select three fields named "id", "field1_s", and "field2_i" from the "solr" table, you would use a query such as:
hive> SELECT id, field1_s, field2_i FROM solr;
Replace the table name as appropriate to use this example with your data.
To join data from tables, you can make a request such as:
hive> SELECT s.id, s.field1_s, s.field2_i FROM solr s
      JOIN sometable t
      ON s.id = t.id;
And finally, to insert data to a table, simply use the Solr table as the target for the Hive INSERT statement, such as:
hive> INSERT INTO solr
SELECT id, field1_s, field2_i FROM sometable;
Solr includes a small number of sample documents for use when getting started. One of these is a CSV file containing book metadata. This file is found in your Solr installation, at $SOLR_HOME/example/exampledocs/books.csv.
Using the sample books.csv file, we can see a detailed example of creating a table, loading data to it, and indexing that data to Solr.
CREATE TABLE books (id STRING, cat STRING, title STRING, price FLOAT, in_stock BOOLEAN, author STRING, series STRING, seq INT, genre STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; --(1)
LOAD DATA LOCAL INPATH '/solr/example/exampledocs/books.csv' OVERWRITE INTO TABLE books; --(2)
CREATE EXTERNAL TABLE solr (id STRING, cat_s STRING, title_s STRING, price_f FLOAT, in_stock_b BOOLEAN, author_s STRING, series_s STRING, seq_i INT, genre_s STRING) --(3)
STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler' --(4)
LOCATION '/tmp/solr' --(5)
TBLPROPERTIES('solr.zkhost' = 'zknode1:2181,zknode2:2181,zknode3:2181/solr',
'solr.collection' = 'gettingstarted',
'solr.query' = '*:*', --(6)
'lww.jaas.file' = '/data/jaas-client.conf'); --(7)
INSERT OVERWRITE TABLE solr SELECT b.* FROM books b;
(1) Define the table books, and provide the field names and field types that will make up the table.
(2) Load the data from the books.csv file.
(3) Create an external table named solr, and provide the field names and field types that will make up the table. These are the same fields as in the local books table, but with Solr dynamic field suffixes added so the field types match (see Defining Fields for Solr above), allowing us to index all of the same data to Solr.
(4) Define the custom storage handler provided by the solr-hive-serde-4.0.0.jar.
(5) Define the storage location in HDFS.
(6) The query to run in Solr to read records from Solr for use in Hive.
(7) Define the location of ZooKeeper (since this example uses SolrCloud), the collection in Solr to index the data to, and the query to use when reading the table. This example also refers to a JAAS configuration file that will be used to authenticate to the Kerberized Solr cluster.
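Once the INSERT OVERWRITE statement completes, a quick query from Hive confirms that the book data can be read back through the external table (column names follow the table definition above):
hive> SELECT COUNT(*) FROM solr;
hive> SELECT id, title_s, author_s FROM solr LIMIT 5;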
- Fork this repo (i.e., <username|organization>/hadoop-solr), following the fork a repo tutorial. Then clone the forked repo on your local machine:
$ git clone https://github.com/<username|organization>/hadoop-solr.git
- Configure remotes as described in the configuring remotes tutorial.
- Create a new branch:
$ git checkout -b new_branch
$ git push origin new_branch
Use the creating branches tutorial to create the branch from the GitHub UI if you prefer.
- Develop on the new_branch branch only; do not merge new_branch into your master. Commit changes to new_branch as often as you like:
$ git add <filename>
$ git commit -m 'commit message'
- Push your changes to GitHub:
$ git push origin new_branch
- Repeat the commit & push steps until your development is complete.
- Before submitting a pull request, fetch upstream changes that were done by other contributors:
$ git fetch upstream
- Then update master locally:
$ git checkout master
$ git pull upstream master
- Merge the master branch into new_branch in order to avoid conflicts:
$ git checkout new_branch
$ git merge master
- If conflicts happen, use the resolving merge conflicts tutorial to fix them.
- Push the master changes to the new_branch branch:
$ git push origin new_branch
- Add JUnit tests, as appropriate, to test your changes.
- When all testing is done, use the create a pull request tutorial to submit your change to the repo.
Note: Please be sure that your pull request sends only your changes, and no others. Check the diff of your branch against master before submitting.