Python client for the Impala distributed query engine.
Fully implemented:

- Lightweight, `pip`-installable package for connecting to Impala databases
- Fully DB API 2.0 (PEP 249)-compliant Python client (similar to sqlite or
  MySQL clients) supporting Python 2 and Python 3
- Runs on HiveServer2 and Beeswax; runs with Kerberos
- Converter to pandas `DataFrame`, allowing easy integration into the Python
  data stack (including scikit-learn and matplotlib)
In various phases of maturity:

- SQLAlchemy connector (see the sketch after this list); integration with Blaze
- `BigDataFrame` abstraction for performing pandas-style analytics on large
  datasets (similar to Spark's RDD abstraction); computation is pushed into
  the Impala engine
- scikit-learn-flavored wrapper for MADlib-style prediction, allowing for
  large-scale, distributed machine learning (see the Impala port of MADlib)
- Compiling UDFs written in Python into low-level machine code for execution
  by Impala (powered by Numba/LLVM)
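As a taste of the SQLAlchemy connector mentioned above, here is a minimal sketch; the `impala://` URL scheme is an assumption here, so check your installed impyla version for the registered dialect name:

```python
# Minimal sketch of the SQLAlchemy connector. The 'impala://' URL scheme
# is an assumption; check your impyla version for the dialect name.
from sqlalchemy import create_engine

engine = create_engine('impala://my.host.com:21050/default')
result = engine.execute('SELECT * FROM mytable LIMIT 5')
for row in result:
    print(row)
```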
Required for DB API connectivity:

- Python 2.6+ or 3.3+
- `six`
- `thrift>=0.8` (Python package only; no need for code-gen) for Python 2, or
  `thriftpy` for Python 3
- `thrift_sasl`
Required for UDFs (a rough usage sketch follows this list):

- `numba<=0.13.4` (which has a few requirements, like LLVM)
- `boost` (because `udf.h` depends on `boost/cstdint.hpp`)
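To illustrate why Numba is in the mix, here is a rough sketch of what a Python UDF looks like; the module paths and type names (`impala.udf`, `FunctionContext`, `IntVal`) are assumptions based on this release line, so consult the repo for the authoritative API:

```python
# Rough sketch of a Numba-compiled Impala UDF. The imports and the
# signature style below are assumptions for this release line; consult
# the impyla repo for the exact API.
from impala.udf import udf
from impala.udf.types import FunctionContext, IntVal

# Numba-style signature: the return type is "called" with the argument
# types. Every Impala UDF receives a FunctionContext as its first argument.
@udf(IntVal(FunctionContext, IntVal))
def incr(context, x):
    return x + 1
```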
Required for SQLAlchemy integration (and Blaze):

- `sqlalchemy`
Required for `BigDataFrame`:

- `pandas`
Required for Kerberos support:

- `python-sasl` (for Python 3 support, requires laserson/python-sasl@cython)
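With `python-sasl` installed, a Kerberized connection looks roughly like the following; the keyword arguments shown are assumptions for this release line and have changed in later versions:

```python
# Hedged sketch of a Kerberos-authenticated connection. The keyword
# names (use_kerberos, kerberos_service_name) are assumptions for the
# 0.10 release line; later versions use auth_mechanism='GSSAPI' instead.
from impala.dbapi import connect

conn = connect(host='my.host.com', port=21050,
               use_kerberos=True, kerberos_service_name='impala')
```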
Required for utilizing automated shipping/registering of code/UDFs/BDFs/etc.:

- `hdfs[kerberos]` (a Python client that wraps WebHDFS; kerberos is optional)
For manipulating results as pandas `DataFrame`s, we recommend installing
pandas regardless.
Generally, we recommend installing all the libraries above; the UDF libraries
will be the most difficult to build, and they are not required if you will not
use any Python UDFs. Interacting with Impala through the `ImpalaContext` will
simplify shipping data and will clean up temporary data/tables.
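As a rough sketch of that workflow (the import path and constructor arguments here are assumptions for this release line; consult the impyla source for the exact signature):

```python
# Rough sketch only: the import path and constructor arguments for
# ImpalaContext are assumptions; see the impyla source for the real API.
from impala.context import ImpalaContext

ic = ImpalaContext(nn_host='your.namenode.com', webhdfs_port=50070,
                   host='my.host.com', port=21050)
# ... ship data, run queries, build BigDataFrames ...
ic.close()  # removes the temporary data/tables the context created
```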
This project is installed with `setuptools`.

Install the latest release (`0.10.0`) with `pip`:

```bash
pip install impyla
```
For the latest (dev) version, clone the repo:
```bash
git clone https://github.com/cloudera/impyla.git
cd impyla
make  # optional: only for Numba-compiled UDFs; requires LLVM/clang
python setup.py install
```
impyla uses the pytest toolchain, and depends on the following environment variables:
```bash
export IMPALA_HOST=your.impalad.com
# beeswax might work here too
export IMPALA_PORT=21050
export IMPALA_PROTOCOL=hiveserver2
# needed to push data to the cluster
export NAMENODE_HOST=bottou01-10g.pa.cloudera.com
export WEBHDFS_PORT=50070
```
To run the maximal set of tests, run:

```bash
py.test --dbapi-compliance path/to/impyla/impala/tests
```

Leave out the `--dbapi-compliance` option to skip the tests for DB API
compliance. Add the `--udf` option to run only the local UDF compilation tests.
Impyla implements the Python DB API v2.0 (PEP 249) database interface (refer to it for API details):
```python
from impala.dbapi import connect

conn = connect(host='my.host.com', port=21050)
cursor = conn.cursor()
cursor.execute('SELECT * FROM mytable LIMIT 100')
print(cursor.description)  # prints the result set's schema
results = cursor.fetchall()
```
Note: if connecting to Impala through the HiveServer2 service, make sure to set the port to the HiveServer2 port (defaults to 21050 in CM), not the Beeswax port (defaults to 21000), which is what the Impala shell uses.
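Concretely, here is a minimal sketch of both styles of connection; the `protocol` keyword mirrors the `IMPALA_PROTOCOL` environment variable above and is an assumption for this release line:

```python
from impala.dbapi import connect

# HiveServer2 (recommended); CM defaults this service to port 21050
conn = connect(host='my.host.com', port=21050, protocol='hiveserver2')

# Beeswax (what the Impala shell uses); CM defaults to port 21000
# conn = connect(host='my.host.com', port=21000, protocol='beeswax')
```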
The `Cursor` object also exposes the iterator interface, which is buffered
(controlled by `cursor.arraysize`):

```python
cursor.execute('SELECT * FROM mytable LIMIT 100')
for row in cursor:
    process(row)
```
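Since `arraysize` is the standard DB API batch-size attribute, you can tune it before iterating; it is also the default batch size for `fetchmany()`. A quick sketch:

```python
# arraysize controls how many rows each buffered fetch pulls from the
# server; it is also the default number of rows for fetchmany().
cursor.arraysize = 100
cursor.execute('SELECT * FROM mytable LIMIT 1000')
batch = cursor.fetchmany()  # up to 100 rows per call
```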
You can also get back a pandas `DataFrame` object:

```python
from impala.util import as_pandas

df = as_pandas(cursor)
# carry df through scikit-learn, for example
```
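For instance, once the results are in a `DataFrame`, downstream analysis is ordinary pandas/scikit-learn work; the column names below are hypothetical placeholders:

```python
# Hypothetical columns: 'feature1', 'feature2', and 'label' stand in
# for whatever your query actually returns.
from sklearn.linear_model import LogisticRegression

X = df[['feature1', 'feature2']].values
y = df['label'].values
model = LogisticRegression().fit(X, y)
```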