Hive Plugin Developer Kit
This page explains Apache Hive's Plugin Developer Kit (PDK), which lets developers build and test Hive plugins without having to set up a Hive source build; only a Hive binary release is needed.
Currently, the PDK targets only user-defined functions (including UDAFs and UDTFs), although it may be possible to use it for building other kinds of plugins such as SerDes, input/output formats, storage handlers, and index handlers. The PDK's test framework currently supports automated testing of UDFs only.
To demonstrate the PDK in action, the Hive release includes an examples/test-plugin directory. You can build the test plugin by changing to that directory and running the buildfile's package target.
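Concretely, the build is driven by ant; a sketch of the invocation, assuming you are running it from inside the Hive release (hive.install.dir must point back at the release root, so ../.. works from examples/test-plugin):

```shell
cd examples/test-plugin
# Build the plugin jar into the build subdirectory
ant package -Dhive.install.dir=../..
```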
This will create a build subdirectory containing the compiled plugin, pdk-test-udf-0.1.jar. There is also a build/metadata directory containing add-jar.sql (demonstrating the command to use for loading the plugin jar) and class-registration.sql (demonstrating the commands to use for loading the UDFs from the plugin). The .sql files can be passed via the Hive CLI's -i command-line parameter in order to be run as initialization scripts.
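For example, to start an interactive Hive session with the plugin preloaded (paths shown are relative to the plugin directory; adjust to your layout):

```shell
hive -i build/metadata/add-jar.sql -i build/metadata/class-registration.sql
```

Typically, add-jar.sql contains an ADD JAR statement pointing at the built plugin jar, and class-registration.sql contains one CREATE TEMPORARY FUNCTION statement per annotated UDF class; inspect the generated files for the exact contents.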
You can run the tests associated with the plugin via the buildfile's test target.
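The invocation mirrors the build, this time naming the test target (assuming hive.install.dir points at the Hive release root):

```shell
cd examples/test-plugin
ant test -Dhive.install.dir=../..
```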
If all is well, all tests should pass and the build should report success.
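A passing run ends with something along these lines (illustrative only; test counts and timings will vary):

```text
test:
    [junit] Running org.apache.hive.pdk.PluginTest
    [junit] Tests run: 2, Failures: 0, Errors: 0
BUILD SUCCESSFUL
```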
The example plugin is also built and tested as part of the main Hive build in order to verify that the PDK is operating as expected.
Your Own Plugin
To create your own plugin, you can follow the patterns from the example plugin. Let's take a closer look at it, starting with its buildfile.
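The buildfile's shape is roughly the following (the property names and the import path are assumptions inferred from the generated artifacts, e.g. the pdk-test-udf-0.1.jar name; consult the shipped example for the authoritative buildfile):

```xml
<project name="pdktest" default="package">
  <!-- Plugin identity: assumed to drive the generated jar name, e.g. pdk-test-udf-0.1.jar -->
  <property name="plugin.libname" value="pdk-test-udf"/>
  <property name="plugin.title" value="Hive PDK Test UDF"/>
  <property name="plugin.version" value="0.1"/>
  <property name="plugin.vendor" value="Apache Software Foundation"/>
  <!-- Prefix prepended to SQL function names in the generated class-registration.sql -->
  <property name="function.sql.prefix" value="tp_"/>

  <!-- Everything else (the package and test targets) comes from the PDK's shared build script -->
  <import file="${hive.install.dir}/scripts/pdk/build-plugin.xml"/>
</project>
```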
All this buildfile does is define some variable settings and then import a build script from the PDK, which does the rest (including defining the package and test targets used for building and testing the plugin). So for your own plugin, change the variable settings accordingly, and set hive.install.dir to the location where you've installed the Hive release.
The imported PDK buildfile makes a few assumptions about the structure of your plugin source tree:
- Java source files live under a src subdirectory
- the setup.sql and cleanup.sql test scripts, along with any datafiles needed by your tests, live under a test subdirectory
For the example plugin, a datafile onerow.txt contains a single row of data; setup.sql creates a table named onerow and loads the datafile, whereas cleanup.sql drops the onerow table. The onerow table is convenient for testing UDFs, since selecting an expression from it yields exactly one result row.
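The scripts are ordinary Hive SQL; a minimal sketch (the load path here is an assumption, as the real script resolves the datafile relative to the plugin's test directory):

```sql
-- setup.sql: create the helper table and load the single-row datafile
CREATE TABLE onerow (s STRING);
LOAD DATA LOCAL INPATH 'test/onerow.txt' OVERWRITE INTO TABLE onerow;

-- cleanup.sql: tear it back down
DROP TABLE IF EXISTS onerow;
```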
Now let's take a look at the source code for a UDF.
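The shipped example is a rot13-style string UDF; here is a hedged reconstruction of its shape (the class and package names, test query, and tp_ function prefix are assumptions; the annotation names match those described below):

```java
package org.apache.hive.pdktest;

import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;
import org.apache.hive.pdk.HivePdkUnitTest;
import org.apache.hive.pdk.HivePdkUnitTests;

/**
 * Example UDF that applies the rot13 transform to a string.
 */
@Description(name = "rot13",
    value = "_FUNC_(str) - Returns str with all alphabetic characters rotated by 13",
    extended = "Example:\n"
        + "  > SELECT _FUNC_('Mixed Up!') FROM onerow;\n"
        + "  'Zvkrq Hc!'")
@HivePdkUnitTests(
    setup = "", cleanup = "",
    cases = {
      @HivePdkUnitTest(
          query = "SELECT tp_rot13('Mixed Up!') FROM onerow;",
          result = "Zvkrq Hc!")
    })
public class Rot13 extends UDF {
  private final Text result = new Text();

  public Text evaluate(Text s) {
    char[] chars = s.toString().toCharArray();
    for (int i = 0; i < chars.length; i++) {
      char c = chars[i];
      if (c >= 'a' && c <= 'z') {
        chars[i] = (char) ('a' + (c - 'a' + 13) % 26);
      } else if (c >= 'A' && c <= 'Z') {
        chars[i] = (char) ('A' + (c - 'A' + 13) % 26);
      }
    }
    result.set(new String(chars));
    return result;
  }
}
```

Note that the SQL function name in the test query carries the function.sql.prefix from the buildfile, while @Description's name attribute gives the unprefixed name.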
The annotations are interpreted by the PDK as follows:
- @Description: provides Hive with metadata about a UDF's syntax and usage. Only classes carrying this annotation are included in the generated class-registration.sql.
- @HivePdkUnitTests: enumerates one or more test cases, and also specifies optional setup and cleanup commands to run before and after them.
- @HivePdkUnitTest: specifies a single test case, consisting of the query to run and the expected result.
Annotations allow the code and tests to be kept close together. This is good for small tests; if your tests are very complicated, you may want to set up your own scripting around the Hive CLI.
The PDK executes tests as follows:
- Run top-level cleanup.sql (in case a previous test failed in the middle)
- Run top-level setup.sql
- For each class with a @HivePdkUnitTests annotation:
  - Run class cleanup (if any)
  - Run class setup (if any)
  - For each @HivePdkUnitTest annotation, run the query and verify that the actual result matches the expected result
  - Run class cleanup (if any)
- Run top-level cleanup.sql
If you encounter problems during test execution, look in the file TEST-org.apache.hive.pdk.PluginTest.txt for details.
Wish List
- support annotations for other plugin types
- add more annotations for automatically validating function parameters at runtime (instead of requiring the developer to write imperative Java code for this)
- add Eclipse support
- move Hive builtins to use the PDK for more convenient testing (HIVE-2523)
- add a command-line option for invoking a single test case