Data Fusion, which is built on the CDAP platform, provides a wide set of features and extends the basic functionality through plugins. When you want to try out your plugin and use it in a batch pipeline or a streaming pipeline, the plugin essentially needs to be deployed against the parent `cdap-data-pipeline` or `cdap-data-streams` artifact respectively.
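For reference, in standard CDAP plugin projects this parent relationship is declared in the `pom.xml` through the `cdap-maven-plugin`, which also generates the plugin JSON during the build. A rough sketch of what to look for (the plugin version and the parent artifact version ranges below are illustrative, not prescriptive):

```xml
<plugin>
  <groupId>io.cdap</groupId>
  <artifactId>cdap-maven-plugin</artifactId>
  <version>1.1.0</version>
  <configuration>
    <cdapArtifacts>
      <!-- parent artifacts the plugin will be deployed against -->
      <parent>system:cdap-data-pipeline[6.0.0,7.0.0-SNAPSHOT)</parent>
      <parent>system:cdap-data-streams[6.0.0,7.0.0-SNAPSHOT)</parent>
    </cdapArtifacts>
  </configuration>
  <executions>
    <execution>
      <id>create-artifact-config</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>create-plugin-json</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```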
An artifact is essentially a packaged file produced by the build process; it contains application-related properties and dependencies, and it is uniquely identified by coordinates such as `groupId`, `artifactId` and `version`.
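In the plugin's `pom.xml` these coordinates look roughly like this (the `groupId` shown here is only an assumption, check the actual value in your project):

```xml
<groupId>io.cdap.plugin</groupId>
<artifactId>http-plugins</artifactId>
<version>1.3.0-SNAPSHOT</version>
```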
Looking at the issue you've reported, I would recommend starting the investigation from the first build phase: compiling the code and packaging it into the JAR and JSON files. The most sensitive contributor here is the `pom.xml` file, also known as the POM (Project Object Model), as it contains the essential information about the project and the configuration details Maven uses to build it.
A few things that can be checked:
- What custom configuration was applied to the original `http-plugin` source code, and whether any changes you made to the plugin code are also reflected in the `pom.xml` file;
- Whether the correct repository, the one the package was originally downloaded from, is specified in the `pom.xml` (an illustrative snippet follows this list);
- How you compiled the source code with Maven, i.e. `mvn clean install` or `mvn clean package` (see the commands after this list);
- The Maven build output, checking it for anything suspicious.
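Regarding the repository check, the `<repositories>` section of the `pom.xml` is where additional download locations are declared; the id and URL below are purely illustrative:

```xml
<repositories>
  <repository>
    <id>sonatype</id>
    <url>https://oss.sonatype.org/content/groups/public</url>
  </repository>
</repositories>
```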
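As for the build command, both goals compile and package the plugin into `target/`; the difference is only whether the result is also installed into your local repository. When the output looks suspicious, rerunning with the standard Maven diagnostic flags usually helps:

```bash
mvn clean package   # compile, test and package the plugin into target/
mvn clean install   # same as above, plus copy the artifacts into the local ~/.m2 repository

# Extra diagnostics for a suspicious build:
mvn clean package -e   # print full stack traces on errors
mvn clean package -X   # full debug output (dependency resolution, plugin execution)
```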
I did a quick test: I cloned the HTTP sink plugin repository, followed the implementation steps from the guide section, built the JAR and JSON files, and was able to successfully deploy `http-plugins 1.3.0-SNAPSHOT` in my Data Fusion instance.
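For completeness, the steps I followed were roughly the ones below (the repository URL and the exact output file names are assumptions based on the public HTTP plugin repo and the version mentioned above):

```bash
git clone https://github.com/data-integrations/http.git
cd http
mvn clean package
ls target/
# expected, among other build output:
#   http-plugins-1.3.0-SNAPSHOT.jar
#   http-plugins-1.3.0-SNAPSHOT.json
```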