I have a project that uses both Spark and hadoop-aws (to resolve the s3a filesystem in Hadoop 2.6; I think a lot of projects use this combination). However, they have a severe conflict in their transitive dependencies: Spark 1.3.1 pulls in jackson-databind 2.4.4, while hadoop-aws for Hadoop 2.6 pulls in jackson-databind 2.2.3. Worse, neither will run on the other's version, because the Jackson API changed substantially between 2.2 and 2.4.
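For reference, the relevant part of my pom.xml looks roughly like this (the Scala suffix on the Spark artifact and the exact hadoop-aws patch version are just how my build happens to be set up; the point is only where the two Jackson versions come from):

```xml
<!-- Roughly my current dependencies; the jackson-databind conflict
     arrives transitively through these two artifacts. -->
<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.3.1</version>   <!-- brings in jackson-databind 2.4.4 -->
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-aws</artifactId>
    <version>2.6.0</version>   <!-- brings in jackson-databind 2.2.3 -->
  </dependency>
</dependencies>
```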
I know I could manually add the hadoop-aws jar to the classpath only at deployment time and keep it out of compilation/testing/packaging. But that feels like an 'inelegant' workaround; good engineering practice is to let Maven manage everything and to test all features before shipping. Is there a Maven configuration that lets me do this? A rough sketch of the kind of thing I have in mind is below.
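For example, I have read that the maven-shade-plugin can relocate packages, so something like the following sketch is what I imagine might isolate one copy of Jackson from the other. The relocated package name `myproject.shaded.jackson` is just a placeholder I made up, and I am not at all sure this is the right mechanism, or whether it even works when the classes come from transitive dependencies:

```xml
<!-- Sketch only: relocate Jackson classes into a private package at the
     package phase so the two versions cannot clash on the runtime classpath.
     "myproject.shaded.jackson" is a placeholder name. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.fasterxml.jackson</pattern>
            <shadedPattern>myproject.shaded.jackson</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

If there is a cleaner way, e.g. pinning a single jackson-databind version that both libraries can tolerate, or scoping the dependencies differently, I would prefer that over shading.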