Hi all,

Our application is built with pyspark and our dependencies are dictated by what pyspark ships. For example:
I'd like to keep the number of dependencies we shade to a minimum to avoid bloat. To solve this, I tried to model the libraries shipped in pyspark as a cross scala module (a rough sketch is included below). After publishing all cross versions, I added the bom to the module and ran:

```
$ sh mill 'hail[].compile'
[196/196, 1 failed] ============================== hail[].compile ==============================
1 tasks failed
hail[dataproc-2.3.x].resolvedMvnDeps java.lang.RuntimeException: Cannot find org.json4s:json4s-jackson_2.12: in projectCache
```

When I use … I'd like to use published boms for a couple of reasons:
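As for the setup itself: the BOM side of my build looks roughly like the sketch below. It's simplified rather than copied verbatim — the organization, URL, versions, and cross keys are placeholders, and it assumes a recent Mill where a `BomModule` trait and `depManagement` are available for publishing BOMs (exact trait and package names may differ between Mill versions).

```scala
// build.mill — simplified sketch, not the actual build; names and versions are placeholders.
package build

import mill._
import mill.scalalib._
import mill.scalalib.publish._

// One BOM per Scala binary version used by the pyspark/Dataproc images we target.
object `pyspark-bom` extends Cross[PysparkBomModule]("2.12", "2.13")

// Assumes a Mill version that provides BomModule and depManagement for BOM publishing.
trait PysparkBomModule extends Cross.Module[String] with BomModule {
  def scalaBinary = crossValue

  def publishVersion = "3.3.2" // placeholder: the pyspark/Spark version the BOM tracks
  def pomSettings = PomSettings(
    description = "Pins the library versions shipped with pyspark",
    organization = "org.example",            // placeholder
    url = "https://example.org/pyspark-bom", // placeholder
    licenses = Seq(License.MIT),
    versionControl = VersionControl(),
    developers = Nil
  )

  // The versions pyspark ships for this Scala binary version (json4s shown as one example,
  // version is a placeholder).
  def depManagement = Seq(
    mvn"org.json4s:json4s-jackson_${scalaBinary}:3.7.0-M11"
  )
}
```

Each cross version is published with `publishLocal` and then pulled into `hail` through `bomMvnDeps`.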
I hope this makes sense... Thanks so much for your help in advance!
When using `bomMvnDeps` with a locally published BOM, Mill needs the local repository explicitly configured. Add the Ivy local repo to your module's repositories:

```scala
object hail extends CrossSbtModule {
  // publishLocal publishes to ~/.ivy2/local, which coursier does not consult
  // unless it is added explicitly
  override def repositories = super.repositories() ++ Seq(
    coursier.LocalRepositories.ivy2Local
  )
  override def bomMvnDeps = Seq(
    `pyspark-bom` :: crossSparkVersion
  )
}
```

Alternatively, publish using Maven format with `LocalM2Publisher` instead of `publishLocal` (Ivy format).
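If you go the Maven-format route, here is a minimal sketch of how the two sides could fit together — it assumes Mill's `publishM2Local` task and coursier's `LocalRepositories.Dangers.maven2Local`, so double-check both names against the versions you're on:

```scala
// Publish each cross version of the BOM to ~/.m2/repository (Maven layout);
// the cross value below is a placeholder:
//   $ mill 'pyspark-bom[2.12].publishM2Local'

object hail extends CrossSbtModule {
  // Resolve from the local Maven repo; coursier keeps it under `Dangers`
  // because depending on ~/.m2 is discouraged outside local testing.
  override def repositories = super.repositories() ++ Seq(
    coursier.LocalRepositories.Dangers.maven2Local
  )
  override def bomMvnDeps = Seq(
    `pyspark-bom` :: crossSparkVersion
  )
}
```

Either way, the underlying issue is the same: the local Ivy and Maven repositories aren't consulted by default, so the consuming module has to opt in through `repositories`.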