edit: for the solution, see my comment below
I’m trying to package a Go application (beszel) that bundles a bunch of HTML stuff built with bun (think npm). The HTML is generated by running `bun install` and `bun run`, and then embedded in the Go binary with `//go:embed`.
Being completely ignorant of the JavaScript ecosystem, my first idea was to just replicate what they do in the Makefile:

```nix
postConfigure = ''
  bun install --cwd ./site
  bun run --cwd ./site build
'';
```
but, since `bun install` downloads dependencies from the net, that fails.
I guess the “clean” solution would be to look for `buildNpmPackage` or similar (assuming that exists) and let nix manage all the dependencies, but… it’s some 800+ dependencies (at least, `bun install ... --dry-run` lists 800+ things), so that’s a hard pass.
I then tried to look at how `buildGoModule` handles the vendoring of dependencies, with the idea of replicating that (it downloads what’s needed and then compares a hash of what was downloaded against a hash provided in the nix package definition), but… I can’t for the life of me decipher how nixpkgs’ pkgs/build-support/go/module.nix works.
Do you know how to implement this kind of vendoring in a nix derivation?
Maybe someone else will have a better answer, but in similar situations I’ve seen derivations simply download a compiled release directly.
I ran into the same issue trying to package silverbullet, which uses deno, and I gave up; later I saw it was added to nixpkgs by just downloading the GitHub release.
Found the solution (I think): basically it should just work as expected if you add `outputHashAlgo`, `outputHashMode`, and `outputHash` to your derivation (see the documentation and this article).
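For anyone landing here later, this is roughly what that looks like. The sketch below is mine, not beszel’s actual packaging: the paths, version, and output directory are assumptions, and `lib.fakeHash` is the usual placeholder you build with once to learn the real hash from the mismatch error.

```nix
# Hypothetical sketch: pre-build the site assets in a fixed-output derivation.
# The three outputHash* attributes mark it as fixed-output, which is what
# lifts the network sandbox for this build step.
{ stdenv, lib, bun, src }:

stdenv.mkDerivation {
  pname = "beszel-web";   # illustrative name
  version = "0.1.0";      # illustrative version
  inherit src;

  nativeBuildInputs = [ bun ];

  buildPhase = ''
    cd site
    bun install --frozen-lockfile   # fail instead of updating the lock file
    bun run build
  '';

  installPhase = ''
    cp -r dist $out   # output dir is an assumption; check what `bun run build` emits
  '';

  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
  outputHash = lib.fakeHash;  # build once, then paste the real hash from the error
}
```

The hash has to be bumped by hand whenever the lock file changes, otherwise nix will happily serve you the stale cached output.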
That will only work if it is reproducible. Given that it downloads random shit from the internet, that’s unlikely.
To package this properly, you need to build a derivation that can use a lock file to bundle the deps into some sort of stable format. This is how go’s vendoring works.
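To make that concrete, here’s a hedged sketch of the two-derivation pattern, mirroring how `buildGoModule` uses `vendorHash`: one fixed-output derivation pins the deps from the lock file, and the main sandboxed build consumes them offline. Again, all names, paths, and versions here are assumptions, not beszel’s real packaging.

```nix
# Sketch only: a pinned node_modules derivation feeding a sandboxed Go build.
{ stdenv, lib, bun, buildGoModule, src }:

let
  node_modules = stdenv.mkDerivation {
    pname = "beszel-node-modules";  # illustrative
    version = "0.1.0";
    inherit src;
    nativeBuildInputs = [ bun ];
    buildPhase = ''
      cd site
      bun install --frozen-lockfile
    '';
    installPhase = ''
      cp -r node_modules $out
    '';
    # Fixed-output: only this step gets network access, and its result is pinned.
    outputHashAlgo = "sha256";
    outputHashMode = "recursive";
    outputHash = lib.fakeHash;  # must be updated whenever the lock file changes
  };
in
buildGoModule {
  pname = "beszel";
  version = "0.1.0";
  inherit src;
  vendorHash = lib.fakeHash;  # Go deps pinned the usual way

  nativeBuildInputs = [ bun ];

  # Reuse the pinned deps; no network needed in this sandboxed build.
  preBuild = ''
    cp -r ${node_modules} site/node_modules
    chmod -R u+w site/node_modules
    (cd site && bun run build)
  '';
}
```

This only hashes the fetched `node_modules`, so the caveat below about the artifacts themselves being stable still applies: if `bun install` writes anything non-deterministic into that tree, the hash check will bite.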
You seem to trust the javascript ecosystem just as much as I do :)
Jokes aside, the repo has a lock file so it should actually be fine (time will tell)
Having a record that defines exactly what to fetch is a necessary condition, not a sufficient one: the actual artifacts fetched to disk must be stable, not just the record.
Until someone rewrites git history and screws up your build
That’d hit the source fetcher just as much. That’s an issue on a different layer.