The Nix Packages collection
(Nixpkgs) is a set of thousands of packages for the Nix package manager, released under a permissive MIT/X11 license. Packages are available for several platforms, and can be used with the Nix package manager on most GNU/Linux distributions as well as NixOS.
This manual primarily describes how to write packages for the Nix Packages collection (Nixpkgs). Thus it’s mainly for packagers and developers who want to add packages to Nixpkgs. If you would like to learn more about the Nix package manager and the Nix expression language, then you are kindly referred to the Nix manual.
Nix expressions describe how to build packages from source and are collected in the nixpkgs repository. Also included in the collection are Nix expressions for NixOS modules. With these expressions the Nix package manager can build binary packages.
    Packages, including the Nix packages collection, are distributed through
    channels.
    The collection is distributed for users of Nix on non-NixOS distributions
    through the channel nixpkgs. Users of NixOS generally
    use one of the nixos-* channels, e.g.
    nixos-16.03, which includes all packages and modules for
    the stable NixOS 16.03. Stable NixOS releases are generally only given
    security updates. More up to date packages and modules are available via
    the nixos-unstable channel.
   
    Both nixos-unstable and nixpkgs
    follow the master branch of the Nixpkgs repository,
    although both generally lag behind master
    by a couple of days.
    Updates to a channel are distributed as soon as all tests for that channel
    pass, e.g.
    this
    table shows the status of tests for the nixpkgs
    channel.
   
    The tests are conducted by a cluster called
    Hydra, which also builds
    binary packages from the Nix expressions in Nixpkgs for
    x86_64-linux, i686-linux and
    x86_64-darwin. The binaries are made available via a
    binary cache.
   
    The current Nix expressions of the channels are available in the
    nixpkgs-channels
    repository, which has branches corresponding to the available channels.
   
To add a package to Nixpkgs:
Checkout the Nixpkgs source tree:
$ git clone https://github.com/NixOS/nixpkgs
$ cd nixpkgs
      Find a good place in the Nixpkgs tree to add the Nix expression for your
      package. For instance, a library package typically goes into
      pkgs/development/libraries/pkgname,
      while a web browser goes into
      pkgs/applications/networking/browsers/pkgname.
      See Section 13.3, “File naming and organisation” for some hints on the tree
      organisation. Create a directory for your package, e.g.
$ mkdir pkgs/development/libraries/libfoo
      In the package directory, create a Nix expression — a piece of
      code that describes how to build the package. In this case, it should be
      a function that is called with the package
      dependencies as arguments, and returns a build of the package in the Nix
      store. The expression should usually be called
      default.nix.
$ emacs pkgs/development/libraries/libfoo/default.nix
$ git add pkgs/development/libraries/libfoo/default.nix
      You can have a look at the existing Nix expressions under
      pkgs/ to see how it’s done. Here are some
      good ones:
      
         GNU Hello:
         pkgs/applications/misc/hello/default.nix.
         A trivial package that specifies some meta
         attributes, which is good practice.
        
         GNU cpio:
         pkgs/tools/archivers/cpio/default.nix.
         Also a simple package. The generic builder in
         stdenv does everything for you. It has no
         dependencies beyond stdenv.
        
         GNU Multiple Precision arithmetic library (GMP):
         pkgs/development/libraries/gmp/5.1.x.nix.
         Also done by the generic builder, but has a dependency on
         m4.
        
         Pan, a GTK-based newsreader:
         pkgs/applications/networking/newsreaders/pan/default.nix.
         Has an optional dependency on gtkspell, which is
         only built if spellCheck is
         true.
        
         Apache HTTPD:
         pkgs/servers/http/apache-httpd/2.4.nix.
         A bunch of optional features, variable substitutions in the configure
         flags, a post-install hook, and miscellaneous hackery.
        
         Thunderbird:
         pkgs/applications/networking/mailreaders/thunderbird/default.nix.
         Lots of dependencies.
        
         JDiskReport, a Java utility:
         pkgs/tools/misc/jdiskreport/default.nix
         (and the
         builder).
         Nixpkgs doesn’t have a decent stdenv for
         Java yet so this is pretty ad-hoc.
        
         XML::Simple, a Perl module:
         pkgs/top-level/perl-packages.nix
         (search for the XMLSimple attribute). Most Perl
         modules are so simple to build that they are defined directly in
         perl-packages.nix; no need to make a separate
         file for them.
        
         Adobe Reader:
         pkgs/applications/misc/adobe-reader/default.nix.
         Shows how binary-only packages can be supported. In particular the
         builder
         uses patchelf to set the RUNPATH and ELF
         interpreter of the executables so that the right libraries are found
         at runtime.
        
Some notes:
         All meta attributes are
         optional, but it’s still a good idea to provide at least the
         description, homepage and
         license.
        
         You can use nix-prefetch-url
         url to get the SHA-256 hash of source
         distributions. Similar commands, such as
         nix-prefetch-git and
         nix-prefetch-hg, are available in the
         nix-prefetch-scripts package.
        
         A list of schemes for mirror:// URLs can be found
         in
         pkgs/build-support/fetchurl/mirrors.nix.
        
The exact syntax and semantics of the Nix expression language, including the built-in functions, are described in the Nix manual in the chapter on writing Nix expressions.
      Add a call to the function defined in the previous step to
      pkgs/top-level/all-packages.nix
      with some descriptive name for the variable, e.g.
      libfoo.
$ emacs pkgs/top-level/all-packages.nix
The attributes in that file are sorted by category (like “Development / Libraries”) that more-or-less correspond to the directory structure of Nixpkgs, and then by attribute name.
To test whether the package builds, run the following command from the root of the nixpkgs source tree:
$ nix-build -A libfoo
      where libfoo should be the variable name defined in
      the previous step. You may want to add the flag -K to
      keep the temporary build directory in case something fails. If the build
      succeeds, a symlink ./result to the package in the
      Nix store is created.
     
If you want to install the package into your profile (optional), do
$ nix-env -f . -iA libfoo
      Optionally commit the new package and open a pull request, or send a
      patch to
      https://groups.google.com/forum/#!forum/nix-devel.
     
   The standard build environment in the Nix Packages collection provides an
   environment for building Unix packages that does a lot of common build tasks
   automatically. In fact, for Unix packages that use the standard
   ./configure; make; make install build interface, you
   don’t need to write a build script at all; the standard environment
   does everything automatically. If stdenv doesn’t
   do what you need automatically, you can easily customise or override the
   various build phases.
  
    To build a package with the standard environment, you use the function
    stdenv.mkDerivation, instead of the primitive built-in
    function derivation, e.g.
stdenv.mkDerivation {
  name = "libfoo-1.2.3";
  src = fetchurl {
    url = http://example.org/libfoo-1.2.3.tar.bz2;
    sha256 = "0x2g1jqygyr5wiwg4ma1nd7w4ydpy82z9gkcv8vh2v8dn3y58v5m";
  };
}
    (stdenv needs to be in scope, so if you write this in a
    separate Nix expression from pkgs/top-level/all-packages.nix,
    you need to pass it as a function argument.) Specifying a
    name and a src is the absolute
    minimum you need to do. Many packages have dependencies that are not
    provided in the standard environment. It’s usually sufficient to
    specify those dependencies in the buildInputs attribute:
stdenv.mkDerivation {
  name = "libfoo-1.2.3";
  ...
  buildInputs = [libbar perl ncurses];
}
    This attribute ensures that the bin subdirectories of
    these packages appear in the PATH environment variable
    during the build, that their include subdirectories
    are searched by the C compiler, and so on. (See
    Section 3.7, “Package setup hooks” for details.)
   
Often it is necessary to override or modify some aspect of the build. To make this easier, the standard environment breaks the package build into a number of phases, all of which can be overridden or modified individually: unpacking the sources, applying patches, configuring, building, and installing. (There are some others; see Section 3.5, “Phases”.) For instance, a package that doesn’t supply a makefile but instead has to be compiled “manually” could be handled like this:
stdenv.mkDerivation {
  name = "fnord-4.5";
  ...
  buildPhase = ''
    gcc foo.c -o foo
  '';
  installPhase = ''
    mkdir -p $out/bin
    cp foo $out/bin
  '';
}
    (Note the use of ''-style string literals, which are
    very convenient for large multi-line script fragments because they
    don’t need escaping of " and
    \, and because indentation is intelligently removed.)
   
There are many other attributes to customise the build. These are listed in Section 3.4, “Attributes”.
While the standard environment provides a generic builder, you can still supply your own build script:
stdenv.mkDerivation {
  name = "libfoo-1.2.3";
  ...
  builder = ./builder.sh;
}
where the builder can do anything it wants, but typically starts with
source $stdenv/setup
    to let stdenv set up the environment (e.g., process the
    buildInputs). If you want, you can still use
    stdenv’s generic builder:
source $stdenv/setup
buildPhase() {
  echo "... this is my custom build phase ..."
  gcc foo.c -o foo
}
installPhase() {
  mkdir -p $out/bin
  cp foo $out/bin
}
genericBuild
The standard environment provides the following packages:
The GNU C Compiler, configured with C and C++ support.
GNU coreutils (contains a few dozen standard Unix commands).
GNU findutils (contains find).
GNU diffutils (contains diff, cmp).
GNU sed.
GNU grep.
GNU awk.
GNU tar.
gzip, bzip2 and xz.
GNU Make. It has been patched to provide “nested” output that can be fed into the nix-log2xml command and log2html stylesheet to create a structured, readable output of the build steps performed by Make.
Bash. This is the shell used for all builders in the Nix Packages collection. Not using /bin/sh removes a large source of portability problems.
The patch command.
    On Linux, stdenv also includes the
    patchelf utility.
   
    As described in the Nix manual, almost any *.drv store
    path in a derivation's attribute set will induce a dependency on that
    derivation. mkDerivation, however, takes a few
    attributes intended to, between them, include all the dependencies of a
    package. This is done both for structure and consistency, but also so that
    certain other setup can take place. For example, certain dependencies need
    their bin directories added to the PATH. That is built-in,
    but other setup is done via a pluggable mechanism that works in conjunction
    with these dependency attributes. See Section 3.7, “Package setup hooks”
    for details.
   
Dependencies can be broken down along three axes: their host and target platforms relative to the new derivation's, and whether they are propagated. The platform distinctions are motivated by cross compilation; see Chapter 5, Cross-compilation for exactly what each platform means. [1] But even if one is not cross compiling, the platforms imply whether or not the dependency is needed at run-time or build-time, a concept that makes perfect sense outside of cross compilation. For now, the run-time/build-time distinction is just a hint for mental clarity, but in the future it perhaps could be enforced.
    The extension of PATH with dependencies, alluded to above,
    proceeds according to the relative platforms alone. The process is carried
    out only for dependencies whose host platform matches the new derivation's
    build platform i.e. dependencies which run on the platform where the new
    derivation will be built.
    [2]
    For each dependency dep of those dependencies,
    dep/bin, if present, is added to the
    PATH environment variable.
   
The dependency is propagated when it forces some of its other-transitive (non-immediate) downstream dependencies to also take it on as an immediate dependency. Nix itself already takes a package's transitive dependencies into account, but this propagation ensures nixpkgs-specific infrastructure like setup hooks (mentioned above) is also run as if the propagated dependency were specified directly.
It is important to note that dependencies are not necessarily propagated as the same sort of dependency that they were before, but rather as the corresponding sort so that the platform rules still line up. The exact rules for dependency propagation can be given by assigning to each dependency two integers based on how its host and target platforms are offset from the depending derivation's platforms. Those offsets are given below in the descriptions of each dependency list attribute. Algorithmically, we traverse propagated inputs, accumulating every propagated dependency's propagated dependencies and adjusting them to account for the "shift in perspective" described by the current dependency's platform offsets. This results in a sort of transitive closure of the dependency relation, with the offsets being approximately summed when two dependency links are combined. We also prune transitive dependencies whose combined offsets go out-of-bounds, which can be viewed as a filter over that transitive closure removing dependencies that are blatantly absurd.
We can define the process precisely with Natural Deduction using the inference rules. This probably seems a bit obtuse, but so is the bash code that actually implements it! [3] They're confusing in very different ways so... hopefully if something doesn't make sense in one presentation, it will in the other!
let mapOffset(h, t, i) = i + (if i <= 0 then h else t - 1)
propagated-dep(h0, t0, A, B)
propagated-dep(h1, t1, B, C)
h0 + h1 in {-1, 0, 1}
h0 + t1 in {-1, 0, 1}
-------------------------------------- Transitive property
propagated-dep(mapOffset(h0, t0, h1),
               mapOffset(h0, t0, t1),
               A, C)
let mapOffset(h, t, i) = i + (if i <= 0 then h else t - 1)
dep(h0, _, A, B)
propagated-dep(h1, t1, B, C)
h0 + h1 in {-1, 0, 1}
h0 + t1 in {-1, 0, 1}
----------------------------- Take immediate dependencies' propagated dependencies
propagated-dep(mapOffset(h0, t0, h1),
               mapOffset(h0, t0, t1),
               A, C)
propagated-dep(h, t, A, B)
----------------------------- Propagated dependencies count as dependencies
dep(h, t, A, B)
    Some explanation of this monstrosity is in order. In the common case, the
    target offset of a dependency is the successor to its host offset:
    t = h + 1. That means that:
let f(h, t, i) = i + (if i <= 0 then h else t - 1)
let f(h, h + 1, i) = i + (if i <= 0 then h else (h + 1) - 1)
let f(h, h + 1, i) = i + (if i <= 0 then h else h)
let f(h, h + 1, i) = i + h
This is where "sum-like" comes in from above: we can just sum all of the host offsets to get the host offset of the transitive dependency. The target offset of the transitive dependency is simply its host offset + 1, just as it was with the dependencies composed to make this transitive one; it can be ignored as it doesn't add any new information.
    Because of the bounds checks, the uncommon cases are h =
    t and h + 2 = t. In the former case, the
    motivation for mapOffset is that since its host and
    target platforms are the same, no transitive dependency of it should be
    able to "discover" an offset greater than its reduced target offsets.
    mapOffset effectively "squashes" all its transitive
    dependencies' offsets so that none will ever be greater than the target
    offset of the original h = t package. In the other case,
    h + 1 is skipped over between the host and target
    offsets. Instead of squashing the offsets, we need to "rip" them apart so
    no transitive dependencies' offset is that one.
   
    Overall, the unifying theme here is that propagation shouldn't be
    introducing transitive dependencies involving platforms the depending
    package is unaware of. The offset bounds checking and definition of
    mapOffset together ensure that this is the case.
    Discovering a new offset is discovering a new platform, and since those
    platforms weren't in the derivation "spec" of the needing package, they
    cannot be relevant. From a capability perspective, we can imagine that the
    host and target platforms of a package are the capabilities a package
    requires, and the depending package must provide the capability to the
    dependency.
   
depsBuildBuild
     
       A list of dependencies whose host and target platforms are the new
       derivation's build platform. This means a -1 host and
       -1 target offset from the new derivation's platforms.
       These are programs and libraries used at build time that produce
       programs and libraries also used at build time. If the dependency
       doesn't care about the target platform (i.e. isn't a compiler or similar
       tool), put it in nativeBuildInputs instead. The most
       common use of this buildPackages.stdenv.cc, the
       default C compiler for this role. That example crops up more than one
       might think in old commonly used C libraries.
      
       Since these packages are able to be run at build-time, they are always
       added to the PATH, as described above. But since these
       packages are only guaranteed to be able to run then, they shouldn't
       persist as run-time dependencies. This isn't currently enforced, but
       could be in the future.
      
nativeBuildInputs
     
       A list of dependencies whose host platform is the new derivation's build
       platform, and target platform is the new derivation's host platform.
       This means a -1 host offset and 0
       target offset from the new derivation's platforms. These are programs
       and libraries used at build-time that, if they are a compiler or similar
       tool, produce code to run at run-time—i.e. tools used to build
       the new derivation. If the dependency doesn't care about the target
       platform (i.e. isn't a compiler or similar tool), put it here, rather
       than in depsBuildBuild or
       depsBuildTarget. This could be called
       depsBuildHost but
       nativeBuildInputs is used for historical continuity.
      
       Since these packages are able to be run at build-time, they are added to
       the PATH, as described above. But since these packages
       are only guaranteed to be able to run then, they shouldn't persist as
       run-time dependencies. This isn't currently enforced, but could be in
       the future.
      
depsBuildTarget
     
       A list of dependencies whose host platform is the new derivation's build
       platform, and target platform is the new derivation's target platform.
       This means a -1 host offset and 1
       target offset from the new derivation's platforms. These are programs
       used at build time that produce code to run with code produced by the
       depending package. Most commonly, these are tools used to build the
       runtime or standard library that the currently-being-built compiler will
       inject into any code it compiles. In many cases, the
       currently-being-built-compiler is itself employed for that task, but
       when that compiler won't run (i.e. its build and host platform differ)
       this is not possible. Other times, the compiler relies on some other
       tool, like binutils, that is always built separately so that the
       dependency is unconditional.
      
This is a somewhat confusing concept to wrap one’s head around, and for good reason. As the only dependency type where the platform offsets are not adjacent integers, it requires thinking of a bootstrapping stage two away from the current one. It and its use-case go hand in hand and are both considered poor form: try to not need this sort of dependency, and try to avoid building standard libraries and runtimes in the same derivation as the compiler produces code using them. Instead strive to build those like a normal library, using the newly-built compiler just as a normal library would. In short, do not use this attribute unless you are packaging a compiler and are sure it is needed.
       Since these packages are able to run at build time, they are added to
       the PATH, as described above. But since these packages
       are only guaranteed to be able to run then, they shouldn't persist as
       run-time dependencies. This isn't currently enforced, but could be in
       the future.
      
depsHostHost
     
       A list of dependencies whose host and target platforms match the new
       derivation's host platform. This means a 0 host
       offset and 0 target offset from the new derivation's
       host platform. These are packages used at run-time to generate code also
       used at run-time. In practice, this would usually be tools used by
       compilers for macros or a metaprogramming system, or libraries used by
       the macros or metaprogramming code itself. It's always preferable to use
       a depsBuildBuild dependency in the derivation being
       built over a depsHostHost on the tool doing the
       building for this purpose.
      
buildInputs
     
       A list of dependencies whose host platform and target platform match the
       new derivation's. This means a 0 host offset and a
       1 target offset from the new derivation's host
       platform. This would be called depsHostTarget but for
       historical continuity. If the dependency doesn't care about the target
       platform (i.e. isn't a compiler or similar tool), put it here, rather
       than in depsBuildBuild.
      
These are often programs and libraries used by the new derivation at run-time, but that isn't always the case. For example, the machine code in a statically-linked library is only used at run-time, but the derivation containing the library is only needed at build-time. Even in the dynamic case, the library may also be needed at build-time to appease the linker.
depsTargetTarget
     
       A list of dependencies whose host platform matches the new derivation's
       target platform. This means a 1 offset from the new
       derivation's platforms. These are packages that run on the target
       platform, e.g. the standard library or run-time deps of standard library
       that a compiler insists on knowing about. It's poor form in almost all
       cases for a package to depend on another from a future stage [future
       stage corresponding to positive offset]. Do not use this attribute
       unless you are packaging a compiler and are sure it is needed.
      
depsBuildBuildPropagated
     
       The propagated equivalent of depsBuildBuild. This
       perhaps never ought to be used, but it is included for consistency [see
       below for the others].
      
propagatedNativeBuildInputs
     
       The propagated equivalent of nativeBuildInputs. This
       would be called depsBuildHostPropagated but for
       historical continuity. For example, if package Y has
       propagatedNativeBuildInputs = [X], and package
       Z has buildInputs = [Y], then
       package Z will be built as if it included package
       X in its nativeBuildInputs. If
       instead, package Z has nativeBuildInputs =
       [Y], then Z will be built as if it included
        X in the depsBuildBuild of package
        Z, because of the sum of the two
        -1 host offsets. (The first case is sketched
        after this list.)
      
depsBuildTargetPropagated
     
       The propagated equivalent of depsBuildTarget. This is
       prefixed for the same reason of alerting potential users.
      
depsHostHostPropagated
     
       The propagated equivalent of depsHostHost.
      
propagatedBuildInputs
     
       The propagated equivalent of buildInputs. This would
       be called depsHostTargetPropagated but for historical
       continuity.
      
depsTargetTargetPropagated
     
       The propagated equivalent of depsTargetTarget. This
       is prefixed for the same reason of alerting potential users.
      
stdenv initialisation
NIX_DEBUG
     
       A natural number indicating how much information to log. If set to 1 or
       higher, stdenv will print moderate debugging
       information during the build. In particular, the gcc
       and ld wrapper scripts will print out the complete
       command line passed to the wrapped tools. If set to 6 or higher, the
       stdenv setup script will be run with set
       -x tracing. If set to 7 or higher, the gcc
       and ld wrapper scripts will also be run with
       set -x tracing.
      
enableParallelBuilding
     
       If set to true, stdenv will pass
       specific flags to make and other build tools to
       enable parallel building with up to build-cores
       workers.
      
       Unless set to false, some build systems with good
       support for parallel building including cmake,
       meson, and qmake will set it to
       true.
      
passthru
     This is an attribute set which can be filled with arbitrary values. For example:
passthru = {
  foo = "bar";
  baz = {
    value1 = 4;
    value2 = 5;
  };
};
       Values inside it are not passed to the builder, so you can change them
       without triggering a rebuild. However, they can be accessed outside of a
       derivation directly, as if they were set inside a derivation itself,
       e.g. hello.baz.value1. We don't specify any usage or
       schema of passthru - it is meant for values that
       would be useful outside the derivation in other parts of a Nix
       expression (e.g. in other derivations). An example would be to convey
       some specific dependency of your derivation which contains a program
       with plugins support. Later, others who make derivations with plugins
       can use passed-through dependency to ensure that their plugin would be
       binary-compatible with built program.
      
passthru.updateScript
     
       A script to be run by
       maintainers/scripts/update.nix when the package is
       matched. It needs to be an executable file, either on the file system:
passthru.updateScript = ./update.sh;
or inside the expression itself:
passthru.updateScript = writeScript "update-zoom-us" ''
  #!/usr/bin/env nix-shell
  #!nix-shell -i bash -p curl pcre common-updater-scripts

  set -eu -o pipefail

  version="$(curl -sI https://zoom.us/client/latest/zoom_x86_64.tar.xz | grep -Fi 'Location:' | pcregrep -o1 '/(([0-9]\.?)+)/')"
  update-source-version zoom-us "$version"
'';
The attribute can also contain a list, a script followed by arguments to be passed to it:
passthru.updateScript = [ ../../update.sh pname "--requested-release=unstable" ];
Note that the update scripts will be run in parallel by default; you should avoid running git commit or any other commands that cannot handle that.
For information about how to run the updates, execute
nix-shell maintainers/scripts/update.nix.
The generic builder has a number of phases. Package builds are split into phases to make it easier to override specific parts of the build (e.g., unpacking the sources or installing the binaries). Furthermore, it allows a nicer presentation of build logs in the Nix build farm.
    Each phase can be overridden in its entirety either by setting the
    environment variable namePhase to a string
    containing some shell commands to be executed, or by redefining the shell
    function namePhase. The former is convenient to
    override a phase from the derivation, while the latter is more convenient
    from a build script. However, typically one only wants to
    add some commands to a phase, e.g. by defining
    postInstall or preFixup, as skipping
    some of the default actions may have unexpected consequences.
   
There are a number of variables that control what phases are executed and in what order:
phases
       
         Specifies the phases. You can change the order in which phases are
         executed, or add new phases, by setting this variable. If it’s
         not set, the default value is used, which is $prePhases
         unpackPhase patchPhase $preConfigurePhases configurePhase
         $preBuildPhases buildPhase checkPhase $preInstallPhases installPhase
         fixupPhase $preDistPhases distPhase $postPhases.
        
          Usually, if you just want to add a few phases, it’s more
          convenient to set one of the variables below (such as
          preInstallPhases), as you then don’t have to specify
          all the normal phases; see the sketch after this list.
         
prePhases
       Additional phases executed before any of the default phases.
preConfigurePhases
       Additional phases executed just before the configure phase.
preBuildPhases
       Additional phases executed just before the build phase.
preInstallPhases
       Additional phases executed just before the install phase.
preFixupPhases
       Additional phases executed just before the fixup phase.
preDistPhases
       Additional phases executed just before the distribution phase.
postPhases
       Additional phases executed after any of the default phases.
     The unpack phase is responsible for unpacking the source code of the
     package. The default implementation of unpackPhase
     unpacks the source files listed in the src environment
     variable to the current directory. It supports the following files by
     default:
     
          Tar files. These can optionally be compressed using gzip
         (.tar.gz, .tgz or
         .tar.Z), bzip2
         (.tar.bz2, .tbz2 or
         .tbz) or xz
         (.tar.xz, .tar.lzma or
         .txz).
        
         Zip files are unpacked using unzip. However,
         unzip is not in the standard environment, so you
         should add it to nativeBuildInputs yourself.
        
          Directories in the Nix store. These are simply copied to the current directory. The hash part of the
         file name is stripped, e.g.
         /nix/store/1wydxgby13cz...-my-sources would be
         copied to my-sources.
        
     Additional file types can be supported by setting the
     unpackCmd variable (see below).
    
srcs / src
      The list of source files or directories to be unpacked or copied. One of these must be set.
sourceRoot
      
        After running unpackPhase, the generic builder
        changes the current directory to the directory created by unpacking the
        sources. If there are multiple source directories, you should set
        sourceRoot to the name of the intended directory.
       
setSourceRoot
      
        Alternatively to setting sourceRoot, you can set
        setSourceRoot to a shell command to be evaluated by
        the unpack phase after the sources have been unpacked. This command
        must set sourceRoot.
       
preUnpack
      Hook executed at the start of the unpack phase.
postUnpack
      Hook executed at the end of the unpack phase.
dontMakeSourcesWritable
      
        If set to 1, the unpacked sources are
        not made writable. By default, they are made
        writable to prevent problems with read-only sources. For example,
        copied store directories would be read-only without this.
       
unpackCmd
      
         The unpack phase evaluates the string $unpackCmd for
         any unrecognised file. The path to the current source file is contained
         in the curSrc variable. A sketch combining this with
         sourceRoot follows below.
       
     The patch phase applies the list of patches defined in the
     patches variable.
    
patches
      
        The list of patches. They must be in the format accepted by the
        patch command, and may optionally be compressed
        using gzip (.gz),
        bzip2 (.bz2) or
        xz (.xz).
       
patchFlags
      
        Flags to be passed to patch. If not set, the
        argument -p1 is used, which causes the leading
        directory component to be stripped from the file names in each patch.
       
prePatch
      Hook executed at the start of the patch phase.
postPatch
      Hook executed at the end of the patch phase.
     The configure phase prepares the source tree for building. The default
     configurePhase runs ./configure
     (typically an Autoconf-generated script) if it exists.
    
configureScript
      
        The name of the configure script. It defaults to
        ./configure if it exists; otherwise, the configure
        phase is skipped. This can actually be a command (like perl
        ./Configure.pl).
       
configureFlags
      A list of strings passed as additional arguments to the configure script.
configureFlagsArray
      
        A shell array containing additional arguments passed to the configure
        script. You must use this instead of configureFlags
        if the arguments contain spaces.
       
dontAddPrefix
      
        By default, the flag --prefix=$prefix is added to
        the configure flags. If this is undesirable, set this variable to true.
       
prefix
      
        The prefix under which the package must be installed, passed via the
        --prefix option to the configure script. It defaults
        to $out.
       
prefixKey
      
        The key to use when specifying the prefix. By default, this is set to
        --prefix= as that is used by the majority of packages.
       
dontAddDisableDepTrack
      
        By default, the flag --disable-dependency-tracking
        is added to the configure flags to speed up Automake-based builds. If
        this is undesirable, set this variable to true.
       
dontFixLibtool
      
        By default, the configure phase applies some special hackery to all
        files called ltmain.sh before running the
        configure script in order to improve the purity of Libtool-based
        packages
        [4]
        . If this is undesirable, set this variable to true.
       
dontDisableStatic
      
        By default, when the configure script has
        --enable-static, the option
        --disable-static is added to the configure flags.
       
If this is undesirable, set this variable to true.
configurePlatforms
      
        By default, when cross compiling, the configure script has
        --build=... and --host=... passed.
        Packages can instead pass [ "build" "host" "target"
        ] or a subset to control exactly which platform flags are
        passed. Compilers and other tools can use this to also pass the target
        platform.
        [5]
       
preConfigure
      Hook executed at the start of the configure phase.
postConfigure
      Hook executed at the end of the configure phase.
     The build phase is responsible for actually building the package (e.g.
     compiling it). The default buildPhase simply calls
     make if a file named Makefile,
     makefile or GNUmakefile exists
     in the current directory (or the makefile is explicitly
     set); otherwise it does nothing.
    
dontBuild
      Set to true to skip the build phase.
makefile
      The file name of the Makefile.
makeFlags
      
        A list of strings passed as additional flags to
        make. These flags are also used by the default
        install and check phase. For setting make flags specific to the build
        phase, use buildFlags (see below).
makeFlags = [ "PREFIX=$(out)" ];
The flags are quoted in bash, but environment variables can be specified by using the make syntax.
makeFlagsArray
      
        A shell array containing additional arguments passed to
        make. You must use this instead of
        makeFlags if the arguments contain spaces, e.g.
preBuild = ''
  makeFlagsArray+=(CFLAGS="-O0 -g" LDFLAGS="-lfoo -lbar")
'';
        Note that shell arrays cannot be passed through environment variables,
        so you cannot set makeFlagsArray in a derivation
        attribute (because those are passed through environment variables): you
        have to define them in shell code.
       
buildFlags / buildFlagsArray
      
        A list of strings passed as additional flags to
        make. Like makeFlags and
        makeFlagsArray, but only used by the build phase.
       
preBuild
      Hook executed at the start of the build phase.
postBuild
      Hook executed at the end of the build phase.
     You can set flags for make through the
     makeFlags variable.
    
     Before and after running make, the hooks
     preBuild and postBuild are called,
     respectively.
    
     The check phase checks whether the package was built correctly by running
     its test suite. The default checkPhase calls
     make check, but only if the doCheck
     variable is enabled.
    
doCheck
      
        Controls whether the check phase is executed. By default it is skipped,
        but if doCheck is set to true, the check phase is
        usually executed. Thus you should set
doCheck = true;
        in the derivation to enable checks. The exception is cross compilation.
        Cross compiled builds never run tests, no matter how
        doCheck is set, as the newly-built program won't run
        on the platform used to build it.
       
makeFlags / makeFlagsArray / makefile
      See the build phase for details.
checkTarget
      
        The make target that runs the tests. Defaults to
        check.
       
checkFlags / checkFlagsArray
      
        A list of strings passed as additional flags to
        make. Like makeFlags and
        makeFlagsArray, but only used by the check phase.
       
checkInputs
      
        A list of dependencies used by the phase. This gets included in
        nativeBuildInputs when doCheck is
        set.
       
preCheck
      Hook executed at the start of the check phase.
postCheck
      Hook executed at the end of the check phase.
     The install phase is responsible for installing the package in the Nix
     store under out. The default
     installPhase creates the directory
     $out and calls make install.
    
makeFlags / makeFlagsArray / makefile
      See the build phase for details.
installTargets
      
        The make targets that perform the installation. Defaults to
        install. Example:
installTargets = "install-bin install-doc";
installFlags / installFlagsArray
      
        A list of strings passed as additional flags to
        make. Like makeFlags and
        makeFlagsArray, but only used by the install phase.
       
preInstall
      Hook executed at the start of the install phase.
postInstall
      Hook executed at the end of the install phase.
     The fixup phase performs some (Nix-specific) post-processing actions on
     the files installed under $out by the install phase.
     The default fixupPhase does the following:
     
        It moves the man/, doc/ and
        info/ subdirectories of $out to
        share/.
       
It strips libraries and executables of debug information.
        On Linux, it applies the patchelf command to ELF
        executables and libraries to remove unused directories from the
        RPATH in order to prevent unnecessary runtime
        dependencies.
       
        It rewrites the interpreter paths of shell scripts to paths found in
        PATH. E.g., /usr/bin/perl will be
        rewritten to
        /nix/store/some-perl/bin/perl
        found in PATH.
       
dontStrip
      If set, libraries and executables are not stripped. By default, they are.
dontStripHost
      
        Like dontStrip, but only affects the
        strip command targeting the package's host
        platform. Useful when supporting cross compilation, but otherwise feel
        free to ignore.
       
dontStripTarget
      
        Like dontStrip, but only affects the
        strip command targeting the package's target
        platform. Useful when supporting cross compilation, but otherwise feel
        free to ignore.
       
dontMoveSbin
      
        If set, files in $out/sbin are not moved to
        $out/bin. By default, they are.
       
stripAllList
      List of directories to search for libraries and executables from which all symbols should be stripped. By default, it’s empty. Stripping all symbols is risky, since it may remove not just debug symbols but also ELF information necessary for normal execution.
stripAllFlags
      
        Flags passed to the strip command applied to the
        files in the directories listed in stripAllList.
        Defaults to -s (i.e. --strip-all).
       
stripDebugList
      
        List of directories to search for libraries and executables from which
        only debugging-related symbols should be stripped. It defaults to
        lib bin sbin.
       
stripDebugFlags
      
        Flags passed to the strip command applied to the
        files in the directories listed in stripDebugList.
        Defaults to -S (i.e. --strip-debug).
       
dontPatchELF
      
        If set, the patchelf command is not used to remove
        unnecessary RPATH entries. Only applies to Linux.
       
dontPatchShebangs
      
        If set, scripts starting with #! do not have their
        interpreter paths rewritten to paths in the Nix store.
       
forceShare
      
        The list of directories that must be moved from
        $out to $out/share. Defaults
        to man doc info.
       
setupHook
      
        A package can export a setup
        hook by setting this variable. The setup hook, if defined, is
        copied to $out/nix-support/setup-hook. Environment
        variables are then substituted in it using
        substituteAll.
       
preFixup
      Hook executed at the start of the fixup phase.
postFixup
      Hook executed at the end of the fixup phase.
separateDebugInfo
      
        If set to true, the standard environment will enable
        debug information in C/C++ builds. After installation, the debug
        information will be separated from the executables and stored in the
        output named debug. (This output is enabled
        automatically; you don’t need to set the
        outputs attribute explicitly.) To be precise, the
        debug information is stored in
        debug/lib/debug/.build-id/XX/YYYY…, where
        XXYYYY… is the
        build ID of the binary — a SHA-1 hash
        of the contents of the binary. Debuggers like GDB use the build ID to
        look up the separated debug information.
       
For example, with GDB, you can add
set debug-file-directory ~/.nix-profile/lib/debug
        to ~/.gdbinit. GDB will then be able to find debug
        information installed via nix-env -i.
       
     The installCheck phase checks whether the package was installed correctly
     by running its test suite against the installed directories. The default
      installCheckPhase calls make
     installcheck.
    
doInstallCheck
      
        Controls whether the installCheck phase is executed. By default it is
        skipped, but if doInstallCheck is set to true, the
        installCheck phase is usually executed. Thus you should set
doInstallCheck = true;
        in the derivation to enable install checks. The exception is cross
        compilation. Cross compiled builds never run tests, no matter how
        doInstallCheck is set, as the newly-built program
        won't run on the platform used to build it.
       
installCheckTarget
      
        The make target that runs the install tests. Defaults to
        installcheck.
       
installCheckFlags / installCheckFlagsArray
      
        A list of strings passed as additional flags to
        make. Like makeFlags and
        makeFlagsArray, but only used by the installCheck
        phase.
       
installCheckInputs
      
        A list of dependencies used by the phase. This gets included in
        buildInputs when doInstallCheck
        is set.
       
preInstallCheck
      Hook executed at the start of the installCheck phase.
postInstallCheck
      Hook executed at the end of the installCheck phase.
     The distribution phase is intended to produce a source distribution of the
     package. The default distPhase first calls
     make dist, then it copies the resulting source tarballs
     to $out/tarballs/. This phase is only executed if the
     attribute doDist is set.
    
distTarget
      
        The make target that produces the distribution. Defaults to
        dist.
       
distFlags / distFlagsArray
      Additional flags passed to make.
tarballs
      
        The names of the source distribution files to be copied to
        $out/tarballs/. It can contain shell wildcards.
        The default is *.tar.gz.
       
dontCopyDist
      
        If set, no files are copied to $out/tarballs/.
       
preDist
      Hook executed at the start of the distribution phase.
postDist
      Hook executed at the end of the distribution phase.
The standard environment provides a number of useful functions.
makeWrapper executable wrapperfile args
     Constructs a wrapper for a program with various possible arguments. For example:
# adds `FOOBAR=baz` to `$out/bin/foo`’s environment
makeWrapper $out/bin/foo $wrapperfile --set FOOBAR baz
# prefixes the binary paths of `hello` and `git`
# Be advised that paths often should be patched in directly
# (via string replacements or in `configurePhase`).
makeWrapper $out/bin/foo $wrapperfile --prefix PATH : ${lib.makeBinPath [ hello git ]}
        There are many more kinds of arguments; they are documented in
       nixpkgs/pkgs/build-support/setup-hooks/make-wrapper.sh.
      
       wrapProgram is a convenience function you probably
       want to use most of the time.
      
substitute infile outfile subs
     
       Performs string substitution on the contents of
       infile, writing the result to
       outfile. The substitutions in
       subs are of the following form:
       
--replace s1 s2
         
           Replace every occurrence of the string s1
           by s2.
          
--subst-var varName
         
            Replace every occurrence of
            @varName@ by the
            contents of the environment variable
            varName. This is useful for generating
            files from templates, using
            @...@ in the template
            as placeholders.
           
--subst-var-by varName s
         
            Replace every occurrence of
            @varName@ by the
            string s.
          
Example:
substitute ./foo.in ./foo.out \
    --replace /usr/bin/bar $bar/bin/bar \
    --replace "a string containing spaces" "some other text" \
    --subst-var someVar
       substitute is implemented using the
       replace
       command. Unlike with the sed command, you
       don’t have to worry about escaping special characters. It
       supports performing substitutions on binary files (such as executables),
       though there you’ll probably want to make sure that the
       replacement string is as long as the replaced string.
      
substituteInPlace file subs
     
       Like substitute, but performs the substitutions in
       place on the file file.
      
substituteAll infile outfile
     
        Replaces every occurrence of
        @varName@, where
        varName is any environment variable, in
        infile, writing the result to
       outfile. For instance, if
       infile has the contents
#! @bash@/bin/sh
PATH=@coreutils@/bin
echo @foo@
       and the environment contains
       bash=/nix/store/bmwp0q28cf21...-bash-3.2-p39 and
       coreutils=/nix/store/68afga4khv0w...-coreutils-6.12,
       but does not contain the variable foo, then the
       output will be
#! /nix/store/bmwp0q28cf21...-bash-3.2-p39/bin/sh
PATH=/nix/store/68afga4khv0w...-coreutils-6.12/bin
echo @foo@
That is, no substitution is performed for undefined variables.
       Environment variables that start with an uppercase letter or an
       underscore are filtered out, to prevent global variables (like
       HOME) or private variables (like
       __ETC_PROFILE_DONE) from accidentally getting
       substituted. The variables also have to be valid bash
       “names”, as defined in the bash manpage (alphanumeric or
       _, must not start with a number).
      
substituteAllInPlace file
     
       Like substituteAll, but performs the substitutions
       in place on the file file.
      
stripHash path
     
       Strips the directory and hash part of a store path, outputting the name
       part to stdout. For example:
# prints coreutils-8.24
stripHash "/nix/store/9s9r019176g7cvn2nvcw41gsp862y6b4-coreutils-8.24"
If you wish to store the result in another variable, then the following idiom may be useful:
name="/nix/store/9s9r019176g7cvn2nvcw41gsp862y6b4-coreutils-8.24" someVar=$(stripHash $name)
wrapProgram executable makeWrapperArgs
     
       Convenience function for makeWrapper that
        automatically creates a sane wrapper file. It takes all the same
       arguments as makeWrapper, except for
       --argv0.
      
It cannot be applied multiple times, since it will overwrite the wrapper file.
Nix itself considers a build-time dependency as merely something that should previously be built and accessible at build time—packages themselves are on their own to perform any additional setup. In most cases, that is fine, and the downstream derivation can deal with its own dependencies. But for a few common tasks, that would result in almost every package doing the same sort of setup work—depending not on the package itself, but entirely on which dependencies were used.
In order to alleviate this burden, the setup hook mechanism was written, where any package can include a shell script that [by convention rather than enforcement by Nix], any downstream reverse-dependency will source as part of its build process. That allows the downstream dependency to merely specify its dependencies, and lets those dependencies effectively initialize themselves. No boilerplate mirroring the list of dependencies is needed.
The setup hook mechanism is a bit of a sledgehammer though: a powerful feature with a broad and indiscriminate area of effect. The combination of its power and implicit use may be expedient, but isn't without costs. Nix itself is unchanged, but the spirit of added dependencies being effect-free is violated even if the letter isn't. For example, if a derivation path is mentioned more than once, Nix itself doesn't care and simply makes sure the dependency derivation is already built just the same—depending is just needing something to exist, and needing is idempotent. However, a dependency specified twice will have its setup hook run twice, and that could easily change the build environment (though a well-written setup hook will therefore strive to be idempotent so this is in fact not observable). More broadly, setup hooks are anti-modular in that multiple dependencies, whether the same or different, should not interfere and yet their setup hooks may well do so.
    The most typical use of the setup hook is actually to add other hooks which
    are then run (i.e. after all the setup hooks) on each dependency. For
    example, the C compiler wrapper's setup hook feeds itself flags for each
    dependency that contains relevant libraries and headers. This is done by
    defining a bash function, and appending its name to one of
    envBuildBuildHooks, envBuildHostHooks,
    envBuildTargetHooks, envHostHostHooks,
    envHostTargetHooks, or
    envTargetTargetHooks. These 6 bash variables correspond to
    the 6 sorts of dependencies by platform (there's 12 total but we ignore the
    propagated/non-propagated axis).
   
    Packages adding a hook should not hard code a specific hook, but rather
    choose a variable relative to how they are included.
    Returning to the C compiler wrapper example, if the wrapper itself is an
    n dependency, then it only wants to accumulate flags
    from n + 1 dependencies, as only those ones match the
    compiler's target platform. The hostOffset variable is
    defined with the current dependency's host offset, and
    targetOffset with its target offset, before its setup hook
    is sourced. Additionally, since most environment hooks don't care about the
    target platform, that means the setup hook can append to the right bash
    array by doing something like
addEnvHooks "$hostOffset" myBashFunction
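As a concrete sketch (the package libfoo and its bash function are hypothetical), a dependency could ship such a hook inline via writeText; the hook is then sourced by every downstream package that lists libfoo as a dependency:
{ stdenv, writeText }:

stdenv.mkDerivation {
  name = "libfoo-1.0";
  src = ./.;
  setupHook = writeText "libfoo-setup-hook.sh" ''
    addLibfooFlags() {
      # $1 is the store path of one (host -> target) dependency
      if [ -d "$1/include/libfoo" ]; then
        export NIX_CFLAGS_COMPILE="''${NIX_CFLAGS_COMPILE-} -I$1/include/libfoo"
      fi
    }
    # register addLibfooFlags to run on each dependency at the matching offset
    addEnvHooks "$hostOffset" addLibfooFlags
  '';
}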
The existence of setup hooks has long been documented and packages inside Nixpkgs are free to use this mechanism. Other packages, however, should not rely on these mechanisms not changing between Nixpkgs versions. Because of the existing issues with this system, there's little benefit from mandating it be stable for any period of time.
    First, let’s cover some setup hooks that are part of Nixpkgs default
    stdenv. This means that they are run for every package built using
    stdenv.mkDerivation. Some of these are platform
    specific, so they may run on Linux but not Darwin or vice-versa.
    
move-docs.sh
      
        This setup hook moves any installed documentation to the
        /share subdirectory. This includes the
        man, doc and info directories. This is needed for legacy programs that
        do not know how to use the share subdirectory.
       
compress-man-pages.sh
      This setup hook compresses any man pages that have been installed. The compression is done using the gzip program. This helps to reduce the installed size of packages.
strip.sh
      This runs the strip command on installed binaries and libraries. This removes unnecessary information like debug symbols when they are not needed. This also helps to reduce the installed size of packages.
patch-shebangs.sh
      
        This setup hook patches installed scripts to use the full path to the
        shebang interpreter. A shebang interpreter is the first commented line
        of a script telling the operating system which program will run the
        script (e.g #!/bin/bash). In Nix, we want an exact
        path to that interpreter to be used. This often replaces
        /bin/sh with a path in the Nix store.
       
audit-tmpdir.sh
      This verifies that no references are left from the installed binaries to the directory used to build those binaries. This ensures that the binaries do not need things outside the Nix store. This is currently supported on Linux only.
multiple-outputs.sh
      
        This setup hook adds configure flags that tell packages to install
        files into any one of the proper outputs listed in
        outputs. This behavior can be turned off by setting
        setOutputFlags to false in the derivation
        environment. See Chapter 4, Multiple-output packages for more
        information.
       
move-sbin.sh
      This setup hook moves any binaries installed in the sbin subdirectory into bin. In addition, a link is provided from sbin to bin for compatibility.
move-lib64.sh
      This setup hook moves any libraries installed in the lib64 subdirectory into lib. In addition, a link is provided from lib64 to lib for compatibility.
set-source-date-epoch-to-latest.sh
      
        This sets SOURCE_DATE_EPOCH to the modification time
        of the most recent file.
       
The Bintools Wrapper wraps the binary utilities for a bunch of miscellaneous purposes. These are GNU Binutils when targeting Linux, and a mix of cctools and GNU binutils for Darwin. [The "Bintools" name is supposed to be a compromise between "Binutils" and "cctools" not denoting any specific implementation.] Specifically, the underlying bintools package, and a C standard library (glibc or Darwin's libSystem, just for the dynamic loader) are all fed in, and dependency finding, hardening (see below), and purity checks for each are handled by the Bintools Wrapper. Packages typically depend on CC Wrapper, which in turn (at run time) depends on the Bintools Wrapper.
        The Bintools Wrapper was only just recently split off from CC Wrapper,
        so the division of labor is still being worked out. For example, it
        shouldn't care about the C standard library, but just take a
        derivation with the dynamic loader (which happens to be the glibc on
        linux). Dependency finding however is a task both wrappers will
        continue to need to share, and probably the most important to
        understand. It is currently accomplished by collecting directories of
        host-platform dependencies (i.e. buildInputs and
        nativeBuildInputs) in environment variables. The
        Bintools Wrapper's setup hook causes any lib and
        lib64 subdirectories to be added to
        NIX_LDFLAGS. Since the CC Wrapper and the Bintools
        Wrapper use the same strategy, most of the Bintools Wrapper code is
        sparsely commented and refers to the CC Wrapper. But the CC Wrapper's
        code, by contrast, has quite lengthy comments. The Bintools Wrapper
        merely cites those, rather than repeating them, to avoid falling out of
        sync.
       
        A final task of the setup hook is defining a number of standard
        environment variables to tell build systems which executables fulfill
        which purpose. They are defined to just be the base name of the tools,
        under the assumption that the Bintools Wrapper's binaries will be on
        the path. Firstly, this helps poorly-written packages, e.g. ones that
        look for just gcc when CC isn't
        defined, yet clang is to be used. Secondly, this
        helps packages not get confused when cross-compiling, in which case
        multiple Bintools Wrappers may simultaneously be in use.
        [6]
        BUILD_- and TARGET_-prefixed versions of
        the normal environment variable are defined for additional Bintools
        Wrappers, properly disambiguating them.
       
        A problem with this final task is that the Bintools Wrapper is honest
        and defines LD as ld. Most packages,
        however, firstly use the C compiler for linking, secondly use
        LD anyways, defining it as the C compiler, and thirdly,
        only so define LD when it is undefined as a fallback.
        This triple-threat means Bintools Wrapper will break those packages, as
        LD is already defined as the actual linker which the package won't
        override yet doesn't want to use. The workaround is to define, just for
        the problematic package, LD as the C compiler. A good
        way to do this would be preConfigure = "LD=$CC".
       
The CC Wrapper wraps a C toolchain for a bunch of miscellaneous purposes. Specifically, a C compiler (GCC or Clang), wrapped binary tools, and a C standard library (glibc or Darwin's libSystem, just for the dynamic loader) are all fed in, and dependency finding, hardening (see below), and purity checks for each are handled by the CC Wrapper. Packages typically depend on the CC Wrapper, which in turn (at run-time) depends on the Bintools Wrapper.
        Dependency finding is undoubtedly the main task of the CC Wrapper. This
        works just like the Bintools Wrapper, except that any
        include subdirectory of any relevant dependency is
        added to NIX_CFLAGS_COMPILE. The setup hook itself
        contains some lengthy comments describing the exact convoluted
        mechanism by which this is accomplished.
       
        Similarly, the CC Wrapper follows the Bintools Wrapper in defining
        standard environment variables with the names of the tools it wraps,
        for the same reasons described above. Importantly, while it includes a
        cc symlink to the C compiler for portability, the
        CC will be defined using the compiler's "real name"
        (i.e. gcc or clang). This helps
        lousy build systems that inspect the name of the compiler rather
        than run it.
       
Here are some more packages that provide a setup hook. Since the list of hooks is extensible, this is probably not an exhaustive list. Then again, since the mechanism is only to be used as a last resort, it might well cover most uses.
        Adds the lib/site_perl subdirectory of each build
        input to the PERL5LIB environment variable. For
        instance, if buildInputs contains Perl, then the
        lib/site_perl subdirectory of each input is added
        to the PERL5LIB environment variable.
       
        Adds the lib/${python.libPrefix}/site-packages
        subdirectory of each build input to the PYTHONPATH
        environment variable.
       
        Adds the lib/pkgconfig and
        share/pkgconfig subdirectories of each build input
        to the PKG_CONFIG_PATH environment variable.
       
        Adds the share/aclocal subdirectory of each build
        input to the ACLOCAL_PATH environment variable.
       
        The autoreconfHook derivation adds
        autoreconfPhase, which runs autoreconf, libtoolize
        and automake, essentially preparing the configure script in
        autotools-based builds. Most autotools-based packages come with the
        configure script pre-generated, but this hook is necessary for a few
        packages and when you need to patch the package’s configure
        scripts.
       
        Adds every file named catalog.xml found under the
        xml/dtd and xml/xsl
        subdirectories of each build input to the
        XML_CATALOG_FILES environment variable.
       
        Adds the share/texmf-nix subdirectory of each
        build input to the TEXINPUTS environment variable.
       
        Sets the QTDIR environment variable to Qt’s path.
       
        Exports GDK_PIXBUF_MODULE_FILE environment variable to
        the builder. Add librsvg package to buildInputs to
        get svg support.
       
Creates a temporary package database and registers every Haskell build input in it (TODO: how?).
        Adds the GStreamer plugins subdirectory of each build input to the
        GST_PLUGIN_SYSTEM_PATH_1_0 or
        GST_PLUGIN_SYSTEM_PATH environment variable.
       
        This is a special setup hook which helps in packaging proprietary
        software in that it automatically tries to find missing shared library
        dependencies of ELF files based on the given
        buildInputs and
        nativeBuildInputs.
       
        You can also specify a runtimeDependencies environment
        variable which lists dependencies that are unconditionally added to all
        executables.
       
This is useful for programs that use dlopen(3) to load libraries at runtime.
        In certain situations you may want to run the main command
        (autoPatchelf) of the setup hook on a file or a set
        of directories instead of unconditionally patching all outputs. This
        can be done by setting the dontAutoPatchelf environment
        variable to a non-empty value.
       
        The autoPatchelf command also recognizes a
        --no-recurse command line flag,
        which prevents it from recursing into subdirectories.
       
        This hook will make a build pause instead of stopping when a failure
        happens. It prevents nix from cleaning up the build environment
        immediately and allows the user to attach to a build environment using
        the cntr command. Upon build error it will print
        instructions on how to use cntr. Installing cntr and
        running the command will provide shell access to the build sandbox of
        the failed build. At /var/lib/cntr the sandboxed
        filesystem is mounted. All commands and files of the system are still
        accessible within the shell. To execute commands from the sandbox use
        the cntr exec subcommand. Note that cntr also needs
        to be executed on the machine that is doing the build, which might not
        be the case when remote builders are enabled. cntr
        is only supported on Linux-based platforms. To use it first add
        cntr to your
        environment.systemPackages on NixOS or alternatively
        to the root user on non-NixOS systems. Then in the package that is
        supposed to be inspected, add breakpointHook to
        nativeBuildInputs.
         nativeBuildInputs = [ breakpointHook ];
       
        When a build failure happens there will be an instruction printed that
        shows how to attach with cntr to the build sandbox.
       
        A few libraries automatically add their library to
        NIX_LDFLAGS, making their symbols automatically available to the
        linker. This includes libiconv and libintl (gettext). This is done to
        provide compatibility between GNU/Linux, where libiconv and libintl are
        bundled in, and other systems where that might not be the case.
        Sometimes, this behavior is not desired. To disable this behavior, set
        dontAddExtraLibs.
       
Overrides the default configure phase to run the CMake command. By default, we use the Make generator of CMake. In addition, dependencies are added automatically to CMAKE_PREFIX_PATH so that packages are correctly detected by CMake. Some additional flags are passed in to give similar behavior to configure-based packages. You can disable this hook’s behavior by setting configurePhase to a custom value, or by setting dontUseCmakeConfigure. cmakeFlags controls flags passed only to CMake. By default, parallel building is enabled as CMake supports parallel building almost everywhere. When Ninja is also in use, CMake will detect that and use the ninja generator.
Overrides the build and install phases to run the “xcbuild” command. This hook is needed when a project only comes with build files for the Xcode build system. You can disable this behavior by setting buildPhase and configurePhase to a custom value. xcbuildFlags controls flags passed only to xcbuild.
Overrides the configure phase to run meson to generate Ninja files. You can disable this behavior by setting configurePhase to a custom value, or by setting dontUseMesonConfigure. To run these files, you should accompany meson with ninja. mesonFlags controls only the flags passed to meson. By default, parallel building is enabled as Meson supports parallel building almost everywhere.
Overrides the build, install, and check phase to run ninja instead of make. You can disable this behavior with the dontUseNinjaBuild, dontUseNinjaInstall, and dontUseNinjaCheck, respectively. Parallel building is enabled by default in Ninja.
This setup hook will allow you to unzip .zip files specified in $src. There are many similar packages like unrar, undmg, etc.
Overrides the configure, build, and install phases. This will run the "waf" script used by many projects. If waf doesn’t exist, it will copy the version of waf available in Nixpkgs. wafFlags can be used to pass flags to the waf script.
Overrides the build, install, and check phases. This uses the scons build system as a replacement for make. scons does not provide a configure phase, so everything is managed at build and install time.
[measures taken to prevent dependencies on packages outside the store, and what you can do to prevent them]
    GCC doesn't search in locations such as /usr/include.
    In fact, attempts to add such directories through the -I
    flag are filtered out. Likewise, the linker (from GNU binutils) doesn't
    search in standard locations such as /usr/lib.
    Programs built on Linux are linked against a GNU C Library that likewise
    doesn't search in the default system locations.
   
    There are flags available to harden packages at compile or link-time. These
    can be toggled using the stdenv.mkDerivation parameters
    hardeningDisable and hardeningEnable.
   
    Both parameters take a list of flags as strings. The special
    "all" flag can be passed to
    hardeningDisable to turn off all hardening. These flags
    can also be used as environment variables for testing or development
    purposes.
   
    The following flags are enabled by default and might require disabling with
    hardeningDisable if the program to package is
    incompatible.
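For example, a package that trips over one of these flags can disable just that flag (a minimal sketch; the package name is hypothetical):
stdenv.mkDerivation {
  name = "libfoo-1.0";
  src = ./.;
  # disable only what the build actually rejects ...
  hardeningDisable = [ "format" ];
  # ... or, as a last resort, turn hardening off entirely:
  # hardeningDisable = [ "all" ];
}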
   
format
     
       Adds the -Wformat -Wformat-security
       -Werror=format-security compiler options. At present, this
       warns about calls to printf and
       scanf functions where the format string is not a
       string literal and there are no format arguments, as in
       printf(foo);. This may be a security hole if the
       format string came from untrusted input and contains
       %n.
      
This needs to be turned off or fixed for errors similar to:
/tmp/nix-build-zynaddsubfx-2.5.2.drv-0/zynaddsubfx-2.5.2/src/UI/guimain.cpp:571:28: error: format not a string literal and no format arguments [-Werror=format-security]
         printf(help_message);
                            ^
cc1plus: some warnings being treated as errors
    stackprotector
     
       Adds the -fstack-protector-strong --param
       ssp-buffer-size=4 compiler options. This adds safety checks
       against stack overwrites rendering many potential code injection attacks
       into aborting situations. In the best case this turns code injection
       vulnerabilities into denial of service or into non-issues (depending on
       the application).
      
This needs to be turned off or fixed for errors similar to:
bin/blib.a(bios_console.o): In function `bios_handle_cup':
/tmp/nix-build-ipxe-20141124-5cbdc41.drv-0/ipxe-5cbdc41/src/arch/i386/firmware/pcbios/bios_console.c:86: undefined reference to `__stack_chk_fail'
    fortify
     
       Adds the -O2 -D_FORTIFY_SOURCE=2 compiler options.
       During code generation the compiler knows a great deal of information
       about buffer sizes (where possible), and attempts to replace insecure
       unlimited length buffer function calls with length-limited ones. This is
       especially useful for old, crufty code. Additionally, format strings in
       writable memory that contain '%n' are blocked. If an application depends
       on such a format string, it will need to be worked around.
      
       Additionally, some warnings are enabled which might trigger build
       failures if compiler warnings are treated as errors in the package
       build. In this case, set NIX_CFLAGS_COMPILE to
       -Wno-error=warning-type.
      
This needs to be turned off or fixed for errors similar to:
malloc.c:404:15: error: return type is an incomplete type
malloc.c:410:19: error: storage size of 'ms' isn't known
    
strdup.h:22:1: error: expected identifier or '(' before '__extension__'
    
strsep.c:65:23: error: register name not specified for 'delim'
    
installwatch.c:3751:5: error: conflicting types for '__open_2'
    
fcntl2.h:50:4: error: call to '__open_missing_mode' declared with attribute error: open with O_CREAT or O_TMPFILE in second argument needs 3 arguments
    pic
     
       Adds the -fPIC compiler option. This option adds
       support for position independent code in shared libraries and thus
       makes ASLR possible.
      
Most notably, the Linux kernel, kernel modules and other code not running in an operating system environment like boot loaders won't build with PIC enabled. The compiler will in most cases complain that PIC is not supported for a specific build.
This needs to be turned off or fixed for assembler errors similar to:
ccbLfRgg.s: Assembler messages:
ccbLfRgg.s:33: Error: missing or invalid displacement expression `private_key_len@GOTOFF'
    strictoverflow
     
       Signed integer overflow is undefined behaviour according to the C
       standard. If it happens, it is an error in the program as it should
       check for overflow before it can happen, not afterwards. GCC provides
       built-in functions to perform arithmetic with overflow checking, which
       are correct and faster than any custom implementation. As a workaround,
       the option -fno-strict-overflow makes gcc behave as if
       signed integer overflows were defined.
      
This flag should not trigger any build or runtime errors.
relro
     
       Adds the -z relro linker option. During program load,
       several ELF memory sections need to be written to by the linker, but can
       be turned read-only before turning over control to the program. This
       prevents some GOT (and .dtors) overwrite attacks, but at least the part
       of the GOT used by the dynamic linker (.got.plt) is still vulnerable.
      
       This flag can break dynamic shared object loading. For instance, the
       module systems of Xorg and OpenCV are incompatible with this flag. In
       almost all cases the bindnow flag must also be
       disabled and incompatible programs typically fail with similar errors at
       runtime.
      
bindnow
     
       Adds the -z bindnow linker option. During program load,
       all dynamic symbols are resolved, allowing for the complete GOT to be
       marked read-only (due to relro). This prevents GOT
       overwrite attacks. For very large applications, this can incur some
       performance loss during initial load while symbols are resolved, but
       this shouldn't be an issue for daemons.
      
This flag can break dynamic shared object loading. For instance, the module systems of Xorg and PHP are incompatible with this flag. Programs incompatible with this flag often fail at runtime due to missing symbols, like:
intel_drv.so: undefined symbol: vgaHWFreeHWRec
    
    The following flags are disabled by default and should be enabled with
    hardeningEnable for packages that take untrusted input
    like network services.
   
pie
     
       Adds the -fPIE compiler and -pie
       linker options. Position Independent Executables are needed to take
       advantage of Address Space Layout Randomization, supported by modern
       kernel versions. While ASLR can already be enforced for data areas in
       the stack and heap (brk and mmap), the code areas must be compiled as
       position-independent. Shared libraries already do this with the
       pic flag, so they gain ASLR automatically, but binary
        .text regions need to be built with pie to gain ASLR.
       When this happens, ROP attacks are much harder since there are no static
       locations to bounce off of during a memory corruption attack.
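For instance, a network-facing service could opt in as follows (a minimal sketch; the package name is hypothetical):
stdenv.mkDerivation {
  name = "foo-daemon-1.0";
  src = ./.;
  hardeningEnable = [ "pie" ];
}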
      
For more in-depth information on these hardening flags and hardening in general, refer to the Debian Wiki, Ubuntu Wiki, Gentoo Wiki, and the Arch Wiki.
[1] The build platform is ignored because it is a mere implementation detail of the package satisfying the dependency: As a general programming principle, dependencies are always specified as interfaces, not concrete implementation.
[2] 
      Currently, this means for native builds all dependencies are put on the
      PATH. But in the future that may not be the case for sake
      of matching cross: the platforms would be assumed to be unique for native
      and cross builds alike, so only the depsBuild* and
      nativeBuildInputs would be added to the
      PATH.
     
[3] 
      The findInputs function, currently residing in
      pkgs/stdenv/generic/setup.sh, implements the
      propagation logic.
     
[4] 
          It clears the
          sys_lib_*search_path
          variables in the Libtool script to prevent Libtool from using
          libraries in /usr/lib and such.
         
[5] Eventually these will be passed building natively as well, to improve determinism: build-time guessing, as is done today, is a risk of impurity.
[6] Each wrapper targets a single platform, so if binaries for multiple platforms are needed, the underlying binaries must be wrapped multiple times. As this is a property of the wrapper itself, the multiple wrappings are needed whether or not the same underlying binaries can target multiple platforms.
The Nix language allows a derivation to produce multiple outputs, which is similar to what is utilized by other Linux distribution packaging systems. The outputs reside in separate Nix store paths, so they can be mostly handled independently of each other, including passing to build inputs, garbage collection or binary substitution. The exception is that building from source always produces all the outputs.
The main motivation is to save disk space by reducing runtime closure sizes; consequently also sizes of substituted binaries get reduced. Splitting can be used to have more granular runtime dependencies, for example the typical reduction is to split away development-only files, as those are typically not needed during runtime. As a result, closure sizes of many packages can get reduced to a half or even much less.
The reduction effects could be instead achieved by building the parts in completely separate derivations. That would often additionally reduce build-time closures, but it tends to be much harder to write such derivations, as build systems typically assume all parts are being built at once. This compromise approach of single source package producing multiple binary packages is also utilized often by rpm and deb.
    When installing a package via systemPackages or
    nix-env you have several options:
   
      You can install particular outputs explicitly, as each is available in
      the Nix language as an attribute of the package. The
      outputs attribute contains a list of output names.
     
      You can let it use the default outputs. These are handled by
      meta.outputsToInstall attribute that contains a list
      of output names.
     
TODO: more about tweaking the attribute, etc.
      NixOS provides configuration option
      environment.extraOutputsToInstall that allows adding
      extra outputs of environment.systemPackages atop the
      default ones. It's mainly meant for documentation and debug symbols, and
      it's also modified by specific options.
     
       At this moment there is no similar configurability for packages
       installed by nix-env. You can still use the approach from
       Section 6.5, “Modify packages via packageOverrides” to override
       meta.outputsToInstall attributes, but that's a rather
       inconvenient way.
      
    In the Nix language the individual outputs can be reached explicitly as
    attributes, e.g. coreutils.info, but the typical case is
    just using packages as build inputs.
   
    When a multiple-output derivation gets into a build input of another
    derivation, the dev output is added if it exists,
    otherwise the first output is added. In addition to that,
    propagatedBuildOutputs of that package, which by default
    contain $outputBin and $outputLib, are
    also added. (See Section 4.4.2, “File type groups”.)
   
Here you find how to write a derivation that produces multiple outputs.
    In nixpkgs there is a framework supporting multiple-output derivations. It
    tries to cover most cases by default behavior. You can find the source
    separated in
    <nixpkgs/pkgs/build-support/setup-hooks/multiple-outputs.sh>;
    it's relatively well-readable. The whole machinery is triggered by defining
    the outputs attribute to contain the list of desired
    output names (strings).
   
outputs = [ "bin" "dev" "out" "doc" ];
    Often such a single line is enough. For each output an equally named
    environment variable is passed to the builder and contains the path in nix
    store for that output. Typically you also want to have the main
     out output, as it catches any files that didn't get
     moved elsewhere.
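A slightly fuller sketch (the package name is hypothetical), showing that each output name is also available to the builder as an environment variable holding that output's store path:
stdenv.mkDerivation {
  name = "libfoo-1.0";
  src = ./.;
  outputs = [ "bin" "dev" "out" "doc" ];
  postInstall = ''
    # $bin, $dev, $out and $doc each contain the store path of that output
    mkdir -p "$doc/share/doc/libfoo"
    cp README "$doc/share/doc/libfoo/"
  '';
}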
   
     There is special handling of the debug output,
     described at separateDebugInfo.
    
     A commonly adopted convention in nixpkgs is that
     executables provided by the package are contained within its first output.
     This convention allows the dependent packages to reference the executables
     provided by packages in a uniform manner. For instance, provided with the
     knowledge that the perl package contains a
     perl executable it can be referenced as
     ${pkgs.perl}/bin/perl within a Nix derivation that
     needs to execute a Perl script.
    
     The glibc package is a deliberate single exception to
     the “binaries first” convention. The glibc
     has libs as its first output allowing the libraries
     provided by glibc to be referenced directly (e.g.
     ${stdenv.glibc}/lib/ld-linux-x86-64.so.2). The
     executables provided by glibc can be accessed via its
     bin attribute (e.g.
     ${stdenv.glibc.bin}/bin/ldd).
    
     The reason for why glibc deviates from the convention
     is because referencing a library provided by glibc is a
     very common operation among Nix packages. For instance, third-party
     executables packaged by Nix are typically patched and relinked with the
     relevant version of glibc libraries from Nix packages
     (please see the documentation on
     patchelf for
     more details).
    
     The support code currently recognizes some particular kinds of outputs and
     either instructs the build system of the package to put files into their
     desired outputs or it moves the files during the fixup phase. Each group
     of file types has an outputFoo variable specifying the
     output name where they should go. If that variable isn't defined by the
     derivation writer, it is guessed – a default output name is defined,
     falling back to other possibilities if the output isn't defined.
    
 $outputDev
      
        is for development-only files. These include C(++) headers, pkg-config,
        cmake and aclocal files. They go to dev or
        out by default.
       
 $outputBin
      
        is meant for user-facing binaries, typically residing in bin/. They go
        to bin or out by default.
       
 $outputLib
      
        is meant for libraries, typically residing in lib/
        and libexec/. They go to lib or
        out by default.
       
 $outputDoc
      
        is for user documentation, typically residing in
        share/doc/. It goes to doc or
        out by default.
       
 $outputDevdoc
      
        is for developer documentation. Currently we count
        gtk-doc and devhelp books in there. It goes to
        devdoc or is removed (!) by default. This is because
        e.g. gtk-doc tends to be rather large and completely unused by nixpkgs
        users.
       
 $outputMan
      
        is for man pages (except for section 3). They go to
        man or $outputBin by default.
       
 $outputDevman
      
        is for section 3 man pages. They go to devman or
        $outputMan by default.
       
 $outputInfo
      
        is for info pages. They go to info or
        $outputBin by default.
       
       Some configure scripts don't like some of the parameters passed by
       default by the framework, e.g. --docdir=/foo/bar. You
       can disable this by setting setOutputFlags = false;.
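For example, a derivation can redirect one of these file type groups to a different output than the guessed default (a sketch):
outputs = [ "bin" "dev" "out" "doc" ];
outputInfo = "doc";   # put info pages into the doc output instead of the default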
      
The outputs of a single derivation can retain references to each other, but note that circular references are not allowed. (And each strongly-connected component would act as a single output anyway.)
        Most split packages contain their core functionality in libraries.
       These libraries tend to refer to various kind of data that typically
       gets into out, e.g. locale strings, so there is often
       no advantage in separating the libraries into lib, as
       keeping them in out is easier.
      
Some packages have hidden assumptions on install paths, which complicates splitting.
"Cross-compilation" means compiling a program on one machine for another type of machine. For example, a typical use of cross-compilation is to compile programs for embedded devices. These devices often don't have the computing power and memory to compile their own programs. One might think that cross-compilation is a fairly niche concern. However, there are significant advantages to rigorously distinguishing between build-time and run-time environments! This applies even when one is developing and deploying on the same machine. Nixpkgs is increasingly adopting the opinion that packages should be written with cross-compilation in mind, and nixpkgs should evaluate in a similar way (by minimizing cross-compilation-specific special cases) whether or not one is cross-compiling.
This chapter will be organized in three parts. First, it will describe the basics of how to package software in a way that supports cross-compilation. Second, it will describe how to use Nixpkgs when cross-compiling. Third, it will describe the internal infrastructure supporting cross-compilation.
Nixpkgs follows the conventions of GNU autoconf. We distinguish between 3 types of platforms when building a derivation: build, host, and target. In summary, build is the platform on which a package is being built, host is the platform on which it will run. The third attribute, target, is relevant only for certain specific compilers and build tools.
     In Nixpkgs, these three platforms are defined as attribute sets under the
     names buildPlatform, hostPlatform,
     and targetPlatform. They are always defined as
     attributes in the standard environment. That means one can access them
     like:
{ stdenv, fooDep, barDep, ... }: ...stdenv.buildPlatform...
buildPlatform
      The "build platform" is the platform on which a package is built. Once someone has a built package, or pre-built binary package, the build platform should not matter and can be ignored.
hostPlatform
      The "host platform" is the platform on which a package will be run. This is the simplest platform to understand, but also the one with the worst name.
targetPlatform
      The "target platform" attribute is, unlike the other two attributes, not actually fundamental to the process of building software. Instead, it is only relevant for compatibility with building certain specific compilers and build tools. It can be safely ignored for all other packages.
The build process of certain compilers is written in such a way that the compiler resulting from a single build can itself only produce binaries for a single platform. The task of specifying this single "target platform" is thus pushed to build time of the compiler. The root cause of this that the compiler (which will be run on the host) and the standard library/runtime (which will be run on the target) are built by a single build process.
There is no fundamental need to think about a single target ahead of time like this. If the tool supports modular or pluggable backends, both the need to specify the target at build time and the constraint of having only a single target disappear. An example of such a tool is LLVM.
Although the existence of a "target platfom" is arguably a historical mistake, it is a common one: examples of tools that suffer from it are GCC, Binutils, GHC and Autoconf. Nixpkgs tries to avoid sharing in the mistake where possible. Still, because the concept of a target platform is so ingrained, it is best to support it as is.
     The exact schema these fields follow is a bit ill-defined due to a long
     and convoluted evolution, but this is slowly being cleaned up. You can see
     examples of ones used in practice in
     lib.systems.examples; note how they are not all very
     consistent. For now, here are a few fields you can count on them containing:
    
system
      
        This is a two-component shorthand for the platform. Examples of this
        would be "x86_64-darwin" and "i686-linux"; see
        lib.systems.doubles for more. The first component
        corresponds to the CPU architecture of the platform and the second to
        the operating system of the platform ([cpu]-[os]).
        This format has built-in support in Nix, such as the
        builtins.currentSystem impure string.
       
config
      
        This is a 3- or 4- component shorthand for the platform. Examples of
        this would be x86_64-unknown-linux-gnu and
        aarch64-apple-darwin14. This is a standard format
        called the "LLVM target triple", as they are pioneered by LLVM. In the
        4-part form, this corresponds to
        [cpu]-[vendor]-[os]-[abi]. This format is strictly
        more informative than the "Nix host double", as the previous format
        could analogously be termed. This needs a better name than
        config!
       
parsed
      
        This is a Nix representation of a parsed LLVM target triple with
        white-listed components. This can be specified directly, or actually
        parsed from the config. See
        lib.systems.parse for the exact representation.
       
libc
      
        This is a string identifying the standard C library used. Valid
        identifiers include "glibc" for GNU libc, "libSystem" for Darwin's
        Libsystem, and "uclibc" for µClibc. It should probably be
        refactored to use the module system, like parse.
       
is*
      
        These predicates are defined in lib.systems.inspect,
        and slapped onto every platform. They are superior to the ones in
        stdenv as they force the user to be explicit about
        which platform they are inspecting (see the example after this list).
        Please use these instead of those.
       
platform
      
        This is, quite frankly, a dumping ground of ad-hoc settings (it's an
        attribute set). See lib.systems.platforms for
        examples—there's hopefully one in there that will work verbatim
        for each platform that is working. Please help us triage these flags
        and give them better homes!
       
In this section we explore the relationship between both runtime and build-time dependencies and the 3 Autoconf platforms.
A runtime dependency between 2 packages implies that between them both the host and target platforms match. This is directly implied by the meaning of "host platform" and "runtime dependency": The package dependency exists while both packages are running on a single host platform.
A build time dependency, however, implies a shift in platforms between the depending package and the depended-on package. The meaning of a build time dependency is that to build the depending package we need to be able to run the depended-on's package. The depending package's build platform is therefore equal to the depended-on package's host platform. Analogously, the depending package's host platform is equal to the depended-on package's target platform.
In this manner, given the 3 platforms for one package, we can determine the three platforms for all its transitive dependencies. This is the most important guiding principle behind cross-compilation with Nixpkgs, and will be called the sliding window principle.
     Some examples will make this clearer. If a package is being built with a
     (build, host, target) platform triple of (foo,
     bar, bar), then its build-time dependencies would have a triple
     of (foo, foo, bar), and those
     packages' build-time dependencies would have a triple of
     (foo, foo, foo). In other words, it should take two
     "rounds" of following build-time dependency edges before one reaches a
     fixed point where, by the sliding window principle, the platform triple no
     longer changes. Indeed, this happens with cross-compilation, where only
     rounds of native dependencies starting with the second necessarily
     coincide with native packages.
    
The depending package's target platform is unconstrained by the sliding window principle, which makes sense in that one can in principle build cross compilers targeting arbitrary platforms.
     How does this work in practice? Nixpkgs is now structured so that
     build-time dependencies are taken from buildPackages,
     whereas run-time dependencies are taken from the top level attribute set.
     For example, buildPackages.gcc should be used at
     build-time, while gcc should be used at run-time. Now,
     for most of Nixpkgs's history, there was no
     buildPackages, and most packages have not been
     refactored to use it explicitly. Instead, one can use the six
     (gasp) attributes used for specifying dependencies as
     documented in Section 3.3, “Specifying dependencies”. We "splice"
     together the run-time and build-time package sets with
     callPackage, and then mkDerivation
     for each of four attributes pulls the right derivation out. This splicing
     can be skipped when not cross-compiling as the package sets are the same,
     but is a bit slow for cross-compiling. Because of this, a
     best-of-both-worlds solution is in the works with no splicing or explicit
     access of buildPackages needed. For now, feel free to
     use either method.
    
      There is also a "backlink" targetPackages, yielding a
      package set whose buildPackages is the current package
      set. This is a hack, though, to accommodate compilers with lousy build
      systems. Please do not use this unless you are absolutely sure you are
      packaging such a compiler and there is no other way.
     
Some frequently encountered problems when packaging for cross-compilation should be answered here. Ideally, the information above is exhaustive, so this section cannot provide any new information, but it is ludicrous and cruel to expect everyone to spend effort working through the interaction of many features just to figure out the same answer to the same common problem. Feel free to add to this list!
    Nixpkgs can be instantiated with localSystem alone, in
    which case there is no cross-compiling and everything is built by and for
    that system, or also with crossSystem, in which case
    packages run on the latter, but all building happens on the former. Both
    parameters take the same schema as the 3 (build, host, and target)
    platforms defined in the previous section. As mentioned above,
    lib.systems.examples has some platforms which are used
    as arguments for these parameters in practice. You can use them
    programmatically, or on the command line:
nix-build <nixpkgs> --arg crossSystem '(import <nixpkgs/lib>).systems.examples.fooBarBaz' -A whatever
Eventually we would like to make these platform examples an unnecessary convenience so that
nix-build <nixpkgs> --arg crossSystem '{ config = "<cpu>-<vendor>-<os>-<abi>"; }' -A whatever
works in the vast majority of cases. The problem today is dependencies on other sorts of configuration which aren't given proper defaults. We rely on the examples to crudely set those configuration parameters in some vaguely sane manner on the user's behalf. Issue #34274 tracks this inconvenience along with its root cause in crufty configuration options.
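Programmatically, the same thing can be done when importing Nixpkgs (a sketch, reusing the fooBarBaz placeholder from the command-line example above):
import <nixpkgs> {
  crossSystem = (import <nixpkgs/lib>).systems.examples.fooBarBaz;
}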
    While one is free to pass both parameters in full, there's a lot of logic
    to fill in missing fields. As discussed in the previous section, only one
    of system, config, and
    parsed is needed to infer the other two. Additionally,
    libc will be inferred from parsed.
    Finally, localSystem.system is also
    impurely inferred based on the platform on which evaluation
    occurs. This means it is often not necessary to pass
    localSystem at all, as in the command-line example in
    the previous paragraph.
   
     Many sources (manual, wiki, etc) probably mention passing
     system, platform, along with the
     optional crossSystem to nixpkgs: import
     <nixpkgs> { system = ..; platform = ..; crossSystem = ..;
     }. Passing those two instead of localSystem
     is still supported for compatibility, but is discouraged. Indeed, much of
     the inference we do for these parameters is motivated by compatibility as
     much as convenience.
    
    One would think that localSystem and
    crossSystem overlap horribly with the three
    *Platforms (buildPlatform,
    hostPlatform, and targetPlatform; see
    stage.nix or the manual). Actually, those identifiers
    are purposefully not used here to draw a subtle but important distinction:
    While the granularity of having 3 platforms is necessary to properly
    *build* packages, it is overkill for specifying the user's *intent* when
    making a build plan or package set. A simple "build vs deploy" dichotomy is
    adequate: the sliding window principle described in the previous section
    shows how to interpolate between these two "end points" to get the 3
    platform triple for each bootstrapping stage. That means for any package in
    a given package set, even those not bound on the top level but only reachable
    via dependencies or buildPackages, the three platforms
    will be defined as one of localSystem or
    crossSystem, with the former replacing the latter as one
    traverses build-time dependencies. A last simple difference is that
    crossSystem should be null when one doesn't want to
    cross-compile, while the *Platforms are always non-null.
    localSystem is always non-null.
   
To be written.
     If one explores Nixpkgs, they will see derivations with names like
     gccCross. Such *Cross derivations are
     a holdover from before we properly distinguished between the host and
     target platforms—the derivation with "Cross" in the name covered
     the build = host != target case, while the other
     covered the host = target, with build platform the same
     or not based on whether one was using its .nativeDrv or
     .crossDrv. This ugliness will disappear soon.
    
Nix comes with certain defaults about what packages can and cannot be installed, based on a package's metadata. By default, Nix will prevent installation if any of the following criteria are true:
     The package is thought to be broken, and has had its
     meta.broken set to true.
    
     The package isn't intended to run on the given system, as none of its
     meta.platforms match the given system.
    
     The package's meta.license is set to a license which is
     considered to be unfree.
    
     The package has known security vulnerabilities but has not or can not be
     updated for some reason, and a list of issues has been entered in to the
     package's meta.knownVulnerabilities.
    
   Note that all this is checked during evaluation already, and the check
   includes any package that is evaluated. In particular, all build-time
   dependencies are checked. nix-env -qa will (attempt to)
   hide any packages that would be refused.
  
Each of these criteria can be altered in the nixpkgs configuration.
   The nixpkgs configuration for a NixOS system is set in the
   configuration.nix, as in the following example:
{
  nixpkgs.config = {
    allowUnfree = true;
  };
}
However, this does not allow unfree software for individual users. Their configurations are managed separately.
   A user's nixpkgs configuration is stored in a user-specific configuration
   file located at ~/.config/nixpkgs/config.nix. For
   example:
{
  allowUnfree = true;
}
Note that we are not able to test or build unfree software on Hydra due to policy. Most unfree licenses prohibit us from either executing or distributing the software.
There are two ways to try compiling a package which has been marked as broken.
For allowing the build of a broken package once, you can use an environment variable for a single invocation of the nix tools:
$ export NIXPKGS_ALLOW_BROKEN=1
      For permanently allowing broken packages to be built, you may add
      allowBroken = true; to your user's configuration file,
      like this:
{
  allowBroken = true;
}
There are also two ways to try compiling a package which has been marked as unsupported for the given system.
For allowing the build of an unsupported package once, you can use an environment variable for a single invocation of the nix tools:
$ export NIXPKGS_ALLOW_UNSUPPORTED_SYSTEM=1
      For permanently allowing unsupported packages to be built, you may add
      allowUnsupportedSystem = true; to your user's
      configuration file, like this:
{
  allowUnsupportedSystem = true;
}
    The difference between a package being unsupported on some system and being
    broken is admittedly a bit fuzzy. If a program ought
    to work on a certain platform, but doesn't, the platform should be included
    in meta.platforms, but marked as broken with e.g.
    meta.broken = !hostPlatform.isWindows. Of course, this
    begs the question of what "ought" means exactly. That is left to the
    package maintainer.
   
There are several ways to tweak how Nix handles a package which has been marked as unfree.
To temporarily allow all unfree packages, you can use an environment variable for a single invocation of the nix tools:
$ export NIXPKGS_ALLOW_UNFREE=1
      It is possible to permanently allow individual unfree packages, while
      still blocking unfree packages by default using the
      allowUnfreePredicate configuration option in the user
      configuration file.
     
This option is a function which accepts a package as a parameter, and returns a boolean. The following example configuration accepts a package and always returns false:
{
  allowUnfreePredicate = (pkg: false);
}
For a more useful example, try the following. This configuration only allows unfree packages whose names are flashplayer or vscode:
{
  allowUnfreePredicate = (pkg: builtins.elem
    (builtins.parseDrvName pkg.name).name [
      "flashplayer"
      "vscode"
    ]);
}
      It is also possible to whitelist and blacklist licenses that are
      specifically acceptable or not acceptable, using
      whitelistedLicenses and
      blacklistedLicenses, respectively.
     
      The following example configuration whitelists the licenses
      amd and wtfpl:
{
  whitelistedLicenses = with stdenv.lib.licenses; [ amd wtfpl ];
}
      The following example configuration blacklists the
      gpl3 and agpl3 licenses:
{
  blacklistedLicenses = with stdenv.lib.licenses; [ agpl3 gpl3 ];
}
    A complete list of licenses can be found in the file
    lib/licenses.nix of the nixpkgs tree.
   
There are several ways to tweak how Nix handles a package which has been marked as insecure.
To temporarily allow all insecure packages, you can use an environment variable for a single invocation of the nix tools:
$ export NIXPKGS_ALLOW_INSECURE=1
      It is possible to permanently allow individual insecure packages, while
      still blocking other insecure packages by default using the
      permittedInsecurePackages configuration option in the
      user configuration file.
     
      The following example configuration permits the installation of the
      hypothetically insecure package hello, version
      1.2.3:
{
  permittedInsecurePackages = [
    "hello-1.2.3"
  ];
}
      It is also possible to create a custom policy around which insecure
      packages to allow and deny, by overriding the
      allowInsecurePredicate configuration option.
     
      The allowInsecurePredicate option is a function which
      accepts a package and returns a boolean, much like
      allowUnfreePredicate.
     
The following configuration example only allows insecure packages with very short names:
{
  allowInsecurePredicate = (pkg: (builtins.stringLength (builtins.parseDrvName pkg.name).name) <= 5);
}
      Note that permittedInsecurePackages is only checked if
      allowInsecurePredicate is not specified.
     
    You can define a function called packageOverrides in
    your local ~/.config/nixpkgs/config.nix to override
    Nix packages. It must be a function that takes pkgs as an argument and
    returns a modified set of packages.
{
  packageOverrides = pkgs: rec {
    foo = pkgs.foo.override { ... };
  };
}
     Using packageOverrides, it is possible to manage
     packages declaratively. This means that we can list all of our desired
     packages within a declarative Nix expression. For example, to have
     aspell, bc,
     ffmpeg, coreutils,
     gdb, nixUnstable,
     emscripten, jq,
     nox, and silver-searcher, we could
     use the following in ~/.config/nixpkgs/config.nix:
    
{
  packageOverrides = pkgs: with pkgs; {
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [
        aspell
        bc
        coreutils
        gdb
        ffmpeg
        nixUnstable
        emscripten
        jq
        nox
        silver-searcher
      ];
    };
  };
}
     To install it into our environment, you can just run nix-env -iA
     nixpkgs.myPackages. If you want to load the packages to be built
     from a working copy of nixpkgs you just run
     nix-env -f. -iA myPackages. To explore what's been
     installed, just look through ~/.nix-profile/. You can
      see that a lot of stuff has been installed. Some of this stuff is useful,
     some of it isn't. Let's tell Nixpkgs to only link the stuff that we want:
    
{
  packageOverrides = pkgs: with pkgs; {
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [
        aspell
        bc
        coreutils
        gdb
        ffmpeg
        nixUnstable
        emscripten
        jq
        nox
        silver-searcher
      ];
      pathsToLink = [ "/share" "/bin" ];
    };
  };
}
     pathsToLink tells Nixpkgs to only link the paths listed
     which gets rid of the extra stuff in the profile.
     /bin and /share are good
     defaults for a user environment, getting rid of the clutter. If you are
      running Nix on macOS, you may want to add another path as well,
     /Applications, that makes GUI apps available.
    
     After building that new environment, look through
     ~/.nix-profile to make sure everything is there that
     we wanted. Discerning readers will note that some files are missing. Look
     inside ~/.nix-profile/share/man/man1/ to verify this.
     There are no man pages for any of the Nix tools! This is because some
     packages like Nix have multiple outputs for things like documentation (see
     section 4). Let's make Nix install those as well.
    
{
  packageOverrides = pkgs: with pkgs; {
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [
        aspell
        bc
        coreutils
        ffmpeg
        nixUnstable
        emscripten
        jq
        nox
        silver-searcher
      ];
      pathsToLink = [ "/share/man" "/share/doc" "/bin" ];
      extraOutputsToInstall = [ "man" "doc" ];
    };
  };
}
This provides us with some useful documentation for using our packages. However, if we actually want those manpages to be detected by man, we need to set up our environment. This can also be managed within Nix expressions.
{
  packageOverrides = pkgs: with pkgs; rec {
    myProfile = writeText "my-profile" ''
      export PATH=$HOME/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/sbin:/bin:/usr/sbin:/usr/bin
      export MANPATH=$HOME/.nix-profile/share/man:/nix/var/nix/profiles/default/share/man:/usr/share/man
    '';
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [
        (runCommand "profile" {} ''
          mkdir -p $out/etc/profile.d
          cp ${myProfile} $out/etc/profile.d/my-profile.sh
        '')
        aspell
        bc
        coreutils
        ffmpeg
        man
        nixUnstable
        emscripten
        jq
        nox
        silver-searcher
      ];
      pathsToLink = [ "/share/man" "/share/doc" "/bin" "/etc" ];
      extraOutputsToInstall = [ "man" "doc" ];
    };
  };
}
     For this to work fully, you must also have this script sourced when you
     are logged in. Try adding something like this to your
     ~/.profile file:
    
#!/bin/sh
if [ -d "$HOME/.nix-profile/etc/profile.d" ]; then
  for i in "$HOME"/.nix-profile/etc/profile.d/*.sh; do
    if [ -r "$i" ]; then
      . "$i"
    fi
  done
fi
     Now just run source $HOME/.profile and you can start loading man pages
     from your environment.
    
Configuring GNU info is a little bit trickier than man pages. To work correctly, info needs a database to be generated. This can be done with some small modifications to our environment scripts.
{
  packageOverrides = pkgs: with pkgs; rec {
    myProfile = writeText "my-profile" ''
      export PATH=$HOME/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/sbin:/bin:/usr/sbin:/usr/bin
      export MANPATH=$HOME/.nix-profile/share/man:/nix/var/nix/profiles/default/share/man:/usr/share/man
      export INFOPATH=$HOME/.nix-profile/share/info:/nix/var/nix/profiles/default/share/info:/usr/share/info
    '';
    myPackages = pkgs.buildEnv {
      name = "my-packages";
      paths = [
        (runCommand "profile" {} ''
          mkdir -p $out/etc/profile.d
          cp ${myProfile} $out/etc/profile.d/my-profile.sh
        '')
        aspell
        bc
        coreutils
        ffmpeg
        man
        nixUnstable
        emscripten
        jq
        nox
        silver-searcher
        texinfoInteractive
      ];
      pathsToLink = [ "/share/man" "/share/doc" "/share/info" "/bin" "/etc" ];
      extraOutputsToInstall = [ "man" "doc" "info" ];
      postBuild = ''
        if [ -x $out/bin/install-info -a -w $out/share/info ]; then
          shopt -s nullglob
          for i in $out/share/info/*.info $out/share/info/*.info.gz; do
              $out/bin/install-info $i $out/share/info/dir
          done
        fi
      '';
    };
  };
}
     postBuild tells Nixpkgs to run a command after building the environment.
     In this case, install-info adds the installed info pages to
     dir, which is GNU info's default root node. Note that
     texinfoInteractive is added to the environment to provide the
     install-info command.
    
The nixpkgs repository has several utility functions to manipulate Nix expressions.
    Nixpkgs provides a standard library at pkgs.lib, or
    through import <nixpkgs/lib>.
   
      Located at
      lib/asserts.nix:21
      in <nixpkgs>.
     
      Print a trace message if pred is false.
     
Intended to be used to augment asserts with helpful error messages.
pred
       
         Condition under which the msg should
         not be printed.
        
msg
       Message to print.
assert lib.asserts.assertMsg ("foo" == "bar") "foo is not bar, silly"
stderr> trace: foo is not bar, silly
stderr> assert failed
      Located at
      lib/asserts.nix:38
      in <nixpkgs>.
     
      Specialized asserts.assertMsg for checking if
      val is one of the elements of xs.
      Useful for checking enums.
     
name
       
         The name of the variable the user entered val into,
         for inclusion in the error message.
        
val
       
         The value of what the user provided, to be compared against the values
         in xs.
        
xs
       The list of valid values.
let sslLibrary = "bearssl";
in lib.asserts.assertOneOf "sslLibrary" sslLibrary [ "openssl" "libressl" ];
=> false
stderr> trace: sslLibrary must be one of "openssl", "libressl", but is: "bearssl"
        
      Located at
      lib/attrsets.nix:24
      in <nixpkgs>.
     
Return an attribute from within nested attribute sets.
attrPath
       
         A list of strings representing the path through the nested attribute
         set set.
        
default
       
         Default value if attrPath does not resolve to an
         existing value.
        
set
       The nested attribute set to select values from.
let set = { a = { b = 3; }; };
in lib.attrsets.attrByPath [ "a" "b" ] 0 set
=> 3
lib.attrsets.attrByPath [ "a" "b" ] 0 {}
=> 0
      Located at
      lib/attrsets.nix:42
      in <nixpkgs>.
     
Determine if an attribute exists within a nested attribute set.
attrPath
       
         A list of strings representing the path through the nested attribute
         set set.
        
set
       The nested attribute set to check.
lib.attrsets.hasAttrByPath
  [ "a" "b" "c" "d" ]
  { a = { b = { c = { d = 123; }; }; }; }
=> true
      Located at
      lib/attrsets.nix:57
      in <nixpkgs>.
     
      Create a new attribute set with value set at the
      nested attribute location specified in attrPath.
     
attrPath
       A list of strings representing the path through the nested attribute set.
value
       
         The value to set at the location described by
         attrPath.
        
lib.attrsets.setAttrByPath [ "a" "b" ] 3
=> { a = { b = 3; }; }
      Located at
      lib/attrsets.nix:73
      in <nixpkgs>.
     
      Like Section 7.1.2.1, “lib.attrset.attrByPath” except
      without a default, and it will throw if the value doesn't exist.
     
attrPath
       
         A list of strings representing the path through the nested attribute
         set set.
        
set
       The nested attribute set to find the value in.
lib.attrsets.getAttrFromPath [ "a" "b" ] { a = { b = 3; }; }
=> 3
lib.attrsets.getAttrFromPath [ "x" "y" ] { }
=> error: cannot find attribute `x.y'
      Located at
      lib/attrsets.nix:84
      in <nixpkgs>.
     
Return the specified attributes from a set. All values must exist.
nameList
       
         The list of attributes to fetch from set. Each
         attribute name must exist in the attribute set.
        
set
       The set to get attribute values from.
lib.attrsets.attrVals [ "a" "b" "c" ] { a = 1; b = 2; c = 3; }
=> [ 1 2 3 ]
lib.attrsets.attrVals [ "d" ] { }
error: attribute 'd' missing
      Located at
      lib/attrsets.nix:94
      in <nixpkgs>.
     
Get all the attribute values from an attribute set.
      Provides a backwards-compatible interface of
      builtins.attrValues for Nix versions older than 1.8.
     
attrs
       The attribute set.
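An illustrative example (not taken from the upstream manual); the values should come back in attribute-name order:
attrValues { a = 1; b = 2; c = 3; }
=> [ 1 2 3 ]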
      Located at
      lib/attrsets.nix:113
      in <nixpkgs>.
     
      Collect each attribute named `attr' from the list of attribute sets,
      sets. Sets that don't contain the named attribute are
      ignored.
     
      Provides a backwards-compatible interface of
      builtins.catAttrs for Nix versions older than 1.9.
     
attr
       
         Attribute name to select from each attribute set in
         sets.
        
sets
       
         The list of attribute sets to select attr from.
        
Attribute sets which don't have the attribute are ignored.
catAttrs "a" [{a = 1;} {b = 0;} {a = 2;}]
=> [ 1 2 ]
      
      Located at
      lib/attrsets.nix:124
      in <nixpkgs>.
     
Filter an attribute set by removing all attributes for which the given predicate returns false.
pred
       
         String -> Any -> Bool
        
Predicate which returns true to include an attribute, or returns false to exclude it.
name
          The attribute's name
value
          The attribute's value
         Returns true to include the attribute,
         false to exclude the attribute.
        
set
       The attribute set to filter
filterAttrs (n: v: n == "foo") { foo = 1; bar = 2; }
=> { foo = 1; }
      Located at
      lib/attrsets.nix:135
      in <nixpkgs>.
     
Filter an attribute set recursively by removing all attributes for which the given predicate returns false.
pred
       
         String -> Any -> Bool
        
Predicate which returns true to include an attribute, or returns false to exclude it.
name
          The attribute's name
value
          The attribute's value
         Returns true to include the attribute,
         false to exclude the attribute.
        
set
       The attribute set to filter
lib.attrsets.filterAttrsRecursive
  (n: v: v != null)
  {
    levelA = {
      example = "hi";
      levelB = {
        hello = "there";
        this-one-is-present = {
          this-is-excluded = null;
        };
      };
      this-one-is-also-excluded = null;
    };
    also-excluded = null;
  }
=> {
     levelA = {
       example = "hi";
       levelB = {
         hello = "there";
         this-one-is-present = { };
       };
     };
   }
     
      Located at
      lib/attrsets.nix:154
      in <nixpkgs>.
     
Apply fold function to values grouped by key.
op
       
         Any -> Any -> Any
        
         Given a value val and a collector
         col, combine the two.
        
val
          An attribute's value
col
          
            The result of previous op calls with other
            values and nul.
           
nul
       The null-value, the starting value.
list_of_attrs
       A list of attribute sets to fold together by key.
lib.attrsets.foldAttrs
  (n: a: [n] ++ a) []
  [
    { a = 2; b = 7; }
    { a = 3; }
    { b = 6; }
  ]
=> { a = [ 2 3 ]; b = [ 7 6 ]; }
      Located at
      lib/attrsets.nix:178
      in <nixpkgs>.
     
      Recursively collect sets that verify a given predicate named
      pred from the set attrs. The
      recursion stops when pred returns
      true.
     
pred
       
         Any -> Bool
        
Given an attribute's value, determine if recursion should stop.
value
          The attribute set value.
attrs
       The attribute set to recursively collect.
lib.attrsets.collect isList { a = { b = ["b"]; }; c = [1]; }
=> [["b"] [1]]
Collecting all attribute sets that contain an outPath attribute name:
collect (x: x ? outPath)
  { a = { outPath = "a/"; }; b = { outPath = "b/"; }; }
=> [{ outPath = "a/"; } { outPath = "b/"; }]
      Located at
      lib/attrsets.nix:194
      in <nixpkgs>.
     
      Utility function that creates a {name, value} pair as
      expected by builtins.listToAttrs.
     
name
       The attribute name.
value
       The attribute value.
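A minimal illustration of the expected result:
nameValuePair "some" 6
=> { name = "some"; value = 6; }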
      Located at
      lib/attrsets.nix:207
      in <nixpkgs>.
     
Apply a function to each element in an attribute set, creating a new attribute set.
      Provides a backwards-compatible interface of
      builtins.mapAttrs for Nix versions older than 2.1.
     
fn
       
         String -> Any -> Any
        
Given an attribute's name and value, return a new value.
name
          The name of the attribute.
value
          The attribute's value.
lib.attrsets.mapAttrs
  (name: value: name + "-" + value)
  { x = "foo"; y = "bar"; }
=> { x = "x-foo"; y = "y-bar"; }
      Located at
      lib/attrsets.nix:221
      in <nixpkgs>.
     
      Like mapAttrs, but allows the name of each attribute
      to be changed in addition to the value. The applied function should
      return both the new name and value as a
      nameValuePair.
     
fn
       
         String -> Any -> { name = String; value = Any
         }
        
Given an attribute's name and value, return a new name value pair.
name
          The name of the attribute.
value
          The attribute's value.
set
       The attribute set to map over.
lib.attrsets.mapAttrs' (name: value: lib.attrsets.nameValuePair ("foo_" + name) ("bar-" + value))
   { x = "a"; y = "b"; }
=> { foo_x = "bar-a"; foo_y = "bar-b"; }
    
      Located at
      lib/attrsets.nix:233
      in <nixpkgs>.
     
      Call fn for each attribute in the given
      set and return the result in a list.
     
fn
       
         String -> Any -> Any
        
Given an attribute's name and value, return a new value.
name
          The name of the attribute.
value
          The attribute's value.
set
       The attribute set to map over.
lib.attrsets.mapAttrsToList (name: value: "${name}=${value}")
   { x = "a"; y = "b"; }
=> [ "x=a" "y=b" ]
      Located at
      lib/attrsets.nix:250
      in <nixpkgs>.
     
      Like mapAttrs, except that it recursively applies
      itself to attribute sets. Also, the first argument of the argument
      function is a list of the names of the containing
      attributes.
     
f
       
         [ String ] -> Any -> Any
        
Given a list of attribute names and value, return a new value.
name_path
          The list of attribute names to this value.
            For example, the name_path for the
            example string in the attribute set {
            foo = { bar = "example"; }; } is [ "foo" "bar"
            ].
           
value
          The attribute's value.
set
       The attribute set to recursively map over.
mapAttrsRecursive
  (path: value: concatStringsSep "-" (path ++ [value]))
  {
    n = {
      a = "A";
      m = {
        b = "B";
        c = "C";
      };
    };
    d = "D";
  }
=> {
     n = {
       a = "n-a-A";
       m = {
         b = "n-m-b-B";
         c = "n-m-c-C";
       };
     };
     d = "d-D";
   }
    
      Located at
      lib/attrsets.nix:271
      in <nixpkgs>.
     
      Like mapAttrsRecursive, but it takes an additional
      predicate function that tells it whether to recurse into an attribute
      set. If the predicate returns false, mapAttrsRecursiveCond does
      not recurse, but does apply the map function. If it returns true, it does
      recurse, and does not apply the map function.
     
cond
       
         (AttrSet -> Bool)
        
         Determine if mapAttrsRecursive should recurse
         deeper in to the attribute set.
        
attributeset
          An attribute set.
f
       
         [ String ] -> Any -> Any
        
Given a list of attribute names and value, return a new value.
name_path
          The list of attribute names to this value.
            For example, the name_path for the
            example string in the attribute set {
            foo = { bar = "example"; }; } is [ "foo" "bar"
            ].
           
value
          The attribute's value.
set
       The attribute set to recursively map over.
lib.attrsets.mapAttrsRecursiveCond
  ({ recurse ? false, ... }: recurse)
  (name: value: builtins.toJSON value)
  {
    dorecur = {
      recurse = true;
      hello = "there";
    };
    dontrecur = {
      converted-to = "json";
    };
  }
=> {
     dorecur = {
       hello = "\"there\"";
       recurse = "true";
     };
     dontrecur = "{\"converted-to\":\"json\"}";
   }
    
      Located at
      lib/attrsets.nix:291
      in <nixpkgs>.
     
Generate an attribute set by mapping a function over a list of attribute names.
names
       Names of values in the resulting attribute set.
f
       
         String -> Any
        
Takes the name of the attribute and return the attribute's value.
name
          The name of the attribute to generate a value for.
lib.attrsets.genAttrs [ "foo" "bar" ] (name: "x_${name}")
=> { foo = "x_foo"; bar = "x_bar"; }
     
      Located at
      lib/attrsets.nix:305
      in <nixpkgs>.
     
      Check whether the argument is a derivation. Any set with { type =
      "derivation"; } counts as a derivation.
     
value
       The value which is possibly a derivation.
lib.attrsets.isDerivation (import <nixpkgs> {}).ruby
=> true
     
      Located at
      lib/attrsets.nix:308
      in <nixpkgs>.
     
Converts a store path to a fake derivation.
path
       A store path to convert to a derivation.
      Located at
      lib/attrsets.nix:331
      in <nixpkgs>.
     
Conditionally return an attribute set or an empty attribute set.
cond
       
         Condition under which the as attribute set is
         returned.
        
as
       
         The attribute set to return if cond is true.
        
When cond is true:
lib.attrsets.optionalAttrs true { my = "set"; }
=> { my = "set"; }
When cond is false:
lib.attrsets.optionalAttrs false { my = "set"; }
=> { }
     
      Located at
      lib/attrsets.nix:341
      in <nixpkgs>.
     
      Merge sets of attributes and use the function f to
      merge attribute values where the attribute name is in
      names.
     
names
       A list of attribute names to zip.
f
       
         String -> [ Any ] -> Any
        
Accepts an attribute name, all the values, and returns a combined value.
name
          The name of the attribute each value came from.
vs
          A list of values collected from the list of attribute sets.
sets
       A list of attribute sets to zip together.
lib.attrsets.zipAttrsWithNames
  [ "a" "b" ]
  (name: vals: "${name} ${toString (builtins.foldl' (a: b: a + b) 0 vals)}")
  [
    { a = 1; b = 1; c = 1; }
    { a = 10; }
    { b = 100; }
    { c = 1000; }
  ]
=> { a = "a 11"; b = "b 101"; }
     
      Located at
      lib/attrsets.nix:356
      in <nixpkgs>.
     
      Merge sets of attributes and use the function f to
      merge attribute values. Similar to
      Section 7.1.2.22, “lib.attrsets.zipAttrsWithNames” where
      all key names are passed for names.
     
f
       
         String -> [ Any ] -> Any
        
Accepts an attribute name, all the values, and returns a combined value.
name
          The name of the attribute each value came from.
vs
          A list of values collected from the list of attribute sets.
sets
       A list of attribute sets to zip together.
lib.attrsets.zipAttrsWith
  (name: vals: "${name} ${toString (builtins.foldl' (a: b: a + b) 0 vals)}")
  [
    { a = 1; b = 1; c = 1; }
    { a = 10; }
    { b = 100; }
    { c = 1000; }
  ]
=> { a = "a 11"; b = "b 101"; c = "c 1001"; }
     
      Located at
      lib/attrsets.nix:363
      in <nixpkgs>.
     
      Merge sets of attributes and combine each attribute value in to a list.
      Similar to Section 7.1.2.23, “lib.attrsets.zipAttrsWith”
      where the merge function returns a list of all values.
     
sets
       A list of attribute sets to zip together.
lib.attrsets.zipAttrs
  [
    { a = 1; b = 1; c = 1; }
    { a = 10; }
    { b = 100; }
    { c = 1000; }
  ]
=> { a = [ 1 10 ]; b = [ 1 100 ]; c = [ 1 1000 ]; }
     
      Located at
      lib/attrsets.nix:393
      in <nixpkgs>.
     
      Does the same as the update operator // except that
      attributes are merged until the given predicate is verified. The
      predicate should accept 3 arguments which are the path to reach the
      attribute, a part of the first attribute set and a part of the second
      attribute set. When the predicate is verified, the value of the first
      attribute set is replaced by the value of the second attribute set.
     
pred
       
         [ String ] -> AttrSet -> AttrSet -> Bool
        
path
          The path to the values in the left and right hand sides.
l
          The left hand side value.
r
          The right hand side value.
lhs
       The left hand attribute set of the merge.
rhs
       The right hand attribute set of the merge.
lib.attrsets.recursiveUpdateUntil (path: l: r: path == ["foo"])
  {
    # first attribute set
    foo.bar = 1;
    foo.baz = 2;
    bar = 3;
  }
  {
    #second attribute set
    foo.bar = 1;
    foo.quz = 2;
    baz = 4;
  }
=> {
  foo.bar = 1; # 'foo.*' from the second set
  foo.quz = 2; #
  bar = 3;     # 'bar' from the first set
  baz = 4;     # 'baz' from the second set
}
     
      Located at
      lib/attrsets.nix:424
      in <nixpkgs>.
     
      A recursive variant of the update operator //. The
      recursion stops when one of the attribute values is not an attribute set,
      in which case the right hand side value takes precedence over the left
      hand side value.
     
lhs
       The left hand attribute set of the merge.
rhs
       The right hand attribute set of the merge.
recursiveUpdate
  {
    boot.loader.grub.enable = true;
    boot.loader.grub.device = "/dev/hda";
  }
  {
    boot.loader.grub.device = "";
  }
=> {
  boot.loader.grub.enable = true;
  boot.loader.grub.device = "";
}
Map a function over a list and concatenate the resulting strings.
f
       Function argument
list
       Function argument
lib.strings.concatMapStrings usage example
concatMapStrings (x: "a" + x) ["foo" "bar"]
=> "afooabar"
      Located at
      lib/strings.nix:31
      in <nixpkgs>.
     
Like `concatMapStrings` except that the function f also gets the position as a parameter.
f
       Function argument
list
       Function argument
lib.strings.concatImapStrings usage example
concatImapStrings (pos: x: "${toString pos}-${x}") ["foo" "bar"]
=> "1-foo2-bar"
      Located at
      lib/strings.nix:42
      in <nixpkgs>.
     
Place an element between each element of a list
separator
       Separator to add between elements
list
       Input list
lib.strings.intersperse usage example
intersperse "/" ["usr" "local" "bin"]
=> ["usr" "/" "local" "/" "bin"]
      Located at
      lib/strings.nix:52
      in <nixpkgs>.
     
Concatenate a list of strings with a separator between each element
lib.strings.concatStringsSep usage example
concatStringsSep "/" ["usr" "local" "bin"]
=> "usr/local/bin"
      Located at
      lib/strings.nix:69
      in <nixpkgs>.
     
Maps a function over a list of strings and then concatenates the result with the specified separator interspersed between elements.
sep
       Separator to add between elements
f
       Function to map over the list
list
       List of input strings
lib.strings.concatMapStringsSep usage example
concatMapStringsSep "-" (x: toUpper x) ["foo" "bar" "baz"]
=> "FOO-BAR-BAZ"
      Located at
      lib/strings.nix:82
      in <nixpkgs>.
     
Same as `concatMapStringsSep`, but the mapping function additionally receives the position of its argument.
sep
       Separator to add between elements
f
       Function that receives elements and their positions
list
       List of input strings
lib.strings.concatImapStringsSep usage example
concatImapStringsSep "-" (pos: x: toString (x / pos)) [ 6 6 6 ]
=> "6-3-2"
      Located at
      lib/strings.nix:99
      in <nixpkgs>.
     
Construct a Unix-style, colon-separated search path consisting of the given `subDir` appended to each of the given paths.
subDir
       Directory name to append
paths
       List of base paths
lib.strings.makeSearchPath usage example
makeSearchPath "bin" ["/root" "/usr" "/usr/local"]
=> "/root/bin:/usr/bin:/usr/local/bin"
makeSearchPath "bin" [""]
=> "/bin"
      Located at
      lib/strings.nix:118
      in <nixpkgs>.
     
Construct a Unix-style search path by appending the given `subDir` to the specified `output` of each of the packages. If no output by the given name is found, fallback to `.out` and then to the default.
output
       Package output to use
subDir
       Directory name to append
pkgs
       List of packages
lib.strings.makeSearchPathOutput usage example
makeSearchPathOutput "dev" "bin" [ pkgs.openssl pkgs.zlib ]
=> "/nix/store/9rz8gxhzf8sw4kf2j2f1grr49w8zx5vj-openssl-1.0.1r-dev/bin:/nix/store/wwh7mhwh269sfjkm6k5665b5kgp7jrk2-zlib-1.2.8/bin"
      Located at
      lib/strings.nix:136
      in <nixpkgs>.
     
Construct a library search path (such as RPATH) containing the libraries for a set of packages
lib.strings.makeLibraryPath usage example
makeLibraryPath [ "/usr" "/usr/local" ]
=> "/usr/lib:/usr/local/lib"
pkgs = import <nixpkgs> { }
makeLibraryPath [ pkgs.openssl pkgs.zlib ]
=> "/nix/store/9rz8gxhzf8sw4kf2j2f1grr49w8zx5vj-openssl-1.0.1r/lib:/nix/store/wwh7mhwh269sfjkm6k5665b5kgp7jrk2-zlib-1.2.8/lib"
      Located at
      lib/strings.nix:154
      in <nixpkgs>.
     
Construct a binary search path (such as $PATH) containing the binaries for a set of packages.
lib.strings.makeBinPath usage example
makeBinPath ["/root" "/usr" "/usr/local"]
=> "/root/bin:/usr/bin:/usr/local/bin"
      Located at
      lib/strings.nix:163
      in <nixpkgs>.
     
Depending on the boolean `cond', return either the given string or the empty string. Useful to concatenate against a bigger string.
cond
       Condition
string
       String to return if condition is true
lib.strings.optionalString usage example
optionalString true "some-string"
=> "some-string"
optionalString false "some-string"
=> ""
      Located at
      lib/strings.nix:176
      in <nixpkgs>.
     
Determine whether a string has given prefix.
pref
       Prefix to check for
str
       Input string
lib.strings.hasPrefix usage example
hasPrefix "foo" "foobar"
=> true
hasPrefix "foo" "barfoo"
=> false
      Located at
      lib/strings.nix:192
      in <nixpkgs>.
     
Determine whether a string has given suffix.
suffix
       Suffix to check for
content
       Input string
lib.strings.hasSuffix usage example
hasSuffix "foo" "foobar"
=> false
hasSuffix "foo" "barfoo"
=> true
      Located at
      lib/strings.nix:208
      in <nixpkgs>.
     
Determine whether a string contains the given infix
infix
       Function argument
content
       Function argument
lib.strings.hasInfix usage example
hasInfix "bc" "abcd"
=> true
hasInfix "ab" "abcd"
=> true
hasInfix "cd" "abcd"
=> true
hasInfix "foo" "abcd"
=> false
      Located at
      lib/strings.nix:233
      in <nixpkgs>.
     
Convert a string to a list of characters (i.e. singleton strings). This allows you to, e.g., map a function over each character. However, note that this will likely be horribly inefficient; Nix is not a general purpose programming language. Complex string manipulations should, if appropriate, be done in a derivation. Also note that Nix treats strings as a list of bytes and thus doesn't handle unicode.
s
       Function argument
lib.strings.stringToCharacters usage example
stringToCharacters ""
=> [ ]
stringToCharacters "abc"
=> [ "a" "b" "c" ]
stringToCharacters "💩"
=> [ "�" "�" "�" "�" ]
      Located at
      lib/strings.nix:257
      in <nixpkgs>.
     
Manipulate a string character by character and replace them by strings before concatenating the results.
f
       Function to map over each individual character
s
       Input string
lib.strings.stringAsChars usage example
stringAsChars (x: if x == "a" then "i" else x) "nax"
=> "nix"
      Located at
      lib/strings.nix:269
      in <nixpkgs>.
     
Escape occurrences of the elements of `list` in `string` by prefixing them with a backslash.
list
       Function argument
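A rough illustration (the inputs here are made up); each listed character should come back prefixed with a backslash:
escape [ "(" ")" ] "(foo)"
=> "\\(foo\\)"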
      Located at
      lib/strings.nix:286
      in <nixpkgs>.
     
Quote string to be used safely within the Bourne shell.
arg
       Function argument
lib.strings.escapeShellArg usage example
escapeShellArg "esc'ape\nme"
=> "'esc'\\''ape\nme'"
      Located at
      lib/strings.nix:296
      in <nixpkgs>.
     
Quote all arguments to be safely passed to the Bourne shell.
lib.strings.escapeShellArgs usage example
escapeShellArgs ["one" "two three" "four'five"]
=> "'one' 'two three' 'four'\\''five'"
      Located at
      lib/strings.nix:306
      in <nixpkgs>.
     
Turn a string into a Nix expression representing that string
s
       Function argument
lib.strings.escapeNixString usage example
escapeNixString "hello\${}\n"
=> "\"hello\\\${}\\n\""
      Located at
      lib/strings.nix:316
      in <nixpkgs>.
     
Appends string context from another string. This is an implementation detail of Nix.
Strings in Nix carry an invisible `context` which is a list of strings representing store paths. If the string is later used in a derivation attribute, the derivation will properly populate the inputDrvs and inputSrcs.
a
       Function argument
b
       Function argument
lib.strings.addContextFrom usage example
pkgs = import <nixpkgs> { };
addContextFrom pkgs.coreutils "bar"
=> "bar"
      Located at
      lib/strings.nix:369
      in <nixpkgs>.
     
Cut a string with a separator and produce a list of strings which were separated by this separator.
NOTE: this function is not performant and should never be used.
_sep
       Function argument
_s
       Function argument
lib.strings.splitString usage example
splitString "." "foo.bar.baz"
=> [ "foo" "bar" "baz" ]
splitString "/" "/usr/local/bin"
=> [ "" "usr" "local" "bin" ]
      Located at
      lib/strings.nix:382
      in <nixpkgs>.
     
Return a string without the specified prefix, if the prefix matches.
prefix
       Prefix to remove if it matches
str
       Input string
lib.strings.removePrefix usage example
removePrefix "foo." "foo.bar.baz"
=> "bar.baz"
removePrefix "xxx" "foo.bar.baz"
=> "foo.bar.baz"
      Located at
      lib/strings.nix:415
      in <nixpkgs>.
     
Return a string without the specified suffix, if the suffix matches.
suffix
       Suffix to remove if it matches
str
       Input string
lib.strings.removeSuffix usage example
removeSuffix "front" "homefront"
=> "home"
removeSuffix "xxx" "homefront"
=> "homefront"
      Located at
      lib/strings.nix:439
      in <nixpkgs>.
     
Return true if string v1 denotes a version older than v2.
v1
       Function argument
v2
       Function argument
lib.strings.versionOlder usage example
versionOlder "1.1" "1.2"
=> true
versionOlder "1.1" "1.1"
=> false
      Located at
      lib/strings.nix:461
      in <nixpkgs>.
     
Return true if string v1 denotes a version equal to or newer than v2.
v1
       Function argument
v2
       Function argument
lib.strings.versionAtLeast usage example
versionAtLeast "1.1" "1.0"
=> true
versionAtLeast "1.1" "1.1"
=> true
versionAtLeast "1.1" "1.2"
=> false
      Located at
      lib/strings.nix:473
      in <nixpkgs>.
     
This function takes an argument that's either a derivation or a derivation's "name" attribute and extracts the version part from that argument.
x
       Function argument
lib.strings.getVersion usage example
getVersion "youtube-dl-2016.01.01"
=> "2016.01.01"
getVersion pkgs.youtube-dl
=> "2016.01.01"
      Located at
      lib/strings.nix:485
      in <nixpkgs>.
     
Extract a name with version from a URL. The separator argument is expected to mark the start of the extension.
url
       Function argument
sep
       Function argument
lib.strings.nameFromURL usage example
nameFromURL "https://nixos.org/releases/nix/nix-1.7/nix-1.7-x86_64-linux.tar.bz2" "-"
=> "nix"
nameFromURL "https://nixos.org/releases/nix/nix-1.7/nix-1.7-x86_64-linux.tar.bz2" "_"
=> "nix-1.7-x86"
      Located at
      lib/strings.nix:501
      in <nixpkgs>.
     
Create an --{enable,disable}-<feat> string that can be passed to standard GNU Autoconf scripts.
enable
       Function argument
feat
       Function argument
lib.strings.enableFeature usage example
enableFeature true "shared"
=> "--enable-shared"
enableFeature false "shared"
=> "--disable-shared"
      Located at
      lib/strings.nix:517
      in <nixpkgs>.
     
Create an --{enable-<feat>=<value>,disable-<feat>} string that can be passed to standard GNU Autoconf scripts.
enable
       Function argument
feat
       Function argument
value
       Function argument
lib.strings.enableFeatureAs usage example
enableFeatureAs true "shared" "foo"
=> "--enable-shared=foo"
enableFeatureAs false "shared" (throw "ignored")
=> "--disable-shared"
      Located at
      lib/strings.nix:528
      in <nixpkgs>.
     
Create an --{with,without}-<feat> string that can be passed to standard GNU Autoconf scripts.
with_
       Function argument
feat
       Function argument
lib.strings.withFeature usage example
withFeature true "shared"
=> "--with-shared"
withFeature false "shared"
=> "--without-shared"
      Located at
      lib/strings.nix:539
      in <nixpkgs>.
     
Create an --{with-<feat>=<value>,without-<feat>} string that can be passed to standard GNU Autoconf scripts.
with_
       Function argument
feat
       Function argument
value
       Function argument
lib.strings.withFeatureAs usage example
withFeatureAs true "shared" "foo"
=> "--with-shared=foo"
withFeatureAs false "shared" (throw "ignored")
=> "--without-shared"
      Located at
      lib/strings.nix:550
      in <nixpkgs>.
     
Create a fixed width string with additional prefix to match required width.
This function will fail if the input string is longer than the requested length.
width
       Function argument
filler
       Function argument
str
       Function argument
lib.strings.fixedWidthString usage example
fixedWidthString 5 "0" (toString 15)
=> "00015"
      Located at
      lib/strings.nix:564
      in <nixpkgs>.
     
Format a number adding leading zeroes up to fixed width.
width
       Function argument
n
       Function argument
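An illustrative call (not from the upstream manual):
fixedWidthNumber 5 15
=> "00015"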
      Located at
      lib/strings.nix:581
      in <nixpkgs>.
     
Check whether a value can be coerced to a string
x
       Function argument
      Located at
      lib/strings.nix:584
      in <nixpkgs>.
     
Check whether a value is a store path.
x
       Function argument
lib.strings.isStorePath usage example
isStorePath "/nix/store/d945ibfx9x185xf04b890y4f9g3cbb63-python-2.7.11/bin/python"
=> false
isStorePath "/nix/store/d945ibfx9x185xf04b890y4f9g3cbb63-python-2.7.11/"
=> true
isStorePath pkgs.python
=> true
isStorePath [] || isStorePath 42 || isStorePath {} || …
=> false
      Located at
      lib/strings.nix:602
      in <nixpkgs>.
     
Parse a string as an int.
str
       Function argument
lib.strings.toInt usage example
toInt "1337"
=> 1337
toInt "-4"
=> -4
toInt "3.14"
=> error: floating point JSON numbers are not supported
      Located at
      lib/strings.nix:623
      in <nixpkgs>.
     
Read a list of paths from `file`, relative to the `rootPath`. Lines beginning with `#` are treated as comments and ignored. Whitespace is significant.
NOTE: This function is not performant and should be avoided.
rootPath
       Function argument
file
       Function argument
lib.strings.readPathsFromFile usage example
readPathsFromFile /prefix ./pkgs/development/libraries/qt-5/5.4/qtbase/series
=> [ "/prefix/dlopen-resolv.patch" "/prefix/tzdir.patch" "/prefix/dlopen-libXcursor.patch" "/prefix/dlopen-openssl.patch" "/prefix/dlopen-dbus.patch" "/prefix/xdg-config-dirs.patch" "/prefix/nix-profiles-library-paths.patch" "/prefix/compose-search-path.patch" ]
      Located at
      lib/strings.nix:644
      in <nixpkgs>.
     
Read the contents of a file removing the trailing \n
file
       Function argument
lib.strings.fileContents usage example
$ echo "1.0" > ./version
fileContents ./version
=> "1.0"
      Located at
      lib/strings.nix:663
      in <nixpkgs>.
     
The identity function. For when you need a function that does “nothing”.
x
       The value to return
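For illustration, id simply returns its argument unchanged:
id 3
=> 3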
      Located at
      lib/trivial.nix:12
      in <nixpkgs>.
     
The constant function
Ignores the second argument. If called with only one argument, constructs a function that always returns a static value.
x
       Value to return
y
       Value to ignore
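A small sketch of the expected behaviour:
const 5 10
=> 5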
      Located at
      lib/trivial.nix:26
      in <nixpkgs>.
     
Concatenate two lists
x
       Function argument
y
       Function argument
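An illustrative example (inputs made up):
concat [ 1 2 ] [ 3 4 ]
=> [ 1 2 3 4 ]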
      Located at
      lib/trivial.nix:43
      in <nixpkgs>.
     
Convert a boolean to a string.
This function uses the strings "true" and "false" to represent boolean values. Calling `toString` on a bool instead returns "1" and "" (sic!).
b
       Function argument
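For example:
boolToString true
=> "true"
boolToString false
=> "false"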
      Located at
      lib/trivial.nix:77
      in <nixpkgs>.
     
Merge two attribute sets shallowly, right side trumps left
mergeAttrs :: attrs -> attrs -> attrs
x
       Left attribute set
y
       Right attribute set (higher precedence for equal keys)
lib.trivial.mergeAttrs usage example
mergeAttrs { a = 1; b = 2; } { b = 3; c = 4; }
=> { a = 1; b = 3; c = 4; }
      Located at
      lib/trivial.nix:87
      in <nixpkgs>.
     
Flip the order of the arguments of a binary function.
f
       Function argument
a
       Function argument
b
       Function argument
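A rough sketch reusing lib.trivial.concat from above; flip f a b should equal f b a:
flip concat [ 1 ] [ 2 ]
=> [ 2 1 ]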
      Located at
      lib/trivial.nix:101
      in <nixpkgs>.
     
Apply function if the supplied argument is non-null.
f
       Function to call
a
       Argument to check for null before passing it to `f`
lib.trivial.mapNullable usage example
mapNullable (x: x+1) null
=> null
mapNullable (x: x+1) 22
=> 23
      Located at
      lib/trivial.nix:111
      in <nixpkgs>.
     
Returns the current full nixpkgs version number.
      Located at
      lib/trivial.nix:127
      in <nixpkgs>.
     
Returns the current nixpkgs release number as string.
      Located at
      lib/trivial.nix:130
      in <nixpkgs>.
     
Returns the current nixpkgs release code name.
On each release the first letter is bumped and a new animal is chosen starting with that new letter.
      Located at
      lib/trivial.nix:137
      in <nixpkgs>.
     
Returns the current nixpkgs version suffix as string.
      Located at
      lib/trivial.nix:140
      in <nixpkgs>.
     
Attempts to return the current revision of nixpkgs and returns the supplied default value otherwise.
default
       Default value to return if revision can not be determined
      Located at
      lib/trivial.nix:151
      in <nixpkgs>.
     
Determine whether the function is being called from inside a Nix shell.
      Located at
      lib/trivial.nix:169
      in <nixpkgs>.
     
Return minimum of two numbers.
x
       Function argument
y
       Function argument
      Located at
      lib/trivial.nix:175
      in <nixpkgs>.
     
Return maximum of two numbers.
x
       Function argument
y
       Function argument
      Located at
      lib/trivial.nix:178
      in <nixpkgs>.
     
Integer modulus
base
       Function argument
int
       Function argument
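Illustrative calls (not from the upstream manual):
mod 11 10
=> 1
mod 7 7
=> 0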
      Located at
      lib/trivial.nix:188
      in <nixpkgs>.
     
C-style comparisons
a < b, compare a b => -1
a == b, compare a b => 0
a > b, compare a b => 1
a
       Function argument
b
       Function argument
      Located at
      lib/trivial.nix:199
      in <nixpkgs>.
     
Split type into two subtypes by predicate `p`, take all elements of the first subtype to be less than all the elements of the second subtype, compare elements of a single subtype with `yes` and `no` respectively.
p
       Predicate
yes
       Comparison function if predicate holds for both values
no
       Comparison function if predicate holds for neither value
a
       First value to compare
b
       Second value to compare
lib.trivial.splitByAndCompare usage example
let cmp = splitByAndCompare (hasPrefix "foo") compare compare; in
cmp "a" "z"       => -1
cmp "fooa" "fooz" => -1
cmp "f" "a"       => 1
cmp "fooa" "a"    => -1   # while compare "fooa" "a" => 1
      Located at
      lib/trivial.nix:224
      in <nixpkgs>.
     
Reads a JSON file.
Type :: path -> any
path
       Function argument
      Located at
      lib/trivial.nix:244
      in <nixpkgs>.
     
Add metadata about expected function arguments to a function. The metadata should match the format given by builtins.functionArgs, i.e. a set from expected argument to a bool representing whether that argument has a default or not. setFunctionArgs : (a → b) → Map String Bool → (a → b)
This function is necessary because you can't dynamically create a function of the { a, b ? foo, ... }: format, but some facilities like callPackage expect to be able to query expected arguments.
f
       Function argument
args
       Function argument
      Located at
      lib/trivial.nix:278
      in <nixpkgs>.
     
Extract the expected function arguments from a function. This works both with nix-native { a, b ? foo, ... }: style functions and functions with args set with 'setFunctionArgs'. It has the same return type and semantics as builtins.functionArgs. setFunctionArgs : (a → b) → Map String Bool.
f
       Function argument
      Located at
      lib/trivial.nix:290
      in <nixpkgs>.
     
Check whether something is a function or something annotated with function args.
f
       Function argument
      Located at
      lib/trivial.nix:295
      in <nixpkgs>.
     
Create a list consisting of a single element. `singleton x` is sometimes more convenient with respect to indentation than `[x]` when x spans multiple lines.
x
       Function argument
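For illustration:
singleton "foo"
=> [ "foo" ]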
      Located at
      lib/lists.nix:22
      in <nixpkgs>.
     
“right fold” a binary function `op` between successive elements of `list` with `nul' as the starting value, i.e., `foldr op nul [x_1 x_2 ... x_n] == op x_1 (op x_2 ... (op x_n nul))`.
op
       Function argument
nul
       Function argument
list
       Function argument
lib.lists.foldr usage example
concat = foldr (a: b: a + b) "z"
concat [ "a" "b" "c" ]
=> "abcz"
# different types
strange = foldr (int: str: toString (int + 1) + str) "a"
strange [ 1 2 3 4 ]
=> "2345a"
      Located at
      lib/lists.nix:39
      in <nixpkgs>.
     
`fold` is an alias of `foldr` for historic reasons
      Located at
      lib/lists.nix:50
      in <nixpkgs>.
     
“left fold”, like `foldr`, but from the left: `foldl op nul [x_1 x_2 ... x_n] == op (... (op (op nul x_1) x_2) ... x_n)`.
op
       Function argument
nul
       Function argument
list
       Function argument
lib.lists.foldl usage example
lconcat = foldl (a: b: a + b) "z"
lconcat [ "a" "b" "c" ]
=> "zabc"
# different types
lstrange = foldl (str: int: str + toString (int + 1)) "a"
lstrange [ 1 2 3 4 ]
=> "a2345"
      Located at
      lib/lists.nix:67
      in <nixpkgs>.
     
Strict version of `foldl`.
The difference is that evaluation is forced upon access. Usually used with small whole results (in contrast with lazily-generated lists or large lists where only a part is consumed).
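A minimal illustration (inputs made up); the result should match plain foldl:
foldl' (acc: x: acc + x) 0 [ 1 2 3 ]
=> 6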
      Located at
      lib/lists.nix:83
      in <nixpkgs>.
     
Map with index starting from 0
f
       Function argument
list
       Function argument
lib.lists.imap0 usage example
imap0 (i: v: "${v}-${toString i}") ["a" "b"]
=> [ "a-0" "b-1" ]
      Located at
      lib/lists.nix:93
      in <nixpkgs>.
     
Map with index starting from 1
f
       Function argument
list
       Function argument
lib.lists.imap1 usage example
imap1 (i: v: "${v}-${toString i}") ["a" "b"]
=> [ "a-1" "b-2" ]
      Located at
      lib/lists.nix:103
      in <nixpkgs>.
     
Map and concatenate the result.
lib.lists.concatMap usage example
concatMap (x: [x] ++ ["z"]) ["a" "b"]
=> [ "a" "z" "b" "z" ]
      Located at
      lib/lists.nix:113
      in <nixpkgs>.
     
Flatten the argument into a single list; that is, nested lists are spliced into the top-level lists.
x
       Function argument
lib.lists.flatten usage example
flatten [1 [2 [3] 4] 5]
=> [1 2 3 4 5]
flatten 1
=> [1]
      Located at
      lib/lists.nix:124
      in <nixpkgs>.
     
Remove elements equal to 'e' from a list. Useful for buildInputs.
e
       Element to remove from the list
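An illustrative example:
remove 3 [ 1 3 4 3 ]
=> [ 1 4 ]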
      Located at
      lib/lists.nix:137
      in <nixpkgs>.
     
Find the sole element in the list matching the specified predicate, returns `default` if no such element exists, or `multiple` if there are multiple matching elements.
pred
       Predicate
default
       Default value to return if element was not found.
multiple
       Default value to return if more than one element was found
list
       Input list
lib.lists.findSingle usage example
findSingle (x: x == 3) "none" "multiple" [ 1 3 3 ]
=> "multiple"
findSingle (x: x == 3) "none" "multiple" [ 1 3 ]
=> 3
findSingle (x: x == 3) "none" "multiple" [ 1 9 ]
=> "none"
      Located at
      lib/lists.nix:155
      in <nixpkgs>.
     
Find the first element in the list matching the specified predicate or return `default` if no such element exists.
pred
       Predicate
default
       Default value to return
list
       Input list
lib.lists.findFirst usage example
findFirst (x: x > 3) 7 [ 1 6 4 ]
=> 6
findFirst (x: x > 9) 7 [ 1 6 4 ]
=> 7
      Located at
      lib/lists.nix:180
      in <nixpkgs>.
     
Return true if function `pred` returns true for at least one element of `list`.
lib.lists.any usage example
any isString [ 1 "a" { } ]
=> true
any isString [ 1 { } ]
=> false
      Located at
      lib/lists.nix:201
      in <nixpkgs>.
     
Return true if function `pred` returns true for all elements of `list`.
lib.lists.all usage example
all (x: x < 3) [ 1 2 ]
=> true
all (x: x < 3) [ 1 2 3 ]
=> false
      Located at
      lib/lists.nix:214
      in <nixpkgs>.
     
Count how many elements of `list` match the supplied predicate function.
pred
       Predicate
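A small sketch of the expected behaviour:
count (x: x == 3) [ 1 3 3 2 ]
=> 2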
      Located at
      lib/lists.nix:225
      in <nixpkgs>.
     
Return a singleton list or an empty list, depending on a boolean value. Useful when building lists with optional elements (e.g. `++ optional (system == "i686-linux") flashplayer').
cond
       Function argument
elem
       Function argument
lib.lists.optional usage example
optional true "foo"
=> [ "foo" ]
optional false "foo"
=> [ ]
      Located at
      lib/lists.nix:241
      in <nixpkgs>.
     
Return a list or an empty list, depending on a boolean value.
cond
       Condition
elems
       List to return if condition is true
lib.lists.optionals usage example
optionals true [ 2 3 ]
=> [ 2 3 ]
optionals false [ 2 3 ]
=> [ ]
      Located at
      lib/lists.nix:253
      in <nixpkgs>.
     
If argument is a list, return it; else, wrap it in a singleton list. If you're using this, you should almost certainly reconsider if there isn't a more "well-typed" approach.
x
       Function argument
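Illustrative calls (not from the upstream manual):
toList [ 1 2 ]
=> [ 1 2 ]
toList "hi"
=> [ "hi" ]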
      Located at
      lib/lists.nix:270
      in <nixpkgs>.
     
Return a list of integers from `first' up to and including `last'.
first
       First integer in the range
last
       Last integer in the range
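For illustration; an empty list is expected when first exceeds last:
range 2 4
=> [ 2 3 4 ]
range 3 2
=> [ ]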
      Located at
      lib/lists.nix:282
      in <nixpkgs>.
     
Splits the elements of a list in two lists, `right` and `wrong`, depending on the evaluation of a predicate.
lib.lists.partition usage example
partition (x: x > 2) [ 5 1 2 3 4 ]
=> { right = [ 5 3 4 ]; wrong = [ 1 2 ]; }
      Located at
      lib/lists.nix:301
      in <nixpkgs>.
     
Splits the elements of a list into many lists, using the return value of a predicate. The predicate should return a string, which becomes a key of the attribute set that `groupBy` returns.
`groupBy'` additionally allows customising the combining function and the initial value.
op
       Function argument
nul
       Function argument
pred
       Function argument
lst
       Function argument
lib.lists.groupBy' usage example
groupBy (x: boolToString (x > 2)) [ 5 1 2 3 4 ]
=> { true = [ 5 3 4 ]; false = [ 1 2 ]; }
groupBy (x: x.name) [ {name = "icewm"; script = "icewm &";}
{name = "xfce";  script = "xfce4-session &";}
{name = "icewm"; script = "icewmbg &";}
{name = "mate";  script = "gnome-session &";}
]
=> { icewm = [ { name = "icewm"; script = "icewm &"; }
{ name = "icewm"; script = "icewmbg &"; } ];
mate  = [ { name = "mate";  script = "gnome-session &"; } ];
xfce  = [ { name = "xfce";  script = "xfce4-session &"; } ];
}
groupBy' builtins.add 0 (x: boolToString (x > 2)) [ 5 1 2 3 4 ]
=> { true = 12; false = 3; }
      Located at
      lib/lists.nix:330
      in <nixpkgs>.
     
Merges two lists of the same size together. If the sizes aren't the same the merging stops at the shortest. How both lists are merged is defined by the first argument.
f
       Function to zip elements of both lists
fst
       First list
snd
       Second list
lib.lists.zipListsWith usage example
zipListsWith (a: b: a + b) ["h" "l"] ["e" "o"]
=> ["he" "lo"]
      Located at
      lib/lists.nix:350
      in <nixpkgs>.
     
Merges two lists of the same size together. If the sizes aren't the same the merging stops at the shortest.
lib.lists.zipLists usage example
zipLists [ 1 2 ] [ "a" "b" ]
=> [ { fst = 1; snd = "a"; } { fst = 2; snd = "b"; } ]
      Located at
      lib/lists.nix:369
      in <nixpkgs>.
     
Reverse the order of the elements of a list.
xs
       Function argument
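An illustrative example:
reverseList [ "b" "o" "j" ]
=> [ "j" "o" "b" ]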
      Located at
      lib/lists.nix:380
      in <nixpkgs>.
     
Depth-First Search (DFS) for lists `list != []`.
`before a b == true` means that `b` depends on `a` (there's an edge from `b` to `a`).
stopOnCycles
       Function argument
before
       Function argument
list
       Function argument
lib.lists.listDfs usage example
listDfs true hasPrefix [ "/home/user" "other" "/" "/home" ]
== { minimal = "/";                  # minimal element
visited = [ "/home/user" ];     # seen elements (in reverse order)
rest    = [ "/home" "other" ];  # everything else
}
listDfs true hasPrefix [ "/home/user" "other" "/" "/home" "/" ]
== { cycle   = "/";                  # cycle encountered at this element
loops   = [ "/" ];              # and continues to these elements
visited = [ "/" "/home/user" ]; # elements leading to the cycle (in reverse order)
rest    = [ "/home" "other" ];  # everything else
}
      Located at
      lib/lists.nix:402
      in <nixpkgs>.
     
Sort a list based on a partial ordering using DFS. This implementation is O(N^2), if your ordering is linear, use `sort` instead.
`before a b == true` means that `b` should be after `a` in the result.
before
       Function argument
list
       Function argument
lib.lists.toposort usage example
toposort hasPrefix [ "/home/user" "other" "/" "/home" ]
== { result = [ "/" "/home" "/home/user" "other" ]; }
toposort hasPrefix [ "/home/user" "other" "/" "/home" "/" ]
== { cycle = [ "/home/user" "/" "/" ]; # path leading to a cycle
loops = [ "/" ]; }                # loops back to these elements
toposort hasPrefix [ "other" "/home/user" "/home" "/" ]
== { result = [ "other" "/" "/home" "/home/user" ]; }
toposort (a: b: a < b) [ 3 2 1 ] == { result = [ 1 2 3 ]; }
      Located at
      lib/lists.nix:441
      in <nixpkgs>.
     
Sort a list based on a comparator function which compares two elements and returns true if the first argument is strictly below the second argument. The returned list is sorted in an increasing order. The implementation does a quick-sort.
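A rough illustration (inputs made up):
sort (a: b: a < b) [ 5 3 7 ]
=> [ 3 5 7 ]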
      Located at
      lib/lists.nix:469
      in <nixpkgs>.
     
Compare two lists element-by-element.
cmp
       Function argument
a
       Function argument
b
       Function argument
lib.lists.compareLists usage example
compareLists compare [] []
=> 0
compareLists compare [] [ "a" ]
=> -1
compareLists compare [ "a" ] []
=> 1
compareLists compare [ "a" "b" ] [ "a" "c" ]
=> -1
      Located at
      lib/lists.nix:498
      in <nixpkgs>.
     
Sort list using "Natural sorting". Numeric portions of strings are sorted in numeric order.
lst
       Function argument
lib.lists.naturalSort usage example
naturalSort ["disk11" "disk8" "disk100" "disk9"]
=> ["disk8" "disk9" "disk11" "disk100"]
naturalSort ["10.46.133.149" "10.5.16.62" "10.54.16.25"]
=> ["10.5.16.62" "10.46.133.149" "10.54.16.25"]
naturalSort ["v0.2" "v0.15" "v0.0.9"]
=> [ "v0.0.9" "v0.2" "v0.15" ]
      Located at
      lib/lists.nix:521
      in <nixpkgs>.
     
Return the first (at most) N elements of a list.
count
       Number of elements to take
lib.lists.take usage example
take 2 [ "a" "b" "c" "d" ]
=> [ "a" "b" ]
take 2 [ ]
=> [ ]
      Located at
      lib/lists.nix:539
      in <nixpkgs>.
     
Remove the first (at most) N elements of a list.
count
       Number of elements to drop
list
       Input list
lib.lists.drop usage example
drop 2 [ "a" "b" "c" "d" ]
=> [ "c" "d" ]
drop 2 [ ]
=> [ ]
      Located at
      lib/lists.nix:553
      in <nixpkgs>.
     
Return a list consisting of at most `count` elements of `list`, starting at index `start`.
start
       Index at which to start the sublist
count
       Number of elements to take
list
       Input list
lib.lists.sublist usage example
sublist 1 3 [ "a" "b" "c" "d" "e" ]
=> [ "b" "c" "d" ]
sublist 1 3 [ ]
=> [ ]
      Located at
      lib/lists.nix:570
      in <nixpkgs>.
     
Return the last element of a list.
This function throws an error if the list is empty.
list
       Function argument
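For example:
last [ 1 2 3 ]
=> 3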
      Located at
      lib/lists.nix:594
      in <nixpkgs>.
     
Return all elements but the last.
This function throws an error if the list is empty.
list
       Function argument
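Correspondingly, a small example:
init [ 1 2 3 ]
=> [ 1 2 ]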
      Located at
      lib/lists.nix:608
      in <nixpkgs>.
     
Return the image of the cross product of some lists by a function.
f
       Function argument
lib.lists.crossLists usage example
crossLists (x:y: "${toString x}${toString y}") [[1 2] [3 4]]
=> [ "13" "14" "23" "24" ]
      Located at
      lib/lists.nix:619
      in <nixpkgs>.
     
Remove duplicate elements from the list. O(n^2) complexity.
list
       Function argument
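An illustrative example; the first occurrence of each element should be kept:
unique [ 3 2 3 4 ]
=> [ 3 2 4 ]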
      Located at
      lib/lists.nix:630
      in <nixpkgs>.
     
Intersects list 'e' and another list. O(nm) complexity.
e
       Function argument
      Located at
      lib/lists.nix:645
      in <nixpkgs>.
     
Subtracts list 'e' from another list. O(nm) complexity.
e
       Function argument
lib.lists.subtractLists usage example
subtractLists [ 3 2 ] [ 1 2 3 4 5 3 ]
=> [ 1 4 5 ]
      Located at
      lib/lists.nix:653
      in <nixpkgs>.
     
Test if two lists have no common element. It should be slightly more efficient than (intersectLists a b == [])
a
       Function argument
b
       Function argument
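Illustrative calls (inputs made up):
mutuallyExclusive [ 1 2 ] [ 3 4 ]
=> true
mutuallyExclusive [ 1 2 ] [ 2 3 ]
=> false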
      Located at
      lib/lists.nix:658
      in <nixpkgs>.
     
Conditionally trace the supplied message, based on a predicate.
pred
       Predicate to check
msg
       Message that should be traced
x
       Value to return
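A small sketch of the expected behaviour (the message here is made up):
traceIf true "hello" 3
trace: hello
=> 3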
      Located at
      lib/debug.nix:35
      in <nixpkgs>.
     
Trace the supplied value after applying a function to it, and return the original value.
f
       Function to apply
x
       Value to trace and return
lib.debug.traceValFn usage example
traceValFn (v: "mystring ${v}") "foo"
trace: mystring foo
=> "foo"
      Located at
      lib/debug.nix:53
      in <nixpkgs>.
     
`builtins.trace`, but the value is `builtins.deepSeq`ed first.
x
       The value to trace
y
       The value to return
lib.debug.traceSeq usage example
trace { a.b.c = 3; } null
trace: { a = <CODE>; }
=> null
traceSeq { a.b.c = 3; } null
trace: { a = { b = { c = 3; }; }; }
=> null
      Located at
      lib/debug.nix:82
      in <nixpkgs>.
     
Like `traceSeq`, but only evaluate down to depth n. This is very useful because lots of `traceSeq` usages lead to an infinite recursion.
depth
       Function argument
x
       Function argument
y
       Function argument
lib.debug.traceSeqN usage example
traceSeqN 2 { a.b.c = 3; } null
trace: { a = { b = {…}; }; }
=> null
      Located at
      lib/debug.nix:97
      in <nixpkgs>.
     
A combination of `traceVal` and `traceSeq` that applies a provided function to the value to be traced after `deepSeq`ing it.
f
       Function to apply
v
       Value to trace
      Located at
      lib/debug.nix:114
      in <nixpkgs>.
     
A combination of `traceVal` and `traceSeq`.
      Located at
      lib/debug.nix:121
      in <nixpkgs>.
     
A combination of `traceVal` and `traceSeqN` that applies a provided function to the value to be traced.
f
       Function to apply
depth
       Function argument
v
       Value to trace
      Located at
      lib/debug.nix:125
      in <nixpkgs>.
     
A combination of `traceVal` and `traceSeqN`.
      Located at
      lib/debug.nix:133
      in <nixpkgs>.
     
Evaluate a set of tests. A test is an attribute set `{expr, expected}`, denoting an expression and its expected result. The result is a list of failed tests, each represented as `{name, expected, actual}`, denoting the attribute name of the failing test and its expected and actual results.
Used for regression testing of the functions in lib; see tests.nix for an example. Only tests having names starting with "test" are run.
Add attr { tests = ["testName"]; } to run these tests only.
tests
       Tests to run
      Located at
      lib/debug.nix:150
      in <nixpkgs>.
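      As a rough sketch, a test set passed to runTests could look like the
      following (the test names and expressions are illustrative); only the
      failing test shows up in the returned list:
with import <nixpkgs> {};

lib.debug.runTests {
  # passes, so it is not reported
  testAddition = {
    expr = 1 + 1;
    expected = 2;
  };
  # fails, so it appears in the list of failed tests
  testGreeting = {
    expr = "hello";
    expected = "goodbye";
  };
}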
     
Create a test assuming that list elements are `true`.
expr
       Function argument
      Located at
      lib/debug.nix:166
      in <nixpkgs>.
     
Returns true when the given argument is an option
lib.options.isOption usage example
isOption 1             // => false
isOption (mkOption {}) // => true
      Located at
      lib/options.nix:19
      in <nixpkgs>.
     
Creates an Option attribute set. mkOption accepts an attribute set with the following keys:
All keys default to `null` when not given.
pattern
       Structured function argument
default
          Default value used when no definition is given in the configuration.
defaultText
          Textual representation of the default, for the manual.
example
          Example value used in the manual.
description
          String describing the option.
relatedPackages
          Related packages used in the manual (see `genRelatedPackages` in ../nixos/doc/manual/default.nix).
type
          Option type, providing type-checking and value merging.
apply
          Function that converts the option value to something else.
internal
          Whether the option is for NixOS developers only.
visible
          Whether the option shows up in the manual.
readOnly
          Whether the option can be set only once
lib.options.mkOption usage example
mkOption { }  // => { _type = "option"; }
mkOption { defaultText = "foo"; } // => { _type = "option"; defaultText = "foo"; }
      Located at
      lib/options.nix:29
      in <nixpkgs>.
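      As a slightly fuller sketch using several of the keys listed above (the
      option itself is hypothetical):
with import <nixpkgs> {};

lib.mkOption {
  type = lib.types.int;
  default = 8080;
  example = 80;
  description = "TCP port on which a hypothetical service listens.";
}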
     
Creates an Option attribute set for a boolean value option, i.e. an option to be toggled on or off:
name
       Name for the created option
lib.options.mkEnableOption usage example
mkEnableOption "foo"
=> { _type = "option"; default = false; description = "Whether to enable foo."; example = true; type = { ... }; }
      Located at
      lib/options.nix:61
      in <nixpkgs>.
     
This option accepts anything, but it does not produce any result.
This is useful for sharing a module across different module sets without having to implement similar features as long as the values of the options are not accessed.
attrs
       Function argument
      Located at
      lib/options.nix:75
      in <nixpkgs>.
     
"Merge" option definitions by checking that they all have the same value.
loc
       Function argument
defs
       Function argument
      Located at
      lib/options.nix:106
      in <nixpkgs>.
     
Extracts values of all "value" keys of the given list.
lib.options.getValues usage example
getValues [ { value = 1; } { value = 2; } ] // => [ 1 2 ]
getValues [ ]                               // => [ ]
      Located at
      lib/options.nix:122
      in <nixpkgs>.
     
Extracts values of all "file" keys of the given list
lib.options.getFiles usage example
getFiles [ { file = "file1"; } { file = "file2"; } ] // => [ "file1" "file2" ]
getFiles [ ]                                         // => [ ]
      Located at
      lib/options.nix:132
      in <nixpkgs>.
     
This function recursively removes all derivation attributes from `x` except for the `name` attribute.
This is to make the generation of `options.xml` much more efficient: the XML representation of derivations is very large (on the order of megabytes) and is not actually used by the manual generator.
x
       Function argument
      Located at
      lib/options.nix:171
      in <nixpkgs>.
     
For use in the `example` option attribute. It causes the given text to be included verbatim in documentation. This is necessary for example values that are not simple values, e.g., functions.
text
       Function argument
      Located at
      lib/options.nix:183
      in <nixpkgs>.
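      A sketch of how this might be used inside an option declaration (the
      option and its example packages are hypothetical):
with import <nixpkgs> {};

lib.mkOption {
  type = lib.types.listOf lib.types.package;
  default = [];
  # rendered verbatim in the manual instead of being evaluated
  example = lib.options.literalExample "[ pkgs.hello ]";
  description = "Extra packages for a hypothetical service.";
}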
     
Convert an option, described as a list of the option parts, into a safe, human-readable version.
parts
       Function argument
lib.options.showOption usage example
(showOption ["foo" "bar" "baz"]) == "foo.bar.baz"
(showOption ["foo" "bar.baz" "tux"]) == "foo.\"bar.baz\".tux"
      Located at
      lib/options.nix:194
      in <nixpkgs>.
     
    Sometimes one wants to override parts of nixpkgs, e.g.
    derivation attributes, the results of derivations.
   
These functions are used to make changes to packages, returning only single packages. Overlays, on the other hand, can be used to combine the overridden packages across the entire package set of Nixpkgs.
     The function override is usually available for all the
     derivations in the nixpkgs expression (pkgs).
    
It is used to override the arguments passed to a function.
Example usages:
pkgs.foo.override { arg1 = val1; arg2 = val2; ... }
import pkgs.path { overlays = [ (self: super: {
  foo = super.foo.override { barSupport = true; };
})]; }
mypkg = pkgs.callPackage ./mypkg.nix {
  mydep = pkgs.mydep.override { ... };
  }
     In the first example, pkgs.foo is the result of a
     function call with some default arguments, usually a derivation. Using
     pkgs.foo.override will call the same function with the
     given new arguments.
    
     The function overrideAttrs allows overriding the
     attribute set passed to a stdenv.mkDerivation call,
     producing a new derivation based on the original one. This function is
     available on all derivations produced by the
     stdenv.mkDerivation function, which is most packages in
     the nixpkgs expression pkgs.
    
Example usage:
helloWithDebug = pkgs.hello.overrideAttrs (oldAttrs: rec {
  separateDebugInfo = true;
});
     In the above example, the separateDebugInfo attribute
     is overridden to be true, thus building debug info for
     helloWithDebug, while all other attributes will be
     retained from the original hello package.
    
     The argument oldAttrs is conventionally used to refer
     to the attr set originally passed to
     stdenv.mkDerivation.
    
      Note that separateDebugInfo is processed only by the
      stdenv.mkDerivation function, not the generated, raw
      Nix derivation. Thus, using overrideDerivation will
      not work in this case, as it overrides only the attributes of the final
      derivation. It is for this reason that overrideAttrs
      should be preferred in (almost) all cases to
      overrideDerivation, i.e. to allow using
      stdenv.mkDerivation to process input arguments, as
      well as the fact that it is easier to use (you can use the same attribute
      names you see in your Nix code, instead of the ones generated (e.g.
      buildInputs vs nativeBuildInputs),
      and it involves less typing).
     
      You should prefer overrideAttrs in almost all cases,
      see its documentation for the reasons why.
      overrideDerivation is not deprecated and will continue
      to work, but is less nice to use and does not have as many abilities as
      overrideAttrs.
     
      Do not use this function in Nixpkgs as it evaluates a Derivation before
      modifying it, which breaks package abstraction and removes error-checking
      of function arguments. In addition, this evaluation-per-function
      application incurs a performance penalty, which can become a problem if
      many overrides are used. It is only intended for ad-hoc customisation,
      such as in ~/.config/nixpkgs/config.nix.
     
     The function overrideDerivation creates a new
     derivation based on an existing one by overriding the original's
     attributes with the attribute set produced by the specified function. This
     function is available on all derivations defined using the
     makeOverridable function. Most standard
     derivation-producing functions, such as
     stdenv.mkDerivation, are defined using this function,
     which means most packages in the nixpkgs expression,
     pkgs, have this function.
    
Example usage:
mySed = pkgs.gnused.overrideDerivation (oldAttrs: {
  name = "sed-4.2.2-pre";
  src = fetchurl {
    url = ftp://alpha.gnu.org/gnu/sed/sed-4.2.2-pre.tar.bz2;
    sha256 = "11nq06d131y4wmf3drm0yk502d2xc6n5qy82cg88rb9nqd2lj41k";
  };
  patches = [];
});
     In the above example, the name, src,
     and patches of the derivation will be overridden, while
     all other attributes will be retained from the original derivation.
    
     The argument oldAttrs is used to refer to the attribute
     set of the original derivation.
    
      A package's attributes are evaluated *before* being modified by the
      overrideDerivation function. For example, the
      name attribute reference in url =
      "mirror://gnu/hello/${name}.tar.gz"; is filled-in *before* the
      overrideDerivation function modifies the attribute
      set. This means that overriding the name attribute, in
      this example, *will not* change the value of the url
      attribute. Instead, we need to override both the name
      *and* url attributes.
     
     The function lib.makeOverridable is used to make the
     result of a function easily customizable. This utility only makes sense
     for functions that accept an argument set and return an attribute set.
    
Example usage:
f = { a, b }: { result = a+b; };
c = lib.makeOverridable f { a = 1; b = 2; };
     The variable c is the value of the f
     function applied with some default arguments. Hence the value of
     c.result is 3, in this example.
    
     The variable c however also has some additional
     functions, like c.override which
     can be used to override the default arguments. In this example the value
     of (c.override { a = 4; }).result is 6.
    
    Generators are functions that create file formats from nix data structures,
    e.g. for configuration files. There are generators available for:
    INI, JSON and YAML
   
    All generators follow a similar call interface: generatorName
    configFunctions data, where configFunctions is an
    attrset of user-defined functions that format nested parts of the content.
    They each have common defaults, so often they do not need to be set
    manually. An example is mkSectionName ? (name: libStr.escape [ "["
    "]" ] name) from the INI generator. It receives
    the name of a section and sanitizes it. The default
    mkSectionName escapes [ and
    ] with a backslash.
   
    Generators can be fine-tuned to produce exactly the file format required by
    your application/service. One example is an INI-file format which uses
    :  as separator, the strings
    "yes"/"no" as boolean values and
    requires all string values to be quoted:
   
with lib;
let
  customToINI = generators.toINI {
    # specifies how to format a key/value pair
    mkKeyValue = generators.mkKeyValueDefault {
      # specifies the generated string for a subset of nix values
      mkValueString = v:
             if v == true then ''"yes"''
        else if v == false then ''"no"''
        else if isString v then ''"${v}"''
        # and delegates all other values to the default generator
        else generators.mkValueStringDefault {} v;
    } ":";
  };
# the INI file can now be given as plain old nix values
in customToINI {
  main = {
    pushinfo = true;
    autopush = false;
    host = "localhost";
    port = 42;
  };
  mergetool = {
    merge = "diff3";
  };
}
This will produce the following INI file as nix string:
[main]
autopush:"no"
host:"localhost"
port:42
pushinfo:"yes"
str\:ange:"very::strange"

[mergetool]
merge:"diff3"
     Nix store paths can be converted to strings by enclosing a derivation
     attribute like so: "${drv}".
    
    Detailed documentation for each generator can be found in
    lib/generators.nix.
   
Nix is a unityped, dynamic language; this means every value can potentially appear anywhere. Since it is also non-strict, evaluation order and what ultimately is evaluated might surprise you. Therefore it is important to be able to debug nix expressions.
    In the lib/debug.nix file you will find a number of
    functions that help (pretty-)printing values while evaluation is running.
    You can even specify how deep these values should be printed recursively,
    and transform them on the fly. Please consult the docstrings in
    lib/debug.nix for usage information.
   
When using Nix, you will frequently need to download source code and other files from the internet. Nixpkgs comes with a few helper functions that allow you to fetch fixed-output derivations in a structured way.
    The two fetcher primitives are fetchurl and
    fetchzip. Both of these have two required arguments, a
    URL and a hash. The hash is typically sha256, although
    many more hash algorithms are supported. Nixpkgs contributors are currently
    recommended to use sha256. This hash will be used by Nix
    to identify your source. A typical usage of fetchurl is provided below.
   
{ stdenv, fetchurl }:
stdenv.mkDerivation {
  name = "hello";
  src = fetchurl {
    url = "http://www.example.org/hello.tar.gz";
    sha256 = "1111111111111111111111111111111111111111111111111111";
  };
}
    The main difference between fetchurl and
    fetchzip is in how they store the contents.
    fetchurl will store the unaltered contents of the URL
    within the Nix store. fetchzip on the other hand will
    decompress the archive for you, making files and directories directly
    accessible in the future. fetchzip can only be used
    with archives. Despite the name, fetchzip is not
    limited to .zip files and can also be used with any tarball.
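    For comparison with the fetchurl example above, a sketch of the same fetch
    done with fetchzip (the URL is the same placeholder and the hash is a dummy
    value; note that the hash now covers the unpacked contents):
{ stdenv, fetchzip }:

stdenv.mkDerivation {
  name = "hello";
  # fetchzip unpacks the archive, so the hash is computed over the extracted files
  src = fetchzip {
    url = "http://www.example.org/hello.tar.gz";
    # dummy placeholder hash
    sha256 = "1111111111111111111111111111111111111111111111111111";
  };
}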
   
    fetchpatch works very similarly to
    fetchurl with the same arguments expected. It expects
    patch files as a source and performs normalization on them before
    computing the checksum. For example it will remove comments or other
    unstable parts that are sometimes added by version control systems and can
    change over time.
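    A minimal fetchpatch sketch, assuming a hypothetical upstream commit URL
    and a dummy hash:
{ fetchpatch }:

fetchpatch {
  # hypothetical patch URL; the hash is computed over the normalized patch
  url = "https://github.com/example/project/commit/0123456789abcdef0123456789abcdef01234567.patch";
  # dummy placeholder hash
  sha256 = "1111111111111111111111111111111111111111111111111111";
}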
   
    Other fetcher functions allow you to add source code directly from a VCS
    such as subversion or git. These are mostly straightforward names based on
    the name of the command used with the VCS system. Because they give you a
    working repository, they act most like fetchzip.
   
fetchsvn
     
       Used with Subversion. Expects url to a Subversion
       directory, rev, and sha256.
      
fetchgit
     
       Used with Git. Expects url to a Git repo,
       rev, and sha256.
        rev in this case can be the full git commit id (SHA1
        hash) or a tag name like refs/tags/v1.0. A minimal
        fetchgit sketch follows this list.
      
fetchfossil
     
       Used with Fossil. Expects url to a Fossil archive,
       rev, and sha256.
      
fetchcvs
     
       Used with CVS. Expects cvsRoot,
       tag, and sha256.
      
fetchhg
     
       Used with Mercurial. Expects url,
       rev, and sha256.
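        As referenced in the fetchgit entry above, a minimal sketch of such a
        call (repository URL and hash are placeholders):
{ fetchgit }:

fetchgit {
  url = "https://example.org/project.git";
  rev = "refs/tags/v1.0";
  # dummy placeholder hash
  sha256 = "1111111111111111111111111111111111111111111111111111";
}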
      
    A number of fetcher functions wrap part of fetchurl
    and fetchzip. They are mainly convenience functions
    intended for commonly used destinations of source code in Nixpkgs. These
    wrapper fetchers are listed below.
   
fetchFromGitHub
     
       fetchFromGitHub expects four arguments.
       owner is a string corresponding to the GitHub user or
       organization that controls this repository. repo
       corresponds to the name of the software repository. These are located at
       the top of every GitHub HTML page as
       owner/repo. rev
        corresponds to the Git commit hash or tag (e.g. v1.0)
        that will be downloaded from Git. Finally, sha256
        corresponds to the hash of the extracted directory. Again, other hash
        algorithms are also available but sha256 is currently
        preferred. A sketch of a typical invocation follows this list.
      
fetchFromGitLab
     This is used with GitLab repositories. The arguments expected are very similar to fetchFromGitHub above.
fetchFromBitbucket
     This is used with BitBucket repositories. The arguments expected are very similar to fetchFromGitHub above.
fetchFromSavannah
     This is used with Savannah repositories. The arguments expected are very similar to fetchFromGitHub above.
fetchFromRepoOrCz
     This is used with repo.or.cz repositories. The arguments expected are very similar to fetchFromGitHub above.
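     As referenced in the fetchFromGitHub entry above, a sketch of a typical
     call (owner, repo and hash are placeholders):
{ stdenv, fetchFromGitHub }:

stdenv.mkDerivation rec {
  name = "example-${version}";
  version = "1.0";
  src = fetchFromGitHub {
    owner = "example-org";   # GitHub user or organization
    repo = "example";        # repository name
    rev = "v${version}";     # tag or full commit hash
    # dummy placeholder hash of the extracted directory
    sha256 = "1111111111111111111111111111111111111111111111111111";
  };
}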
    Nixpkgs provides a couple of functions that help with building derivations.
    The most important one, stdenv.mkDerivation, has
    already been documented above. The following functions wrap
    stdenv.mkDerivation, making it easier to use in
    certain cases.
   
runCommand
     
       This takes three arguments, name,
       env, and buildCommand.
       name is just the name that Nix will append to the
       store path in the same way that stdenv.mkDerivation
       uses its name attribute. env is an
       attribute set specifying environment variables that will be set for this
       derivation. These attributes are then passed to the wrapped
       stdenv.mkDerivation. buildCommand
       specifies the commands that will be run to create this derivation. Note
       that you will need to create $out for Nix to register
       the command as successful.
      
       An example of using runCommand is provided below.
      
       (import <nixpkgs> {}).runCommand "my-example" {} ''
         echo My example command is running
         mkdir $out
         echo I can write data to the Nix store > $out/message
         echo I can also run basic commands like:
         echo ls
         ls
         echo whoami
         whoami
         echo date
         date
       ''
     runCommandCC
     
       This works just like runCommand. The only difference
       is that it also provides a C compiler in
       buildCommand’s environment. To minimize your
       dependencies, you should only use this if you are sure you will need a C
       compiler as part of running your command.
      
writeTextFile, writeText, writeTextDir, writeScript, writeScriptBin
     
       These functions write text to the Nix store. This is
       useful for creating scripts from Nix expressions.
       writeTextFile takes an attribute set and expects two
       arguments, name and text.
       name corresponds to the name used in the Nix store
       path. text will be the contents of the file. You can
       also set executable to true to make this file have
       the executable bit set.
      
       Many more commands wrap writeTextFile including
       writeText, writeTextDir,
       writeScript, and writeScriptBin.
       These are convenience functions over writeTextFile.
      
symlinkJoin
     
       This can be used to put many derivations into the same directory
       structure. It works by creating a new derivation and adding symlinks to
       each of the paths listed. It expects two arguments,
       name, and paths.
       name is the name used in the Nix store path for the
       created derivation. paths is a list of paths that
       will be symlinked. These paths can be to Nix store derivations or any
        other subdirectory contained within. A combined sketch of
        writeScriptBin and symlinkJoin follows this list.
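        As mentioned above, a small sketch that combines
        writeScriptBin and symlinkJoin (the script
        name and contents are made up):
with import <nixpkgs> {};

let
  # a tiny executable placed at $out/bin/greet
  greet = writeScriptBin "greet" ''
    #!${stdenv.shell}
    echo "Hello from the Nix store"
  '';
in
  # merge the script and the hello package into one tree of symlinks
  symlinkJoin {
    name = "my-tools";
    paths = [ greet hello ];
  }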
      
    buildFHSUserEnv provides a way to build and run
    FHS-compatible lightweight sandboxes. It creates an isolated root with
    bound /nix/store, so its footprint in terms of disk
    space needed is quite small. This allows one to run software which is hard
    or unfeasible to patch for NixOS -- 3rd-party source trees with FHS
    assumptions, games distributed as tarballs, software with integrity
     checking and/or external self-updated binaries. It uses the Linux
     namespaces feature to create temporary lightweight environments which are destroyed
    after all child processes exit, without root user rights requirement.
    Accepted arguments are:
   
name
     Environment name.
targetPkgs
     Packages to be installed for the main host's architecture (i.e. x86_64 on x86_64 installations). Along with libraries, binaries are also installed.
multiPkgs
     Packages to be installed for all architectures supported by a host (i.e. i686 and x86_64 on x86_64 installations). Only libraries are installed by default.
extraBuildCommands
     Additional commands to be executed for finalizing the directory structure.
extraBuildCommandsMulti
     
       Like extraBuildCommands, but executed only on
       multilib architectures.
      
extraOutputsToInstall
     Additional derivation outputs to be linked for both target and multi-architecture packages.
extraInstallCommands
     Additional commands to be executed for finalizing the derivation with runner script.
runScript
     
       A command that would be executed inside the sandbox and passed all the
       command line arguments. It defaults to bash.
      
    One can create a simple environment using a shell.nix
    like this:
   
{ pkgs ? import <nixpkgs> {} }:
(pkgs.buildFHSUserEnv {
  name = "simple-x11-env";
  targetPkgs = pkgs: (with pkgs;
    [ udev
      alsaLib
    ]) ++ (with pkgs.xorg;
    [ libX11
      libXcursor
      libXrandr
    ]);
  multiPkgs = pkgs: (with pkgs;
    [ udev
      alsaLib
    ]);
  runScript = "bash";
}).env
    Running nix-shell would then drop you into a shell with
    these libraries and binaries available. You can use this to run
    closed-source applications which expect FHS structure without hassles:
    simply change runScript to the application path, e.g.
    ./bin/start.sh -- relative paths are supported.
   
    pkgs.mkShell is a special kind of derivation that is
    only useful when using it combined with nix-shell. It
    will in fact fail to instantiate when invoked with
    nix-build.
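    A minimal shell.nix sketch using mkShell (the chosen
    packages are arbitrary):
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  # tools that become available inside nix-shell
  buildInputs = [ pkgs.hello pkgs.gnumake ];
}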
   
    pkgs.dockerTools is a set of functions for creating and
    manipulating Docker images according to the
    
    Docker Image Specification v1.2.0 . Docker itself is not used to
    perform any of the operations done by these functions.
   
     The dockerTools API is unstable and may be subject to
     backwards-incompatible changes in the future.
    
This function is analogous to the docker build command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with docker load.
     The parameters of buildImage with relative example
     values are described below:
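     A hedged reconstruction of such an invocation (the base image and the
     config values are illustrative, not necessarily the manual's original
     example):
buildImage {
  name = "redis";
  tag = "latest";

  # base image to build on top of (illustrative)
  fromImage = someBaseImage;
  fromImageName = null;
  fromImageTag = "latest";

  # store paths copied into the new layer
  contents = pkgs.redis;

  # commands run as root in a VM while the layer is built
  runAsRoot = ''
    #!${stdenv.shell}
    mkdir -p /data
  '';

  config = {
    Cmd = [ "/bin/redis-server" ];
    WorkingDir = "/data";
    Volumes = { "/data" = {}; };
  };
}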
    
     The above example will build a Docker image
     redis/latest from the given base image. Loading and
     running this image in Docker results in redis-server
     being started automatically.
    
     After the new layer has been created, its closure (to which
     contents, config and
     runAsRoot contribute) will be copied in the layer
     itself. Only new dependencies that are not already in the existing layers
     will be copied.
    
At the end of the process, only one new single layer will be produced and added to the resulting image.
     The resulting repository will only list the single image
     image/tag. In the case of
     Example 7.130, “Docker build” it would be
     redis/latest.
    
     It is possible to inspect the arguments with which an image was built
     using its buildArgs attribute.
    
      If you see errors similar to getProtocolByName: does not exist
      (no such protocol name: tcp) you may need to add
      pkgs.iana-etc to contents.
     
      If you see errors similar to Error_Protocol ("certificate has
      unknown CA",True,UnknownCa) you may need to add
      pkgs.cacert to contents.
     
      By default buildImage will use a static date of one
      second past the UNIX Epoch. This allows buildImage
      to produce binary reproducible images. When listing images with
      docker images, the newly created images will be listed
      like this:
     
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
hello        latest   08c791c7846e   48 years ago   25.2MB
      You can break binary reproducibility but have a sorted, meaningful
      CREATED column by setting created
      to now.
     
pkgs.dockerTools.buildImage {
  name = "hello";
  tag = "latest";
  created = "now";
  contents = pkgs.hello;
  config.Cmd = [ "/bin/hello" ];
}
and now the Docker CLI will display a reasonable date and sort the images as expected:
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED              SIZE
hello        latest   de2bf4786de6   About a minute ago   25.2MB
however, the produced images will not be binary reproducible.
Create a Docker image with many of the store paths being on their own layer to improve sharing between images.
name
      The name of the resulting image.
tag optional
      Tag of the generated image.
Default: the output path's hash
contents optional
      Top level paths in the container. Either a single derivation, or a list of derivations.
        Default: []
       
config optional
      Run-time configuration of the container. A full list of the options is available in the Docker Image Specification v1.2.0.
        Default: {}
       
created optional
      
        Date and time the layers were created. Follows the same
        now exception supported by
        buildImage.
       
        Default: 1970-01-01T00:00:01Z
       
maxLayers optional
      Maximum number of layers to create.
        Default: 24
       
      Each path directly listed in contents will have a
      symlink in the root of the image.
     
For example:
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  contents = [ pkgs.hello ];
}
      will create symlinks for all the paths in the hello
      package:
/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello
/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info
/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo
      The closure of config is automatically included in the
      closure of the final image.
     
This allows you to make very simple Docker images with very little code. This container will start up and run hello:
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
      Increasing the maxLayers increases the number of
      layers which have a chance to be shared between different images.
     
Modern Docker installations support up to 128 layers, while older versions support as few as 42.
      If the produced image will not be extended by other Docker builds, it is
      safe to set maxLayers to 128; note, however, that this
      makes it impossible to extend the image further.
     
      The first (maxLayers-2) most "popular" paths will have
      their own individual layers, then layer #maxLayers-1
      will contain all the remaining "unpopular" paths, and finally layer
      #maxLayers will contain the Image configuration.
     
Docker's Layers are not inherently ordered; they are content-addressable and are not explicitly layered until they are composed into an Image.
This function is analogous to the docker pull command, in that it can be used to pull a Docker image from a Docker registry. By default Docker Hub is used to pull images.
Its parameters are described in the example below:
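A sketch of a pullImage invocation, assuming the usual imageName/imageDigest/sha256 arguments and reusing the digest shown in the skopeo output below (the output hash is a dummy placeholder):
pullImage {
  imageName = "nixos/nix";
  imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
  finalImageTag = "1.11";
  # dummy placeholder hash
  sha256 = "1111111111111111111111111111111111111111111111111111";
  os = "linux";
  arch = "x86_64";
}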
The image digest argument is required; it can be obtained with skopeo, for example:

$ nix-shell --packages skopeo jq --command "skopeo --override-os linux --override-arch x86_64 inspect docker://docker.io/nixos/nix:1.11 | jq -r '.Digest'"
sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b
This function is analogous to the docker export command, in that it can be used to flatten a Docker image that contains multiple layers. It is in fact the result of the merge of all the layers of the image. As such, the result is suitable for being imported in Docker with docker import.
      Using this function requires the kvm device to be
      available.
     
     The parameters of exportImage are the following:
    
exportImage {
  fromImage = someLayeredImage;
  fromImageName = null;
  fromImageTag = null;
  name = someLayeredImage.name;
}
  
     The parameters relative to the base image have the same synopsis as
     described in Section 7.9.1, “buildImage”, except
     that fromImage is the only required argument in this
     case.
    
     The name argument is the name of the derivation output,
     which defaults to fromImage.name.
    
     This constant string is a helper for setting up the base files for
     managing users and groups, only if such files don't exist already. It is
      suitable for use in a runAsRoot script, as in the
      example below:
    
buildImage {
  name = "shadow-basic";
  runAsRoot = ''
    #!${stdenv.shell}
    ${shadowSetup}
    groupadd -r redis
    useradd -r -g redis redis
    mkdir /data
    chown redis:redis /data
  '';
}
     Creating base files like /etc/passwd or
     /etc/login.defs is necessary for shadow-utils to
     manipulate users and groups.
    
    prefer-remote-fetch is an overlay that downloads
    sources on the remote builder. This is useful when the evaluating machine has a
    slow upload while the builder can fetch faster directly from the source. To
    use it, put the following snippet as a new overlay:
    self: super:
      (super.prefer-remote-fetch self super)
  A full configuration example that sets the overlay up for your own account could look like this:
    $ mkdir ~/.config/nixpkgs/overlays/
    $ cat > ~/.config/nixpkgs/overlays/prefer-remote-fetch.nix <<EOF
      self: super: super.prefer-remote-fetch self super
    EOF
  
   Nix packages can declare meta-attributes that contain
   information about a package such as a description, its homepage, its
   license, and so on. For instance, the GNU Hello package has a
   meta declaration like this:
meta = with stdenv.lib; {
  description = "A program that produces a familiar, friendly greeting";
  longDescription = ''
    GNU Hello is a program that prints "Hello, world!" when you run it.
    It is fully customizable.
  '';
  homepage = https://www.gnu.org/software/hello/manual/;
  license = licenses.gpl3Plus;
  maintainers = [ maintainers.eelco ];
  platforms = platforms.all;
};
Meta-attributes are not passed to the builder of the package. Thus, a change to a meta-attribute doesn’t trigger a recompilation of the package. The value of a meta-attribute must be a string.
The meta-attributes of a package can be queried from the command-line using nix-env:
$ nix-env -qa hello --json
{
    "hello": {
        "meta": {
            "description": "A program that produces a familiar, friendly greeting",
            "homepage": "https://www.gnu.org/software/hello/manual/",
            "license": {
                "fullName": "GNU General Public License version 3 or later",
                "shortName": "GPLv3+",
                "url": "http://www.fsf.org/licensing/licenses/gpl.html"
            },
            "longDescription": "GNU Hello is a program that prints \"Hello, world!\" when you run it.\nIt is fully customizable.\n",
            "maintainers": [
                "Ludovic Court\u00e8s <ludo@gnu.org>"
            ],
            "platforms": [
                "i686-linux",
                "x86_64-linux",
                "armv5tel-linux",
                "armv7l-linux",
                "mips32-linux",
                "x86_64-darwin",
                "i686-cygwin",
                "i686-freebsd",
                "x86_64-freebsd",
                "i686-openbsd",
                "x86_64-openbsd"
            ],
            "position": "/home/user/dev/nixpkgs/pkgs/applications/misc/hello/default.nix:14"
        },
        "name": "hello-2.9",
        "system": "x86_64-linux"
    }
}
   nix-env knows about the description
   field specifically:
$ nix-env -qa hello --description
hello-2.3  A program that produces a familiar, friendly greeting
It is expected that each meta-attribute is one of the following:
description
     A short (one-line) description of the package. This is shown by nix-env -q --description and also on the Nixpkgs release pages.
Don’t include a period at the end. Don’t include newline characters. Capitalise the first character. For brevity, don’t repeat the name of package — just describe what it does.
       Wrong: "libpng is a library that allows you to decode PNG
       images."
      
       Right: "A library for decoding PNG images"
      
longDescription
     An arbitrarily long description of the package.
branch
     Release branch. Used to specify that a package is not going to receive updates that are not in this branch; for example, Linux kernel 3.0 is supposed to be updated to 3.0.X, not 3.1.
homepage
     
       The package’s homepage. Example:
       https://www.gnu.org/software/hello/manual/
      
downloadPage
     
       The page where a link to the current version can be found. Example:
       https://ftp.gnu.org/gnu/hello/
      
license
     
       The license, or licenses, for the package. One from the attribute set
       defined in
       
       nixpkgs/lib/licenses.nix. At this moment
       using both a list of licenses and a single license is valid. If the
       license field is in the form of a list representation, then it means
       that parts of the package are licensed differently. Each license should
       preferably be referenced by their attribute. The non-list attribute
       value can also be a space delimited string representation of the
       contained attribute shortNames or spdxIds. The following are all valid
       examples:
       
          Single license referenced by attribute (preferred)
          stdenv.lib.licenses.gpl3.
         
          Single license referenced by its attribute shortName (frowned upon)
          "gpl3".
         
          Single license referenced by its attribute spdxId (frowned upon)
          "GPL-3.0".
         
          Multiple licenses referenced by attribute (preferred) with
          stdenv.lib.licenses; [ asl20 free ofl ].
         
          Multiple licenses referenced as a space delimited string of attribute
          shortNames (frowned upon) "asl20 free ofl".
         
For details, see Section 8.2, “Licenses”.
maintainers
     
       A list of names and e-mail addresses of the maintainers of this Nix
       expression. If you would like to be a maintainer of a package, you may
       want to add yourself to
       nixpkgs/maintainers/maintainer-list.nix
       and write something like [ stdenv.lib.maintainers.alice
       stdenv.lib.maintainers.bob ].
      
priority
     
       The priority of the package, used by
       nix-env to resolve file name conflicts between
       packages. See the Nix manual page for nix-env for
       details. Example: "10" (a low-priority package).
      
platforms
     The list of Nix platform types on which the package is supported. Hydra builds packages according to the platform specified. If no platform is specified, the package does not have prebuilt binaries. An example is:
meta.platforms = stdenv.lib.platforms.linux;
        The attribute set stdenv.lib.platforms defines
        various common lists of platform types.
      
tests
     
        This attribute is special in that it is not actually under the
        meta attribute set but rather under the
        passthru attribute set. This is due to a current
        limitation of Nix, and will change as soon as Nixpkgs will be able to
        depend on a new enough version of Nix. See
        the
        relevant issue for more details.
       
        An attribute set whose values are tests. A test is a derivation, which
       builds successfully when the test passes, and fails to build otherwise.
       A derivation that is a test needs to have
       meta.timeout defined.
      
       The NixOS tests are available as nixosTests in
       parameters of derivations. For instance, the OpenSMTPD derivation
       includes lines similar to:
{ /* ... */, nixosTests }:
{
  # ...
  passthru.tests = {
    basic-functionality-and-dovecot-integration = nixosTests.opensmtpd;
  };
}
timeout
     
       A timeout (in seconds) for building the derivation. If the derivation
       takes longer than this time to build, it can fail due to breaking the
        timeout. However, not all computers have the same computing power,
       hence some builders may decide to apply a multiplicative factor to this
       value. When filling this value in, try to keep it approximately
       consistent with other values already present in
       nixpkgs.
      
hydraPlatforms
     
       The list of Nix platform types for which the Hydra instance at
       hydra.nixos.org will build the package. (Hydra is the
       Nix-based continuous build system.) It defaults to the value of
       meta.platforms. Thus, the only reason to set
       meta.hydraPlatforms is if you want
       hydra.nixos.org to build the package on a subset of
       meta.platforms, or not at all, e.g.
meta.platforms = stdenv.lib.platforms.linux;
meta.hydraPlatforms = [];
broken
     
       If set to true, the package is marked as
       “broken”, meaning that it won’t show up in
       nix-env -qa, and cannot be built or installed. Such
       packages should be removed from Nixpkgs eventually unless they are
       fixed.
      
updateWalker
     
       If set to true, the package is tested to be updated
       correctly by the update-walker.sh script without
       additional settings. Such packages have meta.version
       set and their homepage (or the page specified by
       meta.downloadPage) contains a direct link to the
       package tarball.
      
    The meta.license attribute should preferably contain a
    value from stdenv.lib.licenses defined in
    
    nixpkgs/lib/licenses.nix, or in-place license
    description of the same format if the license is unlikely to be useful in
    another expression.
   
Although it's typically better to indicate the specific license, a few generic options are available:
stdenv.lib.licenses.free, "free"
      Catch-all for free software licenses not listed above.
stdenv.lib.licenses.unfreeRedistributable, "unfree-redistributable"
      Unfree package that can be redistributed in binary form. That is, it’s legal to redistribute the output of the derivation. This means that the package can be included in the Nixpkgs channel.
        Sometimes proprietary software can only be redistributed unmodified.
        Make sure the builder doesn’t actually modify the original
        binaries; otherwise we’re breaking the license. For instance,
        the NVIDIA X11 drivers can be redistributed unmodified, but our builder
        applies patchelf to make them work. Thus, its
        license is "unfree" and it cannot be included in the
        Nixpkgs channel.
       
stdenv.lib.licenses.unfree, "unfree"
      Unfree package that cannot be redistributed. You can build it yourself, but you cannot redistribute the output of the derivation. Thus it cannot be included in the Nixpkgs channel.
stdenv.lib.licenses.unfreeRedistributableFirmware, "unfree-redistributable-firmware"
      
        This package supplies unfree, redistributable firmware. This is a
        separate value from unfree-redistributable because
        not everybody cares whether firmware is free.
       
   The standard build environment makes it
   easy to build typical Autotools-based packages with very little code. Any
   other kind of package can be accommodated by overriding the appropriate
   phases of stdenv. However, there are specialised
   functions in Nixpkgs to easily build packages for other programming
   languages, such as Perl or Haskell. These are described in this chapter.
  
The Android build environment provides three major features and a number of supporting features.
The first use case is deploying the SDK with a desired set of plugins or subsets of an SDK.
with import <nixpkgs> {};
let
  androidComposition = androidenv.composeAndroidPackages {
    toolsVersion = "25.2.5";
    platformToolsVersion = "27.0.1";
    buildToolsVersions = [ "27.0.3" ];
    includeEmulator = false;
    emulatorVersion = "27.2.0";
    platformVersions = [ "24" ];
    includeSources = false;
    includeDocs = false;
    includeSystemImages = false;
    systemImageTypes = [ "default" ];
    abiVersions = [ "armeabi-v7a" ];
    lldbVersions = [ "2.0.2558144" ];
    cmakeVersions = [ "3.6.4111459" ];
    includeNDK = false;
    ndkVersion = "16.1.4479499";
    useGoogleAPIs = false;
    useGoogleTVAddOns = false;
    includeExtras = [
      "extras;google;gcm"
    ];
  };
in
androidComposition.androidsdk
The above function invocation states that we want an Android SDK with the above specified plugin versions. By default, most plugins are disabled. Notable exceptions are the tools, platform-tools and build-tools sub packages.
The following parameters are supported:
       toolsVersion, specifies the version of the tools
       package to use
      
       platformToolsVersion specifies the version of the
       platform-tools plugin
      
       buildToolsVersions specifies the versions of the
       build-tools plugins to use.
      
       includeEmulator specifies whether to deploy the
       emulator package (false by default). When enabled,
       the version of the emulator to deploy can be specified by setting the
       emulatorVersion parameter.
      
       includeDocs specifies whether the documentation
       catalog should be included.
      
       lldbVersions specifies what LLDB versions should be
       deployed.
      
       cmakeVersions specifies which CMake versions should
       be deployed.
      
       includeNDK specifies that the Android NDK bundle
       should be included. Defaults to: false.
      
       ndkVersion specifies the NDK version that we want to
       use.
      
       includeExtras is an array of identifier strings
       referring to arbitrary add-on packages that should be installed.
      
       platformVersions specifies which platform SDK
       versions should be included.
      
For each platform version that has been specified, we can apply the following options:
       includeSystemImages specifies whether a system image
       for each platform SDK should be included.
      
       includeSources specifies whether the sources for each
       SDK version should be included.
      
       useGoogleAPIs specifies that for each selected
       platform version the Google API should be included.
      
       useGoogleTVAddOns specifies that for each selected
       platform version the Google TV add-on should be included.
      
For each requested system image we can specify the following options:
       systemImageTypes specifies what kind of system images
       should be included. Defaults to: default.
      
       abiVersions specifies what kind of ABI version of
       each system image should be included. Defaults to:
       armeabi-v7a.
      
Most of the function arguments have reasonable default settings.
When building the above expression with:
$ nix-build
The Android SDK gets deployed with all desired plugin versions.
     We can also deploy subsets of the Android SDK. For example, to deploy only
     the platform-tools package, you can evaluate the
     following expression:
    
with import <nixpkgs> {};
let
  androidComposition = androidenv.composeAndroidPackages {
    # ...
  };
in
androidComposition.platform-tools
In addition to composing an Android package set manually, it is also possible to use a predefined composition that contains all basic packages for a specific Android version, such as version 9.0 (API-level 28).
The following Nix expression can be used to deploy the entire SDK with all basic plugins:
with import <nixpkgs> {};
androidenv.androidPkgs_9_0.androidsdk
It is also possible to use one plugin only:
with import <nixpkgs> {};
androidenv.androidPkgs_9_0.platform-tools
In addition to the SDK, it is also possible to build an Ant-based Android project and automatically deploy all the Android plugins that a project requires.
with import <nixpkgs> {};
androidenv.buildApp {
  name = "MyAndroidApp";
  src = ./myappsources;
  release = true;
  # If release is set to true, you need to specify the following parameters
  keyStore = ./keystore;
  keyAlias = "myfirstapp";
  keyStorePassword = "mykeystore";
  keyAliasPassword = "myfirstapp";
  # Any Android SDK parameters that install all the relevant plugins that a
  # build requires
  platformVersions = [ "24" ];
  # When we include the NDK, then ndk-build is invoked before Ant gets invoked
  includeNDK = true;
}
     Aside from the app-specific build parameters (name,
     src, release and keystore
     parameters), the buildApp {} function supports all the
     function parameters that the SDK composition function (the function shown
     in the previous section) supports.
    
This build function is particularly useful when it is desired to use Hydra: the Nix-based continuous integration solution to build Android apps. An Android APK gets exposed as a build product and can be installed on any Android device with a web browser by navigating to the build result page.
For testing purposes, it can also be quite convenient to automatically generate scripts that spawn emulator instances with all desired configuration settings.
     An emulator spawn script can be configured by invoking the
     emulateApp {} function:
    
with import <nixpkgs> {};
androidenv.emulateApp {
  name = "emulate-MyAndroidApp";
  platformVersion = "24";
  abiVersion = "armeabi-v7a"; # mips, x86 or x86_64
  systemImageType = "default";
  useGoogleAPIs = false;
}
It is also possible to specify an APK to deploy inside the emulator and the package and activity names to launch it:
with import <nixpkgs> {};
androidenv.emulateApp {
  name = "emulate-MyAndroidApp";
  platformVersion = "24";
  abiVersion = "armeabi-v7a"; # mips, x86 or x86_64
  systemImageType = "default";
  useGoogleAPIs = false;
  app = ./MyApp.apk;
  package = "MyApp";
  activity = "MainActivity";
}
     In addition to prebuilt APKs, you can also bind the APK parameter to a
     buildApp {} function invocation shown in the previous
     example.
    
When using any of the previously shown functions, it may be a bit inconvenient to find out what options are supported, since the Android SDK provides many plugins.
     A shell script in the
     pkgs/development/mobile/androidenv/ sub directory can
     be used to retrieve all possible options:
    
sh ./querypackages.sh packages build-tools
     The above command-line instruction queries all build-tools versions in the
     generated packages.nix expression.
    
In this document and related Nix expressions, we use the term, BEAM, to describe the environment. BEAM is the name of the Erlang Virtual Machine and, as far as we're concerned, from a packaging perspective, all languages that run on the BEAM are interchangeable. That which varies, like the build system, is transparent to users of any given BEAM package, so we make no distinction.
     All BEAM-related expressions are available via the top-level
     beam attribute, which includes:
    
       interpreters: a set of compilers running on the BEAM,
       including multiple Erlang/OTP versions
       (beam.interpreters.erlangR19, etc), Elixir
       (beam.interpreters.elixir) and LFE
       (beam.interpreters.lfe).
      
       packages: a set of package sets, each compiled with a
       specific Erlang/OTP version, e.g.
       beam.packages.erlangR19.
      
     The default Erlang compiler, defined by
     beam.interpreters.erlang, is aliased as
     erlang. The default BEAM package set is defined by
     beam.packages.erlang and aliased at the top level as
     beamPackages.
    
     To create a package set built with a custom Erlang version, use the
     lambda, beam.packagesWith, which accepts an Erlang/OTP
     derivation and produces a package set similar to
     beam.packages.erlang.
    
     Many Erlang/OTP distributions available in
     beam.interpreters have versions with ODBC and/or Java
     enabled. For example, there's
     beam.interpreters.erlangR19_odbc_javac, which
     corresponds to beam.interpreters.erlangR19.
    
     We also provide the lambda,
     beam.packages.erlang.callPackage, which simplifies
     writing BEAM package definitions by injecting all packages from
     beam.packages.erlang into the top-level context.
    
By default, Rebar3 wants to manage its own dependencies. This is perfectly acceptable in the normal, non-Nix setup, but in the Nix world, it is not. To rectify this, we provide two versions of Rebar3:
         rebar3: patched to remove the ability to download
         anything. When not running it via nix-shell or
         nix-build, it's probably not going to work as
         desired.
        
         rebar3-open: the normal, unmodified Rebar3. It
         should work exactly as would any other version of Rebar3. Any Erlang
         package should rely on rebar3 instead. See
         Section 9.2.5.1.1, “Rebar3 Packages”.
        
     BEAM packages are not registered at the top level, simply because they are
     not relevant to the vast majority of Nix users. They are installable using
     the beam.packages.erlang attribute set (aliased as
     beamPackages), which points to packages built by the
     default Erlang/OTP version in Nixpkgs, as defined by
     beam.interpreters.erlang. To list the available
     packages in beamPackages, use the following command:
    
$ nix-env -f "<nixpkgs>" -qaP -A beamPackages
beamPackages.esqlite    esqlite-0.2.1
beamPackages.goldrush   goldrush-0.1.7
beamPackages.ibrowse    ibrowse-4.2.2
beamPackages.jiffy      jiffy-0.14.5
beamPackages.lager      lager-3.0.2
beamPackages.meck       meck-0.8.3
beamPackages.rebar3-pc  pc-1.1.0
To install any of those packages into your profile, refer to them by their attribute path (first column):
$ nix-env -f "<nixpkgs>" -iA beamPackages.ibrowse
The attribute path of any BEAM package corresponds to the name of that particular package in Hex or its OTP Application/Release name.
       The Nix function, buildRebar3, defined in
       beam.packages.erlang.buildRebar3 and aliased at the
       top level, can be used to build a derivation that understands how to
       build a Rebar3 project. For example, we can build
       hex2nix
       as follows:
      
        { stdenv, fetchFromGitHub, buildRebar3, ibrowse, jsx, erlware_commons }:
        buildRebar3 rec {
          name = "hex2nix";
          version = "0.0.1";
          src = fetchFromGitHub {
            owner = "ericbmerritt";
            repo = "hex2nix";
            rev = "${version}";
            sha256 = "1w7xjidz1l5yjmhlplfx7kphmnpvqm67w99hd2m7kdixwdxq0zqg";
          };
          beamDeps = [ ibrowse jsx erlware_commons ];
        }
      
       Such derivations are callable with
       beam.packages.erlang.callPackage (see
       Section 9.2.2, “Structure”). To call this package using the
       normal callPackage, refer to dependency packages via
       beamPackages, e.g.
       beamPackages.ibrowse.
      
       Notably, buildRebar3 includes
       beamDeps, while
       stdenv.mkDerivation does not. BEAM dependencies added
       there will be correctly handled by the system.
      
       If a package needs to compile native code via Rebar3's port compilation
       mechanism, add compilePort = true; to the derivation.
      
       Erlang.mk functions similarly to Rebar3, except we use
       buildErlangMk instead of
       buildRebar3.
      
        { buildErlangMk, fetchHex, cowlib, ranch }:
        buildErlangMk {
          name = "cowboy";
          version = "1.0.4";
          src = fetchHex {
            pkg = "cowboy";
            version = "1.0.4";
            sha256 = "6a0edee96885fae3a8dd0ac1f333538a42e807db638a9453064ccfdaa6b9fdac";
          };
          beamDeps = [ cowlib ranch ];
          meta = {
            description = ''
              Small, fast, modular HTTP server written in Erlang
            '';
            license = stdenv.lib.licenses.isc;
            homepage = https://github.com/ninenines/cowboy;
          };
        }
      
       Mix functions similarly to Rebar3, except we use
       buildMix instead of buildRebar3.
      
        { buildMix, fetchHex, plug, absinthe }:
        buildMix {
          name = "absinthe_plug";
          version = "1.0.0";
          src = fetchHex {
            pkg = "absinthe_plug";
            version = "1.0.0";
            sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
          };
          beamDeps = [ plug absinthe ];
          meta = {
            description = ''
              A plug for Absinthe, an experimental GraphQL toolkit
            '';
            license = stdenv.lib.licenses.bsd3;
            homepage = https://github.com/CargoSense/absinthe_plug;
          };
        }
      
       Alternatively, we can use buildHex as a shortcut:
      
        { buildHex, buildMix, plug, absinthe }:
        buildHex {
          name = "absinthe_plug";
          version = "1.0.0";
          sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
          builder = buildMix;
          beamDeps = [ plug absinthe ];
          meta = {
            description = ''
              A plug for Absinthe, an experimental GraphQL toolkit
            '';
            license = stdenv.lib.licenses.bsd3;
            homepage = https://github.com/CargoSense/absinthe_plug;
         };
       }
      
      Often, we simply want to access a valid environment that contains a
      specific package and its dependencies. We can accomplish that with the
      env attribute of a derivation. For example, let's say
      we want to access an Erlang REPL with ibrowse loaded
      up. We could do the following:
     
      $ nix-shell -A beamPackages.ibrowse.env --run "erl"
      Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
      Eshell V7.0  (abort with ^G)
      1> m(ibrowse).
      Module: ibrowse
      MD5: 3b3e0137d0cbb28070146978a3392945
      Compiled: January 10 2016, 23:34
      Object file: /nix/store/g1rlf65rdgjs4abbyj4grp37ry7ywivj-ibrowse-4.2.2/lib/erlang/lib/ibrowse-4.2.2/ebin/ibrowse.beam
      Compiler options:  [{outdir,"/tmp/nix-build-ibrowse-4.2.2.drv-0/hex-source-ibrowse-4.2.2/_build/default/lib/ibrowse/ebin"},
      debug_info,debug_info,nowarn_shadow_vars,
      warn_unused_import,warn_unused_vars,warnings_as_errors,
      {i,"/tmp/nix-build-ibrowse-4.2.2.drv-0/hex-source-ibrowse-4.2.2/_build/default/lib/ibrowse/include"}]
      Exports:
      add_config/1                  send_req_direct/7
      all_trace_off/0               set_dest/3
      code_change/3                 set_max_attempts/3
      get_config_value/1            set_max_pipeline_size/3
      get_config_value/2            set_max_sessions/3
      get_metrics/0                 show_dest_status/0
      get_metrics/2                 show_dest_status/1
      handle_call/3                 show_dest_status/2
      handle_cast/2                 spawn_link_worker_process/1
      handle_info/2                 spawn_link_worker_process/2
      init/1                        spawn_worker_process/1
      module_info/0                 spawn_worker_process/2
      module_info/1                 start/0
      rescan_config/0               start_link/0
      rescan_config/1               stop/0
      send_req/3                    stop_worker_process/1
      send_req/4                    stream_close/1
      send_req/5                    stream_next/1
      send_req/6                    terminate/2
      send_req_direct/4             trace_off/0
      send_req_direct/5             trace_off/2
      send_req_direct/6             trace_on/0
      trace_on/2
      ok
      2>
    
      Notice the -A beamPackages.ibrowse.env. That is the
      key to this functionality.
     
      Getting access to an environment often isn't enough to do real
      development. Usually, we need to create a shell.nix
      file and do our development inside of the environment specified therein.
      This file looks a lot like the packaging described above, except that
      src points to the project root and we call the package
      directly.
     
{ pkgs ? import <nixpkgs> {} }:
with pkgs;
let
  f = { buildRebar3, ibrowse, jsx, erlware_commons }:
      buildRebar3 {
        name = "hex2nix";
        version = "0.1.0";
        src = ./.;
        beamDeps = [ ibrowse jsx erlware_commons ];
      };
  drv = beamPackages.callPackage f {};
in
  drv
    We can also drive the build phases of the derivation directly, irrespective of which build derivation is used, by calling the phase commands ourselves inside nix-shell, as the following Makefile shows:
# =============================================================================
# Variables
# =============================================================================
NIX_TEMPLATES := "$(CURDIR)/nix-templates"
TARGET := "$(PREFIX)"
PROJECT_NAME := thorndyke
NIXPKGS=../nixpkgs
NIX_PATH=nixpkgs=$(NIXPKGS)
NIX_SHELL=nix-shell -I "$(NIX_PATH)" --pure
# =============================================================================
# Rules
# =============================================================================
.PHONY: all test clean repl shell build analyze configure install \
        test-nix-install publish plt
all: build
guard-%:
        @ if [ "${${*}}" == "" ]; then \
                echo "Environment variable $* not set"; \
                exit 1; \
        fi
clean:
        rm -rf _build
        rm -rf .cache
repl:
        $(NIX_SHELL) --run "iex -pa './_build/prod/lib/*/ebin'"
shell:
        $(NIX_SHELL)
configure:
        $(NIX_SHELL) --command 'eval "$$configurePhase"'
build: configure
        $(NIX_SHELL) --command 'eval "$$buildPhase"'
install:
        $(NIX_SHELL) --command 'eval "$$installPhase"'
test:
        $(NIX_SHELL) --command 'mix test --no-start --no-deps-check'
plt:
        $(NIX_SHELL) --run "mix dialyzer.plt --no-deps-check"
analyze: build plt
        $(NIX_SHELL) --run "mix dialyzer --no-compile"
    
       Using a shell.nix as described (see
       Section 9.2.6.2, “Creating a Shell”) should just work. Aside from
       test, plt, and
       analyze, the Make targets work just fine for all of
       the build derivations.
      
     Updating the Hex package set
     requires
     hex2nix.
     Given the path to the Erlang modules (usually
     pkgs/development/erlang-modules), it will dump a file
     called hex-packages.nix, containing all the packages
     that use a recognized build system in
     Hex. It can't be determined,
     however, whether every package is buildable.
    
     To make life easier for our users, try to build every
     Hex package and remove those that
     fail. To do that, simply run the following command in the root of your
     nixpkgs repository:
    
$ nix-build -A beamPackages
    
     That will attempt to build every package in
     beamPackages. Then manually remove those that fail.
     Hopefully, someone will improve
     hex2nix in
     the future to automate the process.
    
    Bower is a package manager for
    web site front-end components. Bower packages (comprising build
    artefacts and sometimes sources) are stored in git
    repositories, typically on GitHub. The package registry is run by the Bower
    team, with package metadata coming from the bower.json
    file within each package.
   
    The end result of running Bower is a bower_components
    directory which can be included in the web app's build process.
   
    Bower can be run interactively, by installing
    nodePackages.bower. More interestingly, the Bower
    components can be declared in a Nix derivation, with the help of
    nodePackages.bower2nix.
   
     Suppose you have a bower.json with the following
     contents:
     
bower.json
{
  "name": "my-web-app",
  "dependencies": {
    "angular": "~1.5.0",
    "bootstrap": "~3.3.6"
  }
}
    
Running bower2nix will produce something like the following output:
{ fetchbower, buildEnv }:
buildEnv { name = "bower-env"; ignoreCollisions = true; paths = [
  (fetchbower "angular" "1.5.3" "~1.5.0" "1749xb0firxdra4rzadm4q9x90v6pzkbd7xmcyjk6qfza09ykk9y")
  (fetchbower "bootstrap" "3.3.6" "~3.3.6" "1vvqlpbfcy0k5pncfjaiskj3y6scwifxygfqnw393sjfxiviwmbv")
  (fetchbower "jquery" "2.2.2" "1.9.1 - 2" "10sp5h98sqwk90y4k6hbdviwqzvzwqf47r3r51pakch5ii2y7js1")
]; }
     Using the bower2nix command line arguments, the output
     can be redirected to a file. A name like
     bower-packages.nix would be fine.
    
     The resulting derivation is a union of all the downloaded Bower packages
     (and their dependencies). To use it, they still need to be linked together
     by Bower, which is where buildBowerComponents is
     useful.
    
     The function is implemented in
     
     pkgs/development/bower-modules/generic/default.nix.
     Example usage:
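      A minimal sketch, using the same attribute names as the full
      default.nix example further below (bower-packages.nix is the saved
      bower2nix output and myWebApp points at the application source):

bowerComponents = buildBowerComponents {
  name = "my-web-app";
  generated = ./bower-packages.nix;
  src = myWebApp;
};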
     
    
      In the buildBowerComponents call above, generated specifies the Nix
      expression to import the Bower packages from (the bower-packages.nix
      file produced by bower2nix), and src is the application source tree
      that contains bower.json.
     buildBowerComponents will run Bower to link together
     the output of bower2nix, resulting in a
     bower_components directory which can be used.
    
Here is an example of a web frontend build process using gulp. You might use grunt, or anything else.
gulpfile.js
var gulp = require('gulp');
gulp.task('default', [], function () {
  gulp.start('build');
});
gulp.task('build', [], function () {
  console.log("Just a dummy gulp build");
  gulp
    .src(["./bower_components/**/*"])
    .pipe(gulp.dest("./gulpdist/"));
});
default.nix
{ myWebApp ? { outPath = ./.; name = "myWebApp"; }
, pkgs ? import <nixpkgs> {}
}:

pkgs.stdenv.mkDerivation {
  name = "my-web-app-frontend";
  src = myWebApp;

  buildInputs = [ pkgs.nodePackages.gulp ];

  bowerComponents = pkgs.buildBowerComponents {
    name = "my-web-app";
    generated = ./bower-packages.nix;
    src = myWebApp;
  };

  buildPhase = ''
    cp --reflink=auto --no-preserve=mode -R $bowerComponents/bower_components .
    export HOME=$PWD
    ${pkgs.nodePackages.gulp}/bin/gulp build
  '';

  installPhase = "mv gulpdist $out";
}
     A few notes about Example 9.4, “Full example — default.nix”:
     
1. The result of buildBowerComponents is passed to the build as the
   bowerComponents attribute and copied into the source tree.
2. Whether to symlink or copy the bower_components directory depends on the
   build tool in use; here a copy is used to avoid problems with file
   permissions.
3. gulp requires HOME to be set, so it is pointed at the build directory.
4. The actual build command. Other tools could be used.
ENOCACHE errors from buildBowerComponents
      
        This means that Bower was looking for a package version which doesn't
        exist in the generated bower-packages.nix.
       
        If bower.json has been updated, then run
        bower2nix again.
       
        It could also be a bug in bower2nix or
        fetchbower. If possible, try reformulating the
        version specification in bower.json.
       
    Coq libraries should be installed in
    $(out)/lib/coq/${coq.coq-version}/user-contrib/. Such
    directories are automatically added to the $COQPATH
    environment variable by the hook defined in the Coq derivation.
   
    Some extensions (plugins) might require OCaml and sometimes other OCaml
    packages. The coq.ocamlPackages attribute can be used to
    depend on the same package set Coq was built against.
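     As a sketch only (the plugin name is hypothetical, and a real plugin
     would list whatever OCaml packages its build actually needs), a plugin
     derivation could pull its OCaml toolchain from that set like this:

{ stdenv, coq }:
stdenv.mkDerivation {
  name = "coq${coq.coq-version}-my-plugin-0.1";  # hypothetical plugin
  src = ./.;                                     # plugin sources
  # build against the same OCaml and findlib that Coq itself was built with
  buildInputs = [ coq coq.ocamlPackages.ocaml coq.ocamlPackages.findlib ];
  installFlags = "COQLIB=$(out)/lib/coq/${coq.coq-version}/";
}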
   
    Coq libraries may be compatible with some specific versions of Coq only.
    The compatibleCoqVersions attribute is used to precisely
    select those versions of Coq that are compatible with this derivation.
   
    Here is a simple package example. It is a pure Coq library, thus it depends
    on Coq. It builds on the Mathematical Components library, thus it also
    takes mathcomp as buildInputs. Its
    Makefile has been generated using
    coq_makefile so we only have to set the
    $COQLIB variable at install time.
   
{ stdenv, fetchFromGitHub, coq, mathcomp }:
stdenv.mkDerivation rec {
  name = "coq${coq.coq-version}-multinomials-${version}";
  version = "1.0";
  src = fetchFromGitHub {
    owner = "math-comp";
    repo = "multinomials";
    rev = version;
    sha256 = "1qmbxp1h81cy3imh627pznmng0kvv37k4hrwi2faa101s6bcx55m";
  };
  buildInputs = [ coq ];
  propagatedBuildInputs = [ mathcomp ];
  installFlags = "COQLIB=$(out)/lib/coq/${coq.coq-version}/";
  meta = {
    description = "A Coq/SSReflect Library for Monoidal Rings and Multinomials";
    inherit (src.meta) homepage;
    license = stdenv.lib.licenses.cecill-b;
    inherit (coq.meta) platforms;
  };
  passthru = {
    compatibleCoqVersions = v: builtins.elem v [ "8.5" "8.6" "8.7" ];
  };
}
    The function buildGoPackage builds standard Go programs.
   
deis = buildGoPackage rec {
  name = "deis-${version}";
  version = "1.13.0";

  goPackagePath = "github.com/deis/deis";
  subPackages = [ "client" ];

  src = fetchFromGitHub {
    owner = "deis";
    repo = "deis";
    rev = "v${version}";
    sha256 = "1qv9lxqx7m18029lj8cw3k7jngvxs4iciwrypdy0gd2nnghc68sw";
  };

  goDeps = ./deps.nix;

  buildFlags = "--tags release";
}
Example 9.5, “buildGoPackage”, shown above, is an example expression using buildGoPackage; the following arguments are of special significance to the function:

1. goPackagePath specifies the package's canonical Go import path.
2. subPackages limits the builder to the listed child packages; if it is
   omitted, all child packages are built. In this example only
   github.com/deis/deis/client is built.
3. goDeps lists the Go dependencies of the program. It can be kept in a
   separate deps.nix file for readability, as shown below.
4. buildFlags is a list of flags passed to the go build command.
    The goDeps attribute can be imported from a separate
    nix file that defines which Go libraries are needed and
    should be included in GOPATH for
    buildPhase.
   
[
  {
    goPackagePath = "gopkg.in/yaml.v2";
    fetch = {
      type = "git";
      url = "https://gopkg.in/yaml.v2";
      rev = "a83829b6f1293c91addabc89d0571c246397bbf4";
      sha256 = "1m4dsmk90sbi17571h6pld44zxz7jc4lrnl4f27dpd1l8g5xvjhh";
    };
  }
  {
    goPackagePath = "github.com/docopt/docopt-go";
    fetch = {
      type = "git";
      url = "https://github.com/docopt/docopt-go";
      rev = "784ddc588536785e7299f7272f39101f7faccc3f";
      sha256 = "0wwz48jl9fvl1iknvn9dqr4gfy1qs03gxaikrxxp9gry6773v3sj";
    };
  }
]
    To extract dependency information from a Go package in an automated way, use
    go2nix. It can
    produce a complete derivation and a goDeps file for Go
    programs.
   
    buildGoPackage produces
    multiple-output packages, where the
    bin output includes the program binaries. You can test build a Go
    binary as follows:
    $ nix-build -A deis.bin
  or build all outputs with:
    $ nix-build -A deis.all
  
    The bin output will be installed by default with
    nix-env -i or systemPackages.
   
You may use Go packages installed into the active Nix profiles by adding the following to your ~/.bashrc:
for p in $NIX_PROFILES; do
    GOPATH="$p/share/go:$GOPATH"
done
Nixpkgs distributes build instructions for all Haskell packages registered on Hackage, but strangely enough normal Nix package lookups don’t seem to discover any of them, except for the default version of ghc, cabal-install, and stack:
$ nix-env -i alex
error: selector ‘alex’ matches no derivations
$ nix-env -qa ghc
ghc-7.10.2
     The Haskell package set is not registered in the top-level namespace
     because it is huge. If all Haskell packages were
     visible to these commands, then name-based search/install operations would
     be much slower than they are now. We avoided that by keeping all
     Haskell-related packages in a separate attribute set called
     haskellPackages, which the following command will list:
    
$ nix-env -f "<nixpkgs>" -qaP -A haskellPackages
haskellPackages.a50          a50-0.5
haskellPackages.abacate      haskell-abacate-0.0.0.0
haskellPackages.abcBridge    haskell-abcBridge-0.12
haskellPackages.afv          afv-0.1.1
haskellPackages.alex         alex-3.1.4
haskellPackages.Allure       Allure-0.4.101.1
haskellPackages.alms         alms-0.6.7
[... some 8000 entries omitted ...]
To install any of those packages into your profile, refer to them by their attribute path (first column):
nix-env -f "<nixpkgs>" -iA haskellPackages.Allure ...
     The attribute path of any Haskell packages corresponds to the name of that
     particular package on Hackage: the package
     cabal-install has the attribute
     haskellPackages.cabal-install, and so on. (Actually,
     this convention causes trouble with packages like
     3dmodels and 4Blocks, because these
     names are invalid identifiers in the Nix language. The issue of how to
     deal with these rare corner cases is currently unresolved.)
    
     Haskell packages whose Nix name (second column) begins with a
     haskell- prefix are packages that provide a library
     whereas packages without that prefix provide just executables. Libraries
     may provide executables too, though: the package
     haskell-pandoc, for example, installs both a library
     and an application. You can install and use Haskell executables just like
     any other program in Nixpkgs, but using Haskell libraries for development
     is a bit trickier and we’ll address that subject in great detail in
     section How to
     create a development environment.
    
     Attribute paths are deterministic inside of Nixpkgs, but the path
     necessary to reach Nixpkgs varies from system to system. We dodged that
     problem by giving nix-env an explicit -f
     "<nixpkgs>" parameter, but if you call
     nix-env without that flag, then chances are the
     invocation fails:
    
$ nix-env -iA haskellPackages.cabal-install
error: attribute ‘haskellPackages’ in selection path
       ‘haskellPackages.cabal-install’ not found
On NixOS, for example, Nixpkgs does not exist in the top-level namespace by default. To figure out the proper attribute path, it’s easiest to query for the path of a well-known Nixpkgs package, i.e.:
$ nix-env -qaP coreutils
nixos.coreutils  coreutils-8.23
     If your system responds like that (most NixOS installations will), then
     the attribute path to haskellPackages is
     nixos.haskellPackages. Thus, if you want to use
     nix-env without giving an explicit
     -f flag, then that’s the way to do it:
    
nix-env -qaP -A nixos.haskellPackages
nix-env -iA nixos.haskellPackages.cabal-install
     Our current default compiler is GHC 7.10.x and the
     haskellPackages set contains packages built with that
     particular version. Nixpkgs contains the latest major release of every GHC
     since 6.10.4, however, and there is a whole family of package sets
     available that defines Hackage packages built with each of those
     compilers, too:
    
nix-env -f "<nixpkgs>" -qaP -A haskell.packages.ghc6123
nix-env -f "<nixpkgs>" -qaP -A haskell.packages.ghc763
     The name haskellPackages is really just a synonym for
     haskell.packages.ghc7102, because we prefer that
     package set internally and recommend it to our users as their default
     choice, but ultimately you are free to compile your Haskell packages with
     any GHC version you please. The following command displays the complete
     list of available compilers:
    
$ nix-env -f "<nixpkgs>" -qaP -A haskell.compiler
haskell.compiler.ghc6104       ghc-6.10.4
haskell.compiler.ghc6123       ghc-6.12.3
haskell.compiler.ghc704        ghc-7.0.4
haskell.compiler.ghc722        ghc-7.2.2
haskell.compiler.ghc742        ghc-7.4.2
haskell.compiler.ghc763        ghc-7.6.3
haskell.compiler.ghc784        ghc-7.8.4
haskell.compiler.ghc7102       ghc-7.10.2
haskell.compiler.ghcHEAD       ghc-7.11.20150402
haskell.compiler.ghcNokinds    ghc-nokinds-7.11.20150704
haskell.compiler.ghcjs         ghcjs-0.1.0
haskell.compiler.jhc           jhc-0.8.2
haskell.compiler.uhc           uhc-1.1.9.0
     We have no package sets for jhc or
     uhc yet, unfortunately, but for every version of GHC
     listed above, there exists a package set based on that compiler. Also, the
     attributes haskell.compiler.ghcXYC and
     haskell.packages.ghcXYC.ghc are synonymous for the sake
     of convenience.
    
      A simple development environment consists of a Haskell compiler and one
      or both of the tools cabal-install and
      stack. We saw in section
      How to install Haskell
      packages how you can install those programs into your user
      profile:
     
nix-env -f "<nixpkgs>" -iA haskellPackages.ghc haskellPackages.cabal-install
      Instead of the default package set haskellPackages,
      you can also use the more precise name
      haskell.compiler.ghc7102, which has the advantage that
      it refers to the same GHC version regardless of what Nixpkgs considers
      “default” at any given time.
     
      Once you’ve made those tools available in
      $PATH, it’s possible to build Hackage packages
      the same way people without access to Nix do it all the time:
     
cabal get lens-4.11 && cd lens-4.11
cabal install -j --dependencies-only
cabal configure
cabal build
If you enjoy working with Cabal sandboxes, then that’s entirely possible too: just execute the command
cabal sandbox init
before installing the required dependencies.
      The nix-shell utility makes it easy to switch to a
      different compiler version; just enter the Nix shell environment with the
      command
     
nix-shell -p haskell.compiler.ghc784
      to bring GHC 7.8.4 into $PATH. Alternatively, you can
      use Stack instead of nix-shell directly to select
      compiler versions and other build tools per-project. It uses
      nix-shell under the hood when Nix support is turned
      on. See How to
      build a Haskell project using Stack.
     
      If you’re using cabal-install, re-running
      cabal configure inside the spawned shell switches your
      build to use that compiler instead. If you’re working on a project
      that doesn’t depend on any additional system libraries outside of
      GHC, then it’s even sufficient to just run the cabal
      configure command inside of the shell:
     
nix-shell -p haskell.compiler.ghc784 --command "cabal configure"
      Afterwards, all other commands like cabal build work
      just fine in any shell environment, because the configure phase recorded
      the absolute paths to all required tools like GHC in its build
      configuration inside of the dist/ directory. Please
      note, however, that nix-collect-garbage can break such
      an environment because the Nix store paths created by
      nix-shell aren’t “alive” anymore
      once nix-shell has terminated. If you find that your
      Haskell builds no longer work after garbage collection, then
      you’ll have to re-run cabal configure inside of
      a new nix-shell environment.
     
      GHC expects to find all installed libraries inside of its own
      lib directory. This approach works fine on traditional
      Unix systems, but it doesn’t work for Nix, because GHC’s
      store path is immutable once it’s built. We cannot install
      additional libraries into that location. As a consequence, our copies of
      GHC don’t know any packages except their own core libraries, like
      base, containers,
      Cabal, etc.
     
      We can register additional libraries to GHC, however, using a special
      build function called ghcWithPackages. That function
      expects one argument: a function that maps from an attribute set of
      Haskell packages to a list of packages, which determines the libraries
      known to that particular version of GHC. For example, the Nix expression
      ghcWithPackages (pkgs: [pkgs.mtl]) generates a copy of
      GHC that has the mtl library registered in addition to
      its normal core packages:
     
$ nix-shell -p "haskellPackages.ghcWithPackages (pkgs: [pkgs.mtl])"
[nix-shell:~]$ ghc-pkg list mtl
/nix/store/zy79...-ghc-7.10.2/lib/ghc-7.10.2/package.conf.d:
    mtl-2.2.1
      This function allows users to define their own development environment by
      means of an override. After adding the following snippet to
      ~/.config/nixpkgs/config.nix,
     
{
  packageOverrides = super: let self = super.pkgs; in
  {
    myHaskellEnv = self.haskell.packages.ghc7102.ghcWithPackages
                     (haskellPackages: with haskellPackages; [
                       # libraries
                       arrows async cgi criterion
                       # tools
                       cabal-install haskintex
                     ]);
  };
}
      it’s possible to install that compiler with nix-env -f
      "<nixpkgs>" -iA myHaskellEnv. If you’d like to
      switch that development environment to a different version of GHC, just
      replace the ghc7102 bit in the previous definition
      with the appropriate name. Of course, it’s also possible to define
      any number of these development environments! (You can’t install
      two of them into the same profile at the same time, though, because that
      would result in file conflicts.)
     
      The generated ghc program is a wrapper script that
      re-directs the real GHC executable to use a new lib
      directory — one that we specifically constructed to contain all
      those packages the user requested:
     
$ cat $(type -p ghc)
#! /nix/store/xlxj...-bash-4.3-p33/bin/bash -e
export NIX_GHC=/nix/store/19sm...-ghc-7.10.2/bin/ghc
export NIX_GHCPKG=/nix/store/19sm...-ghc-7.10.2/bin/ghc-pkg
export NIX_GHC_DOCDIR=/nix/store/19sm...-ghc-7.10.2/share/doc/ghc/html
export NIX_GHC_LIBDIR=/nix/store/19sm...-ghc-7.10.2/lib/ghc-7.10.2
exec /nix/store/j50p...-ghc-7.10.2/bin/ghc "-B$NIX_GHC_LIBDIR" "$@"
      The variables $NIX_GHC,
      $NIX_GHCPKG, etc. point to the
      new store path ghcWithPackages
      constructed specifically for this environment. The last line of the
      wrapper script then executes the real ghc, but passes
      the path to the new lib directory using GHC’s
      -B flag.
     
      The purpose of those environment variables is to work around an impurity
      in the popular
      ghc-paths
      library. That library promises to give its users access to GHC’s
      installation paths. Only, the library can’t possibly know that
      path when it’s compiled, because the path GHC considers its own is
      determined only much later, when the user configures it through
      ghcWithPackages. So we
      patched
      ghc-paths to return the paths found in those environment variables
      at run-time rather than trying to guess them at compile-time.
     
      To make sure that mechanism works properly all the time, we recommend
      that you set those variables to meaningful values in your shell
      environment, too, i.e. by adding the following code to your
      ~/.bashrc:
     
if type >/dev/null 2>&1 -p ghc; then
  eval "$(egrep ^export "$(type -p ghc)")"
fi
      If you are certain that you’ll use only one GHC environment which
      is located in your user profile, then you can use the following code,
      too, which has the advantage that it doesn’t contain any paths
      from the Nix store, i.e. those settings always remain valid even if
      a nix-env -u operation updates the GHC environment in
      your profile:
     
if [ -e ~/.nix-profile/bin/ghc ]; then
  export NIX_GHC="$HOME/.nix-profile/bin/ghc"
  export NIX_GHCPKG="$HOME/.nix-profile/bin/ghc-pkg"
  export NIX_GHC_DOCDIR="$HOME/.nix-profile/share/doc/ghc/html"
  export NIX_GHC_LIBDIR="$HOME/.nix-profile/lib/ghc-$($NIX_GHC --numeric-version)"
fi
      If you plan to use your environment for interactive programming, not just
      compiling random Haskell code, you might want to replace
      ghcWithPackages in all the listings above with
      ghcWithHoogle.
     
      This environment generator not only produces an environment with GHC and
      all the specified libraries, but also generates
      hoogle and haddock indexes for all
      the packages, and provides a wrapper script around the
      hoogle binary that uses all those things. A precise
      name for this thing would be
      “ghcWithPackagesAndHoogleAndDocumentationIndexes”,
      which is, regrettably, too long and scary.
     
For example, installing the following environment
{
  packageOverrides = super: let self = super.pkgs; in
  {
    myHaskellEnv = self.haskellPackages.ghcWithHoogle
                     (haskellPackages: with haskellPackages; [
                       # libraries
                       arrows async cgi criterion
                       # tools
                       cabal-install haskintex
                     ]);
  };
}
      allows one to browse a module documentation index for all the specified
      packages and their dependencies by directing a browser of choice to
      ~/.nix-profile/share/doc/hoogle/index.html (or
      /run/current-system/sw/share/doc/hoogle/index.html in
      case you put it in environment.systemPackages in
      NixOS).
     
      After you’ve marveled enough at that, try adding the following to
      your ~/.ghc/ghci.conf file:
     
:def hoogle \s -> return $ ":! hoogle search -cl --count=15 \"" ++ s ++ "\""
:def doc \s -> return $ ":! hoogle search -cl --info \"" ++ s ++ "\""
      and test it by typing into ghci:
     
:hoogle a -> a
:doc a -> a
      Be sure to note the links to haddock files in the
      output. With any modern and properly configured terminal emulator you can
      just click those links to navigate there.
     
Finally, you can run
hoogle server --local -p 8080
      and navigate to http://localhost:8080/ for your own local
      Hoogle. The
      --local flag makes the hoogle server serve files from
      your nix store over http; without the flag it will use
      file:// URIs. Note, however, that Firefox and possibly
      other browsers disallow navigation from http:// to
      file:// URIs for security reasons, which might be
      quite an inconvenience. Versions before v5 did not have this flag. See
      this
      page for workarounds.
     
      For NixOS users there’s a service which runs this exact command
      for you. Specify the packages you want documentation
      for and the haskellPackages set you want them to come
      from. Add the following to configuration.nix.
     
services.hoogle = {
  enable = true;
  packages = (hpkgs: with hpkgs; [text cryptonite]);
  haskellPackages = pkgs.haskellPackages;
};
      Stack is a popular
      build tool for Haskell projects. It has first-class support for Nix.
      Stack can optionally use Nix to automatically select the right version of
      GHC and other build tools to build, test and execute apps in an existing
      project downloaded from somewhere on the Internet. Pass the
      --nix flag to any stack command to
      do so, e.g.
     
git clone --recursive http://github.com/yesodweb/wai
cd wai
stack --nix build
      If you want stack to use Nix by default, you can add a
      nix section to the stack.yaml file,
      as explained in the
      Stack
      documentation. For example:
     
nix:
  enable: true
  packages: [pkgconfig zeromq zlib]
      The example configuration snippet above tells Stack to create an ad hoc
      environment for nix-shell as in the below section, in
      which the pkgconfig, zeromq and
      zlib packages from Nixpkgs are available. All
      stack commands will implicitly be executed inside this
      ad hoc environment.
     
      Some projects have more sophisticated needs. For example, some ad hoc
      environments might need to expose Nixpkgs packages compiled in a certain
      way, or with extra environment variables. In these cases, you’ll
      need a shell field instead of
      packages:
     
nix:
  enable: true
  shell-file: shell.nix
      For more on how to write a shell.nix file see the
      below section. You’ll need to express a derivation. Note that
      Nixpkgs ships with a convenience wrapper function around
      mkDerivation called
      haskell.lib.buildStackProject to help you create this
      derivation in exactly the way Stack expects. All of the same inputs as
      mkDerivation can be provided. For example, to build a
      Stack project including packages that link against a version of the
      R library compiled with special options turned on:
     
with (import <nixpkgs> { });
let R = pkgs.R.override { enableStrictBarrier = true; };
in
haskell.lib.buildStackProject {
  name = "HaskellR";
  buildInputs = [ R zeromq zlib ];
}
      You can select a particular GHC version to compile with by setting the
      ghc attribute as an argument to
      buildStackProject. Better yet, let Stack choose what
      GHC version it wants based on the snapshot specified in
      stack.yaml (only works with Stack >= 1.1.3):
     
{nixpkgs ? import <nixpkgs> { }, ghc ? nixpkgs.ghc}:
with nixpkgs;
let R = pkgs.R.override { enableStrictBarrier = true; };
in
haskell.lib.buildStackProject {
  name = "HaskellR";
  buildInputs = [ R zeromq zlib ];
  inherit ghc;
}
      The easiest way to create an ad hoc development environment is to run
      nix-shell with the appropriate GHC environment given
      on the command-line:
     
nix-shell -p "haskellPackages.ghcWithPackages (pkgs: with pkgs; [mtl pandoc])"
      For more sophisticated use-cases, however, it’s more convenient to
      save the desired configuration in a file called
      shell.nix that looks like this:
     
{ nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }:
let
  inherit (nixpkgs) pkgs;
  ghc = pkgs.haskell.packages.${compiler}.ghcWithPackages (ps: with ps; [
          monad-par mtl
        ]);
in
pkgs.stdenv.mkDerivation {
  name = "my-haskell-env-0";
  buildInputs = [ ghc ];
  shellHook = "eval $(egrep ^export ${ghc}/bin/ghc)";
}
      Now run nix-shell — or even nix-shell
      --pure — to enter a shell environment that has the
      appropriate compiler in $PATH. If you use
      --pure, then add all other packages that your
      development environment needs into the buildInputs
      attribute. If you’d like to switch to a different compiler
      version, then pass an appropriate compiler argument to
      the expression, i.e. nix-shell --argstr compiler
      ghc784.
     
      If you need such an environment because you’d like to compile a
      Hackage package outside of Nix — i.e. because you’re
      hacking on the latest version from Git —, then the package set
      provides suitable nix-shell environments for you already! Every Haskell
      package has an env attribute that provides a shell
      environment suitable for compiling that particular package. If
      you’d like to hack the lens library, for
      example, then you just have to check out the source code and enter the
      appropriate environment:
     
$ cabal get lens-4.11 && cd lens-4.11
Downloading lens-4.11...
Unpacking to lens-4.11/

$ nix-shell "<nixpkgs>" -A haskellPackages.lens.env
[nix-shell:/tmp/lens-4.11]$
      At this point, you can run cabal configure, cabal
      build, and all the other development commands. Note that you
      need cabal-install installed in your
      $PATH already to use it here — the
      nix-shell environment does not provide it.
     
     If your own Haskell packages have build instructions for Cabal, then you
     can convert those automatically into build instructions for Nix using the
     cabal2nix utility, which you can install into your
     profile by running nix-env -i cabal2nix.
    
      For example, let’s assume that you’re working on a private
      project called foo. To generate a Nix build expression
      for it, change into the project’s top-level directory and run the
      command:
     
cabal2nix . > foo.nix
      Then write the following snippet into a file called
      default.nix:
     
{ nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }:
nixpkgs.pkgs.haskell.packages.${compiler}.callPackage ./foo.nix { }
      Finally, store the following code in a file called
      shell.nix:
     
{ nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }:
(import ./default.nix { inherit nixpkgs compiler; }).env
      At this point, you can run nix-build to have Nix
      compile your project and install it into a Nix store path. The local
      directory will contain a symlink called result after
      nix-build returns that points into that location. Of
      course, passing the flag --argstr compiler ghc763
      allows switching the build to any version of GHC currently supported.
     
      Furthermore, you can call nix-shell to enter an
      interactive development environment in which you can use cabal
      configure and cabal build to develop your
      code. That environment will automatically contain a proper GHC derivation
      with all the required libraries registered as well as all the
      system-level libraries your package might need.
     
If your package does not depend on any system-level libraries, then it’s sufficient to run
nix-shell --command "cabal configure"
      once to set up your build. cabal-install determines
      the absolute paths to all resources required for the build and writes
      them into a config file in the dist/ directory. Once
      that’s done, you can run cabal build and any
      other command for that project even outside of the
      nix-shell environment. This feature is particularly
      nice for those of us who like to edit their code with an IDE, like
      Emacs’ haskell-mode, because it’s not
      necessary to start Emacs inside of nix-shell just to make it find out the
      necessary settings for building the project;
      cabal-install has already done that for us.
     
      If you want to do some quick-and-dirty hacking and don’t want to
      bother setting up a default.nix and
      shell.nix file manually, then you can use the
      --shell flag offered by cabal2nix
      to have it generate a stand-alone nix-shell
      environment for you. With that feature, running
     
cabal2nix --shell . > shell.nix
nix-shell --command "cabal configure"
      is usually enough to set up a build environment for any given Haskell
      package. You can even use that generated file to run
      nix-build, too:
     
nix-build shell.nix
      If you have multiple private Haskell packages that depend on each other,
      then you’ll have to register those packages in the Nixpkgs set to
      make them visible for the dependency resolution performed by
      callPackage. First of all, change into each of your
      projects top-level directories and generate a
      default.nix file with cabal2nix:
     
cd ~/src/foo && cabal2nix . > default.nix
cd ~/src/bar && cabal2nix . > default.nix
      Then edit your ~/.config/nixpkgs/config.nix file to
      register those builds in the default Haskell package set:
     
{
  packageOverrides = super: let self = super.pkgs; in
  {
    haskellPackages = super.haskellPackages.override {
      overrides = self: super: {
        foo = self.callPackage ../src/foo {};
        bar = self.callPackage ../src/bar {};
      };
    };
  };
}
      Once that’s accomplished, nix-env -f "<nixpkgs>"
      -qA haskellPackages will show your packages like any other
      package from Hackage, and you can build them
     
nix-build "<nixpkgs>" -A haskellPackages.foo
or enter an interactive shell environment suitable for building them:
nix-shell "<nixpkgs>" -A haskellPackages.bar.env
      Every Haskell package set takes a function called
      overrides that you can use to manipulate the package
      as much as you please. One useful application of this feature is to
      replace the default mkDerivation function with one
      that enables library profiling for all packages. To accomplish that add
      the following snippet to your
      ~/.config/nixpkgs/config.nix file:
     
{
  packageOverrides = super: let self = super.pkgs; in
  {
    profiledHaskellPackages = self.haskellPackages.override {
      overrides = self: super: {
        mkDerivation = args: super.mkDerivation (args // {
          enableLibraryProfiling = true;
        });
      };
    };
  };
}
      Then, replace instances of haskellPackages in the
      cabal2nix-generated default.nix or
      shell.nix files with
      profiledHaskellPackages.
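      As a sketch, the hand-written default.nix from the private-project
      example earlier could likewise be pointed at the profiled set (the
      compiler argument is dropped here because the profiled package set
      defined above is tied to the default GHC):

{ nixpkgs ? import <nixpkgs> {} }:
nixpkgs.pkgs.profiledHaskellPackages.callPackage ./foo.nix { }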
     
      Nixpkgs provides the latest version of
      ghc-events,
      which is 0.4.4.0 at the time of this writing. This is fine for users of
      GHC 7.10.x, but GHC 7.8.4 cannot compile that binary. Now, one way to
      solve that problem is to register an older version of
      ghc-events in the 7.8.x-specific package set. The
      first step is to generate Nix build instructions with
      cabal2nix:
     
cabal2nix cabal://ghc-events-0.4.3.0 > ~/.nixpkgs/ghc-events-0.4.3.0.nix
      Then add the override in ~/.config/nixpkgs/config.nix:
     
{
  packageOverrides = super: let self = super.pkgs; in
  {
    haskell = super.haskell // {
      packages = super.haskell.packages // {
        ghc784 = super.haskell.packages.ghc784.override {
          overrides = self: super: {
            ghc-events = self.callPackage ./ghc-events-0.4.3.0.nix {};
          };
        };
      };
    };
  };
}
This code is a little crazy, no doubt, but it’s necessary because the intuitive version
{ # ...
  haskell.packages.ghc784 = super.haskell.packages.ghc784.override {
    overrides = self: super: {
      ghc-events = self.callPackage ./ghc-events-0.4.3.0.nix {};
    };
  };
}
      doesn’t do what we want it to: that code replaces the
      haskell package set in Nixpkgs with one that contains
      only one entry, packages, which contains only one entry
      ghc784. This override loses the
      haskell.compiler set, and it loses the
      haskell.packages.ghcXYZ sets for all compilers but GHC
      7.8.4. To avoid that problem, we have to perform the convoluted little
      dance from above, iterating over each step in the hierarchy.
     
      Once it’s accomplished, however, we can install a variant of
      ghc-events that’s compiled with GHC 7.8.4:
     
nix-env -f "<nixpkgs>" -iA haskell.packages.ghc784.ghc-events
      Unfortunately, it turns out that this build fails again while executing
      the test suite! Apparently, the release archive on Hackage is missing
      some data files that the test suite requires, so we have to disable it.
      We accomplish that by re-generating the Nix expression with the
      --no-check flag:
     
cabal2nix --no-check cabal://ghc-events-0.4.3.0 > ~/.nixpkgs/ghc-events-0.4.3.0.nix
Now the build succeeds.
      Of course, in the concrete example of ghc-events this
      whole exercise is not an ideal solution, because
      ghc-events can analyze the output emitted by any
      version of GHC later than 6.12 regardless of the compiler version that
      was used to build the ghc-events executable, so
      strictly speaking there’s no reason to prefer one built with GHC
      7.8.x in the first place. However, for users who cannot use GHC 7.10.x at
      all for some reason, the approach of downgrading to an older version
      might be useful.
     
In the previous section we learned how to override a package in a single compiler-specific package set. You may have some overrides defined that you want to use across multiple package sets. To accomplish this you could use the technique that we learned in the previous section by repeating the overrides for all the compiler-specific package sets. For example:
{
  packageOverrides = super: let self = super.pkgs; in
  {
    haskell = super.haskell // {
      packages = super.haskell.packages // {
        ghc784 = super.haskell.packages.ghc784.override {
          overrides = self: super: {
            my-package = ...;
            my-other-package = ...;
          };
        };
        ghc822 = super.haskell.packages.ghc784.override {
          overrides = self: super: {
            my-package = ...;
            my-other-package = ...;
          };
        };
        ...
      };
    };
  };
}
However there’s a more convenient way to override all compiler-specific package sets at once:
{
  packageOverrides = super: let self = super.pkgs; in
  {
    haskell = super.haskell // {
      packageOverrides = self: super: {
        my-package = ...;
        my-other-package = ...;
      };
    };
  };
}
      When starting a Haskell project you can use
      developPackage to define a derivation for your package
      at the root path as well as source override versions
      for Hackage packages, like so:
     
# default.nix
{ compilerVersion ? "ghc842" }:
let
  # pinning nixpkgs using new Nix 2.0 builtin `fetchGit`
  pkgs = import (fetchGit (import ./version.nix)) { };
  compiler = pkgs.haskell.packages."${compilerVersion}";
  pkg = compiler.developPackage {
    root = ./.;
    source-overrides = {
      # Let's say the GHC 8.4.2 haskellPackages uses 1.6.0.0 and your test suite is incompatible with >= 1.6.0.0
      HUnit = "1.5.0.0";
    };
  };
in pkg
      This could be used in place of a simplified stack.yaml
      defining a Nix derivation for your Haskell package.
     
As you can see, this allows you to specify only the source version found on Hackage; Nixpkgs will take care of the rest.
      You can also specify buildInputs for your Haskell
      derivation for packages that directly depend on external libraries like
      so:
     
# default.nix
{ compilerVersion ? "ghc842" }:
let
  # pinning nixpkgs using new Nix 2.0 builtin `fetchGit`
  pkgs = import (fetchGit (import ./version.nix)) { };
  compiler = pkgs.haskell.packages."${compilerVersion}";
  pkg = compiler.developPackage {
    root = ./.;
    source-overrides = {
      HUnit = "1.5.0.0"; # Let's say the GHC 8.4.2 haskellPackages uses 1.6.0.0 and your test suite is incompatible with >= 1.6.0.0
    };
  };
  # in case your package source depends on any libraries directly, not just transitively.
  buildInputs = [ pkgs.zlib ];
in pkg.overrideAttrs(attrs: {
  buildInputs = attrs.buildInputs ++ buildInputs;
})
      Notice that you will need to override (via
      overrideAttrs or similar) the derivation returned by
      the developPackage Nix lambda as there is no
      buildInputs named argument you can pass directly into
      the developPackage lambda.
     
GHC and distributed build farms don’t get along well:
https://ghc.haskell.org/trac/ghc/ticket/4012
When you see an error like this one
package foo-0.7.1.0 is broken due to missing package text-1.2.0.4-98506efb1b9ada233bb5c2b2db516d91
      then you have to download and re-install foo and all
      its dependents from scratch:
     
nix-store -q --referrers /nix/store/*-haskell-text-1.2.0.4 \
  | xargs -L 1 nix-store --repair-path
      If you’re using additional Hydra servers other than
      hydra.nixos.org, then it might be necessary to purge
      the local caches that store data from those machines to disable these
      binary channels for the duration of the previous command, i.e. by
      running:
     
rm ~/.cache/nix/binary-cache*.sqlite
Users of GHC on Darwin have occasionally reported that builds fail, because the compiler complains about a missing include file:
fatal error: 'math.h' file not found
The issue has been discussed at length in ticket 6390, and so far no good solution has been proposed. As a work-around, users who run into this problem can configure the environment variables
export NIX_CFLAGS_COMPILE="-idirafter /usr/include"
export NIX_CFLAGS_LINK="-L/usr/lib"
      in their ~/.bashrc file to avoid the compiler error.
     
--  While building package zlib-0.5.4.2 using:
      runhaskell -package=Cabal-1.22.4.0 -clear-package-db [... lots of flags ...]
    Process exited with code: ExitFailure 1
    Logs have been written to: /home/foo/src/stack-ide/.stack-work/logs/zlib-0.5.4.2.log

    Configuring zlib-0.5.4.2...
    Setup.hs: Missing dependency on a foreign library:
    * Missing (or bad) header file: zlib.h
    This problem can usually be solved by installing the system package that
    provides this library (you may need the "-dev" version). If the library is
    already installed but in a non-standard location then you can use the flags
    --extra-include-dirs= and --extra-lib-dirs= to specify where it is.
    If the header file does exist, it may contain errors that are caught by the C
    compiler at the preprocessing stage. In this case you can re-run configure
    with the verbosity flag -v3 to see the error messages.
      When you run the build inside of the nix-shell environment, the system is
      configured to find libz.so without any special flags
      – the compiler and linker “just know” how to find it.
      Consequently, Cabal won’t record any search paths for
      libz.so in the package description, which means that
      the package works fine inside of nix-shell, but once you leave the shell
      the shared object can no longer be found. That issue is by no means
      specific to Stack: you’ll have that problem with any other Haskell
      package that’s built inside of nix-shell but run outside of that
      environment.
     
      You can remedy this issue in several ways. The easiest is to add a
      nix section to the stack.yaml like
      the following:
     
nix:
  enable: true
  packages: [ zlib ]
      Stack’s Nix support knows to add
      ${zlib.out}/lib and
      ${zlib.dev}/include as an
      --extra-lib-dirs and
      --extra-include-dirs, respectively. Alternatively, you
      can achieve the same effect by hand. First of all, run
     
$ nix-build --no-out-link "<nixpkgs>" -A zlib
/nix/store/alsvwzkiw4b7ip38l4nlfjijdvg3fvzn-zlib-1.2.8
to find out the store path of the system’s zlib library. Now, you can
        add that path (plus a “/lib” suffix) to your
        $LD_LIBRARY_PATH environment variable to make sure
        your system linker finds libz.so automatically.
        It’s no pretty solution, but it will work.
       
        As a variant of (1), you can also install any number of system
        libraries into your user’s profile (or some other profile) and
        point $LD_LIBRARY_PATH to that profile instead, so
        that you don’t have to list dozens of those store paths all over
        the place.
       
        The solution I prefer is to call stack with an appropriate
        --extra-lib-dirs flag, like so:
        stack --extra-lib-dirs=/nix/store/alsvwzkiw4b7ip38l4nlfjijdvg3fvzn-zlib-1.2.8/lib build
       
      Typically, you’ll need --extra-include-dirs as
      well. It’s possible to add those flags to the project’s
      stack.yaml or your user’s global
      ~/.stack/global/stack.yaml file so that you
      don’t have to specify them manually every time. But again,
      you’re likely better off using Stack’s Nix support instead.
     
      The same thing applies to cabal configure, of course,
      if you’re building with cabal-install instead
      of Stack.
     
      There are two levels of static linking. The first option is to configure
      the build with the Cabal flag
      --disable-executable-dynamic. In Nix expressions, this
      can be achieved by setting the attribute:
     
enableSharedExecutables = false;
That gives you a binary with statically linked Haskell libraries and dynamically linked system libraries.
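One way to set this attribute on an existing package, sketched here with
haskell.lib.overrideCabal (pandoc is just an arbitrary example):

with import <nixpkgs> {};

# rebuild pandoc with its Haskell libraries linked statically into the binary
haskell.lib.overrideCabal haskellPackages.pandoc (drv: {
  enableSharedExecutables = false;
})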
      To link both Haskell libraries and system libraries statically, the
      additional flags --ghc-option=-optl=-static
      --ghc-option=-optl=-pthread need to be used. In Nix, this is
      accomplished with:
     
configureFlags = [ "--ghc-option=-optl=-static" "--ghc-option=-optl=-pthread" ];
It’s important to realize, however, that most system libraries in Nix are built as shared libraries only, i.e. there is just no static library available that Cabal could link!
By default GHC implements the Integer type using the GNU Multiple Precision Arithmetic (GMP) library. The implementation can be found in the integer-gmp package.
A potential problem with this is that GMP is licensed under the GNU Lesser General Public License (LGPL), a kind of “copyleft” license. According to the terms of the LGPL, paragraph 5, you may distribute a program that is designed to be compiled and dynamically linked with the library under the terms of your choice (i.e., commercially) but if your program incorporates portions of the library, if it is linked statically, then your program is a “derivative”–a “work based on the library”–and according to paragraph 2, section c, you “must cause the whole of the work to be licensed” under the terms of the LGPL (including for free).
The LGPL licensing for GMP is a problem for the overall licensing of binary programs compiled with GHC because most distributions (and builds) of GHC use static libraries. (Dynamic libraries are currently distributed only for macOS.) The LGPL licensing situation may be worse: even though The Glasgow Haskell Compiler License is essentially a “free software” license (BSD3), according to paragraph 2 of the LGPL, GHC must be distributed under the terms of the LGPL!
To work around these problems GHC can be built with a slower but LGPL-free alternative implementation of Integer called integer-simple.
      To get a GHC compiler built with integer-simple
      instead of integer-gmp, use the attribute:
      haskell.compiler.integer-simple."${ghcVersion}". For
      example:
     
$ nix-build -E '(import <nixpkgs> {}).haskell.compiler.integer-simple.ghc802'
...
$ result/bin/ghc-pkg list | grep integer
    integer-simple-0.1.1.1
      The following command displays the complete list of GHC compilers built
      with integer-simple:
     
$ nix-env -f "<nixpkgs>" -qaP -A haskell.compiler.integer-simple
haskell.compiler.integer-simple.ghc7102    ghc-7.10.2
haskell.compiler.integer-simple.ghc7103    ghc-7.10.3
haskell.compiler.integer-simple.ghc722     ghc-7.2.2
haskell.compiler.integer-simple.ghc742     ghc-7.4.2
haskell.compiler.integer-simple.ghc783     ghc-7.8.3
haskell.compiler.integer-simple.ghc784     ghc-7.8.4
haskell.compiler.integer-simple.ghc801     ghc-8.0.1
haskell.compiler.integer-simple.ghc802     ghc-8.0.2
haskell.compiler.integer-simple.ghcHEAD    ghc-8.1.20170106
      To get a package set supporting integer-simple, use the
      attribute
      haskell.packages.integer-simple."${ghcVersion}". For
      example, use the following to get the scientific
      package built with integer-simple:
     
nix-build -A haskell.packages.integer-simple.ghc802.scientific
      The haskell.lib library includes a number of functions
      for checking for various imperfections in Haskell packages. It’s
      useful to apply these functions to your own Haskell packages and
      integrate that in a Continuous Integration server like
      hydra to assure your
      packages maintain a minimum level of quality. This section discusses some
      of these functions.
     
       Applying haskell.lib.failOnAllWarnings to a Haskell
       package enables the -Wall and
       -Werror GHC options to turn all warnings into build
       failures.
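        For instance, a sketch of applying it to an arbitrary package from
        haskellPackages (scientific is just an example here); the other
        functions in this section are applied in the same way:

with import <nixpkgs> {};

# build scientific with -Wall -Werror; any warning fails the build
haskell.lib.failOnAllWarnings haskellPackages.scientific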
      
       Applying haskell.lib.buildStrictly to a Haskell
       package calls failOnAllWarnings on the given package
        to turn all warnings into build failures. Additionally, the source of
        your package is taken from the result of first invoking cabal
        sdist, to ensure all needed files are listed in the Cabal file.
      
       Applying haskell.lib.checkUnusedPackages to a Haskell
       package invokes the
       packunused
       tool on the package. packunused complains when it
       finds packages listed as build-depends in the Cabal file which are
       redundant. For example:
      
$ nix-build -E 'let pkgs = import <nixpkgs> {}; in pkgs.haskell.lib.checkUnusedPackages {} pkgs.haskellPackages.scientific'
these derivations will be built:
  /nix/store/3lc51cxj2j57y3zfpq5i69qbzjpvyci1-scientific-0.3.5.1.drv
...
detected package components
~~~~~~~~~~~~~~~~~~~~~~~~~~~
 - library
 - testsuite(s): test-scientific
 - benchmark(s): bench-scientific*
(component names suffixed with '*' are not configured to be built)
library
~~~~~~~
The following package dependencies seem redundant:
 - ghc-prim-0.5.0.0
testsuite(test-scientific)
~~~~~~~~~~~~~~~~~~~~~~~~~~
no redundant packages dependencies found
builder for ‘/nix/store/3lc51cxj2j57y3zfpq5i69qbzjpvyci1-scientific-0.3.5.1.drv’ failed with exit code 1
error: build of ‘/nix/store/3lc51cxj2j57y3zfpq5i69qbzjpvyci1-scientific-0.3.5.1.drv’ failed
       As you can see, packunused finds out that although
        the testsuite component has no redundant dependencies, the library
        component of scientific-0.3.5.1 depends on
        ghc-prim, which is unused in the library.
      
      Hackage package derivations are found in the
      hackage-packages.nix
      file within nixpkgs and are used as the initial
      package set for haskellPackages. The
      hackage-packages.nix file is not meant to be edited by
      hand, but rather autogenerated by
      hackage2nix,
      which by default uses the
      configuration-hackage2nix.yaml
      file to generate all the derivations.
     
      To modify the contents of configuration-hackage2nix.yaml,
      follow the instructions on
      hackage2nix.
     
The YouTube video Nix Loves Haskell provides an introduction to Haskell NG aimed at beginners. The slides are available at http://cryp.to/nixos-meetup-3-slides.pdf and also – in a form ready for cut & paste – at https://github.com/NixOS/cabal2nix/blob/master/doc/nixos-meetup-3-slides.md.
Another YouTube video is Escaping Cabal Hell with Nix, which discusses Haskell development with Nix and also provides a basic introduction to Nix, i.e. it’s suitable for viewers with almost no prior Nix experience.
Oliver Charles wrote a very nice Tutorial how to develop Haskell packages with Nix.
The Journey into the Haskell NG infrastructure series of postings describe the new Haskell infrastructure in great detail:
Part 1 explains the differences between the old and the new code and gives instructions how to migrate to the new setup.
Part 2 looks in-depth at how to tweak and configure your setup by means of overrides.
Part 3 describes the infrastructure that keeps the Haskell package set in Nixpkgs up-to-date.
     The easiest way to get a working idris version is to install the
     idris attribute:
    
$ # On NixOS
$ nix-env -i nixos.idris
$ # On non-NixOS
$ nix-env -i nixpkgs.idris
     This however only provides the prelude and
     base libraries. To install additional libraries:
    
$ nix-env -iE 'pkgs: pkgs.idrisPackages.with-packages (with pkgs.idrisPackages; [ contrib pruviloj ])'
To see all available Idris packages:
$ # On NixOS
$ nix-env -qaPA nixos.idrisPackages
$ # On non-NixOS
$ nix-env -qaPA nixpkgs.idrisPackages
     Similarly, entering a nix-shell:
    
$ nix-shell -p 'idrisPackages.with-packages (with idrisPackages; [ contrib pruviloj ])'
     To have access to these libraries in idris, call it with an argument
     -p <library name> for each library:
    
$ nix-shell -p 'idrisPackages.with-packages (with idrisPackages; [ contrib pruviloj ])'
[nix-shell:~]$ idris -p contrib -p pruviloj
     A listing of all available packages the Idris binary has access to is
     available via --listlibs:
    
$ idris --listlibs
00prelude-idx.ibc pruviloj base contrib prelude 00pruviloj-idx.ibc 00base-idx.ibc 00contrib-idx.ibc
     As an example of how a Nix expression for an Idris package can be created,
     here is the one for idrisPackages.yaml:
    
{ build-idris-package
, fetchFromGitHub
, contrib
, lightyear
, lib
}:
build-idris-package  {
  name = "yaml";
  version = "2018-01-25";
  # This is the .ipkg file that should be built, defaulting to the package name.
  # In this case it should build `Yaml.ipkg` instead of `yaml.ipkg`.
  # This is only necessary because the yaml package's ipkg file name
  # differs from its package name.
  ipkgName = "Yaml";
  # Idris dependencies to provide for the build
  idrisDeps = [ contrib lightyear ];
  src = fetchFromGitHub {
    owner = "Heather";
    repo = "Idris.Yaml";
    rev = "5afa51ffc839844862b8316faba3bafa15656db4";
    sha256 = "1g4pi0swmg214kndj85hj50ccmckni7piprsxfdzdfhg87s0avw7";
  };
  meta = {
    description = "Idris YAML lib";
    homepage = https://github.com/Heather/Idris.Yaml;
    license = lib.licenses.mit;
    maintainers = [ lib.maintainers.brainrape ];
  };
}
     Assuming this file is saved as yaml.nix, it’s
     buildable using
    
$ nix-build -E '(import <nixpkgs> {}).idrisPackages.callPackage ./yaml.nix {}'
Or it’s possible to use
with import <nixpkgs> {};
{
  yaml = idrisPackages.callPackage ./yaml.nix {};
}
     in another file (say default.nix) to be able to build
     it with
    
$ nix-build -A yaml
This component is basically a wrapper/workaround that makes it possible to expose an Xcode installation as a Nix package by means of symlinking to the relevant executables on the host system.
Since Xcode can’t be packaged with Nix, nor can we publish it as a Nix package (because of its license), this is basically the only integration strategy that makes it possible to do iOS application builds that integrate with other components of the Nix ecosystem.
The primary objective of this project is to use the Nix expression language to specify how iOS apps can be built from source code, and to automatically spawn iOS simulator instances for testing.
This component also makes it possible to use Hydra, the Nix-based continuous integration server to regularly build iOS apps and to do wireless ad-hoc installations of enterprise IPAs on iOS devices through Hydra.
The Xcode build environment implements a number of features.
The first use case is deploying a Nix package that provides symlinks to the Xcode installation on the host system. This package can be used as a build input to any build function implemented in the Nix expression language that requires Xcode.
let
  pkgs = import <nixpkgs> {};
  xcodeenv = import ./xcodeenv {
    inherit (pkgs) stdenv;
  };
in
xcodeenv.composeXcodeWrapper {
  version = "9.2";
  xcodeBaseDir = "/Applications/Xcode.app";
}
     By deploying the above expression with nix-build and
     inspecting its content you will notice that several Xcode-related
     executables are exposed as a Nix package:
    
$ ls result/bin
lrwxr-xr-x 1 sander staff 94 1 jan 1970 Simulator -> /Applications/Xcode.app/Contents/Developer/Applications/Simulator.app/Contents/MacOS/Simulator
lrwxr-xr-x 1 sander staff 17 1 jan 1970 codesign -> /usr/bin/codesign
lrwxr-xr-x 1 sander staff 17 1 jan 1970 security -> /usr/bin/security
lrwxr-xr-x 1 sander staff 21 1 jan 1970 xcode-select -> /usr/bin/xcode-select
lrwxr-xr-x 1 sander staff 61 1 jan 1970 xcodebuild -> /Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild
lrwxr-xr-x 1 sander staff 14 1 jan 1970 xcrun -> /usr/bin/xcrun
     We can build an iOS app executable for the simulator, or an IPA/xcarchive
     file for release purposes, e.g. ad-hoc, enterprise or store
     installations, by executing the xcodeenv.buildApp {}
     function:
    
let
  pkgs = import <nixpkgs> {};
  xcodeenv = import ./xcodeenv {
    inherit (pkgs) stdenv;
  };
in
xcodeenv.buildApp {
  name = "MyApp";
  src = ./myappsources;
  sdkVersion = "11.2";
  target = null; # Corresponds to the name of the app by default
  configuration = null; # Release for release builds, Debug for debug builds
  scheme = null; # -scheme will correspond to the app name by default
  sdk = null; # null will set it to 'iphonesimulator' for simulator builds or 'iphoneos' for device builds
  xcodeFlags = "";
  release = true;
  certificateFile = ./mycertificate.p12;
  certificatePassword = "secret";
  provisioningProfile = ./myprovisioning.profile;
  signMethod = "ad-hoc"; # 'enterprise' or 'store'
  generateIPA = true;
  generateXCArchive = false;
  enableWirelessDistribution = true;
  installURL = "/installipa.php";
  bundleId = "mycompany.myapp";
  appVersion = "1.0";
  # Supports all xcodewrapper parameters as well
  xcodeBaseDir = "/Applications/Xcode.app";
}
     The above function takes a variety of parameters:
    
        The name and src parameters are
        mandatory and specify the name of the app and the location where the
        source code resides.
       
        sdkVersion specifies which
        version of the iOS SDK to use.
     
    
     It is also possible to adjust the xcodebuild parameters.
     This is only needed in rare circumstances; in most cases the default
     values should suffice:
    
    
        The target parameter specifies which xcodebuild target to build. By
        default it takes the target that has the same name as the app.
      
       The configuration parameter can be overridden if
       desired. By default, it will do a debug build for the simulator and a
       release build for real devices.
      
       The scheme parameter specifies which
       -scheme parameter to propagate to
       xcodebuild. By default, it corresponds to the app
       name.
      
       The sdk parameter specifies which SDK to use. By
       default, it picks iphonesimulator for simulator
       builds and iphoneos for release builds.
      
       The xcodeFlags parameter specifies arbitrary command
       line parameters that should be propagated to
       xcodebuild.
      
     By default, builds are carried out for the iOS simulator. To do release
     builds (builds for real iOS devices), you must set the
     release parameter to true. In
     addition, you need to set the following parameters:
    
       certificateFile refers to a P12 certificate file.
      
       certificatePassword specifies the password of the P12
       certificate.
      
        provisioningProfile refers to the provisioning profile
        needed to sign the app.
      
       signMethod should refer to ad-hoc
       for signing the app with an ad-hoc certificate,
       enterprise for enterprise certificates and
       app-store for App store certificates.
      
        generateIPA specifies that we want to produce an IPA
        file (this is probably what you want).
       
        generateXCArchive specifies that we want to produce
        an xcarchive file.
      
     When building IPA files on Hydra and when it is desired to allow iOS
     devices to install IPAs by browsing to the Hydra build products page, you
     can enable the enableWirelessDistribution parameter.
    
When enabled, you need to configure the following options:
       The installURL parameter refers to the URL of a PHP
       script that composes the itms-services:// URL
       allowing iOS devices to install the IPA file.
      
       bundleId refers to the bundle ID value of the app
      
       appVersion refers to the app’s version number
      
To use wireless adhoc distributions, you must also install the corresponding PHP script on a web server (see section: “Installing the PHP script for wireless ad hoc installations from Hydra” for more information).
     In addition to the build parameters, you can also specify any parameters
     that the xcodeenv.composeXcodeWrapper {} function
     takes. For example, the xcodeBaseDir parameter can be
     overridden to refer to a different Xcode version.
    
In addition to building iOS apps, we can also automatically spawn simulator instances:
let
  pkgs = import <nixpkgs> {};
  xcodeenv = import ./xcodeenv {
    inherit (pkgs) stdenv;
  };
in
xcodeenv.simulateApp {
  name = "simulate";
  # Supports all xcodewrapper parameters as well
  xcodeBaseDir = "/Applications/Xcode.app";
}
The above expression produces a script that starts the simulator from the provided Xcode installation. The script can be started as follows:
./result/bin/run-test-simulator
By default, the script will show an overview of UDID for all available simulator instances and asks you to pick one. You can also provide a UDID as a command-line parameter to launch an instance automatically:
./result/bin/run-test-simulator 5C93129D-CF39-4B1A-955F-15180C3BD4B8
You can also extend the simulator script to automatically deploy and launch an app in the requested simulator instance:
let
  pkgs = import <nixpkgs> {};
  xcodeenv = import ./xcodeenv {
    inherit (pkgs) stdenv;
  };
in
xcodeenv.simulateApp {
  name = "simulate";
  bundleId = "mycompany.myapp";
  app = xcodeenv.buildApp {
    # ...
  };
  # Supports all xcodewrapper parameters as well
  xcodeBaseDir = "/Applications/Xcode.app";
}
      By providing the result of an xcodeenv.buildApp {}
      function and configuring the app bundle id, the app gets deployed
      and started automatically.
    
Ant-based Java packages are typically built from source as follows:
stdenv.mkDerivation {
  name = "...";
  src = fetchurl { ... };
  buildInputs = [ jdk ant ];
  buildPhase = "ant";
}
    Note that jdk is an alias for the OpenJDK (self-built
    where available, or pre-built via Zulu). Platforms with OpenJDK not (yet)
    in Nixpkgs (Aarch32, Aarch64) point
    to the (unfree) oraclejdk.
   
    JAR files that are intended to be used by other packages should be
    installed in $out/share/java. JDKs have a stdenv setup
    hook that adds any JARs in the share/java directories
    of the build inputs to the CLASSPATH environment variable.
    For instance, if the package libfoo installs a JAR named
    foo.jar in its share/java
    directory, and another package declares the attribute
buildInputs = [ jdk libfoo ];
    then CLASSPATH will be set to
    /nix/store/...-libfoo/share/java/foo.jar.
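    As a rough sketch (the package name libfoo, its URL and hash are
    hypothetical placeholders), such a library only needs to copy its JAR into
    $out/share/java for the setup hook to pick it up:
   
stdenv.mkDerivation {
  name = "libfoo-1.0";
  src = fetchurl {
    url = "https://example.org/libfoo-1.0.tar.gz";  # hypothetical source
    sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder
  };
  buildInputs = [ jdk ant ];
  buildPhase = "ant";
  # Install the JAR where the JDK setup hook of dependent packages will find it:
  installPhase = ''
    mkdir -p $out/share/java
    cp build/foo.jar $out/share/java/
  '';
}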
   
    Private JARs should be installed in a location like
    $out/share/package-name.
    If your Java package provides a program, you need to generate a wrapper
    script to run it using the OpenJRE. You can use
    makeWrapper for this:
buildInputs = [ makeWrapper ];
installPhase =
  ''
    mkdir -p $out/bin
    makeWrapper ${jre}/bin/java $out/bin/foo \
      --add-flags "-cp $out/share/java/foo.jar org.foo.Main"
  '';
    Note the use of jre, which is the part of the OpenJDK
    package that contains the Java Runtime Environment. By using
    ${jre}/bin/java instead of
    ${jdk}/bin/java, you prevent your package from depending
    on the JDK at runtime.
   
    Note that all JDKs pass through a home attribute, so if your application
    requires an environment variable like JAVA_HOME to be set,
    that can be done in a generic fashion with the --set
    argument of makeWrapper:
  --set JAVA_HOME ${jdk.home}
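    Putting the two together, a wrapper for the foo.jar example above that also
    sets JAVA_HOME could look like this (a sketch, not taken from an actual
    package):
   
installPhase =
  ''
    mkdir -p $out/bin
    # Wrap the JRE's java binary and set JAVA_HOME for programs that need it
    makeWrapper ${jre}/bin/java $out/bin/foo \
      --add-flags "-cp $out/share/java/foo.jar org.foo.Main" \
      --set JAVA_HOME ${jdk.home}
  '';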
It is possible to use a different Java compiler than javac from the OpenJDK. For instance, to use the GNU Java Compiler:
buildInputs = [ gcj ant ];
Here, Ant will automatically use gij (the GNU Java Runtime) instead of the OpenJRE.
    Lua packages are built by the buildLuaPackage function.
    This function is implemented in
    
    pkgs/development/lua-modules/generic/default.nix
    and works similarly to buildPerlPackage. (See
    Section 9.13, “Perl” for details.)
   
    Lua packages are defined in
    pkgs/top-level/lua-packages.nix.
    Most of them are simple. For example:
fileSystem = buildLuaPackage {
  name = "filesystem-1.6.2";
  src = fetchurl {
    url = "https://github.com/keplerproject/luafilesystem/archive/v1_6_2.tar.gz";
    sha256 = "1n8qdwa20ypbrny99vhkmx8q04zd2jjycdb5196xdhgvqzk10abz";
  };
  meta = {
    homepage = "https://github.com/keplerproject/luafilesystem";
    hydraPlatforms = stdenv.lib.platforms.linux;
    maintainers = with maintainers; [ flosse ];
  };
};
  
    More complicated packages, however, should be placed in a separate file in
    pkgs/development/lua-modules.
   
    Lua packages accept an additional parameter, disabled, which
    defines the condition for excluding the package from luaPackages. For
    example, if a package has disabled set to
    lua.luaversion != "5.1", it will not be included in any
    luaPackages set except lua51Packages, i.e. it will only be built for Lua 5.1.
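    For instance, a hypothetical module that only works with Lua 5.1 could be
    marked like this (the name, URL and hash are illustrative only):
   
fooModule = buildLuaPackage {
  name = "foo-1.0";
  src = fetchurl {
    url = "https://example.org/lua-foo-1.0.tar.gz";  # hypothetical source
    sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder
  };
  # Excluded from every luaPackages set except lua51Packages:
  disabled = lua.luaversion != "5.1";
};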
   
    The pkgs/development/node-packages folder contains a
    generated collection of NPM
    packages that can be installed with the Nix package manager.
   
As a rule of thumb, the package set should only provide end user software packages, such as command-line utilities. Libraries should only be added to the package set if there is a non-NPM package that requires it.
    When it is desired to use NPM libraries in a development project, use the
    node2nix generator directly on the
    package.json configuration file of the project.
   
The package set also provides support for multiple Node.js versions. The policy is that a new package should be added to the collection for the latest stable LTS release (which is currently 10.x), unless there is an explicit reason to support a different release.
If your package uses native addons, you need to examine what kind of native build system it uses. Here are some examples:
      node-gyp
     
      node-gyp-builder
     
      node-pre-gyp
     
    After you have identified the correct system, you need to override your
    package expression while adding the build system as a build input. For
    example, dat requires node-gyp-build,
    so we override its expression in default-v10.nix:
   
dat = nodePackages.dat.override (oldAttrs: {
  buildInputs = oldAttrs.buildInputs ++ [ nodePackages.node-gyp-build ];
});
To add a package from NPM to nixpkgs:
      Modify
      pkgs/development/node-packages/node-packages-v10.json
      to add, update or remove package entries. (Or
      pkgs/development/node-packages/node-packages-v8.json
      for packages depending on Node.js 8.x)
     
      Run the script: (cd pkgs/development/node-packages &&
      ./generate.sh).
     
      Build your new package to test your changes: cd /path/to/nixpkgs
      && nix-build -A
      nodePackages.<new-or-updated-package>. To build against a
      specific Node.js version (e.g. 10.x): nix-build -A
      nodePackages_10_x.<new-or-updated-package>
     
Add and commit all modified and generated files.
    For more information about the generation process, consult the
    README.md
    file of the node2nix tool.
   
    OCaml libraries should be installed in
    $(out)/lib/ocaml/${ocaml.version}/site-lib/. Such
    directories are automatically added to the $OCAMLPATH
    environment variable when building another package that depends on them or
    when opening a nix-shell.
   
    Given that most of the OCaml ecosystem is now built with dune, nixpkgs
    includes a convenience build support function called
    buildDunePackage that will build an OCaml package using
    dune, OCaml and findlib and any additional dependencies provided as
    buildInputs or propagatedBuildInputs.
   
    Here is a simple package example. It defines an (optional) attribute
    minimumOCamlVersion that will be used to throw a
    descriptive evaluation error if building with an older OCaml is attempted.
    It uses the fetchFromGitHub fetcher to get its source.
    It sets the doCheck (optional) attribute to
    true which means that tests will be run with
    dune runtest -p angstrom after the build (dune
    build -p angstrom) is complete. It uses
    alcotest as a build input (because it is needed to run
    the tests) and bigstringaf and result
    as propagated build inputs (thus they will also be available to libraries
    depending on this library). The library will be installed using the
    angstrom.install file that dune generates.
   
{ stdenv, fetchFromGitHub, buildDunePackage, alcotest, result, bigstringaf }:
buildDunePackage rec {
  pname = "angstrom";
  version = "0.10.0";
  minimumOCamlVersion = "4.03";
  src = fetchFromGitHub {
    owner  = "inhabitedtype";
    repo   = pname;
    rev    = version;
    sha256 = "0lh6024yf9ds0nh9i93r9m6p5psi8nvrqxl5x7jwl13zb0r9xfpw";
  };
  buildInputs = [ alcotest ];
  propagatedBuildInputs = [ bigstringaf result ];
  doCheck = true;
  meta = {
    homepage = https://github.com/inhabitedtype/angstrom;
    description = "OCaml parser combinators built for speed and memory efficiency";
    license = stdenv.lib.licenses.bsd3;
    maintainers = with stdenv.lib.maintainers; [ sternenseemann ];
  };
}
 
    Here is a second example, this time using a source archive generated with
    dune-release. It is a good idea to use this archive when
    it is available as it will usually contain substituted variables such as a
    %%VERSION%% field. This library does not depend on any
    other OCaml library and no tests are run after building it.
   
{ stdenv, fetchurl, buildDunePackage }:
buildDunePackage rec {
  pname = "wtf8";
  version = "1.0.1";
  minimumOCamlVersion = "4.01";
  src = fetchurl {
    url = "https://github.com/flowtype/ocaml-${pname}/releases/download/v${version}/${pname}-${version}.tbz";
    sha256 = "1msg3vycd3k8qqj61sc23qks541cxpb97vrnrvrhjnqxsqnh6ygq";
  };
  meta = with stdenv.lib; {
    homepage = https://github.com/flowtype/ocaml-wtf8;
    description = "WTF-8 is a superset of UTF-8 that allows unpaired surrogates.";
    license = licenses.mit;
    maintainers = [ maintainers.eqyiel ];
  };
}
 
    Nixpkgs provides a function buildPerlPackage, a generic
    package builder function for any Perl package that has a standard
    Makefile.PL. It’s implemented in
    pkgs/development/perl-modules/generic.
   
    Perl packages from CPAN are defined in
    pkgs/top-level/perl-packages.nix,
    rather than pkgs/all-packages.nix. Most Perl packages
    are so straightforward to build that they are defined here directly,
    rather than having a separate function for each package called from
    perl-packages.nix. However, more complicated packages
    should be put in a separate file, typically in
    pkgs/development/perl-modules. Here is an example of
    the former:
ClassC3 = buildPerlPackage rec {
  name = "Class-C3-0.21";
  src = fetchurl {
    url = "mirror://cpan/authors/id/F/FL/FLORA/${name}.tar.gz";
    sha256 = "1bl8z095y4js66pwxnm7s853pi9czala4sqc743fdlnk27kq94gz";
  };
};
    Note the use of mirror://cpan/, and the
    ${name} in the URL definition to ensure that the name
    attribute is consistent with the source that we’re actually
    downloading. Perl packages are made available in
    all-packages.nix through the variable
    perlPackages. For instance, if you have a package that
    needs ClassC3, you would typically write
foo = import ../path/to/foo.nix {
  inherit stdenv fetchurl ...;
  inherit (perlPackages) ClassC3;
};
    in all-packages.nix. You can test building a Perl
    package as follows:
$ nix-build -A perlPackages.ClassC3
    buildPerlPackage adds perl- to the
    start of the name attribute, so the package above is actually called
    perl-Class-C3-0.21. So to install it, you can say:
$ nix-env -i perl-Class-C3
    (Of course you can also install using the attribute name: nix-env
    -i -A perlPackages.ClassC3.)
   
    So what does buildPerlPackage do? It does the following:
    
       In the configure phase, it calls perl Makefile.PL to
       generate a Makefile. You can set the variable
       makeMakerFlags to pass flags to
       Makefile.PL
      
        It adds the contents of the PERL5LIB environment variable
        to the #! .../bin/perl line of Perl scripts as
        -Idir flags. This ensures
        that a script can find its dependencies.
       
       In the fixup phase, it writes the propagated build inputs
       (propagatedBuildInputs) to the file
       $out/nix-support/propagated-user-env-packages.
       nix-env recursively installs all packages listed in
       this file when you install a package that has it. This ensures that a
       Perl package can find its dependencies.
      
    buildPerlPackage is built on top of
    stdenv, so everything can be customised in the usual
    way. For instance, the BerkeleyDB module has a
    preConfigure hook to generate a configuration file used
    by Makefile.PL:
{ buildPerlPackage, fetchurl, db }:
buildPerlPackage rec {
  name = "BerkeleyDB-0.36";
  src = fetchurl {
    url = "mirror://cpan/authors/id/P/PM/PMQS/${name}.tar.gz";
    sha256 = "07xf50riarb60l1h6m2dqmql8q5dij619712fsgw7ach04d8g3z1";
  };
  preConfigure = ''
    echo "LIB = ${db.out}/lib" > config.in
    echo "INCLUDE = ${db.dev}/include" >> config.in
  '';
}
    Dependencies on other Perl packages can be specified in the
    buildInputs and propagatedBuildInputs
    attributes. If something is exclusively a build-time dependency, use
    buildInputs; if it’s (also) a runtime dependency,
    use propagatedBuildInputs. For instance, this builds a
    Perl module that has runtime dependencies on a bunch of other modules:
ClassC3Componentised = buildPerlPackage rec {
  name = "Class-C3-Componentised-1.0004";
  src = fetchurl {
    url = "mirror://cpan/authors/id/A/AS/ASH/${name}.tar.gz";
    sha256 = "0xql73jkcdbq4q9m0b0rnca6nrlvf5hyzy8is0crdk65bynvs8q1";
  };
  propagatedBuildInputs = [
    ClassC3 ClassInspector TestException MROCompat
  ];
};
Nix expressions for Perl packages can be generated (almost) automatically from CPAN. This is done by the program nix-generate-from-cpan, which can be installed as follows:
$ nix-env -i nix-generate-from-cpan
This program takes a Perl module name, looks it up on CPAN, fetches and unpacks the corresponding package, and prints a Nix expression on standard output. For example:
$ nix-generate-from-cpan XML::Simple
  XMLSimple = buildPerlPackage rec {
    name = "XML-Simple-2.22";
    src = fetchurl {
      url = "mirror://cpan/authors/id/G/GR/GRANTM/${name}.tar.gz";
      sha256 = "b9450ef22ea9644ae5d6ada086dc4300fa105be050a2030ebd4efd28c198eb49";
    };
    propagatedBuildInputs = [ XMLNamespaceSupport XMLSAX XMLSAXExpat ];
    meta = {
      description = "An API for simple XML files";
      license = with stdenv.lib.licenses; [ artistic1 gpl1Plus ];
    };
  };
     The output can be pasted into
     pkgs/top-level/perl-packages.nix or wherever else you
     need it.
    
     Nixpkgs has experimental support for cross-compiling Perl modules. In many
     cases, it will just work out of the box, even for modules with native
     extensions. Sometimes, however, the Makefile.PL for a module may
     (indirectly) import a native module. In that case, you will need to make a
     stub for that module that will satisfy the Makefile.PL and install it into
     lib/perl5/site_perl/cross_perl/${perl.version}. See
     the postInstall for DBI for an
     example.
    
        Several versions of the Python interpreter are available on Nix, as well
        as a large number of packages. The attribute python
       refers to the default interpreter, which is currently CPython 2.7. It is
       also possible to refer to specific versions,
       e.g. python35 refers to CPython 3.5, and
       pypy refers to the default PyPy interpreter.
      
        Python is used a lot, and in different ways. This also affects how it is
        packaged. In the case of Python on Nix, an important distinction is made
       between whether the package is considered primarily an application, or
       whether it should be used as a library, i.e., of primary interest are
       the modules in site-packages that should be
       importable.
      
In the Nixpkgs tree Python applications can be found throughout, depending on what they do, and are called from the main package set. Python libraries, however, are in separate sets, with one set per interpreter version.
       The interpreters have several common attributes. One of these attributes
       is pkgs, which is a package set of Python libraries
       for this specific interpreter. E.g., the toolz
       package corresponding to the default interpreter is
       python.pkgs.toolz, and the CPython 3.5 version is
       python35.pkgs.toolz. The main package set contains
       aliases to these package sets, e.g. pythonPackages
       refers to python.pkgs and
       python35Packages to python35.pkgs.
      
The Nix and NixOS manuals explain how packages are generally installed. In the case of Python and Nix, it is important to make a distinction between whether the package is considered an application or a library.
       Applications on Nix are typically installed into your user profile
       imperatively using nix-env -i, and on NixOS
       declaratively by adding the package name to
       environment.systemPackages in
       /etc/nixos/configuration.nix. Dependencies such as
       libraries are automatically installed and should not be installed
       explicitly.
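        For example, on NixOS a Python application such as youtube-dl would be
        installed declaratively with a configuration fragment along these
        lines:
       
# /etc/nixos/configuration.nix (fragment)
environment.systemPackages = with pkgs; [ youtube-dl ];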
      
       The same goes for Python applications and libraries. Python applications
       can be installed in your profile. But Python libraries you would like to
       use for development cannot be installed, at least not individually,
       because they won’t be able to find each other resulting in import
       errors. Instead, it is possible to create an environment with
       python.buildEnv or
       python.withPackages where the interpreter and other
       executables are able to find each other and all of the modules.
      
       In the following examples we create an environment with Python 3.5,
       numpy and toolz. As you may
       imagine, there is one limitation here, and that’s that you can
       install only one environment at a time. You will notice the complaints
       about collisions when you try to install a second environment.
      
        Create a file, e.g. build.nix, with the
        following expression
       
with import <nixpkgs> {};
python35.withPackages (ps: with ps; [ numpy toolz ])
and install it in your profile with
nix-env -if build.nix
        Now you can use the Python interpreter, as well as the extra packages
        (numpy, toolz) that you added to
        the environment.
       
        If you prefer to, you could also add the environment as a package
        override to the Nixpkgs set, e.g. using config.nix,
       
{ # ...
  packageOverrides = pkgs: with pkgs; {
    myEnv = python35.withPackages (ps: with ps; [ numpy toolz ]);
  };
}
and install it in your profile with
nix-env -iA nixpkgs.myEnv
         The environment is installed by referring to the attribute, assuming
         the nixpkgs channel was used.
       
       The examples in the previous section showed how to install a Python
       environment into a profile. For development you may need to use multiple
       environments. nix-shell gives the possibility to
       temporarily load another environment, akin to
       virtualenv.
      
       There are two methods for loading a shell with Python packages. The
       first and recommended method is to create an environment with
       python.buildEnv or
       python.withPackages and load that. E.g.
      
$ nix-shell -p 'python35.withPackages(ps: with ps; [ numpy toolz ])'
opens a shell from which you can launch the interpreter
[nix-shell:~] python3
The other method, which is not recommended, does not create an environment and requires you to list the packages directly,
$ nix-shell -p python35.pkgs.numpy python35.pkgs.toolz
       Again, it is possible to launch the interpreter from the shell. The
       Python interpreter has the attribute pkgs which
       contains all Python libraries for that specific interpreter.
      
        As explained in the Nix manual, nix-shell can also
        load an expression from a .nix file. Say we want to
        have Python 3.5, numpy and toolz,
        like before, in an environment. Consider a shell.nix
        file with
       
with import <nixpkgs> {};
(python35.withPackages (ps: [ps.numpy ps.toolz])).env
        Executing nix-shell gives you again a Nix shell from
        which you can run Python.
       
What’s happening here?
          We begin by importing the Nix Packages collection. import
          <nixpkgs> imports the
          <nixpkgs> function, {}
          calls it, and the with statement brings all
          attributes of nixpkgs into the local scope. These
          attributes form the main package set.
         
          Then we create a Python 3.5 environment with the
          withPackages function.
         
          The withPackages function expects us to provide a
          function as an argument that takes the set of all python packages and
          returns a list of packages to include in the environment. Here, we
          select the packages numpy and
          toolz from the package set.
         
        A convenient option with nix-shell is the
        --run option, with which you can execute a command
        in the nix-shell. We can e.g. directly open a
        Python shell
       
$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3"
or run a script
$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3 myscript.py"
        In fact, for the second use case, there is a more convenient method.
        You can add a
        shebang
        to your script specifying which dependencies
        nix-shell needs. With the following shebang, you can
        just execute ./myscript.py, and it will make
        available all dependencies and run the script in the
        python3 shell.
       
#! /usr/bin/env nix-shell
#! nix-shell -i python3 -p "python3.withPackages(ps: [ps.numpy])"

import numpy
print(numpy.__version__)
Now that you know how to get a working Python environment with Nix, it is time to go forward and start actually developing with Python. We will first have a look at how Python packages are packaged on Nix. Then, we will look at how you can use development mode with your code.
       With Nix all packages are built by functions. The main function in Nix
       for building Python libraries is buildPythonPackage.
       Let’s see how we can build the toolz package.
      
{ lib, buildPythonPackage, fetchPypi }:
  toolz = buildPythonPackage rec {
    pname = "toolz";
    version = "0.7.4";
    src = fetchPypi {
      inherit pname version;
      sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
    };
    doCheck = false;
    meta = with lib; {
      homepage = https://github.com/pytoolz/toolz;
      description = "List processing tools and functional utilities";
      license = licenses.bsd3;
      maintainers = with maintainers; [ fridh ];
    };
  };
}
       What happens here? The function buildPythonPackage is
       called and as argument it accepts a set. In this case the set is a
       recursive set, rec. One of the arguments is the name
       of the package, which consists of a basename (generally following the
        name on PyPI) and a version. Another argument, src,
       specifies the source, which in this case is fetched from PyPI using the
       helper function fetchPypi. The argument
       doCheck is used to set whether tests should be run
       when building the package. Furthermore, we specify some (optional) meta
       information. The output of the function is a derivation.
      
       An expression for toolz can be found in the Nixpkgs
       repository. As explained in the introduction of this Python section, a
       derivation of toolz is available for each interpreter
       version, e.g. python35.pkgs.toolz refers to the
       toolz derivation corresponding to the CPython 3.5
       interpreter. The above example works when you’re directly working
       on pkgs/top-level/python-packages.nix in the Nixpkgs
       repository. Often though, you will want to test a Nix expression outside
       of the Nixpkgs tree.
      
       The following expression creates a derivation for the
       toolz package, and adds it along with a
       numpy package to a Python environment.
      
with import <nixpkgs> {};
( let
    my_toolz = python35.pkgs.buildPythonPackage rec {
      pname = "toolz";
      version = "0.7.4";
      src = python35.pkgs.fetchPypi {
        inherit pname version;
        sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
      };
      doCheck = false;
      meta = {
        homepage = "https://github.com/pytoolz/toolz/";
        description = "List processing tools and functional utilities";
      };
    };
  in python35.withPackages (ps: [ps.numpy my_toolz])
).env
       Executing nix-shell will result in an environment in
       which you can use Python 3.5 and the toolz package.
       As you can see we had to explicitly mention for which Python version we
       want to build a package.
      
       So, what did we do here? Well, we took the Nix expression that we used
       earlier to build a Python environment, and said that we wanted to
       include our own version of toolz, named
       my_toolz. To introduce our own package in the scope
       of withPackages we used a let
       expression. You can see that we used ps.numpy to
       select numpy from the nixpkgs package set (ps). We
       did not take toolz from the Nixpkgs package set this
       time, but instead took our own version that we introduced with the
       let expression.
      
       Our example, toolz, does not have any dependencies on
       other Python packages or system libraries. According to the manual,
       buildPythonPackage uses the arguments
       buildInputs and
       propagatedBuildInputs to specify dependencies. If
       something is exclusively a build-time dependency, then the dependency
       should be included as a buildInput, but if it is
       (also) a runtime dependency, then it should be added to
       propagatedBuildInputs. Test dependencies are
       considered build-time dependencies and passed to
       checkInputs.
      
       The following example shows which arguments are given to
       buildPythonPackage in order to build
       datashape.
      
{ # ...
  datashape = buildPythonPackage rec {
    pname = "datashape";
    version = "0.4.7";
    src = fetchPypi {
      inherit pname version;
      sha256 = "14b2ef766d4c9652ab813182e866f493475e65e558bed0822e38bf07bba1a278";
    };
    checkInputs = with self; [ pytest ];
    propagatedBuildInputs = with self; [ numpy multipledispatch dateutil ];
    meta = with lib; {
      homepage = https://github.com/ContinuumIO/datashape;
      description = "A data description language";
      license = licenses.bsd2;
      maintainers = with maintainers; [ fridh ];
    };
  };
}
       We can see several runtime dependencies, numpy,
       multipledispatch, and dateutil.
        Furthermore, we have one check input (checkInputs),
        i.e. pytest. pytest is a test
        runner that is only used during the checkPhase and is
        therefore not added to propagatedBuildInputs.
      
       In the previous case we had only dependencies on other Python packages
       to consider. Occasionally you have also system libraries to consider.
       E.g., lxml provides Python bindings to
       libxml2 and libxslt. These
       libraries are only required when building the bindings and are therefore
       added as buildInputs.
      
{ # ...
  lxml = buildPythonPackage rec {
    pname = "lxml";
    version = "3.4.4";
    src = fetchPypi {
      inherit pname version;
      sha256 = "16a0fa97hym9ysdk3rmqz32xdjqmy4w34ld3rm3jf5viqjx65lxk";
    };
    buildInputs = with self; [ pkgs.libxml2 pkgs.libxslt ];
    meta = with lib; {
      description = "Pythonic binding for the libxml2 and libxslt libraries";
      homepage = https://lxml.de;
      license = licenses.bsd3;
      maintainers = with maintainers; [ sjourdois ];
    };
  };
}
       In this example lxml and Nix are able to work out
       exactly where the relevant files of the dependencies are. This is not
       always the case.
      
       The example below shows bindings to The Fastest Fourier Transform in the
       West, commonly known as FFTW. On Nix we have separate packages of FFTW
       for the different types of floats ("single",
       "double", "long-double"). The
       bindings need all three types, and therefore we add all three as
       buildInputs. The bindings don’t expect to find
       each of them in a different folder, and therefore we have to set
       LDFLAGS and CFLAGS.
      
{ # ...
  pyfftw = buildPythonPackage rec {
    pname = "pyFFTW";
    version = "0.9.2";
    src = fetchPypi {
      inherit pname version;
      sha256 = "f6bbb6afa93085409ab24885a1a3cdb8909f095a142f4d49e346f2bd1b789074";
    };
    buildInputs = [ pkgs.fftw pkgs.fftwFloat pkgs.fftwLongDouble];
    propagatedBuildInputs = with self; [ numpy scipy ];
    # Tests cannot import pyfftw. pyfftw works fine though.
    doCheck = false;
    preConfigure = ''
      export LDFLAGS="-L${pkgs.fftw.dev}/lib -L${pkgs.fftwFloat.out}/lib -L${pkgs.fftwLongDouble.out}/lib"
      export CFLAGS="-I${pkgs.fftw.dev}/include -I${pkgs.fftwFloat.dev}/include -I${pkgs.fftwLongDouble.dev}/include"
    '';
    meta = with lib; {
      description = "A pythonic wrapper around FFTW, the FFT library, presenting a unified interface for all the supported transforms";
      homepage = http://hgomersall.github.com/pyFFTW;
      license = with licenses; [ bsd2 bsd3 ];
      maintainers = with maintainers; [ fridh ];
    };
  };
}
        Note also the line doCheck = false;: we explicitly
        disabled running the test suite.
      
       As a Python developer you’re likely aware of
       development
       mode (python setup.py develop); instead of
       installing the package this command creates a special link to the
       project code. That way, you can run updated code without having to
       reinstall after each and every change you make. Development mode is also
       available. Let’s see how you can use it.
      
        In the previous Nix expression the source was fetched from a URL. We
        can also refer to a local source instead, using src =
        ./path/to/source/tree;.
      
       If we create a shell.nix file which calls
       buildPythonPackage, and if src is
       a local source, and if the local source has a
       setup.py, then development mode is activated.
      
       In the following example we create a simple environment that has a
       Python 3.5 version of our package in it, as well as its dependencies and
       other packages we like to have in the environment, all specified with
       propagatedBuildInputs. Indeed, we can just add any
       package we like to have in our environment to
       propagatedBuildInputs.
      
with import <nixpkgs> {};
with pkgs.python35Packages;
buildPythonPackage rec {
  name = "mypackage";
  src = ./path/to/package/source;
  propagatedBuildInputs = [ pytest numpy pkgs.libsndfile ];
}
It is important to note that due to how development mode is implemented on Nix it is not possible to have multiple packages simultaneously in development mode.
So far we discussed how you can use Python on Nix, and how you can develop with it. We’ve looked at how you write expressions to package Python packages, and we looked at how you can create environments in which specified packages are available.
      At some point you’ll likely have multiple packages which you would
      like to be able to use in different projects. In order to minimise
      unnecessary duplication we now look at how you can maintain a repository
      with your own packages. The important functions here are
      import and callPackage.
     
      Earlier we created a Python environment using
      withPackages, and included the
      toolz package via a let expression.
      Let’s split the package definition from the environment
      definition.
     
      We first create a function that builds toolz in
      ~/path/to/toolz/release.nix
     
{ lib, pkgs, buildPythonPackage }:
buildPythonPackage rec {
  pname = "toolz";
  version = "0.7.4";
  src = pkgs.python35Packages.fetchPypi {
    inherit pname version;
    sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
  };
  meta = with lib; {
    homepage = "http://github.com/pytoolz/toolz/";
    description = "List processing tools and functional utilities";
    license = licenses.bsd3;
    maintainers = with maintainers; [ fridh ];
  };
}
       It takes three arguments: lib, pkgs and
       buildPythonPackage. We now call this function using
      callPackage in the definition of our environment
     
with import <nixpkgs> {};
( let
    toolz = pkgs.callPackage /path/to/toolz/release.nix {
      pkgs = pkgs;
      buildPythonPackage = pkgs.python35Packages.buildPythonPackage;
    };
  in pkgs.python35.withPackages (ps: [ ps.numpy toolz ])
).env
      Important to remember is that the Python version for which the package is
      made depends on the python derivation that is passed
      to buildPythonPackage. Nix tries to automatically pass
      arguments when possible, which is why generally you don’t
      explicitly define which python derivation should be
      used. In the above example we use buildPythonPackage
      that is part of the set python35Packages, and in this
      case the python35 interpreter is automatically used.
     
      Versions 2.7, 3.5, 3.6 and 3.7 of the CPython interpreter are available
      as respectively python27, python35,
      python36 and python37. The aliases
      python2 and python3 correspond to
      respectively python27 and python37.
      The default interpreter, python, maps to
      python2. The PyPy interpreters compatible with Python
      2.7 and 3 are available as pypy27 and
      pypy3, with aliases pypy2 mapping
      to pypy27 and pypy mapping to
      pypy2. The Nix expressions for the interpreters can be
      found in pkgs/development/interpreters/python.
     
       All packages depending on any Python interpreter get
       $out/${python.sitePackages} appended to
       $PYTHONPATH if such a directory exists.
     
        To reduce closure size, the
        Tkinter/tkinter module is available as a
        separate package, pythonPackages.tkinter.
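        A minimal sketch of pulling the separately packaged tkinter into an
        environment for the default interpreter:
       
with import <nixpkgs> {};
python.withPackages (ps: [ ps.tkinter ])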
      
Each interpreter has the following attributes:
         libPrefix. Name of the folder in
         ${python}/lib/ for corresponding interpreter.
        
         interpreter. Alias for
         ${python}/bin/${executable}.
        
         buildEnv. Function to build python interpreter
         environments with extra packages bundled together. See section
         python.buildEnv function for usage and
         documentation.
        
         withPackages. Simpler interface to
         buildEnv. See section python.withPackages
         function for usage and documentation.
        
         sitePackages. Alias for
         lib/${libPrefix}/site-packages.
        
         executable. Name of the interpreter executable,
         e.g. python3.7.
        
         pkgs. Set of Python packages for that specific
         interpreter. The package set can be modified by overriding the
         interpreter and passing packageOverrides.
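A small sketch illustrating a few of these attributes (the values in the comments are indicative for a CPython 3.7 interpreter and may differ):
with import <nixpkgs> {};
{
  exe  = python37.executable;    # e.g. "python3.7"
  site = python37.sitePackages;  # e.g. "lib/python3.7/site-packages"
  run  = python37.interpreter;   # the full store path to bin/python3.7
}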
        
       Python libraries and applications that use setuptools
       or distutils are typically built with the
       buildPythonPackage and
       buildPythonApplication functions, respectively. These two functions
      also support installing a wheel.
     
      All Python packages reside in
      pkgs/top-level/python-packages.nix and all
      applications elsewhere. In case a package is used as both a library and
      an application, then the package should be in
      pkgs/top-level/python-packages.nix since only those
      packages are made available for all interpreter versions. The preferred
      location for library expressions is in
      pkgs/development/python-modules. It is important that
      these packages are called from
      pkgs/top-level/python-packages.nix and not elsewhere,
      to guarantee the right version of the package is built.
     
      Based on the packages defined in
      pkgs/top-level/python-packages.nix an attribute set is
      created for each available Python interpreter. The available sets are
     
        pkgs.python27Packages
       
        pkgs.python35Packages
       
        pkgs.python36Packages
       
        pkgs.python37Packages
       
        pkgs.pypyPackages
       
and the aliases
        pkgs.python2Packages pointing to
        pkgs.python27Packages
       
        pkgs.python3Packages pointing to
        pkgs.python37Packages
       
        pkgs.pythonPackages pointing to
        pkgs.python2Packages
       
       The buildPythonPackage function is implemented in
       pkgs/development/interpreters/python/build-python-package.nix
      
The following is an example:
{ lib, buildPythonPackage, fetchPypi, hypothesis, setuptools_scm, attrs, py, setuptools, six, pluggy }:
buildPythonPackage rec {
  pname = "pytest";
  version = "3.3.1";
  src = fetchPypi {
    inherit pname version;
    sha256 = "cf8436dc59d8695346fcd3ab296de46425ecab00d64096cebe79fb51ecb2eb93";
  };
  postPatch = ''
    # don't test bash builtins
    rm testing/test_argcomplete.py
  '';
  checkInputs = [ hypothesis ];
  buildInputs = [ setuptools_scm ];
  propagatedBuildInputs = [ attrs py setuptools six pluggy ];
  meta = with lib; {
    maintainers = with maintainers; [ domenkozar lovek323 madjar lsix ];
    description = "Framework for writing tests";
  };
}
       The buildPythonPackage mainly does four things:
      
         In the buildPhase, it calls
         ${python.interpreter} setup.py bdist_wheel to build
         a wheel binary zipfile.
        
         In the installPhase, it installs the wheel file
         using pip install *.whl.
        
         In the postFixup phase, the
         wrapPythonPrograms bash function is called to wrap
         all programs in the $out/bin/* directory to include
         $PATH environment variable and add dependent
         libraries to script’s sys.path.
        
          In the installCheck phase,
          ${python.interpreter} setup.py test is run.
        
       As in Perl, dependencies on other Python packages can be specified in
       the buildInputs and
       propagatedBuildInputs attributes. If something is
       exclusively a build-time dependency, use buildInputs;
       if it is (also) a runtime dependency, use
       propagatedBuildInputs.
      
       By default tests are run because doCheck = true. Test
       dependencies, like e.g. the test runner, should be added to
       checkInputs.
      
       By default meta.platforms is set to the same value as
       the interpreter unless overridden otherwise.
      
        All parameters from stdenv.mkDerivation function are
        still supported. The following are specific to
        buildPythonPackage:
       
          catchConflicts ? true: If true,
          abort package build if a package name appears more than once in
          dependency tree. Default is true.
         
          checkInputs ? []: Dependencies needed for running
          the checkPhase. These are added to
          buildInputs when doCheck =
          true.
         
           disabled ? false: If true,
           the package is not built for the particular Python interpreter version.
         
          dontWrapPythonPrograms ? false: Skip wrapping of
          python programs.
         
           installFlags ? []: A list of strings. Arguments to
           be passed to pip install. To pass options to
           python setup.py install, use
           --install-option. E.g.,
           installFlags=["--install-option='--cpp_implementation'"].
         
          format ? "setuptools": Format of the source. Valid
          options are "setuptools",
          "flit", "wheel", and
          "other". "setuptools" is for
          when the source has a setup.py and
          setuptools is used to build a wheel,
          flit, in case flit should be
          used to build a wheel, and wheel in case a wheel
          is provided. Use other when a custom
          buildPhase and/or installPhase
          is needed.
         
          makeWrapperArgs ? []: A list of strings. Arguments
          to be passed to makeWrapper, which wraps generated
          binaries. By default, the arguments to makeWrapper
          set PATH and PYTHONPATH
          environment variables before calling the binary. Additional arguments
          here can allow a developer to set environment variables which will be
          available when the binary is run. For example,
          makeWrapperArgs = ["--set FOO BAR" "--set BAZ
          QUX"].
         
          namePrefix: Prepends text to
          ${name} parameter. In case of libraries, this
          defaults to "python3.5-" for Python 3.5, etc., and
          in case of applications to "".
         
          pythonPath ? []: List of packages to be added into
          $PYTHONPATH. Packages in
          pythonPath are not propagated (contrary to
          propagatedBuildInputs).
         
          preShellHook: Hook to execute commands before
          shellHook.
         
          postShellHook: Hook to execute commands after
          shellHook.
         
          removeBinByteCode ? true: Remove bytecode from
          /bin. Bytecode is only created when the filenames
          end with .py.
         
          setupPyBuildFlags ? []: List of flags passed to
          setup.py build_ext command.
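         As a rough sketch (the package name and all values are purely
         illustrative), several of these parameters could be combined as
         follows:
        
buildPythonPackage rec {
  pname = "frobnicate";                   # hypothetical package
  version = "1.0";
  src = ./.;                              # local source, for brevity
  format = "setuptools";                  # the default
  checkInputs = [ pytest ];               # added to buildInputs when doCheck = true
  makeWrapperArgs = [ "--set FOO BAR" ];  # extra environment variable for wrapped programs
  setupPyBuildFlags = [ "--inplace" ];    # passed to setup.py build_ext
}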
         
         The buildPythonPackage function has an
         overridePythonAttrs method that can be used to
        override the package. In the following example we create an environment
        where we have the blaze package using an older
        version of pandas. We override first the Python
        interpreter and pass packageOverrides which contains
        the overrides for packages in the package set.
       
with import <nixpkgs> {};
(let
  python = let
    packageOverrides = self: super: {
      pandas = super.pandas.overridePythonAttrs(old: rec {
        version = "0.19.1";
        src =  super.fetchPypi {
          pname = "pandas";
          inherit version;
          sha256 = "08blshqj9zj1wyjhhw3kl2vas75vhhicvv72flvf1z3jvapgw295";
        };
      });
    };
  in pkgs.python3.override {inherit packageOverrides;};
in python.withPackages(ps: [ps.blaze])).env
       The buildPythonApplication function is practically
       the same as buildPythonPackage. The main purpose of
       this function is to build a Python package where one is interested only
       in the executables, and not importable modules. For that reason, when
       adding this package to a python.buildEnv, the modules
       won’t be made available.
      
       Another difference is that buildPythonPackage by
       default prefixes the names of the packages with the version of the
       interpreter. Because this is irrelevant for applications, the prefix is
       omitted.
      
       When packaging a python application with
       buildPythonApplication, it should be called with
       callPackage and passed python or
       pythonPackages (possibly specifying an interpreter
       version), like this:
      
{ lib, python3Packages }:
python3Packages.buildPythonApplication rec {
  pname = "luigi";
  version = "2.7.9";
  src = python3Packages.fetchPypi {
    inherit pname version;
    sha256 = "035w8gqql36zlan0xjrzz9j4lh9hs0qrsgnbyw07qs7lnkvbdv9x";
  };
  propagatedBuildInputs = with python3Packages; [ tornado_4 python-daemon ];
  meta = with lib; {
    ...
  };
}
       This is then added to all-packages.nix just as any
       other application would be.
      
luigi = callPackage ../applications/networking/cluster/luigi { };
       Since the package is an application, a consumer doesn’t need to
       care about python versions or modules, which is why they don’t go
       in pythonPackages.
      
        A distinction is made between applications and libraries; however,
        sometimes a package is used as both. In this case the package is added
       as a library to python-packages.nix and as an
       application to all-packages.nix. To reduce
       duplication the toPythonApplication can be used to
       convert a library to an application.
      
       The Nix expression shall use buildPythonPackage and
       be called from python-packages.nix. A reference shall
       be created from all-packages.nix to the attribute in
       python-packages.nix, and the
       toPythonApplication shall be applied to the
       reference:
      
youtube-dl = with pythonPackages; toPythonApplication youtube-dl;
       In some cases, such as bindings, a package is created using
       stdenv.mkDerivation and added as attribute in
       all-packages.nix. The Python bindings should be made
       available from python-packages.nix. The
       toPythonModule function takes a derivation and makes
       certain Python-specific modifications.
      
opencv = toPythonModule (pkgs.opencv.override {
  enablePython = true;
  pythonPackages = self;
});
Do pay attention to passing in the right Python version!
       Python environments can be created using the low-level
       pkgs.buildEnv function. This example shows how to
       create an environment that has the Pyramid Web Framework. Saving the
       following as default.nix
      
with import <nixpkgs> {};
python.buildEnv.override {
  extraLibs = [ pkgs.pythonPackages.pyramid ];
  ignoreCollisions = true;
}
       and running nix-build will create
      
/nix/store/cf1xhjwzmdki7fasgr4kz6di72ykicl5-python-2.7.8-env
       with wrapped binaries in bin/.
      
       You can also use the env attribute to create local
       environments with needed packages installed. This is somewhat comparable
       to virtualenv. For example, running
       nix-shell with the following
       shell.nix
      
with import <nixpkgs> {};
(python3.buildEnv.override {
  extraLibs = with python3Packages; [ numpy requests ];
}).env
will drop you into a shell where Python will have the specified packages in its path.
       The python.withPackages function provides a simpler
       interface to the python.buildEnv functionality. It
       takes a function as an argument that is passed the set of python
       packages and returns the list of the packages to be included in the
       environment. Using the withPackages function, the
       previous example for the Pyramid Web Framework environment can be
       written like this:
      
with import <nixpkgs> {};
python.withPackages (ps: [ps.pyramid])
       withPackages passes the correct package set for the
       specific interpreter version as an argument to the function. In the
       above example, ps equals
       pythonPackages. But you can also easily switch to
       using python3:
      
with import <nixpkgs> {};
python3.withPackages (ps: [ps.pyramid])
       Now, ps is set to python3Packages,
       matching the version of the interpreter.
      
       As python.withPackages simply uses
       python.buildEnv under the hood, it also supports the
       env attribute. The shell.nix file
       from the previous section can thus be also written like this:
      
with import <nixpkgs> {};
(python36.withPackages (ps: [ps.numpy ps.requests])).env
       In contrast to python.buildEnv,
       python.withPackages does not support the more
       advanced options such as ignoreCollisions = true or
       postBuild. If you need them, you have to use
       python.buildEnv.
      
        Python 2 namespace packages may provide __init__.py
        files that collide. In that case python.buildEnv should be
        used with ignoreCollisions = true.
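        A minimal sketch, assuming two hypothetical Python 2 namespace packages
        foo-core and foo-extras that ship colliding __init__.py files:
       
with import <nixpkgs> {};
python2.buildEnv.override {
  extraLibs = with python2Packages; [ foo-core foo-extras ];  # hypothetical packages
  ignoreCollisions = true;
}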
      
       Development or editable mode is supported. To develop Python packages,
       buildPythonPackage has additional logic inside
       shellPhase to run pip install -e . --prefix
       $TMPDIR/ for the package.
     
      Warning: shellPhase is executed only if
      setup.py exists.
     
      Given a default.nix:
     
with import <nixpkgs> {};
buildPythonPackage {
  name = "myproject";
  buildInputs = with pkgs.pythonPackages; [ pyramid ];
  src = ./.;
}
      Running nix-shell with no arguments should give you
      the environment in which the package would be built with
      nix-build.
     
Shortcut to setup environments with C headers/libraries and python packages:
nix-shell -p pythonPackages.pyramid zlib libjpeg git
      Note: There is a boolean value lib.inNixShell set to
      true if nix-shell is invoked.
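       For example, lib.inNixShell can be used to pull in extra development
       tools only when the expression is evaluated by nix-shell (a sketch;
       the choice of ipython is arbitrary):
      
with import <nixpkgs> {};
buildPythonPackage {
  name = "myproject";
  src = ./.;
  # ipython is only added when this expression is evaluated inside nix-shell
  buildInputs = [ pythonPackages.pyramid ]
    ++ lib.optional lib.inNixShell pythonPackages.ipython;
}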
     
Packages inside nixpkgs are written by hand. However, many tools exist in the community to help save time. No tool is preferred at the moment.
python2nix by Vladimir Kirillov
pypi2nix by Rok Garbas
pypi2nix by Jaka Hudoklin
      The Python interpreters are now built deterministically. Minor
      modifications had to be made to the interpreters in order to generate
      deterministic bytecode. This has security implications and is relevant
      for those using Python in a nix-shell.
     
      When the environment variable DETERMINISTIC_BUILD is
      set, all bytecode will have timestamp 1. The
      buildPythonPackage function sets
      DETERMINISTIC_BUILD=1 and
      PYTHONHASHSEED=0.
      Both are also exported in nix-shell.
     
      It is recommended to test packages as part of the build process. Source
      distributions (sdist) often include test files, but
      not always.
     
      By default the command python setup.py test is run as
      part of the checkPhase, but often it is necessary to
      pass a custom checkPhase. An example of such a
      situation is when py.test is used.
     
         Non-working tests can often be deselected. By default
         buildPythonPackage runs python setup.py
         test. Most python modules follow the standard test protocol
         where the pytest runner can be used instead.
         py.test supports a -k parameter
         to ignore test methods or classes:
        
buildPythonPackage {
  # ...
  # assumes the tests are located in tests
  checkInputs = [ pytest ];
  checkPhase = ''
    py.test -k 'not function_name and not other_function' tests
  '';
}
         Unicode issues can typically be fixed by including
         glibcLocales in buildInputs and
         exporting LC_ALL=en_US.utf-8.
        
         Tests that attempt to access $HOME can be fixed by
         using the following work-around before running tests
         (e.g. preCheck): export HOME=$(mktemp
         -d)
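         A hedged sketch combining the two workarounds above in a single
         (hypothetical) package expression:
        
buildPythonPackage {
  # ...
  buildInputs = [ glibcLocales ];
  preCheck = ''
    # avoid Unicode issues and $HOME access during the tests
    export LC_ALL=en_US.utf-8
    export HOME=$(mktemp -d)
  '';
}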
        
      Consider the packages A and B that
      depend on each other. When packaging B, a solution is
      to override package A not to depend on
      B as an input. The same should also be done when
      packaging A.
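      A minimal sketch of this approach, using two hypothetical packages
      A and B; filtering propagatedBuildInputs is just one way to drop the
      offending input:
     
with import <nixpkgs> {};
with python3Packages;

buildPythonPackage rec {
  pname = "B";                                      # hypothetical package B
  version = "1.0";
  src = fetchPypi {
    inherit pname version;
    sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  };
  propagatedBuildInputs = [
    # hypothetical package A, with its dependency on B removed
    (A.overridePythonAttrs (old: {
      propagatedBuildInputs =
        builtins.filter (p: (p.pname or "") != "B") old.propagatedBuildInputs;
    }))
  ];
}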
     
      We can override the interpreter and pass
      packageOverrides. In the following example we rename
      the pandas package and build it.
     
with import <nixpkgs> {};
(let
  python = let
    packageOverrides = self: super: {
      pandas = super.pandas.overridePythonAttrs(old: {name="foo";});
    };
  in pkgs.python35.override {inherit packageOverrides;};
in python.withPackages(ps: [ps.pandas])).env
      Using nix-build on this expression will build an
      environment that contains the package pandas but with
      the new name foo.
     
      All packages in the package set will use the renamed package. A typical
      use case is to switch to another version of a certain package. For
      example, in the Nixpkgs repository we have multiple versions of
      django and scipy. In the following
      example we use a different version of scipy and create
      an environment that uses it. All packages in the Python package set will
      now use the updated scipy version.
     
with import <nixpkgs> {};
( let
    packageOverrides = self: super: {
      scipy = super.scipy_0_17;
    };
  in (pkgs.python35.override {inherit packageOverrides;}).withPackages (ps: [ps.blaze])
).env
      The requested package blaze depends on
      pandas which itself depends on
      scipy.
     
      If you want the whole of Nixpkgs to use your modifications, then you can
      use overlays as explained in this manual. In the
      following example we build inkscape using a
      different version of numpy.
     
let
  pkgs = import <nixpkgs> {};
  newpkgs = import pkgs.path { overlays = [ (pkgsself: pkgssuper: {
    python27 = let
      packageOverrides = self: super: {
        numpy = super.numpy_1_10;
      };
    in pkgssuper.python27.override {inherit packageOverrides;};
  } ) ]; };
in newpkgs.inkscape
      Executing python setup.py bdist_wheel in a
      nix-shell fails with
     
ValueError: ZIP does not support timestamps before 1980
This is because files from the Nix store (which have a timestamp of the UNIX epoch of January 1, 1970) are included in the .ZIP, but .ZIP archives follow the DOS convention of counting timestamps from 1980.
      The command bdist_wheel reads the
      SOURCE_DATE_EPOCH environment variable, which
      nix-shell sets to 1. Unsetting this variable or giving
      it a value corresponding to 1980 or later enables building wheels.
     
Use 1980 as timestamp:
nix-shell --run "SOURCE_DATE_EPOCH=315532800 python3 setup.py bdist_wheel"
or the current time:
nix-shell --run "SOURCE_DATE_EPOCH=$(date +%s) python3 setup.py bdist_wheel"
      or unset SOURCE_DATE_EPOCH:
     
nix-shell --run "unset SOURCE_DATE_EPOCH; python3 setup.py bdist_wheel"
If you get the following error:
could not create '/nix/store/6l1bvljpy8gazlsw2aw9skwwp4pmvyxw-python-2.7.8/etc': Permission denied
      This is a
      known
      bug in setuptools. Setuptools
      install_data does not respect
       --prefix. An example of a package using this feature
       is pkgs/tools/X11/xpra/default.nix. As a workaround,
       install it as an extra preInstall step:
     
${python.interpreter} setup.py install_data --install-dir=$out --root=$out
sed -i '/ = data\_files/d' setup.py
      On most operating systems a global site-packages is
      maintained. This however becomes problematic if you want to run multiple
      Python versions or have multiple versions of certain libraries for your
      projects. Generally, you would solve such issues by creating virtual
      environments using virtualenv.
     
      On Nix each package has an isolated dependency tree which, in the case of
      Python, guarantees the right versions of the interpreter and libraries or
      packages are available. There is therefore no need to maintain a global
      site-packages.
     
      If you want to create a Python environment for development, then the
      recommended method is to use nix-shell, either with or
      without the python.buildEnv function.
     
      This is an example of a default.nix for a
      nix-shell, which allows you to consume a
      virtualenv environment and install python modules
      through pip the traditional way.
     
      Create this default.nix file, together with a
      requirements.txt and simply execute
      nix-shell.
     
with import <nixpkgs> {};
with pkgs.python27Packages;
stdenv.mkDerivation {
  name = "impurePythonEnv";
  buildInputs = [
    # these packages are required for virtualenv and pip to work:
    #
    python27Full
    python27Packages.virtualenv
    python27Packages.pip
    # the following packages are related to the dependencies of your python
    # project.
    # In this particular example the python modules listed in the
    # requirements.txt require the following packages to be installed locally
    # in order to compile any binary extensions they may require.
    #
    taglib
    openssl
    git
    libxml2
    libxslt
    libzip
    stdenv
    zlib ];
  src = null;
  shellHook = ''
  # set SOURCE_DATE_EPOCH so that we can use python wheels
  SOURCE_DATE_EPOCH=$(date +%s)
  virtualenv --no-setuptools venv
  export PATH=$PWD/venv/bin:$PATH
  pip install -r requirements.txt
  '';
}
      Note that the pip install is an imperative action. So
      every time nix-shell is executed it will attempt to
      download the python modules listed in requirements.txt. However these
      will be cached locally within the virtualenv folder
      and not downloaded again.
     
      If you need to change a package’s attribute(s) from
      configuration.nix you could do:
     
  nixpkgs.config.packageOverrides = super: {
    python = super.python.override {
      packageOverrides = python-self: python-super: {
        zerobin = python-super.zerobin.overrideAttrs (oldAttrs: {
          src = super.fetchgit {
            url = "https://github.com/sametmax/0bin";
            rev = "a344dbb18fe7a855d0742b9a1cede7ce423b34ec";
            sha256 = "16d769kmnrpbdr0ph0whyf4yff5df6zi4kmwx7sz1d3r6c8p6xji";
          };
        });
      };
    };
  };
      pythonPackages.zerobin is now globally overridden. All
      packages and also the zerobin NixOS service use the
      new definition. Note that python-super refers to the
      old package set and python-self to the new, overridden
      version.
     
To modify only a Python package set instead of a whole Python derivation, use this snippet:
  myPythonPackages = pythonPackages.override {
    overrides = self: super: {
      zerobin = ...;
    };
  }
Use the following overlay template:
self: super: {
  python = super.python.override {
    packageOverrides = python-self: python-super: {
      zerobin = python-super.zerobin.overrideAttrs (oldAttrs: {
        src = super.fetchgit {
          url = "https://github.com/sametmax/0bin";
          rev = "a344dbb18fe7a855d0742b9a1cede7ce423b34ec";
          sha256 = "16d769kmnrpbdr0ph0whyf4yff5df6zi4kmwx7sz1d3r6c8p6xji";
        };
      });
    };
  };
}
      A site.cfg is created that configures BLAS based on
      the blas parameter of the numpy
      derivation. By passing in mkl,
      numpy and packages depending on
      numpy will be built with mkl.
     
      The following is an overlay that configures numpy to
      use mkl:
     
self: super: {
  python37 = super.python37.override {
    packageOverrides = python-self: python-super: {
      numpy = python-super.numpy.override {
        blas = super.pkgs.mkl;
      };
    };
  };
}
      mkl requires an openmp
      implementation when running with multiple processors. By default,
      mkl will use Intel’s iomp
      implementation if no other is specified, but this is a runtime-only
      dependency and binary compatible with the LLVM implementation. To use
      that one instead, Intel recommends users set it with
      LD_PRELOAD.
     
      Note that mkl is only available on
      x86_64-{linux,darwin} platforms; moreover, Hydra is
      not building and distributing pre-compiled binaries using it.
     
The following rules should be respected:
        Python libraries are called from python-packages.nix
        and packaged with buildPythonPackage. The expression
        of a library should be in
        pkgs/development/python-modules/<name>/default.nix.
        Libraries in pkgs/top-level/python-packages.nix are
        sorted quasi-alphabetically to avoid merge conflicts.
       
        Python applications live outside of
        python-packages.nix and are packaged with
        buildPythonApplication.
       
Make sure libraries build for all Python interpreters.
By default we enable tests. Make sure the tests are found and, in the case of libraries, are passing for all interpreters. If certain tests fail they can be disabled individually. Try to avoid disabling the tests altogether. In any case, when you disable tests, leave a comment explaining why.
        Commit names of Python libraries should reflect that they are Python
        libraries, so write for example pythonPackages.numpy: 1.11
        -> 1.12.
       
        Attribute names in python-packages.nix should be
        normalized according to
        PEP
        0503. This means that characters should be converted to
        lowercase and . and _ should be
        replaced by a single - (foo-bar-baz instead of
        Foo__Bar.baz).
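        To illustrate the first rule above, here is a hedged sketch of what
        pkgs/development/python-modules/<name>/default.nix typically looks
        like; the package name and hash are hypothetical placeholders:
       
# pkgs/development/python-modules/foobar/default.nix  (hypothetical library)
{ lib, buildPythonPackage, fetchPypi }:

buildPythonPackage rec {
  pname = "foobar";
  version = "1.0.0";

  src = fetchPypi {
    inherit pname version;
    sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  };

  meta = with lib; {
    description = "A hypothetical example library";
    license = licenses.mit;
  };
}
        Such an expression is then typically wired up in
        pkgs/top-level/python-packages.nix with something like
        foobar = callPackage ../development/python-modules/foobar { };.
       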
       
Qt is a comprehensive desktop and mobile application development toolkit for C++. Legacy support is available for Qt 3 and Qt 4, but all current development uses Qt 5. The Qt 5 packages in Nixpkgs are updated frequently to take advantage of new features, but older versions are typically retained until their support window ends. The most important consideration in packaging Qt-based software is ensuring that each package and all its dependencies use the same version of Qt 5; this consideration motivates most of the tools described below.
     Whenever possible, libraries that use Qt 5 should be built with each
     available version. Packages providing libraries should be added to the
     top-level function mkLibsForQt5, which is used to build
     a set of libraries for every Qt 5 version. A special
     callPackage function is used in this scope to ensure
     that the entire dependency tree uses the same Qt 5 version. Import
     dependencies unqualified, i.e., qtbase not
     qt5.qtbase. Do not import a
     package set such as qt5 or
     libsForQt5.
    
     If a library does not support a particular version of Qt 5, it is best to
     mark it as broken by setting its meta.broken attribute.
     A package may be marked broken for certain versions by testing the
     qtbase.version attribute, which will always give the
     current Qt 5 version.
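      A hedged sketch of such a constraint, assuming lib and qtbase are among
      the package's function arguments and that the library (hypothetically)
      requires Qt 5.9 or newer:
     
stdenv.mkDerivation {
  # ...
  # hypothetical constraint: this library needs Qt >= 5.9
  meta.broken = lib.versionOlder qtbase.version "5.9";
}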
    
     Call your application expression using
     libsForQt5.callPackage instead of
     callPackage. Import dependencies unqualified, i.e.,
     qtbase not qt5.qtbase. Do
     not import a package set such as qt5 or
     libsForQt5.
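      For example (the application name and its dependencies are
      hypothetical):
     
# pkgs/top-level/all-packages.nix (hypothetical application)
myqtapp = libsForQt5.callPackage ../applications/misc/myqtapp { };

# pkgs/applications/misc/myqtapp/default.nix takes its Qt inputs unqualified:
{ stdenv, fetchurl, qtbase, qtsvg }:

stdenv.mkDerivation {
  name = "myqtapp-1.0";
  # ...
  buildInputs = [ qtbase qtsvg ];
}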
    
     Qt 5 maintains strict backward compatibility, so it is generally best to
     build an application package against the latest version using the
     libsForQt5 library set. In case a package does not
     build with the latest Qt version, it is possible to pick a set pinned to a
     particular version, e.g. libsForQt55 for Qt 5.5, if
     that is the latest version the package supports. If a package must be
     pinned to an older Qt version, be sure to file a bug upstream; because Qt
     is strictly backwards-compatible, any incompatibility is by definition a
     bug in the application.
    
     When testing applications in Nixpkgs, it is a common practice to build the
     package with nix-build and run it using the created
     symbolic link. This will not work with Qt applications, however, because
     they have many hard runtime requirements that can only be guaranteed if
     the package is actually installed. To test a Qt application, install it
     with nix-env or run it inside
     nix-shell.
    
Define an environment for R that contains all the libraries that you’d like to use by adding the following snippet to your $HOME/.config/nixpkgs/config.nix file:
{
    packageOverrides = super: let self = super.pkgs; in
    {
        rEnv = super.rWrapper.override {
            packages = with self.rPackages; [
                devtools
                ggplot2
                reshape2
                yaml
                optparse
                ];
        };
    };
}
     Then you can use nix-env -f "<nixpkgs>" -iA rEnv
     to install it into your user profile. The set of available libraries can
     be discovered by running the command nix-env -f "<nixpkgs>"
     -qaP -A rPackages. The first column from that output is the name
     that has to be passed to rWrapper in the code snippet above.
    
     However, if you’d like to add a file to your project source to make
     the environment available for other contributors, you can create a
     default.nix file like so:
    
let
  pkgs = import <nixpkgs> {};
  stdenv = pkgs.stdenv;
in with pkgs; {
  myProject = stdenv.mkDerivation {
    name = "myProject";
    version = "1";
    src = if pkgs.lib.inNixShell then null else nix;
    buildInputs = with rPackages; [
      R
      ggplot2
      knitr
    ];
  };
}
     and then run nix-shell . to be dropped into a shell
     with those packages available.
    
     RStudio uses a standard set of packages and ignores any custom R
     environments or installed packages you may have. To create a custom
     environment, see rstudioWrapper, which functions
     similarly to rWrapper:
    
{
    packageOverrides = super: let self = super.pkgs; in
    {
        rstudioEnv = super.rstudioWrapper.override {
            packages = with self.rPackages; [
                dplyr
                ggplot2
                reshape2
                ];
        };
    };
}
     Then like above, nix-env -f "<nixpkgs>" -iA
     rstudioEnv will install this into your user profile.
    
     Alternatively, you can create a self-contained
     shell.nix without the need to modify any configuration
     files:
    
{ pkgs ? import <nixpkgs> {}
}:
pkgs.rstudioWrapper.override {
  packages = with pkgs.rPackages; [ dplyr ggplot2 reshape2 ];
}
     Executing nix-shell will then drop you into an
     environment equivalent to the one above. If you need additional packages
     just add them to the list and re-enter the shell.
    
nix-shell generate-shell.nix

Rscript generate-r-packages.R cran > cran-packages.nix.new
mv cran-packages.nix.new cran-packages.nix

Rscript generate-r-packages.R bioc > bioc-packages.nix.new
mv bioc-packages.nix.new bioc-packages.nix
     generate-r-packages.R <repo> reads
     <repo>-packages.nix, hence the renaming.
    
    There currently is support to bundle applications that are packaged as Ruby
    gems. The utility "bundix" allows you to write a
    Gemfile, let bundler create a
    Gemfile.lock, and then convert this into a nix
    expression that contains all Gem dependencies automatically.
   
For example, to package sensu, we did:
$ cd pkgs/servers/monitoring
$ mkdir sensu
$ cd sensu
$ cat > Gemfile
source 'https://rubygems.org'
gem 'sensu'
$ $(nix-build '<nixpkgs>' -A bundix --no-out-link)/bin/bundix --magic
$ cat > default.nix
{ lib, bundlerEnv, ruby }:
bundlerEnv rec {
  name = "sensu-${version}";
  version = (import gemset).sensu.version;
  inherit ruby;
  # expects Gemfile, Gemfile.lock and gemset.nix in the same directory
  gemdir = ./.;
  meta = with lib; {
    description = "A monitoring framework that aims to be simple, malleable, and scalable";
    homepage    = http://sensuapp.org/;
    license     = with licenses; mit;
    maintainers = with maintainers; [ theuni ];
    platforms   = platforms.unix;
  };
}
    Please check in the Gemfile,
    Gemfile.lock and the gemset.nix
    so future updates can be run easily.
   
Updating Ruby packages can then be done like this:
$ cd pkgs/servers/monitoring/sensu
$ nix-shell -p bundler --run 'bundle lock --update'
$ nix-shell -p bundix --run 'bundix'
    For tools written in Ruby - i.e. where the desire is to install a package
    and then execute e.g. rake at the command line, there is
    an alternative builder called bundlerApp. Set up the
    gemset.nix the same way, and then, for example:
   
{ lib, bundlerApp }:
bundlerApp {
  pname = "corundum";
  gemdir = ./.;
  exes = [ "corundum-skel" ];
  meta = with lib; {
    description = "Tool and libraries for maintaining Ruby gems.";
    homepage    = https://github.com/nyarly/corundum;
    license     = licenses.mit;
    maintainers = [ maintainers.nyarly ];
    platforms   = platforms.unix;
  };
}
    The chief advantage of bundlerApp over
    bundlerEnv is that the executables introduced in the
    environment are precisely those selected in the exes
    list, as opposed to bundlerEnv which adds all the
    executables made available by gems in the gemset, which can mean e.g.
    rspec or rake in unpredictable
    versions available from various packages.
   
    Resulting derivations for both builders also have two helpful attributes,
    env and wrappedRuby. The first one
    allows one to quickly drop into nix-shell with the
    specified environment present. E.g. nix-shell -A
    sensu.env would give you an environment with Ruby preset so it
    has all the libraries necessary for sensu in its paths.
    The second one can be used to make derivations from custom Ruby scripts
    which have Gemfiles with their dependencies specified.
    It is a derivation with ruby wrapped so it can find all
    the needed dependencies. For example, to make a derivation
    my-script for a my-script.rb (which
    should be placed in bin) you should run
    bundix as specified above and then use
    bundlerEnv like this:
   
let env = bundlerEnv {
  name = "my-script-env";
  inherit ruby;
  gemfile = ./Gemfile;
  lockfile = ./Gemfile.lock;
  gemset = ./gemset.nix;
};
in stdenv.mkDerivation {
  name = "my-script";
  buildInputs = [ env.wrappedRuby ];
  script = ./my-script.rb;
  buildCommand = ''
    install -D -m755 $script $out/bin/my-script
    patchShebangs $out/bin/my-script
  '';
}
To install the rust compiler and cargo put
rustc cargo
    into the environment.systemPackages or bring them into
    scope with nix-shell -p rustc cargo.
   
If you are using NixOS and you want to use rust without a nix expression, you probably want to add the following to your
configuration.nix to build crates with C dependencies:
environment.systemPackages = [ binutils gcc gnumake openssl pkgconfig ];
For daily builds (beta and nightly) use either rustup from nixpkgs or use the Rust nightlies overlay.
     Rust applications are packaged by using the
     buildRustPackage helper from
     rustPlatform:
    
rustPlatform.buildRustPackage rec {
  name = "ripgrep-${version}";
  version = "0.4.0";
  src = fetchFromGitHub {
    owner = "BurntSushi";
    repo = "ripgrep";
    rev = "${version}";
    sha256 = "0y5d1n6hkw85jb3rblcxqas2fp82h3nghssa4xqrhqnz25l799pj";
  };
  cargoSha256 = "0q68qyl2h6i0qsz82z840myxlnjay8p1w5z7hfyr8fqp7wgwa9cx";
  meta = with stdenv.lib; {
    description = "A fast line-oriented regex search tool, similar to ag and ack";
    homepage = https://github.com/BurntSushi/ripgrep;
    license = licenses.unlicense;
    maintainers = [ maintainers.tailhook ];
    platforms = platforms.all;
  };
}
     buildRustPackage requires a
     cargoSha256 attribute which is computed over all crate
     sources of this package. Currently it is obtained by inserting a fake
     checksum into the expression and building the package once. The correct
      checksum can then be taken from the failed build.
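      A hedged sketch of that workflow: start with a dummy value
      (lib.fakeSha256, if available in your Nixpkgs revision, is simply a
      string of zeros; otherwise any dummy hash of the right length works),
      let the build fail, then copy the checksum reported in the error
      message into cargoSha256. The crate name here is hypothetical:
     
rustPlatform.buildRustPackage rec {
  name = "myproject-${version}";            # hypothetical crate
  version = "0.1.0";
  src = ./.;
  # First build: use a dummy value; the failing build reports the checksum
  # that should then be substituted here for the second build.
  cargoSha256 = stdenv.lib.fakeSha256;
}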
    
     When the Cargo.lock, provided by upstream, is not in
     sync with the Cargo.toml, it is possible to use
     cargoPatches to update it. All patches added in
     cargoPatches will also be prepended to the patches in
     patches at build-time.
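      A minimal sketch, assuming a hypothetical patch file that only brings
      Cargo.lock back in sync with Cargo.toml:
     
rustPlatform.buildRustPackage rec {
  # ...
  # hypothetical patch that updates Cargo.lock to match Cargo.toml
  cargoPatches = [ ./0001-update-cargo-lock.patch ];
}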
    
      When run, cargo build produces a file called
      Cargo.lock, containing pinned versions of all
      dependencies. Nixpkgs contains a tool called carnix
      (nix-env -iA nixos.carnix), which can be used to turn
      a Cargo.lock into a Nix expression.
     
      That Nix expression calls rustc directly (hence
      bypassing Cargo), and can be used to compile a crate and all its
      dependencies. Here is an example for a minimal hello
      crate:
     
$ cargo new hello
$ cd hello
$ cargo build
 Compiling hello v0.1.0 (file:///tmp/hello)
  Finished dev [unoptimized + debuginfo] target(s) in 0.20 secs
$ carnix -o hello.nix --src ./. Cargo.lock --standalone
$ nix-build hello.nix -A hello_0_1_0
      Now, the file produced by the call to carnix, called
      hello.nix, looks like:
     
# Generated by carnix 0.6.5: carnix -o hello.nix --src ./. Cargo.lock --standalone
{ lib, stdenv, buildRustCrate, fetchgit }:
let kernel = stdenv.buildPlatform.parsed.kernel.name;
    # ... (content skipped)
in
rec {
  hello = f: hello_0_1_0 { features = hello_0_1_0_features { hello_0_1_0 = f; }; };
  hello_0_1_0_ = { dependencies?[], buildDependencies?[], features?[] }: buildRustCrate {
    crateName = "hello";
    version = "0.1.0";
    authors = [ "pe@pijul.org <pe@pijul.org>" ];
    src = ./.;
    inherit dependencies buildDependencies features;
  };
  hello_0_1_0 = { features?(hello_0_1_0_features {}) }: hello_0_1_0_ {};
  hello_0_1_0_features = f: updateFeatures f (rec {
        hello_0_1_0.default = (f.hello_0_1_0.default or true);
    }) [ ];
}
      In particular, note that the argument given as --src
      is copied verbatim to the source. If we look at more complicated
      dependencies, for instance by adding a single line
      libc="*" to our Cargo.toml, we
      first need to run cargo build to update the
      Cargo.lock. Then, carnix needs to
      be run again, and produces the following nix file:
     
# Generated by carnix 0.6.5: carnix -o hello.nix --src ./. Cargo.lock --standalone
{ lib, stdenv, buildRustCrate, fetchgit }:
let kernel = stdenv.buildPlatform.parsed.kernel.name;
    # ... (content skipped)
in
rec {
  hello = f: hello_0_1_0 { features = hello_0_1_0_features { hello_0_1_0 = f; }; };
  hello_0_1_0_ = { dependencies?[], buildDependencies?[], features?[] }: buildRustCrate {
    crateName = "hello";
    version = "0.1.0";
    authors = [ "pe@pijul.org <pe@pijul.org>" ];
    src = ./.;
    inherit dependencies buildDependencies features;
  };
  libc_0_2_36_ = { dependencies?[], buildDependencies?[], features?[] }: buildRustCrate {
    crateName = "libc";
    version = "0.2.36";
    authors = [ "The Rust Project Developers" ];
    sha256 = "01633h4yfqm0s302fm0dlba469bx8y6cs4nqc8bqrmjqxfxn515l";
    inherit dependencies buildDependencies features;
  };
  hello_0_1_0 = { features?(hello_0_1_0_features {}) }: hello_0_1_0_ {
    dependencies = mapFeatures features ([ libc_0_2_36 ]);
  };
  hello_0_1_0_features = f: updateFeatures f (rec {
    hello_0_1_0.default = (f.hello_0_1_0.default or true);
    libc_0_2_36.default = true;
  }) [ libc_0_2_36_features ];
  libc_0_2_36 = { features?(libc_0_2_36_features {}) }: libc_0_2_36_ {
    features = mkFeatures (features.libc_0_2_36 or {});
  };
  libc_0_2_36_features = f: updateFeatures f (rec {
    libc_0_2_36.default = (f.libc_0_2_36.default or true);
    libc_0_2_36.use_std =
      (f.libc_0_2_36.use_std or false) ||
      (f.libc_0_2_36.default or false) ||
      (libc_0_2_36.default or false);
  }) [];
}
      Here, the libc crate has no src
      attribute, so buildRustCrate will fetch it from
      crates.io. A
      sha256 attribute is still needed for Nix purity.
     
      Some crates require external libraries. For crates from
      crates.io, such libraries can
      be specified in the defaultCrateOverrides package in
      nixpkgs itself.
     
Starting from that file, one can add more overrides, to add features or build inputs by overriding the hello crate in a separate file.
with import <nixpkgs> {};
((import ./hello.nix).hello {}).override {
  crateOverrides = defaultCrateOverrides // {
    hello = attrs: { buildInputs = [ openssl ]; };
  };
}
      Here, crateOverrides is expected to be an attribute
      set, where the key is the crate name without version number and the value
      a function. The function gets all attributes passed to
      buildRustCrate as its first argument and returns a set
      that contains all attributes that should be overridden.
     
      For more complicated cases, such as when parts of the crate’s
      derivation depend on the crate’s version, the
      attrs argument of the override above can be read, as
      in the following example, which patches the derivation:
     
with import <nixpkgs> {};
((import ./hello.nix).hello {}).override {
  crateOverrides = defaultCrateOverrides // {
    hello = attrs: lib.optionalAttrs (lib.versionAtLeast attrs.version "1.0")  {
      postPatch = ''
        substituteInPlace lib/zoneinfo.rs \
          --replace "/usr/share/zoneinfo" "${tzdata}/share/zoneinfo"
      '';
    };
  };
}
      Another situation is when we want to override a nested dependency. This
      actually works in the exact same way, since the
      crateOverrides parameter is forwarded to the
      crate’s dependencies. For instance, to override the build inputs
      for crate libc in the example above, where
      libc is a dependency of the main crate, we could do:
     
with import <nixpkgs> {};
((import ./hello.nix).hello {}).override {
  crateOverrides = defaultCrateOverrides // {
    libc = attrs: { buildInputs = []; };
  };
}
Actually, the overrides introduced in the previous section are more general. A number of other parameters can be overridden:
The version of rustc used to compile the crate:
(hello {}).override { rust = pkgs.rust; };
Whether to build in release mode or debug mode (release mode by default):
(hello {}).override { release = false; };
        Whether to print the commands sent to rustc when building (equivalent
        to --verbose in cargo):
       
(hello {}).override { verbose = false; };
        Extra arguments to be passed to rustc:
       
(hello {}).override { extraRustcOpts = "-Z debuginfo=2"; };
        Phases, just like in any other derivation, can be specified using the
        following attributes: preUnpack,
        postUnpack, prePatch,
        patches, postPatch,
        preConfigure (in the case of a Rust crate, this is
        run before calling the “build” script),
        postConfigure (after the “build”
        script),preBuild, postBuild,
        preInstall and postInstall. As an
        example, here is how to create a new module before running the build
        script:
       
(hello {}).override {
  preConfigure = ''
     echo "pub const PATH=\"${hi.out}\";" >> src/path.rs"
  '';
};
      One can also supply features switches. For example, if we want to compile
      diesel_cli only with the postgres
      feature, and no default features, we would write:
     
(callPackage ./diesel.nix {}).diesel {
  default = false;
  postgres = true;
}
      Where diesel.nix is the file generated by Carnix, as
      explained above.
     
     Oftentimes you want to develop code from within
     nix-shell. Unfortunately
     buildRustCrate does not support common
     nix-shell operations directly (see
     this
     issue) so we will use stdenv.mkDerivation
     instead.
    
     Using the example hello project above, we want to do
     the following:

     - Have access to cargo and rustc.
     - Have the openssl library available to a crate through its
       normal compilation mechanism (pkg-config).
    
     A typical shell.nix might look like:
    
with import <nixpkgs> {};
stdenv.mkDerivation {
  name = "rust-env";
  nativeBuildInputs = [
    rustc cargo
    # Example Build-time Additional Dependencies
    pkgconfig
  ];
  buildInputs = [
    # Example Run-time Additional Dependencies
    openssl
  ];
  # Set Environment Variables
  RUST_BACKTRACE = 1;
}
You should now be able to run the following:
$ nix-shell --pure
$ cargo build
$ cargo test
      To control your rust version (i.e. use nightly) from within
      shell.nix (or other nix expressions) you can use the
      following shell.nix
     
# Latest Nightly
with import <nixpkgs> {};
let src = fetchFromGitHub {
      owner = "mozilla";
      repo = "nixpkgs-mozilla";
      # commit from: 2018-03-27
      rev = "2945b0b6b2fd19e7d23bac695afd65e320efcebe";
      sha256 = "034m1dryrzh2lmjvk3c0krgip652dql46w5yfwpvh7gavd3iypyw";
   };
in
with import "${src.out}/rust-overlay.nix" pkgs pkgs;
stdenv.mkDerivation {
  name = "rust-env";
  buildInputs = [
    # Note: to use stable, just replace `nightly` with `stable`
    latest.rustChannels.nightly.rust
    # Add some extra dependencies from `pkgs`
    pkgconfig openssl
  ];
  # Set Environment Variables
  RUST_BACKTRACE = 1;
}
Now run the following to see that you are using nightly:
$ rustc --version
rustc 1.26.0-nightly (188e693b3 2018-03-26)
Mozilla provides an overlay for nixpkgs to bring a nightly version of Rust into scope. This overlay can also be used to install recent unstable or stable versions of Rust, if desired.
     To use this overlay, clone
     nixpkgs-mozilla,
     and create a symbolic link to the file
     rust-overlay.nix
     in the ~/.config/nixpkgs/overlays directory.
    
$ git clone https://github.com/mozilla/nixpkgs-mozilla.git
$ mkdir -p ~/.config/nixpkgs/overlays
$ ln -s $(pwd)/nixpkgs-mozilla/rust-overlay.nix ~/.config/nixpkgs/overlays/rust-overlay.nix
The latest version can be installed with the following command:
$ nix-env -Ai nixos.latest.rustChannels.stable.rust
Or using the attribute with nix-shell:
$ nix-shell -p nixos.latest.rustChannels.stable.rust
To install the beta or nightly channel, “stable” should be substituted by “nightly” or “beta”, or use the function provided by this overlay to pull a version based on a build date.
The overlay automatically updates itself as it uses the same source as rustup.
    Since release 15.09 there is a new TeX Live packaging that lives entirely
    under attribute texlive.
   
       For basic usage just pull
       texlive.combined.scheme-basic for an environment with
       basic LaTeX support.
      
It typically won't work to use separately installed packages together. Instead, you can build a custom set of packages like this:
texlive.combine {
  inherit (texlive) scheme-small collection-langkorean algorithms cm-super;
}
      There are all the schemes, collections and a few thousand packages, as defined upstream (perhaps with tiny differences).
       By default you only get executables and files needed during runtime, and
       a little documentation for the core packages. To change that, you need
       to add a pkgFilter function to
       combine.
texlive.combine {
  # inherit (texlive) whatever-you-want;
  pkgFilter = pkg:
    pkg.tlType == "run" || pkg.tlType == "bin" || pkg.pname == "cm-super";
  # elem tlType [ "run" "bin" "doc" "source" ]
  # there are also other attributes: version, name
}
      
You can list packages e.g. by nix repl.
$ nix repl
nix-repl> :l <nixpkgs>
nix-repl> texlive.collection-<TAB>
       Note that the wrapper assumes that the result has a chance to be useful.
       For example, the core executables should be present, as well as some
       core data files. The supported way of ensuring this is by including some
       scheme, for example scheme-basic, into the
       combination.
      
Some tools are still missing, e.g. luajittex;
some apps aren't packaged/tested yet (asymptote, biber, etc.);
       feature/bug: when a package is rejected by pkgFilter,
       its dependencies are still propagated;
      
in case of any bugs or feature requests, file a github issue or better a pull request and /cc @vcunat.
The Nixpkgs repository contains facilities to deploy a variety of versions of the Titanium SDK, a cross-platform mobile app development framework using JavaScript as an implementation language, and includes a function abstraction making it possible to build Titanium applications for Android and iOS devices from source code.
Not all Titanium features are supported – currently, it can only be used to build Android and iOS apps.
     We can build a Titanium app from source for Android or iOS and for
     debugging or release purposes by invoking the
     titaniumenv.buildApp {} function:
    
titaniumenv.buildApp {
  name = "myapp";
  src = ./myappsource;
  preBuild = "";
  target = "android"; # or 'iphone'
  tiVersion = "7.1.0.GA";
  release = true;
  androidsdkArgs = {
    platformVersions = [ "25" "26" ];
  };
  androidKeyStore = ./keystore;
  androidKeyAlias = "myfirstapp";
  androidKeyStorePassword = "secret";
  xcodeBaseDir = "/Applications/Xcode.app";
  xcodewrapperArgs = {
    version = "9.3";
  };
  iosMobileProvisioningProfile = ./myprovisioning.profile;
  iosCertificateName = "My Company";
  iosCertificate = ./mycertificate.p12;
  iosCertificatePassword = "secret";
  iosVersion = "11.3";
  iosBuildStore = false;
  enableWirelessDistribution = true;
  installURL = "/installipa.php";
}
     The titaniumenv.buildApp {} function takes the
     following parameters:
    
       The name parameter refers to the name in the Nix
       store.
      
       The src parameter refers to the source code location
       of the app that needs to be built.
      
       preRebuild contains optional build instructions that
       are carried out before the build starts.
      
       target indicates for which device the app must be
       built. Currently only “android” and “iphone”
       (for iOS) are supported.
      
       tiVersion can be used to optionally override the
       requested Titanium version in tiapp.xml. If not
       specified, it will use the version in tiapp.xml.
      
       release should be set to true when building an app
       for submission to the Google Playstore or Apple Appstore. Otherwise, it
       should be false.
      
     When the target has been set to
     android, we can configure the following parameters:
    
       The androidSdkArgs parameter refers to an attribute
       set that propagates all parameters to the
       androidenv.composeAndroidPackages {} function. This
       can be used to install all relevant Android plugins that may be needed
       to perform the Android build. If no parameters are given, it will deploy
       the platform SDKs for API-levels 25 and 26 by default.
      
     When the release parameter has been set to true, you
     need to provide parameters to sign the app:
    
       androidKeyStore is the path to the keystore file
      
       androidKeyAlias is the key alias
      
       androidKeyStorePassword refers to the password to
       open the keystore file.
      
     When the target has been set to
     iphone, we can configure the following parameters:
    
       The xcodeBaseDir parameter refers to the location
        where Xcode has been installed. When no value is given, the value
        shown above is the default.
      
       The xcodewrapperArgs parameter passes arbitrary
       parameters to the xcodeenv.composeXcodeWrapper {}
       function. This can, for example, be used to adjust the default version
       of Xcode.
      
     When release has been set to true, you also need to
     provide the following parameters:
    
       iosMobileProvisioningProfile refers to a mobile
       provisioning profile needed for signing.
      
       iosCertificateName refers to the company name in the
       P12 certificate.
      
       iosCertificate refers to the path to the P12 file.
      
       iosCertificatePassword contains the password to open
       the P12 file.
      
       iosVersion refers to the iOS SDK version to use. It
       defaults to the latest version.
      
       iosBuildStore should be set to
       true when building for the Apple Appstore submission.
       For enterprise or ad-hoc builds it should be set to
       false.
      
     When enableWirelessDistribution has been enabled, you
     must also provide the path of the PHP script
     (installURL) (that is included with the iOS build
     environment) to enable wireless ad-hoc installations.
    
Both Neovim and Vim can be configured to include your favorite plugins and additional libraries.
Loading can be deferred; see examples.
At the moment we support four different methods for managing plugins:
Vim packages (recommended)
VAM (=vim-addon-manager)
Pathogen
vim-plug
Adding custom .vimrc lines can be done using the following code:
vim_configurable.customize {
  # `name` specifies the name of the executable and package
  name = "vim-with-plugins";
  vimrcConfig.customRC = ''
    set hidden
  '';
}
     This configuration is used when vim is invoked with the command specified
     as name, in this case vim-with-plugins.
    
     For Neovim the configure argument can be overridden to
     achieve the same:
    
neovim.override {
  configure = {
    customRC = ''
      # here your custom configuration goes!
    '';
  };
}
     If you want to use neovim-qt as a graphical editor, you
     can configure it by overriding neovim in an overlay or passing it an
     overridden neovim:
    
neovim-qt.override {
  neovim = neovim.override {
    configure = {
      customRC = ''
        # your custom configuration
      '';
    };
  };
}
     To store your plugins in Vim packages (the native vim plugin manager, see
     :help packages) the following example can be used:
    
vim_configurable.customize {
  vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; {
    # loaded on launch
    start = [ youcompleteme fugitive ];
    # manually loadable by calling `:packadd $plugin-name`
    # however, if a vim plugin has a dependency that is not explicitly listed in
    # opt that dependency will always be added to start to avoid confusion.
    opt = [ phpCompletion elm-vim ];
    # To automatically load a plugin when opening a filetype, add vimrc lines like:
    # autocmd FileType php :packadd phpCompletion
  };
}
     myVimPackage is an arbitrary name for the generated
     package. You can choose any name you like. For Neovim the syntax is:
    
neovim.override {
  configure = {
    customRC = ''
      # here your custom configuration goes!
    '';
    packages.myVimPackage = with pkgs.vimPlugins; {
      # see examples below how to use custom packages
      start = [ ];
      # If a vim plugin has a dependency that is not explicitly listed in
      # opt that dependency will always be added to start to avoid confusion.
      opt = [ ];
    };
  };
}
     The resulting package can be added to packageOverrides
     in ~/.nixpkgs/config.nix to make it installable:
    
{
  packageOverrides = pkgs: with pkgs; {
    myVim = vim_configurable.customize {
      # `name` specifies the name of the executable and package
      name = "vim-with-plugins";
      # add here code from the example section
    };
    myNeovim = neovim.override {
      configure = {
      # add here code from the example section
      };
    };
  };
}
     After that you can install your special grafted myVim
     or myNeovim packages.
    
To use vim-plug to manage your Vim plugins the following example can be used:
vim_configurable.customize {
  vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; {
    # loaded on launch
    plug.plugins = [ youcompleteme fugitive phpCompletion elm-vim ];
  };
}
For Neovim the syntax is:
neovim.override {
  configure = {
    customRC = ''
      # here your custom configuration goes!
    '';
    plug.plugins = with pkgs.vimPlugins; [
      vim-go
    ];
  };
}
VAM introduced .json files supporting dependencies without versioning assuming that “using latest version” is ok most of the time.
First create a vim-scripts file having one plugin name per line. Example:
"tlib"
{'name': 'vim-addon-sql'}
{'filetype_regex': '\%(vim)$', 'names': ['reload', 'vim-dev-plugin']}
Such vim-scripts file can be read by VAM as well like this:
call vam#Scripts(expand('~/.vim-scripts'), {})
Create a default.nix file:
{ nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }:
nixpkgs.vim_configurable.customize { name = "vim"; vimrcConfig.vam.pluginDictionaries = [ "vim-addon-vim2nix" ]; }
Create a generate.vim file:
ActivateAddons vim-addon-vim2nix
let vim_scripts = "vim-scripts"
call nix#ExportPluginsForNix({
\  'path_to_nixpkgs': eval('{"'.substitute(substitute(substitute($NIX_PATH, ':', ',', 'g'), '=',':', 'g'), '\([:,]\)', '"\1"',"g").'"}')["nixpkgs"],
\  'cache_file': '/tmp/vim2nix-cache',
\  'try_catch': 0,
\  'plugin_dictionaries': ["vim-addon-manager"]+map(readfile(vim_scripts), 'eval(v:val)')
\ })
Then run
nix-shell -p vimUtils.vim_with_vim2nix --command "vim -c 'source generate.vim'"
You should get a Vim buffer with the nix derivations (output1) and vam.pluginDictionaries (output2). You can add your vim to your system’s configuration file like this and start it with “vim-my”:
my-vim =
 let plugins = let inherit (vimUtils) buildVimPluginFrom2Nix; in {
      copy paste output1 here
 }; in vim_configurable.customize {
   name = "vim-my";
   vimrcConfig.vam.knownPlugins = plugins; # optional
   vimrcConfig.vam.pluginDictionaries = [
      copy paste output2 here
   ];
   # Pathogen would be
   # vimrcConfig.pathogen.knownPlugins = plugins; # plugins
   # vimrcConfig.pathogen.pluginNames = ["tlib"];
 };
Sample output1:
"reload" = buildVimPluginFrom2Nix { # created by nix#NixDerivation
  name = "reload";
  src = fetchgit {
    url = "git://github.com/xolox/vim-reload";
    rev = "0a601a668727f5b675cb1ddc19f6861f3f7ab9e1";
    sha256 = "0vb832l9yxj919f5hfg6qj6bn9ni57gnjd3bj7zpq7d4iv2s4wdh";
  };
  dependencies = ["nim-misc"];
};
[...]
Sample output2:
[
  ''vim-addon-manager''
  ''tlib''
  { "name" = ''vim-addon-sql''; }
  { "filetype_regex" = ''\%(vim)$$''; "names" = [ ''reload'' ''vim-dev-plugin'' ]; }
]
     In pkgs/misc/vim-plugins/vim-plugin-names we store the
     plugin names for all vim plugins we automatically generate plugins for.
     The format of this file is github username/github
     repository: for example, https://github.com/scrooloose/nerdtree
     becomes scrooloose/nerdtree. After adding your plugin
     to this file, run ./update.py in the same folder.
     This will update a file called generated.nix and make
     your plugin accessible in the vimPlugins attribute set
     (vimPlugins.nerdtree in our example). If additional
     steps to the build process of the plugin are required, add an override to
     the pkgs/misc/vim-plugins/default.nix in the same
     directory.
    
Emscripten: An LLVM-to-JavaScript Compiler
    This section of the manual covers how to use emscripten
    in nixpkgs.
   
Minimal requirements:
nix
nixpkgs
    Modes of use of emscripten:
   
Imperative usage (on the command line):
      If you want to work with emcc,
      emconfigure and emmake as you are
      used to from Ubuntu and similar distributions you can use these commands:
     
        nix-env -i emscripten
       
        nix-shell -p emscripten
       
Declarative usage:
      This mode is far more powerful, since it makes use of
      nix for dependency management of emscripten libraries
      and targets by using the mkDerivation which is
      implemented by pkgs.emscriptenStdenv and
      pkgs.buildEmscriptenPackage. The source for the
      packages is in pkgs/top-level/emscripten-packages.nix
      and the abstraction behind it in
      pkgs/development/em-modules/generic/default.nix.
     
build and install all packages:
          nix-env -iA emscriptenPackages
         
dev-shell for zlib implementation hacking:
          nix-shell -A emscriptenPackages.zlib
         
A few things to note:
       export EMCC_DEBUG=2 is nice for debugging
      
       ~/.emscripten, the build artifact cache, sometimes
       creates issues and needs to be removed from time to time
      
     Let’s see two different examples from
     pkgs/top-level/emscripten-packages.nix:
    
       pkgs.zlib.override
      
       pkgs.buildEmscriptenPackage
      
Both are interesting concepts.
      A special requirement of
      pkgs.buildEmscriptenPackage is that doCheck =
      true is the default, meaning that each emscriptenPackage requires a
      checkPhase to be implemented.
    
        Use export EMCC_DEBUG=2 from within an
        emscriptenPackage’s phase to get more detailed
        debug output about what is going wrong.
       
        The ~/.emscripten cache requires us to set
        HOME=$TMPDIR in individual phases. This makes
        compilation slower but also makes it more deterministic.
      
      This example uses zlib from nixpkgs but instead of
      compiling C to
      ELF it compiles
      C to
      JS since we were using
      pkgs.zlib.override and changed stdenv to
       pkgs.emscriptenStdenv. A few adaptations and hacks were
       put in place to make it work. One advantage is that when
      pkgs.zlib is updated, it will automatically update
      this package as well. However, this can also be the downside…
     
      See the zlib example:
     
zlib = (pkgs.zlib.override {
  stdenv = pkgs.emscriptenStdenv;
}).overrideDerivation
(old: rec {
  buildInputs = old.buildInputs ++ [ pkgconfig ];
  # we need to reset this setting!
  NIX_CFLAGS_COMPILE="";
  configurePhase = ''
    # FIXME: Some tests require writing at $HOME
    HOME=$TMPDIR
    runHook preConfigure
    #export EMCC_DEBUG=2
    emconfigure ./configure --prefix=$out --shared
    runHook postConfigure
  '';
  dontStrip = true;
  outputs = [ "out" ];
  buildPhase = ''
    emmake make
  '';
  installPhase = ''
    emmake make install
  '';
  checkPhase = ''
    echo "================= testing zlib using node ================="
    echo "Compiling a custom test"
    set -x
    emcc -O2 -s EMULATE_FUNCTION_POINTER_CASTS=1 test/example.c -DZ_SOLO \
    libz.so.${old.version} -I . -o example.js
    echo "Using node to execute the test"
    ${pkgs.nodejs}/bin/node ./example.js 
    set +x
    if [ $? -ne 0 ]; then
      echo "test failed for some reason"
      exit 1;
    else
      echo "it seems to work! very good."
    fi
    echo "================= /testing zlib using node ================="
  '';
  postPatch = pkgs.stdenv.lib.optionalString pkgs.stdenv.isDarwin ''
    substituteInPlace configure \
      --replace '/usr/bin/libtool' 'ar' \
      --replace 'AR="libtool"' 'AR="ar"' \
      --replace 'ARFLAGS="-o"' 'ARFLAGS="-r"'
  '';
});
       This xmlmirror example features an emscriptenPackage
      which is defined completely from this context and no
      pkgs.zlib.override is used.
     
xmlmirror = pkgs.buildEmscriptenPackage rec {
  name = "xmlmirror";
  buildInputs = [ pkgconfig autoconf automake libtool gnumake libxml2 nodejs openjdk json_c ];
  nativeBuildInputs = [ pkgconfig zlib ];
  src = pkgs.fetchgit {
    url = "https://gitlab.com/odfplugfest/xmlmirror.git";
    rev = "4fd7e86f7c9526b8f4c1733e5c8b45175860a8fd";
    sha256 = "1jasdqnbdnb83wbcnyrp32f36w3xwhwp0wq8lwwmhqagxrij1r4b";
  };
  configurePhase = ''
    rm -f fastXmlLint.js*
    # a fix for ERROR:root:For asm.js, TOTAL_MEMORY must be a multiple of 16MB, was 234217728
    # https://gitlab.com/odfplugfest/xmlmirror/issues/8
    sed -e "s/TOTAL_MEMORY=234217728/TOTAL_MEMORY=268435456/g" -i Makefile.emEnv
    # https://github.com/kripken/emscripten/issues/6344
    # https://gitlab.com/odfplugfest/xmlmirror/issues/9
    sed -e "s/\$(JSONC_LDFLAGS) \$(ZLIB_LDFLAGS) \$(LIBXML20_LDFLAGS)/\$(JSONC_LDFLAGS) \$(LIBXML20_LDFLAGS) \$(ZLIB_LDFLAGS) /g" -i Makefile.emEnv
    # https://gitlab.com/odfplugfest/xmlmirror/issues/11
    sed -e "s/-o fastXmlLint.js/-s EXTRA_EXPORTED_RUNTIME_METHODS='[\"ccall\", \"cwrap\"]' -o fastXmlLint.js/g" -i Makefile.emEnv
  '';
  buildPhase = ''
    HOME=$TMPDIR
    make -f Makefile.emEnv
  '';
  outputs = [ "out" "doc" ];
  installPhase = ''
    mkdir -p $out/share
    mkdir -p $doc/share/${name}
    cp Demo* $out/share
    cp -R codemirror-5.12 $out/share
    cp fastXmlLint.js* $out/share
    cp *.xsd $out/share
    cp *.js $out/share
    cp *.xhtml $out/share
    cp *.html $out/share
    cp *.json $out/share
    cp *.rng $out/share
    cp README.md $doc/share/${name}
  '';
  checkPhase = ''
  '';
}; 
      Use nix-shell -I nixpkgs=/some/dir/nixpkgs -A
       emscriptenPackages.libz and from there you can go through the
      individual steps. This makes it easy to build a good unit
      test or list the files of the project.
     
        nix-shell -I nixpkgs=/some/dir/nixpkgs -A
        emscriptenPackages.libz
       
        cd /tmp/
       
        unpackPhase
       
cd libz-1.2.3
        configurePhase
       
        buildPhase
       
… happy hacking…
     Using this toolchain makes it easy to leverage nix from
     NixOS, macOS or even Windows (WSL + Ubuntu + Nix). This toolchain is
     reproducible, behaves like the rest of the packages from nixpkgs, and
     contains a set of well-working examples to learn and adapt from.
    
If in trouble, ask the maintainers.
Some common issues when packaging software for Darwin:
      The Darwin stdenv uses clang instead of gcc. When
      referring to the compiler $CC or cc
       will work in both cases. Some builds hardcode gcc/g++ in their build
       scripts; that can usually be fixed by using something like
      makeFlags = [ "CC=cc" ]; or by patching the build
      scripts.
     
      stdenv.mkDerivation {
        name = "libfoo-1.2.3";
        # ...
        buildPhase = ''
          $CC -o hello hello.c
        '';
      }
    
       On Darwin, libraries are linked using absolute paths; libraries are
       resolved by their install_name at link time. Sometimes
      packages won't set this correctly causing the library lookups to fail at
      runtime. This can be fixed by adding extra linker flags or by running
      install_name_tool -id during the
      fixupPhase.
     
      stdenv.mkDerivation {
        name = "libfoo-1.2.3";
        # ...
        makeFlags = stdenv.lib.optional stdenv.isDarwin "LDFLAGS=-Wl,-install_name,$(out)/lib/libfoo.dylib";
      }
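       Alternatively, the install_name can be fixed up after the build. A
       hedged sketch using install_name_tool in postFixup (the library name
       is hypothetical):
      
       stdenv.mkDerivation {
         name = "libfoo-1.2.3";
         # ...
         postFixup = stdenv.lib.optionalString stdenv.isDarwin ''
           # rewrite the install_name to point at the installed library
           install_name_tool -id $out/lib/libfoo.dylib $out/lib/libfoo.dylib
         '';
       }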
    
      Even if the libraries are linked using absolute paths and resolved via
      their install_name correctly, tests can sometimes fail
      to run binaries. This happens because the checkPhase
      runs before the libraries are installed.
     
      This can usually be solved by running the tests after the
      installPhase or alternatively by using
      DYLD_LIBRARY_PATH. More information about this
      variable can be found in the dyld(1) manpage.
     
      dyld: Library not loaded: /nix/store/7hnmbscpayxzxrixrgxvvlifzlxdsdir-jq-1.5-lib/lib/libjq.1.dylib
      Referenced from: /private/tmp/nix-build-jq-1.5.drv-0/jq-1.5/tests/../jq
      Reason: image not found
      ./tests/jqtest: line 5: 75779 Abort trap: 6
    
      stdenv.mkDerivation {
        name = "libfoo-1.2.3";
        # ...
        doInstallCheck = true;
        installCheckTarget = "check";
      }
    
      Some packages assume xcode is available and use xcrun
      to resolve build tools like clang, etc. This causes
      errors like xcode-select: error: no developer tools were found at
      '/Applications/Xcode.app' while the build doesn't actually depend
      on xcode.
     
      stdenv.mkDerivation {
        name = "libfoo-1.2.3";
        # ...
        prePatch = ''
          substituteInPlace Makefile \
              --replace '/usr/bin/xcrun clang' clang
        '';
      }
    
      The package xcbuild can be used to build projects that
      really depend on Xcode. However, this replacement is not 100% compatible
      with Xcode and can occasionally cause issues.
     
This chapter contains information about how to use and maintain the Nix expressions for a number of specific packages, such as the Linux kernel or X.org.
    The Nix expressions to build the Linux kernel are in
    pkgs/os-specific/linux/kernel.
   
    The function that builds the kernel has an argument
    kernelPatches which should be a list of {name,
    patch, extraConfig} attribute sets, where name
    is the name of the patch (which is included in the kernel’s
    meta.description attribute), patch is
    the patch itself (possibly compressed), and extraConfig
    (optional) is a string specifying extra options to be concatenated to the
    kernel configuration file (.config).
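     A hedged sketch of passing such a list by overriding one of the kernel
     attributes; the patch name, patch file and config option are
     hypothetical:
    
pkgs.linux.override {
  kernelPatches = [
    {
      name = "foo-fix";                    # hypothetical patch name
      patch = ./patches/foo-fix.patch;     # hypothetical patch file
      # extra options appended to the kernel .config (without CONFIG_ prefix)
      extraConfig = ''
        FOO_SUPPORT y
      '';
    }
  ];
}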
   
    The kernel derivation exports an attribute features
    specifying whether optional functionality is or isn’t enabled. This
    is used in NixOS to implement kernel-specific behaviour. For instance, if
    the kernel has the iwlwifi feature (i.e. has built-in
    support for Intel wireless chipsets), then NixOS doesn’t have to
    build the external iwlwifi package:
modulesTree = [kernel] ++ pkgs.lib.optional (!kernel.features ? iwlwifi) kernelPackages.iwlwifi ++ ...;
How to add a new (major) version of the Linux kernel to Nixpkgs:
       Copy the old Nix expression (e.g. linux-2.6.21.nix)
       to the new one (e.g. linux-2.6.22.nix) and update
       it.
      
       Add the new kernel to all-packages.nix (e.g.,
       create an attribute kernel_2_6_22).
      
       Now we’re going to update the kernel configuration. First unpack
       the kernel. Then for each supported platform (i686,
       x86_64, uml) do the following:
       
           Make a copy of the old config (e.g.
          config-2.6.21-i686-smp) to the new one (e.g.
          config-2.6.22-i686-smp).
         
          Copy the config file for this platform (e.g.
          config-2.6.22-i686-smp) to
          .config in the kernel source tree.
         
           Run make oldconfig
           ARCH={i386,x86_64,um} and answer
           all questions. (For the uml configuration, also add
           SHELL=bash.) Make sure to keep the configuration
          consistent between platforms (i.e. don’t enable some feature
          on i686 and disable it on
          x86_64).
         
          If needed you can also run make menuconfig:
$ nix-env -i ncurses
$ export NIX_CFLAGS_LINK=-lncurses
$ make menuconfig ARCH=arch
          Copy .config over the new config file (e.g.
          config-2.6.22-i686-smp).
         
       Test building the kernel: nix-build -A kernel_2_6_22.
       If it compiles, ship it! For extra credit, try booting NixOS with it.
      
       It may be that the new kernel requires updating the external kernel
       modules and kernel-dependent packages listed in the
       linuxPackagesFor function in
       all-packages.nix (such as the NVIDIA drivers, AUFS,
       etc.). If the updated packages aren’t backwards compatible with
       older kernels, you may need to keep the older versions around.
      
    The Nix expressions for the X.org packages reside in
    pkgs/servers/x11/xorg/default.nix. This file is
    automatically generated from lists of tarballs in an X.org release. As such
    it should not be modified directly; rather, you should modify the lists,
    the generator script or the file
    pkgs/servers/x11/xorg/overrides.nix, in which you can
    override or add to the derivations produced by the generator.
   
The generator is invoked as follows:
$ cd pkgs/servers/x11/xorg
$ cat tarballs-7.5.list extra.list old.list \
  | perl ./generate-expr-from-tarballs.pl
    For each of the tarballs in the .list files, the
    script downloads it, unpacks it, and searches its
    configure.ac and *.pc.in files
    for dependencies. This information is used to generate
    default.nix. The generator caches downloaded tarballs
    between runs. Pay close attention to the NOT FOUND:
    name messages at the end of the run,
    since they may indicate missing dependencies. (Some might be optional
    dependencies, however.)
    A file like tarballs-7.5.list contains all tarballs in
    an X.org release. It can be generated like this:
$ export i="mirror://xorg/X11R7.4/src/everything/"
$ cat $(PRINT_PATH=1 nix-prefetch-url $i | tail -n 1) \
  | perl -e 'while (<>) { if (/(href|HREF)="([^"]*.bz2)"/) { print "$ENV{'i'}$2\n"; }; }' \
  | sort > tarballs-7.4.list
    extra.list contains libraries that aren’t part
    of X.org proper, but are closely related to it, such as
    libxcb. old.list contains some
    packages that were removed from X.org, but are still needed by some people
    or by other packages (such as imake).
   
    If the expression for a package requires derivation attributes that the
    generator cannot figure out automatically (say, patches
    or a postInstall hook), you should modify
    pkgs/servers/x11/xorg/overrides.nix.
   
    The Nix expressions related to the Eclipse platform and IDE are in
    pkgs/applications/editors/eclipse.
   
Nixpkgs provides a number of packages that will install Eclipse in its various forms. These range from the bare-bones Eclipse Platform to the more fully featured Eclipse SDK or Scala-IDE packages, and multiple versions are often available. It is possible to list available Eclipse packages by issuing the command:
$ nix-env -f '<nixpkgs>' -qaP -A eclipses --description
Once an Eclipse variant is installed it can be run using the eclipse command, as expected. From within Eclipse it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resembles a manually installed Eclipse.
    If you prefer to install plugins in a more declarative manner, then Nixpkgs
    also offers a number of Eclipse plugins that can be installed in an
    Eclipse environment. This type of environment is
    created using the function eclipseWithPlugins found
    inside the nixpkgs.eclipses attribute set. This function
    takes as argument { eclipse, plugins ? [], jvmArgs ? []
    } where eclipse is one of the Eclipse
    packages described above, plugins is a list of plugin
    derivations, and jvmArgs is a list of arguments given to
    the JVM running Eclipse. For example, say you wish to install the
    latest Eclipse Platform with the popular Eclipse Color Theme plugin and
    also allow Eclipse to use more RAM. You could then add
packageOverrides = pkgs: {
  myEclipse = with pkgs.eclipses; eclipseWithPlugins {
    eclipse = eclipse-platform;
    jvmArgs = [ "-Xmx2048m" ];
    plugins = [ plugins.color-theme ];
  };
}
    to your Nixpkgs configuration
    (~/.config/nixpkgs/config.nix) and install it by
    running nix-env -f '<nixpkgs>' -iA myEclipse and
    afterward run Eclipse as usual. It is possible to find out which plugins
    are available for installation using eclipseWithPlugins
    by running
$ nix-env -f '<nixpkgs>' -qaP -A eclipses.plugins --description
    If there is a need to install plugins that are not available in Nixpkgs
    then it may be possible to define these plugins outside Nixpkgs using the
    buildEclipseUpdateSite and
    buildEclipsePlugin functions found in the
    nixpkgs.eclipses.plugins attribute set. Use the
    buildEclipseUpdateSite function to install a plugin
    distributed as an Eclipse update site. This function takes { name,
    src } as argument where src indicates the
    Eclipse update site archive. All Eclipse features and plugins within the
    downloaded update site will be installed. When an update site archive is
    not available then the buildEclipsePlugin function can
    be used to install a plugin that consists of a pair of feature and plugin
    JARs. This function takes an argument { name, srcFeature,
    srcPlugin } where srcFeature and
    srcPlugin are the feature and plugin JARs, respectively.
   
Expanding the previous example with two plugins defined using the above functions, we have
packageOverrides = pkgs: {
  myEclipse = with pkgs.eclipses; eclipseWithPlugins {
    eclipse = eclipse-platform;
    jvmArgs = [ "-Xmx2048m" ];
    plugins = [
      plugins.color-theme
      (plugins.buildEclipsePlugin {
        name = "myplugin1-1.0";
        srcFeature = fetchurl {
          url = "http://…/features/myplugin1.jar";
          sha256 = "123…";
        };
        srcPlugin = fetchurl {
          url = "http://…/plugins/myplugin1.jar";
          sha256 = "123…";
        };
      })
      (plugins.buildEclipseUpdateSite {
        name = "myplugin2-1.0";
        src = fetchurl {
          stripRoot = false;
          url = "http://…/myplugin2.zip";
          sha256 = "123…";
        };
      })
    ];
  };
}
    To update Elm compiler, see
    nixpkgs/pkgs/development/compilers/elm/README.md.
   
To package Elm applications, read about elm2nix.
Some packages provide shell integration to make them more useful. But unlike other systems, Nix doesn't have a standard share directory location. This is why a number of PACKAGE-share scripts are shipped that print the location of the corresponding shared folder. The current list of such packages is as follows:
       autojump: autojump-share
      
       fzf: fzf-share
      
    E.g. autojump can then be used in .bashrc like this:
source "$(autojump-share)/autojump.bash"
     Steam is distributed as a .deb file, for now only as
     an i686 package (the amd64 package only has documentation). When unpacked,
     it has a script called steam that in Ubuntu (their
     target distro) would go to /usr/bin. When run for
     the first time, this script copies some files to the user's home, which
     include another script that is ultimately responsible for launching the
     steam binary, which is also in $HOME.
    
Nix problems and constraints:
        We don't have /bin/bash and many scripts point
        there. Similarly for /usr/bin/python.
       
        We don't have the dynamic loader in /lib.
       
        The steam.sh script in $HOME cannot be patched,
        as it is checked and rewritten by steam.
       
The steam binary cannot be patched either, as it is also checked.
The current approach to deploying Steam in NixOS is composing an FHS-compatible chroot environment, as documented here. This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non-FHS environment.
For 64-bit systems it's important to have
hardware.opengl.driSupport32Bit = true;
     in your /etc/nixos/configuration.nix. You'll also
     need
hardware.pulseaudio.support32Bit = true;
if you are using PulseAudio - this will enable 32-bit ALSA application integration. To use the Steam controller or other Steam-supported controllers such as the DualShock 4 or Nintendo Switch Pro, you need to add
hardware.steam-hardware.enable = true;
to your configuration.
Try to run
strace steam
to see what is causing steam to fail.
           The newStdcpp parameter was removed since NixOS
           17.09 and should not be needed anymore.
          
Steam ships statically linked with a version of libcrypto that conflicts with the one dynamically loaded by radeonsi_dri.so. If you get the error
steam.sh: line 713: 7842 Segmentation fault (core dumped)
have a look at this pull request.
There is no java in steam chrootenv by default. If you get a message like
/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found
You need to add
steam.override { withJava = true; }
to your configuration.
     The Emacs package comes with some extra helpers to make it easier to
     configure. emacsWithPackages allows you to manage
     packages from ELPA. This means that you will not have to install those
     packages from within Emacs. For instance, if you wanted to use
     company, counsel,
     flycheck, ivy,
     magit, projectile, and
     use-package you could use this as a
     ~/.config/nixpkgs/config.nix override:
    
{
  packageOverrides = pkgs: with pkgs; {
    myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
      company
      counsel
      flycheck
      ivy
      magit
      projectile
      use-package
    ]));
  }
}
     You can install it like any other packages via nix-env -iA
     myEmacs. However, this will only install those packages. It will
     not configure them for us. To do this, we need to
     provide a configuration file. Luckily, it is possible to do this from
     within Nix! By modifying the above example, we can make Emacs load a
     custom config file. The key is to create a package that provides a
     default.el file in
     /share/emacs/site-start/. Emacs knows to load this
     file automatically when it starts.
    
{
  packageOverrides = pkgs: with pkgs; rec {
    myEmacsConfig = writeText "default.el" ''
;; initialize package
(require 'package)
(package-initialize 'noactivate)
(eval-when-compile
  (require 'use-package))
;; load some packages
(use-package company
  :bind ("<C-tab>" . company-complete)
  :diminish company-mode
  :commands (company-mode global-company-mode)
  :defer 1
  :config
  (global-company-mode))
(use-package counsel
  :commands (counsel-descbinds)
  :bind (([remap execute-extended-command] . counsel-M-x)
         ("C-x C-f" . counsel-find-file)
         ("C-c g" . counsel-git)
         ("C-c j" . counsel-git-grep)
         ("C-c k" . counsel-ag)
         ("C-x l" . counsel-locate)
         ("M-y" . counsel-yank-pop)))
(use-package flycheck
  :defer 2
  :config (global-flycheck-mode))
(use-package ivy
  :defer 1
  :bind (("C-c C-r" . ivy-resume)
         ("C-x C-b" . ivy-switch-buffer)
         :map ivy-minibuffer-map
         ("C-j" . ivy-call))
  :diminish ivy-mode
  :commands ivy-mode
  :config
  (ivy-mode 1))
(use-package magit
  :defer
  :if (executable-find "git")
  :bind (("C-x g" . magit-status)
         ("C-x G" . magit-dispatch-popup))
  :init
  (setq magit-completing-read-function 'ivy-completing-read))
(use-package projectile
  :commands projectile-mode
  :bind-keymap ("C-c p" . projectile-command-map)
  :defer 5
  :config
  (projectile-global-mode))
    '';
    myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
      (runCommand "default.el" {} ''
mkdir -p $out/share/emacs/site-lisp
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
'')
      company
      counsel
      flycheck
      ivy
      magit
      projectile
      use-package
    ]));
  };
}
This provides a fairly full Emacs start file. It will be loaded in addition to the user's personal config. You can always disable it by passing -q to the Emacs command.
     Sometimes emacsWithPackages is not enough, as this
     package set has some priorities imposed on packages (with the lowest
     priority assigned to Melpa Unstable, and the highest for packages manually
     defined in pkgs/top-level/emacs-packages.nix). But
     you can't control these priorities when some package is installed as a
     dependency. You can override it on a per-package basis, providing all the
     required dependencies manually - but it's tedious and there is always a
     possibility that an unwanted dependency will sneak in through some other
     package. To completely override such a package you can use
     overrideScope'.
    
overrides = self: super: rec {
  haskell-mode = self.melpaPackages.haskell-mode;
  ...
};
((emacsPackagesNgGen emacs).overrideScope' overrides).emacsWithPackages (p: with p; [
  # here both these packages will use the haskell-mode of our own choice
  ghc-mod
  dante
])
Weechat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration such as
weechat.override {
  configure = { availablePlugins, ... }: {
    plugins = with availablePlugins; [ python perl ];
  };
}
    If the configure function returns an attrset without the
    plugins attribute, availablePlugins
    will be used automatically.
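    For example, the following override (a minimal sketch reusing the
    init command shown further below) customizes the
    configuration without restricting the plugin set, so all of
    availablePlugins is used:

weechat.override {
  configure = { availablePlugins, ... }: {
    # no `plugins` attribute is returned, so all available plugins are kept
    init = ''
      /set foo bar
    '';
  };
}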
   
    The plugins currently available are python,
    perl, ruby, guile,
    tcl and lua.
   
    The python and perl plugins allow the addition of extra libraries. For
    instance, the inotify.py script in weechat-scripts
    requires D-Bus or libnotify, and the fish.py script
    requires pycrypto. To use these scripts, use the plugin's
    withPackages attribute:
weechat.override { configure = {availablePlugins, ...}: {
    plugins = with availablePlugins; [
            (python.withPackages (ps: with ps; [ pycrypto python-dbus ]))
        ];
    };
}
In order to also keep all default plugins installed, it is possible to use the following method:
weechat.override { configure = { availablePlugins, ... }: {
  plugins = builtins.attrValues (availablePlugins // {
    python = availablePlugins.python.withPackages (ps: with ps; [ pycrypto python-dbus ]);
  });
}; }
    WeeChat allows defaults to be set on startup using the
    --run-command argument. The configure method
    can be used to pass commands to the program:
weechat.override {
  configure = { availablePlugins, ... }: {
    init = ''
      /set foo bar
      /server add freenode chat.freenode.org
    '';
  };
}
    Further values can be added to the list of commands when running
    weechat --run-command "your-commands".
   
    Additionally it's possible to specify scripts to be loaded when starting
    weechat. These will be loaded before the commands from
    init:
weechat.override {
  configure = { availablePlugins, ... }: {
    scripts = with pkgs.weechatScripts; [
      weechat-xmpp weechat-matrix-bridge wee-slack
    ];
    init = ''
      /set plugins.var.python.jabber.key "val"
    '';
  };
}
    In nixpkgs there's a subpackage which contains
    derivations for WeeChat scripts. Such derivations expect a
    passthru.scripts attribute which contains a list of all
    scripts inside the store path. Furthermore all scripts have to live in
    $out/share. An exemplary derivation looks like this:
{ stdenv, fetchurl }:
stdenv.mkDerivation {
  name = "exemplary-weechat-script";
  src = fetchurl {
    url = "https://scripts.tld/your-scripts.tar.gz";
    sha256 = "...";
  };
  passthru.scripts = [ "foo.py" "bar.lua" ];
  installPhase = ''
    mkdir $out/share
    cp foo.py $out/share
    cp bar.lua $out/share
  '';
}
The Citrix Receiver is a remote desktop viewer which provides access to XenDesktop installations.
     The tarball archive needs to be downloaded manually, as the vendor's
     license agreements need to be accepted first. This is available at
     the
     download
     page at citrix.com. Then run nix-prefetch-url
     file://$PWD/linuxx64-$version.tar.gz. With the archive available
     in the store the package can be built and installed with Nix.
    
     Note: it's recommended to install Citrix
     Receiver using nix-env -i or globally to
     ensure that the .desktop files are installed properly
     into $XDG_CONFIG_DIRS. Otherwise it won't be possible
     to open .ica files automatically from the browser to
     start a Citrix connection.
    
     The Citrix Receiver in nixpkgs
     trusts several certificates
     from the
     Mozilla database by default. However several companies using Citrix
     might require their own corporate certificate. On distros with imperative
     packaging these certs can be stored easily in
     $ICAROOT;
     however, this directory is a store path in nixpkgs. In
     order to work around this issue the package provides a simple mechanism to
     add custom certificates without rebuilding the entire package using
     symlinkJoin:
with import <nixpkgs> { config.allowUnfree = true; };
let extraCerts = [ ./custom-cert-1.pem ./custom-cert-2.pem /* ... */ ]; in
citrix_receiver.override {
  inherit extraCerts;
}
This package is an ibus-based completion method to speed up typing.
     IBus needs to be configured accordingly to activate
     typing-booster. The configuration depends on the
     desktop manager in use. For detailed instructions, please refer to the
     upstream
     docs.
    
     On NixOS you need to explicitly enable ibus with given
     engines before customizing your desktop to use
     typing-booster. This can be achieved using the
     ibus module:
{ pkgs, ... }: {
  i18n.inputMethod = {
    enabled = "ibus";
    ibus.engines = with pkgs.ibus-engines; [ typing-booster ];
  };
}
     The IBus engine is based on hunspell to support
     completion in many languages. By default the dictionaries
     de-de, en-us,
     es-es, it-it,
     sv-se and sv-fi are in use. To add
     another dictionary, the package can be overridden like this:
ibus-engines.typing-booster.override {
  langs = [ "de-at" "en-gb" ];
}
     Note: each language passed to langs must be
     an attribute name in pkgs.hunspellDicts.
    
     The ibus-engines.typing-booster package contains a
     program named emoji-picker. To display all emojis
     correctly, a special font such as noto-fonts-emoji is
     needed:
    
On NixOS it can be installed using the following expression:
{ pkgs, ... }: {
  fonts.fonts = with pkgs; [ noto-fonts-emoji ];
}
This chapter describes how to extend and change Nixpkgs using overlays. Overlays are used to add layers in the fixed-point used by Nixpkgs to compose the set of all packages.
Nixpkgs can be configured with a list of overlays, which are applied in order. This means that the order of the overlays can be significant if multiple layers override the same package.
    The list of overlays can be set either explicitly in a Nix expression, or
    through <nixpkgs-overlays> or user configuration
    files.
   
     On a NixOS system the value of the nixpkgs.overlays
     option, if present, is passed to the system Nixpkgs directly as an
     argument. Note that this does not affect the overlays for non-NixOS
     operations (e.g. nix-env), which are
     looked up independently.
    
     The list of overlays can be passed explicitly when importing nixpkgs, for
     example import <nixpkgs> { overlays = [ overlay1 overlay2
     ]; }.
    
      Further overlays can be added by calling
      pkgs.extend or pkgs.appendOverlays (see the sketch below),
     although it is often preferable to avoid these functions, because they
     recompute the Nixpkgs fixpoint, which is somewhat expensive to do.
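      A minimal sketch of pkgs.extend with a trivial overlay
      (the myAlias attribute is made up for illustration):

let
  pkgs = import <nixpkgs> { };
  extended = pkgs.extend (self: super: {
    # hypothetical alias added on top of the existing package set
    myAlias = super.hello;
  });
in extended.myAlias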
    
The list of overlays is determined as follows.
        First, if an
        overlays
        argument to the Nixpkgs function itself is given, then that is
        used and no path lookup will be performed.
       
        Otherwise, if the Nix path entry
        <nixpkgs-overlays> exists, we look for
        overlays at that path, as described below.
       
        See the section on NIX_PATH in the Nix manual for
        more details on how to set a value for
        <nixpkgs-overlays>.
       
        If one of ~/.config/nixpkgs/overlays.nix and
        ~/.config/nixpkgs/overlays/ exists, then we look
        for overlays at that path, as described below. It is an error if both
        exist.
       
If we are looking for overlays at a path, then there are two cases:
If the path is a file, then the file is imported as a Nix expression and used as the list of overlays.
If the path is a directory, then we take the content of the directory, order it lexicographically, and attempt to interpret each as an overlay by:
           Importing the file, if it is a .nix file.
          
           Importing a top-level default.nix file, if it
           is a directory.
          
     Because overlays that are set in NixOS configuration do not affect
     non-NixOS operations such as nix-env, the
     overlays.nix option provides a convenient way to use
     the same overlays for a NixOS system configuration and user configuration:
     the same file can be used as overlays.nix and
     imported as the value of nixpkgs.overlays.
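      As an illustration (the file path and overlay body are placeholders),
      such a shared file and its use from a NixOS configuration could look
      like:

# ~/.config/nixpkgs/overlays.nix: a list of overlays
[
  (self: super: {
    # ... your overrides ...
  })
]

# /etc/nixos/configuration.nix (fragment, hypothetical absolute path)
{
  nixpkgs.overlays = import /home/alice/.config/nixpkgs/overlays.nix;
}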
    
    Overlays are Nix functions which accept two arguments, conventionally
    called self and super, and return a
    set of packages. For example, the following is a valid overlay.
   
self: super:
{
  boost = super.boost.override {
    python = self.python3;
  };
  rr = super.callPackage ./pkgs/rr {
    stdenv = self.stdenv_32bit;
  };
}
    The first argument (self) corresponds to the final
    package set. You should use this set for the dependencies of all packages
    specified in your overlay. For example, all the dependencies of
    rr in the example above come from
    self, as well as the overridden dependencies used in the
    boost override.
   
    The second argument (super) corresponds to the result of
    the evaluation of the previous stages of Nixpkgs. It does not contain any
    of the packages added by the current overlay, nor any of the following
    overlays. This set should be used either to refer to packages you wish to
    override, or to access functions defined in Nixpkgs. For example, the
    original recipe of boost in the above example comes
    from super, as well as the
    callPackage function.
   
    The value returned by this function should be a set similar to
    pkgs/top-level/all-packages.nix, containing overridden
    and/or new packages.
   
    Overlays are similar to other methods for customizing Nixpkgs, in
    particular the packageOverrides attribute described in
    Section 6.5, “Modify packages via packageOverrides”. Indeed,
    packageOverrides acts as an overlay with only the
    super argument. It is therefore appropriate for basic
    use, but overlays are more powerful and easier to distribute.
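    As a rough comparison (the myHello attribute is purely
    illustrative), the following two definitions have the same effect for a
    simple override:

# packageOverrides form in ~/.config/nixpkgs/config.nix
{
  packageOverrides = pkgs: {
    myHello = pkgs.hello;
  };
}

# equivalent overlay form
self: super: {
  myHello = super.hello;
}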
   
Use 2 spaces of indentation per indentation level in Nix expressions, 4 spaces in shell scripts.
      Do not use tab characters, i.e. configure your editor to use soft tabs.
      For instance, use (setq-default indent-tabs-mode nil)
      in Emacs. Everybody has different tab settings so it’s asking for
      trouble.
     
      Use lowerCamelCase for variable names, not
      UpperCamelCase. Note, this rule does not apply to
      package attribute names, which instead follow the rules in
      Section 13.2, “Package naming”.
     
Function calls with attribute set arguments are written as
foo {
  arg = ...;
}
not
foo
{
  arg = ...;
}
Also fine is
foo { arg = ...; }
if it's a short call.
In attribute sets or lists that span multiple lines, the attribute names or list elements should be aligned:
# A long list.
list = [
  elem1
  elem2
  elem3
];
# A long attribute set.
attrs = {
  attr1 = short_expr;
  attr2 =
    if true then big_expr else big_expr;
};
# Combined
listOfAttrs = [
  {
    attr1 = 3;
    attr2 = "fff";
  }
  {
    attr1 = 5;
    attr2 = "ggg";
  }
];
Short lists or attribute sets can be written on one line:
# A short list.
list = [ elem1 elem2 elem3 ];
# A short set.
attrs = { x = 1280; y = 1024; };
Breaking in the middle of a function argument can give hard-to-read code, like
someFunction { x = 1280;
  y = 1024; } otherArg
  yetAnotherArg
(especially if the argument is very large, spanning multiple lines).
Better:
someFunction
  { x = 1280; y = 1024; }
  otherArg
  yetAnotherArg
or
let res = { x = 1280; y = 1024; };
in someFunction res otherArg yetAnotherArg
The bodies of functions, asserts, and withs are not indented to prevent a lot of superfluous indentation levels, i.e.
{ arg1, arg2 }:
assert system == "i686-linux";
stdenv.mkDerivation { ...
not
{ arg1, arg2 }:
  assert system == "i686-linux";
    stdenv.mkDerivation { ...
Function formal arguments are written as:
{ arg1, arg2, arg3 }:
but if they don't fit on one line they're written as:
{ arg1, arg2, arg3
, arg4, ...
, # Some comment...
  argN
}:
Functions should list their expected arguments as precisely as possible. That is, write
{ stdenv, fetchurl, perl }: ...
instead of
args: with args; ...
or
{ stdenv, fetchurl, perl, ... }: ...
      For functions that are truly generic in the number of arguments (such as
      wrappers around mkDerivation) that have some required
      arguments, you should write them using an @-pattern:
{ stdenv, doCoverageAnalysis ? false, ... } @ args:
stdenv.mkDerivation (args // {
  ... if doCoverageAnalysis then "bla" else "" ...
})
instead of
args:
args.stdenv.mkDerivation (args // {
  ... if args ? doCoverageAnalysis && args.doCoverageAnalysis then "bla" else "" ...
})
The key words must, must not, required, shall, shall not, should, should not, recommended, may, and optional in this section are to be interpreted as described in RFC 2119. Only emphasized words are to be interpreted in this way.
In Nixpkgs, there are generally three different names associated with a package:
       The name attribute of the derivation (excluding the
       version part). This is what most users see, in particular when using
       nix-env.
      
       The variable name used for the instantiated package in
       all-packages.nix, and when passing it as a
       dependency to other functions. Typically this is called the
       package attribute name. This is what Nix expression
       authors see. It can also be used when installing using nix-env
       -iA.
      
The filename for (the directory containing) the Nix expression.
    Most of the time, these are the same. For instance, the package
    e2fsprogs has a name attribute
    "e2fsprogs-version", is bound
    to the variable name e2fsprogs in
    all-packages.nix, and the Nix expression is in
    pkgs/os-specific/linux/e2fsprogs/default.nix.
   
There are a few naming guidelines:
       The name attribute should be
       identical to the upstream package name.
      
       The name attribute must not
       contain uppercase letters — e.g.,
       "mplayer-1.0rc2" instead of
       "MPlayer-1.0rc2".
      
       The version part of the name attribute
       must start with a digit (following a dash) —
       e.g., "hello-0.3.1rc2".
      
       If a package is not a release but a commit from a repository, then the
       version part of the name must be the date of that
       (fetched) commit. The date must be in
       "YYYY-MM-DD" format. Also append
       "unstable" to the name - e.g.,
       "pkgname-unstable-2014-09-23".
      
       Dashes in the package name should be preserved in
       new variable names, rather than converted to underscores or camel cased
       — e.g., http-parser instead of
       http_parser or httpParser. The
       hyphenated style is preferred in all three package names.
      
       If there are multiple versions of a package, this
       should be reflected in the variable names in
       all-packages.nix, e.g.
       json-c-0-9 and json-c-0-11. If
       there is an obvious “default” version, make an attribute
       like json-c = json-c-0-9;. See also
       Section 13.3.2, “Versioning”
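        A sketch of what this could look like in
        all-packages.nix (the callPackage paths
        are hypothetical):

json-c-0-9  = callPackage ../development/libraries/json-c/0.9.nix  { };
json-c-0-11 = callPackage ../development/libraries/json-c/0.11.nix { };
# the obvious default version
json-c = json-c-0-9;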
      
    Names of files and directories should be in lowercase, with dashes between
    words — not in camel case. For instance, it should be
    all-packages.nix, not
    allPackages.nix or
    AllPackages.nix.
   
     Each package should be stored in its own directory somewhere in the
     pkgs/ tree, i.e. in
     pkgs/category/subcategory/.../pkgname.
     Below are some rules for picking the right category for a package. Many
     packages fall under several categories; what matters is the
     primary purpose of a package. For example, the
     libxml2 package builds both a library and some tools;
     but it’s a library foremost, so it goes under
     pkgs/development/libraries.
    
     When in doubt, consider refactoring the pkgs/ tree,
     e.g. creating new categories or splitting up an existing category.
    
           development/libraries (e.g.
           libxml2)
          
           development/compilers (e.g.
           gcc)
          
           development/interpreters (e.g.
           guile)
          
              development/tools/parsing (e.g.
              bison, flex)
             
              development/tools/build-managers (e.g.
              gnumake)
             
              development/tools/misc (e.g.
              binutils)
             
           development/misc
          
(A tool is a relatively small program, especially one intended to be used non-interactively.)
           tools/networking (e.g.
           wget)
          
           tools/text (e.g.
           diffutils)
          
           tools/system (e.g. cron)
          
           tools/archivers (e.g. zip,
           tar)
          
           tools/compression (e.g.
           gzip, bzip2)
          
           tools/security (e.g. nmap,
           gnupg)
          
           tools/misc
          
        shells (e.g. bash)
       
           servers/http (e.g.
           apache-httpd)
          
           servers/x11 (e.g. xorg
           — this includes the client libraries and programs)
          
           servers/misc
          
        desktops (e.g. kde,
        gnome, enlightenment)
       
        applications/window-managers (e.g.
        awesome, stumpwm)
       
A (typically large) program with a distinct user interface, primarily used interactively.
           applications/version-management (e.g.
           subversion)
          
           applications/video (e.g.
           vlc)
          
           applications/graphics (e.g.
           gimp)
          
              applications/networking/mailreaders (e.g.
              thunderbird)
             
              applications/networking/newsreaders (e.g.
              pan)
             
              applications/networking/browsers (e.g.
              firefox)
             
              applications/networking/misc
             
           applications/misc
          
           data/fonts
          
              data/sgml+xml/schemas/xml-dtd (e.g.
              docbook)
             
(Okay, these are executable...)
              data/sgml+xml/stylesheets/xslt (e.g.
              docbook-xsl)
             
        games
       
        misc
       
Because every version of a package in Nixpkgs creates a potential maintenance burden, old versions of a package should not be kept unless there is a good reason to do so. For instance, Nixpkgs contains several versions of GCC because other packages don’t build with the latest version of GCC. Other examples are having both the latest stable and latest pre-release version of a package, or to keep several major releases of an application that differ significantly in functionality.
     If there is only one version of a package, its Nix expression should be
     named e2fsprogs/default.nix. If there are multiple
     versions, this should be reflected in the filename, e.g.
     e2fsprogs/1.41.8.nix and
     e2fsprogs/1.41.9.nix. The version in the filename
     should leave out unnecessary detail. For instance, if we keep the latest
     Firefox 2.0.x and 3.5.x versions in Nixpkgs, they should be named
     firefox/2.0.nix and
     firefox/3.5.nix, respectively (which, at a given
     point, might contain versions 2.0.0.20 and
     3.5.4). If a version requires many auxiliary files, you
     can use a subdirectory for each version, e.g.
     firefox/2.0/default.nix and
     firefox/3.5/default.nix.
    
     All versions of a package must be included in
     all-packages.nix to make sure that they evaluate
     correctly.
    
    There are multiple ways to fetch a package source in nixpkgs. The general
    guideline is that you should package reproducible sources with a high
    degree of availability. Right now there is only one fetcher which has
    mirroring support and that is fetchurl. Note that you
    should also prefer protocols which have a corresponding proxy environment
    variable.
   
    You can find many source fetch helpers in
    pkgs/build-support/fetch*.
   
    In the file pkgs/top-level/all-packages.nix you can find
    fetch helpers; these have names of the form fetchFrom*.
    The intention of these is to provide snapshot fetches using the same
    API as some of the version-controlled fetchers from
    pkgs/build-support/. As an example, going from bad to
    good:
    
       Bad: Uses git:// which won't be proxied.
src = fetchgit {
  url = "git://github.com/NixOS/nix.git";
  rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
  sha256 = "1cw5fszffl5pkpa6s6wjnkiv6lm5k618s32sp60kvmvpy7a2v9kg";
}
Better: This is ok, but an archive fetch will still be faster.
src = fetchgit {
  url = "https://github.com/NixOS/nix.git";
  rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
  sha256 = "1cw5fszffl5pkpa6s6wjnkiv6lm5k618s32sp60kvmvpy7a2v9kg";
}
Best: Fetches a snapshot archive and you get the rev you want.
src = fetchFromGitHub {
  owner = "NixOS";
  repo = "nix";
  rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
  sha256 = "1i2yxndxb6yc9l6c99pypbd92lfq5aac4klq7y2v93c9qvx2cgpc";
}
       Find the value to put as sha256 by running
       nix run -f '<nixpkgs>' nix-prefetch-github -c
       nix-prefetch-github --rev 1f795f9f44607cc5bec70d1300150bfefcef2aae NixOS
       nix or nix-prefetch-url --unpack
       https://github.com/NixOS/nix/archive/1f795f9f44607cc5bec70d1300150bfefcef2aae.tar.gz.
      
Preferred source hash type is sha256. There are several ways to get it.
      Prefetch URL (with nix-prefetch-XXX
      URL, where
      XXX is one of url,
      git, hg, cvs,
      bzr, svn). Hash is printed to
      stdout.
     
      Prefetch by package source (with nix-prefetch-url
      '<nixpkgs>' -A PACKAGE.src,
      where PACKAGE is the package attribute name). Hash
      is printed to stdout.
     
      This works well when you've upgraded an existing package version and want to
      find out the new hash, but is useless if the package can't be accessed by
      attribute or the package has multiple sources (.srcs,
      architecture-dependent sources, etc.).
     
      Upstream provided hash: use it when upstream provides
      sha256 or sha512 (when upstream
      provides md5, don't use it, compute
      sha256 instead).
     
      A little nuance is that nix-prefetch-* tools produce
      hash encoded with base32, but upstream usually
      provides hexadecimal (base16) encoding. Fetchers
      understand both formats. Nixpkgs does not standardize on any one format.
     
You can convert between formats with nix-hash, for example:
$ nix-hash --type sha256 --to-base32 HASH
      Extracting the hash from a local source tarball can be done with
      sha256sum. Use nix-prefetch-url
      file:///path/to/tarball if you want the base32 hash.
     
Fake hash: set a fake hash in the package expression, perform the build, and extract the correct hash from the error Nix prints.
      For package updates it is enough to change one symbol to make the hash fake.
      For new packages, you can use lib.fakeSha256,
      lib.fakeSha512 or any other fake hash.
     
      This is a last-resort method for when reconstructing the source URL is
      non-trivial and nix-prefetch-url -A isn't applicable
      (for example, one of the kodi dependencies). The easiest
      way then would be to replace the hash with a fake one and rebuild. The Nix
      build will fail and the error message will contain the desired hash.
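      A minimal sketch of this approach for a new package (the URL is
      hypothetical):

src = fetchurl {
  # hypothetical URL of the source tarball
  url = "https://example.org/foo-1.0.tar.gz";
  # fake hash; the failed build will print the real sha256 to use instead
  sha256 = lib.fakeSha256;
};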
     
This method has security problems. Check below for details.
Let's say a Man-in-the-Middle (MITM) sits close to your network. Then instead of fetching the source you can fetch malware, and instead of the source hash you get the hash of the malware. Here are security considerations for this scenario:
       http:// URLs are not secure to prefetch hash from;
      
hashes from upstream (in method 3) should be obtained via secure protocol;
       https:// URLs are secure in methods 1, 2, 3;
      
       https:// URLs are not secure in method 5. When
       obtaining hashes with fake hash method, TLS checks are disabled. So
       refetch source hash from several different networks to exclude MITM
        scenario. Alternatively, use the fake hash method to make Nix error out,
        but instead of extracting the hash from the error, extract the
        https:// URL and prefetch it with method 1.
      
    Patches available online should be retrieved using
    fetchpatch.
   
patches = [
  (fetchpatch {
    name = "fix-check-for-using-shared-freetype-lib.patch";
    url = "http://git.ghostscript.com/?p=ghostpdl.git;a=patch;h=8f5d285";
    sha256 = "1f0k043rng7f0rfl9hhb89qzvvksqmkrikmm38p61yfx51l325xr";
  })
];
    Otherwise, you can add a .patch file to the
    nixpkgs repository. In the interest of keeping our
    maintenance burden to a minimum, only patches that are unique to
    nixpkgs should be added in this way.
   
patches = [ ./0001-changes.patch ];
If you do need to create this sort of patch file, one way to do so is with git:
Move to the root directory of the source code you're patching.
$ cd the/program/source
If a git repository is not already present, create one and stage all of the source files.
$ git init
$ git add .
Edit some files to make whatever changes need to be included in the patch.
Use git to create a diff, and pipe the output to a patch file:
$ git diff > nixpkgs/pkgs/the/package/0001-changes.patch
Fork the repository on GitHub.
Create a branch for your future fix.
You can make a branch from a commit of your local nixos-version. That will help you avoid additional local compilations, because you will receive packages from the binary cache.
For example: nixos-version returns 15.05.git.0998212 (Dingo). So you can do:
$ git checkout 0998212
$ git checkout -b 'fix/pkg-name-update'
Please avoid working directly on the master branch.
Make commits of logical units.
If you removed pkgs, made some major NixOS changes etc., write about them in nixos/doc/manual/release-notes/rl-unstable.xml.
Check for unnecessary whitespace with git diff --check before committing.
Format the commit in the following way:
(pkg-name | nixos/<module>): (from -> to | init at version | refactor | etc) Additional information.
Examples:
nginx: init at 2.0.1
firefox: 54.0.1 -> 55.0
nixos/hydra: add bazBaz option
nixos/nginx: refactor config generation
Test your changes. If you work with
nixpkgs:
update pkg ->
nix-env -i pkg-name -f <path to your local nixpkgs folder>
add pkg ->
Make sure it's in pkgs/top-level/all-packages.nix
nix-env -i pkg-name -f <path to your local nixpkgs folder>
If you don't want to install pkg in your profile.
nix-build -A pkg-attribute-name <path to your local nixpkgs folder>/default.nix and check results in the folder result. It will appear in the same directory where you did nix-build.
If you did nix-env -i pkg-name you can do nix-env -e pkg-name to uninstall it from your system.
NixOS and its modules:
You can add a new module to your NixOS configuration file (usually /etc/nixos/configuration.nix) and do sudo nixos-rebuild test -I nixpkgs=<path to your local nixpkgs folder> --fast.
If you have commits like pkg-name: oh, forgot to insert whitespace, squash those commits. Use git rebase -i.
Rebase your branch against current master.
Push your changes to your fork of nixpkgs.
Create pull request:
Write the title in the format (pkg-name | nixos/<module>): improvement.
If you update the pkg, write versions from -> to.
Write in a comment whether you have tested your patch. Do not rely much on TravisCI.
If you make an improvement, write about your motivation.
Notify maintainers of the package. For example add to the message: cc @jagajaga @domenkozar.
The pull request template helps determine what steps have been made for a contribution so far, and will help guide maintainers on the status of a change. The motivation section of the PR should include any extra details the title does not address and link any existing issues related to the pull request.
When a PR is created, it will be pre-populated with some checkboxes detailed below:
     When sandbox builds are enabled, Nix will setup an isolated environment
     for each build process. It is used to remove further hidden dependencies
     set by the build environment to improve reproducibility. This includes
     access to the network during the build outside of
     fetch* functions and files outside the Nix store.
     Depending on the operating system, access to other resources is blocked as
     well (e.g. inter-process communication is isolated on Linux); see
     build-use-sandbox
     in Nix manual for details.
    
     Sandboxing is not enabled by default in Nix due to a small performance hit
     on each build. In pull requests for
     nixpkgs people
     are asked to test builds with sandboxing enabled (see Tested
     using sandboxing in the pull request template) because
     in https://nixos.org/hydra/
     sandboxing is also used.
    
Depending on whether you use NixOS or another platform, you can use one of the following methods to enable sandboxing before building the package:
        Globally enable sandboxing on NixOS:
        add the following to configuration.nix
nix.useSandbox = true;
        Globally enable sandboxing on non-NixOS
        platforms: add the following to:
        /etc/nix/nix.conf
build-use-sandbox = true
Many Nix packages are designed to run on multiple platforms. As such, it's important to let the maintainer know which platforms your changes have been tested on. It's not always practical to test a change on all platforms, and is not required for a pull request to be merged. Only check the systems you tested the build on in this section.
Packages with automated tests are much more likely to be merged in a timely fashion because it doesn't require as much manual testing by the maintainer to verify the functionality of the package. If there are existing tests for the package, they should be run to verify your changes do not break the tests. Tests only apply to packages with NixOS modules defined and can only be run on Linux. For more details on writing and running tests, see the section in the NixOS manual.
     If you are updating a package's version, you can use nox to make sure all
     packages that depend on the updated package still compile correctly. The
     nox-review utility can look for and build all
     dependencies, either based on uncommitted
     changes with the wip option or by specifying a GitHub pull
     request number.
    
review uncommitted changes:
nix-shell -p nox --run "nox-review wip"
review changes from pull request number 12345:
nix-shell -p nox --run "nox-review pr 12345"
     It's important to test any executables generated by a build when you
     change or create a package in nixpkgs. This can be done by looking in
     ./result/bin and running any files in there, or at a
     minimum, the main executable for the package. For example, if you make a
     change to texlive, you probably would only check the
     binaries associated with the change you made rather than testing all of
     them.
    
The last checkbox is Fits CONTRIBUTING.md. The contributing document has detailed information on standards the Nix community has for commit messages, reviews, licensing of contributions you make to the project, etc. Everyone should read and understand the standards the community has for contributing before submitting a pull request.
Make the appropriate changes in your branch.
Don't create additional commits; instead do
git rebase -i
and git push --force to your branch.
Commits must be sufficiently tested before being merged, both for the master and staging branches.
Hydra builds for master and staging should not be used as testing platform, it's a build farm for changes that have been already tested.
When changing the bootloader installation process, extra care must be taken. Grub installations cannot be rolled back, hence changes may break people's installations forever. For any non-trivial change to the bootloader please file a PR asking for review, especially from @edolstra.
It's only for non-breaking mass-rebuild commits. That means it's not to be used for testing, and changes must have been well tested already. Read policy here.
If the branch is already in a broken state, please refrain from adding extra new breakages. Stabilize it for a few days, merge into master, then resume development on staging. Keep an eye on the staging evaluations here. If any fixes for staging happen to be already in master, then master can be merged into staging.
If you're cherry-picking a commit to a stable release branch, always use git cherry-pick -xe and ensure the message contains a clear description about why this needs to be included in the stable branch.
An example of a cherry-picked commit would look like this:
nixos: Refactor the world.
The original commit message describing the reason why the world was torn apart.
(cherry picked from commit abcdef)
Reason: I just had a gut feeling that this would also be wanted by people from
the stone age.
      The following section is a draft, and the policy for reviewing is still being discussed in issues such as #11166 and #20836.
The Nixpkgs project receives a fairly high number of contributions via GitHub pull requests. Reviewing and approving these is an important task and a way to contribute to the project.
The high change rate of Nixpkgs makes any pull request that remains open for too long subject to conflicts that will require extra work from the submitter or the merger. Reviewing pull requests in a timely manner and being responsive to the comments is the key to avoid this issue. GitHub provides sort filters that can be used to see the most recently and the least recently updated pull requests. We highly encourage looking at this list of ready to merge, unreviewed pull requests.
When reviewing a pull request, please always be nice and polite. Controversial changes can lead to controversial opinions, but it is important to respect every community member and their work.
GitHub provides reactions as a simple and quick way to provide feedback to pull requests or any comments. The thumb-down reaction should be used with care and if possible accompanied with some explanation so the submitter has directions to improve their contribution.
Pull request reviews should include a list of what has been reviewed in a comment, so other reviewers and mergers can know the state of the review.
All the review template samples provided in this section are generic and meant as examples. Their usage is optional and the reviewer is free to adapt them to their liking.
A package update is the most trivial and common type of pull request. These pull requests mainly consist of updating the version part of the package name and the source hash.
It can happen that non-trivial updates include patches or more complex changes.
Reviewing process:
Add labels to the pull request. (Requires commit rights)
        8.has: package (update) and any topic label that fits
        the updated package.
       
Ensure that the package versioning fits the guidelines.
Ensure that the commit text fits the guidelines.
Ensure that the package maintainers are notified.
CODEOWNERS will make GitHub notify users based on the submitted changes, but it can happen that it misses some of the package maintainers.
Ensure that the meta field information is correct.
License can change with version updates, so it should be checked to match the upstream license.
If the package has no maintainer, a maintainer must be set. This can be the update submitter or a community member that accepts to take maintainership of the package.
Ensure that the code contains no typos.
Building the package locally.
Pull requests are often targeted to the master or staging branch, and building the pull request locally when it is submitted can trigger many source builds.
It is possible to rebase the changes on nixos-unstable or nixpkgs-unstable for easier review by running the following commands from a nixpkgs clone.
$ git remote add channels https://github.com/NixOS/nixpkgs-channels.git
$ git fetch channels nixos-unstable
$ git fetch origin pull/PRNUMBER/head
$ git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD

The first command adds the nixpkgs-channels remote; it should be done only once to be able to fetch channel branches from the nixpkgs-channels repository.
The second command fetches the nixos-unstable branch.
The third command fetches the pull request changes.
The fourth command rebases the pull request changes onto the nixos-unstable branch.
        The nox tool
        can be used to review a pull request's content in a single command. It
        doesn't rebase on a channel branch so it might trigger multiple source
        builds. PRNUMBER should be replaced by the number at
        the end of the pull request title.
       
$ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
Running every binary.
##### Reviewed points
- [ ] package name fits guidelines
- [ ] package version fits guidelines
- [ ] package build on ARCHITECTURE
- [ ] executables tested on ARCHITECTURE
- [ ] all depending packages build
##### Possible improvements
##### Comments
New packages are a common type of pull request. These pull requests consist of adding a new Nix expression for a package.
Reviewing process:
Add labels to the pull request. (Requires commit rights)
        8.has: package (new) and any topic label that fits
        the new package.
       
Ensure that the package versioning fits the guidelines.
Ensure that the commit name fits the guidelines.
Ensure that the meta field contains correct information.
License must be checked to be fitting upstream license.
Platforms should be set or the package will not get binary substitutes.
A maintainer must be set. This can be the package submitter or a community member that accepts to take maintainership of the package.
Ensure that the code contains no typos.
Ensure the package source.
Mirror URLs should be used when available.
        The most appropriate function should be used (e.g. packages from GitHub
        should use fetchFromGitHub).
       
Building the package locally.
Running every binary.
##### Reviewed points
- [ ] package path fits guidelines
- [ ] package name fits guidelines
- [ ] package version fits guidelines
- [ ] package build on ARCHITECTURE
- [ ] executables tested on ARCHITECTURE
- [ ] `meta.description` is set and fits guidelines
- [ ] `meta.license` fits upstream license
- [ ] `meta.platforms` is set
- [ ] `meta.maintainers` is set
- [ ] build time only dependencies are declared in `nativeBuildInputs`
- [ ] source is fetched using the appropriate function
- [ ] phases are respected
- [ ] patches that are remotely available are fetched with `fetchpatch`
##### Possible improvements
##### Comments
Module updates are submissions changing modules in some way. These often contain changes to the options or introduce new ones.
Reviewing process
Add labels to the pull request. (Requires commit rights)
        8.has: module (update) and any topic label that fits
        the module.
       
Ensure that the module maintainers are notified.
CODEOWNERS will make GitHub notify users based on the submitted changes, but it can happen that it misses some of the package maintainers.
Ensure that the module tests, if any, are succeeding.
Ensure that the introduced options are correct.
        Type should be appropriate (string-related types differ in their
        merging capabilities; optionSet and
        string types are deprecated).
       
Description, default and example should be provided.
Ensure that option changes are backward compatible.
        The mkRenamedOptionModule and
        mkAliasOptionModule functions provide a way to make
        option changes backward compatible.
       
      Ensure that removed options are declared with
      mkRemovedOptionModule.
     
Ensure that changes that are not backward compatible are mentioned in release notes.
Ensure that documentation affected by the change is updated.
##### Reviewed points
- [ ] changes are backward compatible
- [ ] removed options are declared with `mkRemovedOptionModule`
- [ ] changes that are not backward compatible are documented in release notes
- [ ] module tests succeed on ARCHITECTURE
- [ ] options types are appropriate
- [ ] options description is set
- [ ] options example is provided
- [ ] documentation affected by the changes is updated
##### Possible improvements
##### Comments
New modules submissions introduce a new module to NixOS.
Add labels to the pull request. (Requires commit rights)
        8.has: module (new) and any topic label that fits the
        module.
       
Ensure that the module tests, if any, are succeeding.
Ensure that the introduced options are correct.
        Type should be appropriate (string-related types differ in their
        merging capabilities; optionSet and
        string types are deprecated).
       
Description, default and example should be provided.
      Ensure that the module meta field is present.
     
        Maintainers should be declared in meta.maintainers.
       
        Module documentation should be declared with
        meta.doc.
       
Ensure that the module respects other modules' functionality.
For example, enabling a module should not open firewall ports by default.
##### Reviewed points
- [ ] module path fits the guidelines
- [ ] module tests succeed on ARCHITECTURE
- [ ] options have appropriate types
- [ ] options have default
- [ ] options have example
- [ ] options have descriptions
- [ ] No unneeded package is added to environment.systemPackages
- [ ] meta.maintainers is set
- [ ] module documentation is declared in meta.doc
##### Possible improvements
##### Comments
Other types of submissions require different reviewing steps.
If you consider having enough knowledge and experience in a topic and would like to be a long-term reviewer for related submissions, please contact the current reviewers for that topic. They will give you information about the reviewing process. The main reviewers for a topic can be hard to find as there is no list, but checking past pull requests to see who reviewed or git-blaming the code to see who committed to that topic can give some hints.
Container system, boot system and library changes are some examples of the pull requests fitting this category.
It is possible for community members that have enough knowledge and experience on a special topic to contribute by merging pull requests.
TODO: add the procedure to request merging rights.
In case a contributor definitively leaves the Nix community, they should create an issue or post on Discourse with references to the packages and modules they maintain so that maintainership can be taken over by other contributors.
   The DocBook sources of the Nixpkgs manual are in the
   doc
   subdirectory of the Nixpkgs repository.
  
You can quickly check your edits with make:
$ cd /path/to/nixpkgs/doc
$ nix-shell
[nix-shell]$ make
If you experience problems, run make debug to help understand the docbook errors.
After making modifications to the manual, it's important to build it before committing. You can do that as follows:
$ cd /path/to/nixpkgs/doc
$ nix-shell
[nix-shell]$ make clean
[nix-shell]$ nix-build .
   If the build succeeds, the manual will be in
   ./result/share/doc/nixpkgs/manual.html.