on dependencies

Digital hardware design implies dealing with a lot of scripts, configurations, packages, project files, documentation, specifications, bibliographies, data sheets, software, etc. and, of course, a lot of code. Each of these comes with its own version history. That, by itself, is not a problem: we have #git. But when it comes to #dependencies between these different sets, how do we handle the complexity?
Dependencies link all of these components into a graph, with each component as a node. This way, documentation refers to a package, which is built by some scripts, which in turn relate to the project files of a given piece of software. The result is not a clean tree but a tangled graph: dependencies get duplicated, as one document may correspond to different pieces of code, and sometimes they even form loops. And yet, this is not the whole picture.
Each node in the graph carries its own history of revisions: a given version of the documentation refers to a particular version of a firmware module, which links to a specific version of a data sheet, which sends the user to a certain version of a bibliographic reference. This happens across the whole graph and is usually referred to as dependency hell. The complexity comes from the fact that, when one upgrades a node to a more recent version, its new dependencies (new versions) are pulled along with it. But, in the general case, the old versions of those dependencies are still needed by some other node in the graph, which prevents the upgrade from happening, unless more than one version of a node can exist at the same time. This is where the mess begins.
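To make this concrete, here is a minimal sketch in plain Python of the situation just described; the node names and pinned versions are made up for illustration. Upgrading one node drags a shared dependency to a version another node cannot accept, and the only ways out are to hold back the upgrade or to let two versions coexist.

```python
# Minimal sketch of dependency hell on a tiny graph.
# Node names and accepted versions are made up for illustration.

# Each node lists the versions it accepts for each of its dependencies.
requirements = {
    "firmware-doc": {"datasheet": {"2.1"}},
    "fpga-project": {"ip-core": {"4.0"}, "datasheet": {"2.1"}},
    "new-eda-tool": {"ip-core": {"5.0"}},   # the shiny upgrade wants ip-core 5.0
}

# Only one version of each node can be present at a time.
installed = {"datasheet": "2.1", "ip-core": "4.0"}

def conflicts(requirements, installed):
    """List (consumer, dependency, accepted, installed) tuples that cannot be satisfied."""
    broken = []
    for consumer, deps in requirements.items():
        for dep, accepted in deps.items():
            if installed.get(dep) not in accepted:
                broken.append((consumer, dep, accepted, installed.get(dep)))
    return broken

# Upgrading pulls ip-core 5.0 along with the new tool ...
installed["ip-core"] = "5.0"

# ... and fpga-project, which still pins ip-core 4.0, is now broken.
for consumer, dep, accepted, current in conflicts(requirements, installed):
    print(f"{consumer} needs {dep} {sorted(accepted)}, but {current} is installed")
```

Unless two versions of ip-core can live side by side at the same time, one of the two consumers has to break.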
Classical examples are software package managers, python being the most telling one (especially given the number of dependency managers that coexist). Another good example is operating systems: when one decides to upgrade the #librewolf web browser, all the dynamic library dependencies of the new release must be pulled too; some of them may also happen to be required by #emacs, which then breaks. Package managers need to keep track of the whole history of the dependency graph and handle upgrades, installs and removals so as to keep the whole system coherent. Hell in heaven. An OS must be seen as a set of nodes in a dependency graph moving forward in time, with obsolete old releases impossible to reach anymore.
As engineers, this rings a bell. What happens when we upgrade proprietary software to support new hardware (Vivado, looking at you now)? A new software release implies new versions of IP cores, sometimes incompatible with previous ones, with different versions of the documentation, sometimes carrying their own set of regressions. #Tcl scripts may break too, or require an upgrade of the interpreter. The hell goes on and on... How do we check that our code still performs correctly after an upgrade? How do we ensure that no side effects come with it? Is it possible to be sure that the data sheets are still relevant? Is this paper compatible with the new release? Can we automate all of that, or do we need to proceed by hand? I guess you catch the idea.
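There is no silver bullet in that paragraph, but one small, pragmatic step is to at least record what the flow runs against, so that "what changed?" has a mechanical answer. Below is a minimal sketch in Python; the tool list and the version flags (vivado -version, python3 --version) are assumptions to adapt to your own flow, and it obviously says nothing about whether the design still behaves correctly.

```python
#!/usr/bin/env python3
"""Snapshot the versions of the tools used by a project and diff two snapshots.
A minimal sketch: the tool list and version flags are examples, adapt them."""

import json
import subprocess
import sys

# Example tool list: command plus the flag that makes it print its version.
TOOLS = {
    "vivado": ["vivado", "-version"],
    "python": ["python3", "--version"],
}

def snapshot():
    """Return {tool: first line of its version output, or an error marker}."""
    versions = {}
    for name, cmd in TOOLS.items():
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
            versions[name] = (out.stdout or out.stderr).splitlines()[0].strip()
        except (OSError, IndexError, subprocess.TimeoutExpired):
            versions[name] = "<not found>"
    return versions

def main():
    if len(sys.argv) == 2:
        # Record a snapshot: python versions.py before.json
        with open(sys.argv[1], "w") as fh:
            json.dump(snapshot(), fh, indent=2)
    elif len(sys.argv) == 3:
        # Compare two snapshots: python versions.py before.json after.json
        with open(sys.argv[1]) as a, open(sys.argv[2]) as b:
            old, new = json.load(a), json.load(b)
        for tool in sorted(set(old) | set(new)):
            if old.get(tool) != new.get(tool):
                print(f"{tool}: {old.get(tool)} -> {new.get(tool)}")
    else:
        sys.exit("usage: versions.py before.json [after.json]")

if __name__ == "__main__":
    main()
```

Snapshot before the upgrade, snapshot after, diff the two: at least the scope of the investigation is known. Everything else, regression runs, documentation checks, still has to be automated on top of it, which is precisely where a proper dependency tool comes in.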
#modernhw makes it mandatory to use a tool to deal with dependencies. And a very good one, if you want my opinion, with all the scripting capabilities and flexibility we need to cope with our scripts, configurations, packages, project files, documentation, specifications, bibliographies, data sheets, software, etc. and, of course, a lot of code.
Do we have such a tool? Sure. Its name is #guix.