<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>tcl &#8212; csantosb</title>
    <link>https://infosec.press/csantosb/tag:tcl</link>
    <description>Random thoughts</description>
    <pubDate>Wed, 13 May 2026 23:07:33 +0000</pubDate>
    <item>
      <title>guix</title>
      <link>https://infosec.press/csantosb/use-guix</link>
      <description>&lt;![CDATA[As one learns with time and a bit of experience, keeping track of the whole set of #dependencies to be handled daily in digital hardware design can prove an error-prone task. And this is not even to speak of regressions, incompatibilities and, most importantly, #reproducibility of results. Luckily enough, this is precisely the problem that #guix intends to solve, in an elegant, minimalistic and open way, using only #freesoftware.&#xA;&#xA;Nix&#xA;&#xA;Functional package management is a paradigm pioneered by nix and developed by Eelco Dolstra in his influential PhD Thesis. It is based on building each node in the dependency graph solely from its inputs, contents and node definition, producing a new node. The process repeats for every single node in the graph. Where operating systems and software management are concerned, this amounts to a radically different approach from classical package management. Simply put, every new build lives its own life, regardless of the remaining builds. This makes it possible to access the kind of advanced utilities that ease our lives: declarative configurations, profiles, rollbacks, generations, etc. Forget dependency hell.&#xA;Guix is founded on a similar approach, keeping its own set of rules, as it only packages #freesoftware. There are around 30,000 of them as of today, including all the usual suspects. Within the context of #modernhw, guix is to be understood as a dependency management tool with advanced capabilities. Sure, it handles software, but there is no reason to use it exclusively as a software manager. It may handle IP blocks, documentation, bibliographic references and, more generally, everything that concerns #plaintext files in a #gitforge. It understands versions, licences, dependencies, repositories and all kinds of relationships between them. Furthermore, it embeds a pragmatic language, guile, to script package definitions, declaring the behavior of the nodes in our dependency graph.&#xA;&#xA;Reproducibility&#xA;&#xA;The most relevant feature of guix turns out to be its bootstrapping capabilities. Full-source bootstrap amounts to building the whole dependency graph right from the bottom, based on a minimal core of trusted binary seeds. From that point upwards the whole distribution is self-contained, as everything it builds is included in guix itself. Any available package is founded on a package definition included in guix, its source code available online in a #git repository, and its dependencies. Each of the latter follows the same rules, down to the bottom of the graph, where a trusted seed is necessary.&#xA;Why is this necessary and useful for #modernhw? Because it provides #reproducibility for free, as reproducible builds are guaranteed here. It turns out that this is at the very heart of guix and produces #determinism, meaning that the same operations will produce the same outputs, no matter when, no matter what, no matter where. Game over for ambiguity. Determinism, coupled with guix’s declarative nature, serves as a simple means to track our dependency history without ambiguity.&#xA;Let’s take an example, and say we have a Vivado project in the form of a set of #tcl files. To build the logic of our favourite #fpga, we also require a couple of external firmware dependencies as IP blocks in their own git repositories, with tagged revisions, mutually dependent and incompatible depending on the tag in use. Not all of them are compatible with our project. Each of the firmware modules incorporates its own set of versioned VHDL dependencies, along with its associated documentation. We need to provide a python testing framework (you guessed it), along with its verification libraries. We need to create a static web site with instructions on how to download, instantiate, compile and deploy each different version of our project for a couple of thousand users out there, each with a different #gnulinux system, version and configuration of installed software and libraries. Take it for granted: each user needs a different version of our project, as they need to guarantee compatibility with their own internal developments. And we need to provide the correct version of every tool, compatible with our code and scripts. The right version, I do mean. Not any random version.&#xA;Can you imagine the pain? Now, suppose that you could describe the status of your project in a couple of #plaintext manifest files. End of story. Users wishing to reproduce your project and its dependencies only need to git clone the manifest files and install them locally, regardless of their system, their installed libraries or their abilities. No problem if the host OS doesn’t provide the necessary software; guix handles the situation. Users just run make and the whole project is deployed, tested and simulated, using the right version of each node in the graph. This is Guix at its best.&#xA;Not yet convinced? Take a look here.&#xA;Feeling tempted? Start with a crash course on guix.]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/guix.png" alt="img"> <br/>
As one learns with time and a bit of experience, keeping track of the whole set of <a href="/csantosb/tag:dependencies" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">dependencies</span></a> to be handled daily in digital hardware design can prove an error-prone task. And this is not even to speak of regressions, incompatibilities and, most importantly, <a href="/csantosb/tag:reproducibility" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproducibility</span></a> of results. Luckily enough, this is precisely the problem that <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> intends to solve, in an elegant, minimalistic and open way, using only <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a>. <br/></p>

<h1 id="nix">Nix</h1>

<p>Functional package management is a paradigm pioneered by <a href="https://nixos.org/" rel="nofollow">nix</a> and developed by <a href="https://edolstra.github.io/" rel="nofollow">Eelco Dolstra</a> in his influential <a href="https://edolstra.github.io/pubs/phd-thesis.pdf" rel="nofollow">PhD Thesis</a>. It is based on building each node in the dependency graph solely from its inputs, contents and node definition, producing a new node. The process repeats for every single node in the graph. Where operating systems and software management are concerned, this amounts to a radically different approach from classical package management. Simply put, every new build lives its own life, regardless of the remaining builds. This makes it possible to access the kind of advanced utilities that ease our lives: declarative configurations, profiles, rollbacks, generations, etc. Forget <a href="https://en.wikipedia.org/wiki/Dependency_hell" rel="nofollow">dependency hell</a>. <br/>
<a href="https://guix.gnu.org/" rel="nofollow">Guix</a> is founded on a similar approach, keeping its own set of rules, as it only packages <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a>. There are around <a href="https://packages.guix.gnu.org/" rel="nofollow">30,000 of them</a> as of today, including all the usual suspects. Within the context of <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>, guix is to be understood as a dependency management tool with advanced capabilities. Sure, it handles software, but there is no reason to use it exclusively as a software manager. It may handle IP blocks, documentation, bibliographic references and, more generally, everything that concerns <a href="/csantosb/tag:plaintext" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">plaintext</span></a> files in a <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a>. It understands versions, licences, dependencies, repositories and all kinds of relationships between them. Furthermore, it embeds a pragmatic language, <a href="https://www.gnu.org/software/guile/" rel="nofollow">guile</a>, to script package definitions, declaring the behavior of the nodes in our dependency graph. <br/></p>
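<p>To make this concrete, here is a hypothetical sketch of what such a guile package definition looks like. The package name, repository URL and hash below are placeholders for illustration, not a real guix package, but the structure mirrors actual definitions from the guix tree. <br/></p>

<pre><code>;; Hypothetical sketch of a guix package definition, written in guile.
;; The name, URL and hash are placeholders, not a real package.
(define-public my-ip-block
  (package
    (name "my-ip-block")
    (version "1.2.0")
    (source (origin
              (method git-fetch)
              (uri (git-reference
                    (url "https://example.org/my-ip-block")  ; placeholder
                    (commit (string-append "v" version))))
              (sha256 (base32 "placeholder-hash"))))
    (build-system gnu-build-system)
    ;; Dependencies are ordinary package objects: they become the nodes
    ;; feeding into this node of the graph.
    (inputs (list ghdl))
    (synopsis "Illustrative VHDL IP block")
    (description "A placeholder package showing the definition format.")
    (home-page "https://example.org/my-ip-block")
    (license license:gpl3+)))
</code></pre>

<p>Since the build of this node depends only on its declared inputs and the definition itself, two users evaluating the same definition obtain the same node. <br/></p>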

<h1 id="reproducibility">Reproducibility</h1>

<p>The most relevant feature of guix turns out to be its <a href="https://guix.gnu.org/manual/en/html_node/Bootstrapping.html" rel="nofollow">bootstrapping</a> capabilities. <a href="https://guix.gnu.org/en/blog/2023/the-full-source-bootstrap-building-from-source-all-the-way-down/" rel="nofollow">Full-source bootstrap</a> amounts to building the whole dependency graph right from the bottom, based on a minimal core of trusted binary seeds. From that point upwards the whole distribution is self-contained, as everything it builds is included in guix itself. Any available package is founded on a package definition included in guix, its source code available online in a <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> repository, and its dependencies. Each of the latter follows the same rules, down to the bottom of the graph, where a trusted seed is necessary. <br/>
Why is this necessary and useful for <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>? Because it provides <a href="/csantosb/tag:reproducibility" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproducibility</span></a> for free, as <a href="https://reproducible-builds.org/" rel="nofollow">reproducible builds</a> are guaranteed here. It turns out that this is at the very heart of guix and produces <a href="/csantosb/tag:determinism" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">determinism</span></a>, meaning that the same operations will produce the same outputs, no matter when, no matter what, no matter where. Game over for ambiguity. Determinism, coupled with guix’s declarative nature, serves as a simple means to track our dependency history without ambiguity. <br/>
Let’s take an example, and say we have a <a href="https://www.amd.com/es/products/software/adaptive-socs-and-fpgas/vivado.html" rel="nofollow">Vivado</a> project in the form of a set of <a href="/csantosb/tag:tcl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">tcl</span></a> files. To build the logic of our favourite <a href="/csantosb/tag:fpga" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">fpga</span></a>, we also require a couple of external firmware dependencies as IP blocks in their own git repositories, with tagged revisions, mutually dependent and incompatible depending on the tag in use. Not all of them are compatible with our project. Each of the firmware modules incorporates its own set of versioned VHDL dependencies, along with its associated documentation. We need to provide a python testing framework (you guessed it), along with its verification libraries. We need to create a static web site with instructions on how to download, instantiate, compile and deploy each different version of our project for a couple of thousand users out there, each with a different <a href="/csantosb/tag:gnulinux" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gnulinux</span></a> system, version and configuration of installed software and libraries. Take it for granted: each user needs a different version of our project, as they need to guarantee compatibility with their own internal developments. And we need to provide the correct version of every tool, compatible with our code and scripts. <strong>The right version</strong>, I do mean. <em>Not any random version</em>. <br/>
Can you imagine the pain? Now, suppose that you could describe the status of your project in a couple of <a href="/csantosb/tag:plaintext" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">plaintext</span></a> <a href="https://infosec.press/csantosb/guix-crash-course#examples" rel="nofollow">manifest files</a>. End of story. Users wishing to reproduce your project and its dependencies only need to git clone the manifest files and install them locally, regardless of their system, their installed libraries or their abilities. No problem if the host OS doesn’t provide the necessary software; guix handles the situation. Users just run <code>make</code> and the whole project is deployed, tested and simulated, using the right version of each node in the graph. This is Guix at its best. <br/>
Not yet convinced? Take a look <a href="https://csantosb.gitlab.io/ip/talks/hdl-lib_proposal/" rel="nofollow">here</a>. <br/>
Feeling tempted? Start with a <a href="https://infosec.press/csantosb/guix-crash-course" rel="nofollow">crash course</a> on guix. <br/></p>
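<p>Concretely, the couple of plain-text files describing the status of a project boil down to a manifest, listing what the project needs, plus a channels file pinning the exact guix revision. The sketch below is a hedged example: the package names are real guix packages picked for illustration, and the commit is a placeholder for the revision a real project would pin. <br/></p>

<pre><code>;; manifest.scm -- illustrative example; the package names exist in
;; guix, but a real project would list its own dependencies.
(specifications->manifest
 '("ghdl"      ; VHDL simulator
   "gtkwave"   ; waveform viewer
   "python"))  ; interpreter for the testing framework

;; channels.scm -- pins guix itself; the commit below is a placeholder,
;; obtained in practice from `guix describe -f channels'.
(list (channel
       (name 'guix)
       (url "https://git.savannah.gnu.org/git/guix.git")
       (commit "placeholder-commit-hash")))
</code></pre>

<p>With these two files under version control, a user reproduces the exact environment with <code>guix time-machine -C channels.scm -- shell -m manifest.scm</code>, then runs <code>make</code> inside it. <br/></p>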
]]></content:encoded>
      <guid>https://infosec.press/csantosb/use-guix</guid>
      <pubDate>Tue, 31 Jan 2023 23:00:00 +0000</pubDate>
    </item>
    <item>
      <title>on dependencies</title>
      <link>https://infosec.press/csantosb/on-dependencies</link>
      <description>&lt;![CDATA[Digital hardware design implies dealing with a lot of scripts, configurations, packages, project files, documentation, specifications, bibliographies, data sheets, software, etc. and, of course, a lot of code. Each comes with its own version history. This is not a problem, as we have #git. But when it comes to #dependencies between different sets, how do we handle the complexity?&#xA;Dependencies create a graph between their components (nodes). This way, documentation refers to a package, which complies with some scripts, which in turn relate to project files for a given software tool. This forms a closely related tree of dependencies, including duplicated ones, as one document may correspond to different pieces of code, forming a loop. And yet, this is not the whole picture.&#xA;Each node in the graph incorporates its own history of revisions: a given version of the documentation refers to a particular version of a firmware module, which links to a specific version of a data sheet, sending the user to a certain version of a bibliographic reference. This happens across the whole graph and is usually referred to as dependency hell. The complexity comes from the fact that, when one upgrades a node (to a more recent version), its new dependencies (new versions) are pulled along with it. But, in the most general case, those are also needed by some other node in the graph, which prevents the upgrade from happening, unless more than one version of the node can coexist at the same point in time. The mess arises.&#xA;Classic examples are software package managers, python being the most relevant (especially due to the number of dependency managers that coexist). Another good example is operating systems: when one decides to upgrade the #librewolf web browser, all the necessary dynamic library dependencies of the new release must be pulled too; some of them may happen to be necessary for #emacs, which breaks. Package managers need to keep track of the whole history of the dependency graph and deal with upgrades, software installs and removals so as to keep the whole system coherent. Hell in heaven. An OS must be seen as a set of nodes in a dependency graph moving forward in time, with obsolete old releases impossible to reach anymore.&#xA;As engineers, this rings a bell. What happens when we upgrade proprietary software to support new hardware (Vivado, looking at you now)? A new software release implies new versions of IP cores, sometimes incompatible with previous versions, with different versions of documentation, sometimes incorporating its own set of regressions. #Tcl scripts may break too, or require an upgrade of interpreters. The hell goes on... How do we check that our code performs correctly after an upgrade? How do we ensure that no side effects come with it? Is it possible to be sure that data sheets are still relevant? Is this paper compatible with the new upgrade? Can we automate all of that, or do we need to proceed by hand? I guess you get the idea.&#xA;#modernhw makes it mandatory to use a tool to deal with dependencies. And a very good one, if you want my opinion, including all the scripting capabilities and flexibility we need to cope with our scripts, configurations, packages, project files, documentation, specifications, bibliographies, data sheets, software, etc. and, of course, a lot of code.&#xA;Do we have such a tool? Sure. Its name is #guix.]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/dependencies.png" alt="img"> <br/>
Digital hardware design implies dealing with a lot of scripts, configurations, packages, project files, documentation, specifications, bibliographies, data sheets, software, etc. and, of course, a lot of code. Each comes with its own version history. This is not a problem, as we have <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a>. But when it comes to <a href="/csantosb/tag:dependencies" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">dependencies</span></a> between different sets, <a href="https://csantosb.gitlab.io/ip/talks/hdl-lib_what_is_needed/" rel="nofollow">how do we handle the complexity</a>? <br/>
Dependencies create a graph between their components (nodes). This way, documentation refers to a package, which complies with some scripts, which in turn relate to project files for a given software tool. This forms a closely related tree of dependencies, including duplicated ones, as one document may correspond to different pieces of code, forming a loop. And yet, this is not the whole picture. <br/>
Each node in the graph incorporates its own history of revisions: a given version of the documentation refers to a particular version of a firmware module, which links to a specific version of a data sheet, sending the user to a certain version of a bibliographic reference. This happens across the whole graph and is usually referred to as <a href="https://en.wikipedia.org/wiki/Dependency_hell" rel="nofollow">dependency hell</a>. The complexity comes from the fact that, when one upgrades a node (to a more recent version), its new dependencies (new versions) are pulled along with it. But, in the most general case, those are also needed by some other node in the graph, which prevents the upgrade from happening, unless more than one version of the node can coexist at the same point in time. The mess arises. <br/>
Classic examples are software package managers, <a href="https://medium.com/knerd/the-nine-circles-of-python-dependency-hell-481d53e3e025" rel="nofollow">python</a> being the most relevant (especially due to the number of dependency managers that coexist). Another good example is operating systems: when one decides to upgrade the <a href="/csantosb/tag:librewolf" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">librewolf</span></a> web browser, all the necessary dynamic library dependencies of the new release must be pulled too; some of them may happen to be necessary for <a href="/csantosb/tag:emacs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">emacs</span></a>, which breaks. Package managers need to keep track of the whole history of the dependency graph and deal with upgrades, software installs and removals so as to keep the whole system coherent. Hell in heaven. An OS must be seen as a set of nodes in a dependency graph moving forward in time, with obsolete old releases impossible to reach anymore. <br/>
As engineers, this rings a bell. What happens when we upgrade proprietary software to support new hardware (<a href="https://www.amd.com/es/products/software/adaptive-socs-and-fpgas/vivado.html" rel="nofollow">Vivado</a>, looking at you now)? A new software release implies new versions of IP cores, sometimes incompatible with previous versions, with different versions of documentation, sometimes incorporating its own set of regressions. <a href="/csantosb/tag:Tcl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Tcl</span></a> scripts may break too, or require an upgrade of interpreters. The hell goes on... How do we check that our code performs correctly after an upgrade? How do we ensure that no side effects come with it? Is it possible to be sure that data sheets are still relevant? Is this paper compatible with the new upgrade? Can we automate all of that, or do we need to proceed by hand? I guess you get the idea. <br/>
<a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> makes it mandatory to use a tool to deal with dependencies. And a very good one, if you want my opinion, including all the scripting capabilities and flexibility we need to cope with our scripts, configurations, packages, project files, documentation, specifications, bibliographies, data sheets, software, etc. and, of course, a lot of code. <br/>
Do we have such a tool? Sure. Its name is <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a>. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/on-dependencies</guid>
      <pubDate>Tue, 31 Jan 2023 23:00:00 +0000</pubDate>
    </item>
  </channel>
</rss>