<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>csantosb</title>
    <link>https://infosec.press/csantosb/</link>
    <description>Random thoughts</description>
    <pubDate>Sat, 04 Apr 2026 02:30:07 +0000</pubDate>
    <item>
      <title>sourcehut as guix test farm</title>
      <link>https://infosec.press/csantosb/sourcehut-as-guix-test-farm</link>
      <description>&lt;![CDATA[img br/&#xA;It is possible to contribute to improving #guix as the need for new functionalities, packages, fixes or upgrades arise. This is one of the strongest points in open communities: the possibility to participate on the development and continuous improvement of the tool. Let’s see how it goes when it comes to guix.!--more-- br/&#xA;Guix is a huge project which follows closely the #freesoftware paradigm, and collaboration works in two directions. You take advantage of other developers contributions to guix, while you participate yourself to improving guix repositories with your fixes, updates or new features, once they have been tested. In a first approach, from my own experience, one may create a personal local repository of package definitions, for a personal use. As a second step, it is possible to create a public guix channel, in parallel to contributing upstream. br/&#xA;Contributing your code to guix comes to sending #email with your patches attached, it’s that simple. Don&#39;t be intimidated by the details (this is used by lots of open communities, after all). Once your patches are submitted, a review of your code follows, see details. Some tools, like mumi, are helpful to that purpose. 
br/&#xA;&#xA;In detail&#xA;&#xA;Following the kind of contribution (new additions, fixes or upgrades), these simple steps will allow you to start contributing to guix: br/&#xA;&#xA;    git clone guix itselft br/&#xA;    from the guix repository, do: br/&#xA;    &#xA;        guix shell -D guix -CPW&#xA;    ./bootstrap&#xA;    ./configure&#xA;    make -j$(nproc)&#xA;    ./pre-inst-env guix build hello&#xA;        add and commit your changes, watch the commit message br/&#xA;    beware your synopses and descriptions br/&#xA;    remember to run the package tests, if relevant br/&#xA;    check the license br/&#xA;    use an alphabetical order in input lists br/&#xA;    no sign off your commits br/&#xA;    don’t forget to use lint/style/refresh -l/dependents to check your code br/&#xA;&#xA;Boring and routinary, right ? br/&#xA;&#xA;Use sourcehut&#xA;&#xA;img br/&#xA;Most of all the of the previous can be run automatically with help of sourcehut build farm #ci capabilities. Just simply, push the guix repository to sr.ht. 
At this point, it is possible to use this manifest file to run the lint/style/refresh -l/dependents testing stages on the yosys package definition, por example: br/&#xA;&#xA;image: guix&#xA;shell: true&#xA;environment:&#xA;  prj: guix.guix&#xA;  cmd: &#34;guix shell -D guix -CPWN git nss-certs -- ./pre-inst-env guix&#34;&#xA;sources:&#xA;  https://git.sr.ht/~csantosb/guix.guix&#xA;tasks:&#xA;  defpkg: |&#xA;      cd &#34;$prj&#34;&#xA;      pkg=$(git log -1 --oneline | cut -d&#39;:&#39; -f 2 | xargs)&#xA;      echo &#34;export pkg=$pkg&#34;     &#34;$HOME/.buildenv&#34;&#xA;  setup: |&#xA;      cd &#34;$prj&#34;&#xA;      guix shell -D guix -CPW -- ./bootstrap&#xA;      guix shell -D guix -CPW -- ./configure&#xA;      guix shell -D guix -CPW -- make -j $(nproc)&#xA;  build: |&#xA;      cd &#34;$prj&#34;&#xA;      eval &#34;$cmd build --rounds=5 $pkg&#34;&#xA;  lint: |&#xA;      cd &#34;$prj&#34;&#xA;      eval &#34;$cmd lint $pkg&#34;&#xA;  style: |&#xA;      cd &#34;$prj&#34;&#xA;      eval &#34;$cmd style $pkg --dry-run&#34;&#xA;  refresh: |&#xA;      cd &#34;$prj&#34;&#xA;      eval &#34;$cmd refresh -l $pkg&#34;&#xA;  dependents: |&#xA;      cd &#34;$prj&#34;&#xA;      eval &#34;$cmd build --dependents $pkg&#34;&#xA;triggers:&#xA;  condition: failure&#xA;    action: email&#xA;    to: builds.sr.ht@csantosb.mozmail.com&#xA;&#xA;Submit the manifest with br/&#xA;&#xA;hut builds submit # --edit&#xA;&#xA;You’ll be able to log into the build farm to follow the build process or to debug it with br/&#xA;&#xA;hut builds ssh ID&#xA;&#xA;Check the log here. As you can see, it fails: building of yosys succeeds, but building of packages which depend on it (--dependents) fails. br/&#xA;&#xA;Advanced&#xA;&#xA;Sourcehut provides a facility to automatize patch submission and testing. Using its hub integrator, one may just send an email to the email list related to your project (guix in this case), which mimics guix behavior for accepting patches. 
br/&#xA;The trick here consists on appending the project name as a prefix to the subject of the message, for example PATCH project-name], which will trigger the build of previous [.build.yml manifest file at the root of the project, after applying the patch. Neat, right ? br/&#xA;If you followed right here, you’ll notice that previous build manifest file is monolithic, affecting always the same package (yosys), which is kind of useless, as we are here interested in testing our patch. Thus, the question on how to trigger a custom build containing an updated $pkg variable related to the patch to test remains open. br/&#xA;To update the contents of the $pkg variable in the build manifest, one has to parse the commit message in the patch, extracting from there the package name. This is not a problem, as guix imposes clear commit messages in patches, so typically something like br/&#xA;&#xA;gnu: gnunet: Update to 0.23.0&#xA;&#xA;or br/&#xA;&#xA;gnu: texmacs: Add qtwayland-5&#xA;&#xA;Hopefully, parsing these messages to get the package name, and so the value of $pkg is trivial. br/&#xA;Then, it remains to include in our build manifest a first task which updates the contents of &#34;$HOME/.buildenv&#34;. This file is automatically populated using the environment variables in the manifest, and its contents are sourced at the beginning of all tasks. This mechanism allows passing variables between tasks. br/&#xA;&#xA;echo &#34;export pkg=value&#34;     &#34;$HOME/.buildenv&#34;&#xA;&#xA;Send your contribution&#xA;&#xA;Finally, once your changes go through all the tests, br/&#xA;&#xA;    use git send-email to create and send a patch br/&#xA;    consider reviews, if any, updating your patch accordingly with git ammend br/&#xA;    resend a new patch including a patch version (v1, v2 ...) br/&#xA;&#xA;Interested ? Consult the documentation for details, you’ll learn a lot about how to contribute to a common good and collaboration with other people. 
br/&#xA;ciseries br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/blog.csantosb/blob/master/pics/guix.png" alt="img"> <br/>
It is possible to contribute to improving <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> as the need for new functionalities, packages, fixes or upgrades arises. This is one of the strongest points of open communities: the possibility of participating in the development and continuous improvement of the tool. Let’s see how it works when it comes to <a href="https://guix.gnu.org/" rel="nofollow">guix</a>. <br/>
Guix is a huge project which closely follows the <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a> paradigm, and collaboration works in two directions. You take advantage of other developers’ contributions to guix, while contributing back to the guix repositories with your fixes, updates or new features, once they have been tested. As a first approach, from my own experience, one may create a local repository of package definitions for personal use. As a second step, it is possible to create a public <a href="https://infosec.press/csantosb/guix-channels" rel="nofollow">guix channel</a>, in parallel to <a href="https://infosec.press/csantosb/guix-channels#contributing" rel="nofollow">contributing</a> upstream. <br/>
<a href="https://guix.gnu.org/manual/en/html_node/Contributing.html" rel="nofollow">Contributing</a> your code to guix comes down to <a href="https://git-send-email.io/" rel="nofollow">sending <a href="/csantosb/tag:email" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">email</span></a></a> <a href="https://www.futurile.net/2022/03/07/git-patches-email-workflow/" rel="nofollow">with your patches</a> attached; it’s that simple. Don&#39;t be intimidated by the details (this workflow is used by lots of open communities, after all). Once your patches are submitted, a review of your code follows, see <a href="https://libreplanet.org/wiki?title=Group:Guix/PatchReviewSessions2024" rel="nofollow">details</a>. Some tools, like <a href="https://www.youtube.com/watch?v=8m8igXrKaqU" rel="nofollow">mumi</a>, are helpful for that purpose. <br/></p>

<h1 id="in-detail">In detail</h1>

<p>Depending on the kind of contribution (new additions, fixes or upgrades), these simple steps will allow you to start contributing to guix: <br/></p>

<p>    git clone <a href="https://git.savannah.gnu.org/git/guix.git" rel="nofollow">guix itself</a> <br/>
    from the guix repository, do: <br/></p>

<pre><code class="language-sh">guix shell -D guix -CPW
./bootstrap
./configure
make -j$(nproc)
./pre-inst-env guix build hello
</code></pre>

<p>
    add and commit your changes, minding the commit message format <br/>
    mind your <a href="https://guix.gnu.org/manual/en/html_node/Synopses-and-Descriptions.html" rel="nofollow">synopses and descriptions</a> <br/>
    remember to run the package tests, if relevant <br/>
    check the license <br/>
    keep input lists in alphabetical order <br/>
    do not sign off your commits <br/>
    don’t forget to use <code>lint/style/refresh -l/dependents</code> to check your code <br/></p>

<p>Boring and routine, right? <br/></p>

<h1 id="use-sourcehut">Use sourcehut</h1>

<p><img src="https://git.sr.ht/~csantosb/blog.csantosb/blob/master/pics/sourcehut.png" alt="img"> <br/>
Most of the previous steps can be run automatically with the help of the <a href="https://infosec.press/csantosb/tag:ciseries" rel="nofollow">sourcehut</a> build farm <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> capabilities. Simply push the guix repository to <a href="https://git.sr.ht/~csantosb/guix.guix" rel="nofollow">sr.ht</a>. At this point, it is possible to use <a href="https://builds.sr.ht/~csantosb/job/1391146/manifest" rel="nofollow">this manifest</a> file to run the <code>lint/style/refresh -l/dependents</code> testing stages on the <code>yosys</code> package definition, for example: <br/></p>

<pre><code class="language-yaml">image: guix
shell: true
environment:
  prj: guix.guix
  cmd: &#34;guix shell -D guix -CPWN git nss-certs -- ./pre-inst-env guix&#34;
sources:
  - https://git.sr.ht/~csantosb/guix.guix
tasks:
  - def_pkg: |
      cd &#34;$prj&#34;
      _pkg=$(git log -1 --oneline | cut -d&#39;:&#39; -f 2 | xargs)
      echo &#34;export pkg=$_pkg&#34; &gt;&gt; &#34;$HOME/.buildenv&#34;
  - setup: |
      cd &#34;$prj&#34;
      guix shell -D guix -CPW -- ./bootstrap
      guix shell -D guix -CPW -- ./configure
      guix shell -D guix -CPW -- make -j $(nproc)
  - build: |
      cd &#34;$prj&#34;
      eval &#34;$cmd build --rounds=5 $pkg&#34;
  - lint: |
      cd &#34;$prj&#34;
      eval &#34;$cmd lint $pkg&#34;
  - style: |
      cd &#34;$prj&#34;
      eval &#34;$cmd style $pkg --dry-run&#34;
  - refresh: |
      cd &#34;$prj&#34;
      eval &#34;$cmd refresh -l $pkg&#34;
  - dependents: |
      cd &#34;$prj&#34;
      eval &#34;$cmd build --dependents $pkg&#34;
triggers:
  - condition: failure
    action: email
    to: builds.sr.ht@csantosb.mozmail.com
</code></pre>

<p>Submit the manifest with <br/></p>

<pre><code class="language-sh">hut builds submit # --edit
</code></pre>

<p>You’ll be able to log into the build farm to follow the build process or to debug it with <br/></p>

<pre><code class="language-sh">hut builds ssh ID
</code></pre>

<p>Check the log <a href="https://builds.sr.ht/~csantosb/job/1391146" rel="nofollow">here</a>. As you can see, it fails: the build of <code>yosys</code> succeeds, but building the packages which depend on it (<code>--dependents</code>) <a href="https://builds.sr.ht/~csantosb/job/1391146#task-dependents" rel="nofollow">fails</a>. <br/></p>

<h1 id="advanced">Advanced</h1>

<p>Sourcehut provides a facility to automate <a href="https://man.sr.ht/builds.sr.ht/#integrations" rel="nofollow">patch submission and testing</a>. Using its <code>hub</code> integration, one may simply send an email to the mailing list related to your project (guix in this case), which mimics guix’s workflow for accepting patches. <br/>
The trick here consists in adding the project name as a prefix to the subject of the message, for example <code>[PATCH project-name]</code>, which will trigger a build of the previous <a href="https://builds.sr.ht/~csantosb/job/1391146/manifest" rel="nofollow">.build.yml</a> manifest file at the root of the project, after applying the patch. Neat, right? <br/>
If you have followed along, you’ll notice that the previous build manifest is monolithic, always targeting the same package (yosys), which is of little use, as we are interested here in testing our patch. Thus, the question of how to trigger a custom build with an updated <code>$pkg</code> variable matching the patch under test remains open. <br/>
To update the contents of the <code>$pkg</code> variable in the build manifest, one has to parse the commit message in the patch, extracting the package name from it. This is not a problem, as guix imposes clear commit messages in patches, typically something like <br/></p>

<pre><code class="language-sh">* gnu: gnunet: Update to 0.23.0
</code></pre>

<p>or <br/></p>

<pre><code class="language-sh">* gnu: texmacs: Add qtwayland-5
</code></pre>

<p>Fortunately, parsing these messages to get the package name, and hence the value of <code>$pkg</code>, is trivial. <br/>
It then remains to include in our build manifest a first task which updates the contents of <code>&#34;$HOME/.buildenv&#34;</code>. This file is automatically populated using the environment variables in the manifest, and its contents are sourced at the beginning of every task. This mechanism allows passing variables between tasks. <br/></p>

<pre><code class="language-sh">echo &#34;export pkg=value&#34; &gt;&gt; &#34;$HOME/.buildenv&#34;
</code></pre>
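<p>As a quick sanity check, the parsing used by the <code>def_pkg</code> task can be reproduced locally in plain shell; the commit subject and the temporary file below are illustrative stand-ins, not taken from the actual build: <br/></p>

<pre><code class="language-sh"># Recover the package name from a guix-style commit subject,
# as the def_pkg task does (subject and file are example stand-ins).
subject='gnu: gnunet: Update to 0.23.0'
pkg=$(printf '%s\n' "$subject" | cut -d: -f2 | xargs)
buildenv=$(mktemp)            # stands in for "$HOME/.buildenv"
echo "export pkg=$pkg" >> "$buildenv"
cat "$buildenv"               # export pkg=gnunet
</code></pre>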

<h1 id="send-your-contribution">Send your contribution</h1>

<p>Finally, once your changes go through all the tests, <br/></p>

<p>    use <a href="https://git-send-email.io/" rel="nofollow">git send-email</a> to create and <a href="https://guix.gnu.org/manual/en/html_node/Submitting-Patches.html" rel="nofollow">send a patch</a> <br/>
    consider reviews, if any, updating your patch accordingly with <code>git commit --amend</code> <br/>
    resend the updated patch, including a patch version (v2, v3 ...) <br/></p>

<p>Interested? Consult <a href="https://guix.gnu.org/manual/en/html_node/Contributing.html" rel="nofollow">the documentation</a> for details; you’ll learn a lot about contributing to a common good and collaborating with other people. <br/>
<a href="/csantosb/tag:ciseries" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ciseries</span></a> <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/sourcehut-as-guix-test-farm</guid>
      <pubDate>Tue, 17 Dec 2024 16:57:05 +0000</pubDate>
    </item>
    <item>
      <title>ci (sourcehut): alu</title>
      <link>https://infosec.press/csantosb/ci-sourcehut-alu</link>
      <description>&lt;![CDATA[img br/&#xA;Remote #ci is the way to go in #modernhw digital design testing. In this #ciseries, let’s see how to implement it with detail using sourcehut and a real world example. !--more-- br/&#xA;Sourcehut is a lightweight #gitforge where I host my #git repositories. Not only it is based on a paradigm perfectly adapted to #modernhw, but also its builds service includes support for guix (x8664) images. This means that we will be able to execute all of our testing online inside guix profiles, shells or natively on top of the bare-bones image. br/&#xA;&#xA;Alu&#xA;&#xA;Let’s consider now a variant of the previous example with open-logic. Here, we concentrate on a toy design only for demonstration purposes, a dummy alu emulator, which uses #osvvm as verification framework and relies on a few #openlogic blocs. In this case, its dependencies are defined in a manifest.scm file, including both fw-open-logic and osvvm, among other dependencies. br/&#xA;Install dependencies locally, in a new profile with br/&#xA;&#xA;cd alu&#xA;mkdir deps&#xA;export GUIXPROFILE=open-logic/deps&#xA;guix install -P $GUIXPROFILE -m .builds/manifest.scm&#xA;. $GUIXPROFILE/etc/profile&#xA;&#xA;In this case, we will test the design using, first, a custom made makefile. Secondly, we will use hdlmake to automatically produce our makefile. Similarly to previous #openlogic example, two build manifest are used: br/&#xA;&#xA;    profile1 br/&#xA;    profile2 br/&#xA;&#xA;You’ll realise how some of the tasks are common with the case of previous #openlogic example (update channels, auth and update profile). br/&#xA;&#xA;osvvm&#xA;&#xA;In this case, we also need to compile osvvm libraries br/&#xA;&#xA;    compile\_osvvm, produce a compiled version of #osvvm verification libraries; this is necessary as we are using here the tcl  scripts included in the library itself to follow the correct order of compilation. 
Libraries will appear within the local profile under $GUIXPROFILE/VHDLLIBS/GHDL-X.Y.Z br/&#xA;&#xA;test&#xA;&#xA;    test, for a fully custom made testing pipeline; in this case, using a Makefile br/&#xA;    Just simply, source the .envrc file where the local $GUIXPROFILE variable is defined, cd to the ghdl directory and call make to compile the design and run the simulation in two steps: first, clean all and include sources in its corresponding libraries with br/&#xA;    &#xA;        make cleanall include&#xA;        &#xA;    Then, produce a new Makefile using ghdl. br/&#xA;    &#xA;        ./makefile.sh # ghdl --gen-makefile ...&#xA;        &#xA;    Finally, run the simulation with br/&#xA;    &#xA;        make GHDLRUNFLAGS=&#34;--stop-time=4us --disp-time --ieee-asserts=enable&#34; run&#xA;        &#xA;    This will produce a executable file before running it with the provided parameters. br/&#xA;    You may notice that, in this case, you need to produce somehow your own Makefile, or equivalent pipeline, right ? br/&#xA;&#xA;hdlmake&#xA;&#xA;Wouldn’t it be nice if we had a tool to deploy online which produces makefiles for us ? It exists, and its name is #hdlmake. br/&#xA;&#xA;    test\hdlmake br/&#xA;    Source the .envrc file where the local $GUIXPROFILE variable is defined, cd to the .builds/hdlmake directory where all Manifest.py files are located, and call hdlmake to produce the Makefile. Finally, just run make to compile the design, produce an executable and run it. br/&#xA;&#xA;Check the resulting logs inline here, for example. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/sourcehut.png" alt="img"> <br/>
Remote <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> is the <a href="https://infosec.press/csantosb/tag:ciseries" rel="nofollow">way to go</a> in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> digital design testing. In this <a href="/csantosb/tag:ciseries" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ciseries</span></a>, let’s see how to implement it in detail using <a href="https://sourcehut.org/" rel="nofollow">sourcehut</a> and a real world example. <br/>
<a href="https://infosec.press/csantosb/sourcehut-crash-course" rel="nofollow">Sourcehut</a> is a lightweight <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a> where I host my <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> repositories. Not only is it <a href="https://infosec.press/csantosb/git-forges#sourcehut" rel="nofollow">based on a paradigm</a> perfectly adapted to <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>, but its <a href="https://infosec.press/csantosb/sourcehut-crash-course#builds" rel="nofollow">builds</a> service also includes support for <a href="https://man.sr.ht/builds.sr.ht/compatibility.md#guix-system" rel="nofollow">guix</a> (x86_64) images. This means that we will be able to execute all of our testing online inside <a href="https://infosec.press/csantosb/guix-crash-course#profiles-and-generations" rel="nofollow">guix profiles</a>, <a href="https://infosec.press/csantosb/guix-crash-course#shell-containers" rel="nofollow">shells</a> or natively on top of the bare-bones image. <br/></p>

<h1 id="alu">Alu</h1>

<p>Let’s now consider a variant of the <a href="https://infosec.press/csantosb/ci-sourcehut" rel="nofollow">previous example with open-logic</a>. Here, we concentrate on a <a href="https://git.sr.ht/~csantosb/ip.alu/tree" rel="nofollow">toy design</a> for demonstration purposes only, a <a href="https://git.sr.ht/~csantosb/ip.alu/tree/master/item/src/alu.vhd" rel="nofollow">dummy alu emulator</a>, which uses <a href="/csantosb/tag:osvvm" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">osvvm</span></a> as a verification framework and relies on a few <a href="/csantosb/tag:openlogic" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">openlogic</span></a> blocks. In this case, its dependencies are defined in a <a href="https://git.sr.ht/~csantosb/ip.alu/tree/test/item/.builds/manifest.scm" rel="nofollow">manifest.scm</a> file, including, among others, both <code>fw-open-logic</code> and <code>osvvm</code>. <br/>
Install dependencies locally, in a new <a href="https://infosec.press/csantosb/guix-crash-course#profiles-and-generations" rel="nofollow">profile</a> with <br/></p>

<pre><code class="language-sh">cd alu
mkdir _deps
export GUIX_PROFILE=open-logic/_deps
guix install -P $GUIX_PROFILE -m .builds/manifest.scm
. $GUIX_PROFILE/etc/profile
</code></pre>
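<p>The project-local profile above can be pinned in a small <code>.envrc</code>; a minimal hypothetical sketch (the actual file is not reproduced in this post), assuming the profile lives in <code>./_deps</code>: <br/></p>

<pre><code class="language-sh"># Hypothetical .envrc: pin the project-local profile and activate it.
export GUIX_PROFILE=$PWD/_deps
if [ -f "$GUIX_PROFILE/etc/profile" ]; then
    . "$GUIX_PROFILE/etc/profile"   # activate only once it exists
fi
</code></pre>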

<p>In this case, we will test the design using, first, a custom-made makefile; secondly, we will use <a href="https://hdlmake.readthedocs.io/en/master/" rel="nofollow">hdlmake</a> to produce our makefile automatically. As in the <a href="https://infosec.press/csantosb/ci-sourcehut" rel="nofollow">previous</a> <a href="/csantosb/tag:openlogic" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">openlogic</span></a> example, two build manifests are used: <br/></p>

<p>    <a href="https://git.sr.ht/~csantosb/ip.alu/tree/test/item/.builds/profile1.yml" rel="nofollow">profile1</a> <br/>
    <a href="https://git.sr.ht/~csantosb/ip.alu/tree/test/item/.builds/profile2.yml" rel="nofollow">profile2</a> <br/></p>

<p>You’ll notice that some of the tasks are shared with the previous <a href="/csantosb/tag:openlogic" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">openlogic</span></a> example (update channels, auth and update profile). <br/></p>

<h2 id="osvvm">osvvm</h2>

<p>In this case, we also need to compile the osvvm libraries: <br/></p>

<p>    <strong>compile_osvvm</strong>, <a href="https://builds.sr.ht/~csantosb/job/1389146#task-compile_osvvm" rel="nofollow">produces a compiled version</a> of <a href="/csantosb/tag:osvvm" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">osvvm</span></a> verification libraries; this is necessary as we are using here the <code>tcl</code> scripts included in the library itself to follow the correct order of compilation. Libraries will appear within the local profile under <code>$GUIX_PROFILE/VHDL_LIBS/GHDL-X.Y.Z</code> <br/></p>

<h2 id="test">test</h2>

<p>    <strong>test</strong>, for a fully custom-made testing pipeline; in this case, using a <code>Makefile</code> <br/>
    Simply source the <code>.envrc</code> file where the local <code>$GUIX_PROFILE</code> variable is defined, cd to the <code>ghdl</code> directory and call <code>make</code> to compile the design and run the simulation in two steps: first, clean all and include sources in their corresponding libraries with <br/></p>

<pre><code class="language-sh">make __clean_all __include
</code></pre>

<p>    Then, produce a new <code>Makefile</code> using <code>ghdl</code>. <br/></p>

<pre><code class="language-sh">./makefile.sh # ghdl --gen-makefile ...
</code></pre>

<p>    Finally, run the simulation with <br/></p>

<pre><code class="language-sh">make GHDLRUNFLAGS=&#34;--stop-time=4us --disp-time --ieee-asserts=enable&#34; run
</code></pre>

<p>    This will produce a executable file before <a href="https://builds.sr.ht/~csantosb/job/1389146#task-test" rel="nofollow">running it</a> with the provided parameters. <br/>
    You may notice that, in this case, you need to produce somehow your own <code>Makefile</code>, or equivalent pipeline, right ? <br/></p>
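<p>For reference, the two steps above boil down to ghdl’s classic analyse/elaborate/run sequence. A hedged sketch (entity and file names are illustrative, not taken from the repository); <code>GHDL</code> is set to a dry-run echo here, swap it for the real binary to actually execute: <br/></p>

<pre><code class="language-sh"># Sketch of the underlying ghdl flow (entity/file names are examples).
GHDL="echo ghdl"                # replace with GHDL=ghdl to execute
$GHDL -a --std=08 src/alu.vhd tb/alu_tb.vhd   # analyse sources
$GHDL -e --std=08 alu_tb                      # elaborate an executable
$GHDL -r --std=08 alu_tb --stop-time=4us      # run the simulation
</code></pre>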

<h2 id="hdlmake">hdlmake</h2>

<p>Wouldn’t it be nice if we had a tool, deployable online, which produces makefiles for us? It exists, and its name is <a href="/csantosb/tag:hdlmake" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">hdlmake</span></a>. <br/></p>

<p>    <strong>test_hdlmake</strong> <br/>
    <a href="https://git.sr.ht/~csantosb/ip.alu/tree/8324cd0fcb838cfb8303aae9e668b6831a329cbb/.builds/profile1.yml#L39" rel="nofollow">Source</a> the <code>.envrc</code> file where the local <code>$GUIX_PROFILE</code> variable is defined, cd to the <code>.builds/hdlmake</code> directory where all <code>Manifest.py</code> files are located, and call <code>hdlmake</code> to produce the <code>Makefile</code>. Finally, just run <code>make</code> to compile the design, produce an executable and run it. <br/></p>
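<p>For illustration, a minimal <code>Manifest.py</code> for such a flow could look as follows. This is a hypothetical sketch based on hdlmake’s documented simulation options, with example file and entity names, not the actual manifest from the repository: <br/></p>

<pre><code class="language-python"># Hypothetical Manifest.py sketch (file and entity names are examples)
action   = "simulation"          # ask hdlmake for a simulation Makefile
sim_tool = "ghdl"                # simulator backend
sim_top  = "alu_tb"              # top-level testbench entity
files    = ["../src/alu.vhd", "../tb/alu_tb.vhd"]
</code></pre>

<p>Running <code>hdlmake</code> in the directory containing such a file emits the <code>Makefile</code> driving the compile and run steps. <br/></p>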

<p>Check the resulting logs <a href="https://builds.sr.ht/~csantosb/job/1389146#task-test_hdlmake" rel="nofollow">here</a>, for example. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/ci-sourcehut-alu</guid>
      <pubDate>Fri, 13 Dec 2024 12:38:24 +0000</pubDate>
    </item>
    <item>
      <title>ci (sourcehut): open-logic</title>
      <link>https://infosec.press/csantosb/ci-sourcehut</link>
      <description>&lt;![CDATA[img br/&#xA;Remote #ci is the way to go in #modernhw digital design testing. In this #ciseries, let’s see how to implement it with detail using sourcehut and a real world example. !--more-- br/&#xA;Sourcehut is a lightweight #gitforge where I host my #git repositories. Not only it is based on a paradigm perfectly adapted to #modernhw, but also its builds service includes support for guix (x8664) images. This means that we will be able to execute all of our testing online inside guix profiles, shells or natively on top of the bare-bones image. br/&#xA;&#xA;Open logic&#xA;&#xA;Let’s see how in detail using the cookbook as a starting point, and taking as a complete example the fw-open-logic #openlogic firmware package which comes with the electronics guix channel. br/&#xA;Get it with: br/&#xA;&#xA;guix install fw-open-logic:out&#xA;&#xA;Open logic is a useful #vhdl library of commonly used components, implemented in a reusable and vendor/tool-independent way.  As any other #modernhw library, it includes tests sets for any of its components, using the vunit utility in this case. br/&#xA;To run the full tests suite use (user wide using the default $GUIXPROFILE), install its dependencies, defined in a manifest.scm file (ghdl-clang and python-vunit in this case). br/&#xA;&#xA;cd open-logic&#xA;guix install -m .builds/manifest.scm&#xA;cd sim&#xA;python3 run.py --ghdl -v&#xA;&#xA;or local to the project, using a profile br/&#xA;&#xA;cd open-logic&#xA;mkdir deps&#xA;export GUIXPROFILE=open-logic/deps&#xA;guix install -P $GUIXPROFILE -m .builds/manifest.scm&#xA;. $GUIXPROFILE/etc/profile&#xA;cd sim&#xA;python3 run.py --ghdl -v&#xA;&#xA;go remote&#xA;&#xA;img br/&#xA;Now, how do we proceed online using #sourcehut #ci builds facility ? 
Builds will pop up a new environment based on an up to date guix-system image when we push a commit to git.sr.ht, provided we include a .build.yml build manifest file, or by a .build folder with up to 4 build manifest files, at the root of the git project 1]. Be careful: consider that this image is [built daily using a crontab job, which is a good and a bad thing at the same time. From one side, you won’t be using the same environment for your tests, which breaks #reproducibility (see comments section below). On the other side, #guix is a rolling release, and new fancy features and new fixes are added every day. Keep this in mind. br/&#xA;Let’s create a .builds folder in a topic test branch, with the following contents: br/&#xA;&#xA;    manifest.scm, list of dependencies in our project br/&#xA;    guix.scm, default guix repository, redundant, included here for convenience br/&#xA;    channels.scm, list of guix channels remote repositories, in addition to the default guix repository, from where we pull packages br/&#xA;    We will be using here my own electronics channel (no substitutes), as well as the guix science channel (which provides substitutes). br/&#xA;    (note how here we load the local guix.scm file, instead of making use of the %default-channels global variable) br/&#xA;    &#xA;        (load &#34;guix.scm&#34;)&#xA;    ;;; %default-channels&#xA;        key.pub, auth key to access substitutes of packages in guix channels br/&#xA;&#xA;build manifests&#xA;&#xA;From now on, every new push to the test #git branch will trigger the execution of the tasks defined in the three build manifest files br/&#xA;&#xA;    profile1 br/&#xA;    profile2 br/&#xA;    shell1 br/&#xA;&#xA;The two profile build manifest files use a slightly different approach, and are given here for comparison purposes only. The shell build manifest uses an isolated shell container within the image itself to illustrate this feature. 
br/&#xA;Inside the manifests, I declare the image to use, guix, and the global environment variables sourced before each task is run: prj (project name), srv (list of servers with substitutes), manifest and channels (pointing to the corresponding files) and key (same). It is important to declare a trigger action, to receive an email with all relevant information in case of failure (log, id, commit, etc.). br/&#xA;&#xA;tasks&#xA;&#xA;What’s interesting here is the list of tasks. Some of them are common to all three manifests br/&#xA;&#xA;    env, useful only for debugging br/&#xA;    guix\updatechannels, replace the default project local guix.scm file by the output of br/&#xA;    &#xA;        guix describe --format=channels&#xA;        &#xA;    The goal here is avoid pulling latest guix upstream, useless and cpu and time consuming, and using the local version instead. Remember that the guix system image we are using here is updated daily. br/&#xA;    &#xA;        guix\auth, runs the authorize command to add the key.pub file to guix, so that we will be able to download package substitutes when necessary br/&#xA;        &#xA;                sudo guix archive --authorize &lt; &#34;$key&#34;&#xA;                &#xA;        Here, one may opt by doing a br/&#xA;        &#xA;                guix pull --channels=&#34;$channels&#34;&#xA;                &#xA;        as in profile2, to set the revision of the guix channels we are using (remember channels are nothing but git repositories). br/&#xA;        Note how in profile1 and shell1 we opt for a different approach. br/&#xA;        guix\updateprofile, where we create a deps folder to be used as a local $GUIXPROFILE (defined in .envrc). 
br/&#xA;        Then, one of br/&#xA;        &#xA;                # profile1&#xA;        guix time-machine --channels=&#34;$channels&#34; -- \&#xA;             package -p &#34;$GUIXPROFILE&#34; \&#xA;             --substitute-urls=&#34;$srv&#34; \&#xA;             -m &#34;$manifest&#34;&#xA;                &#xA;        or br/&#xA;        &#xA;                # profile2&#xA;        guix \&#xA;            package -p &#34;$GUIXPROFILE&#34; \&#xA;            --substitute-urls=&#34;$srv&#34; \&#xA;            -m &#34;$manifest&#34;&#xA;                &#xA;        will install packages in $manifest into the $GUIXPROFILE. I’m using here the time-machine mechanism to set the revision of the guix channels, depending if guix pull was run in the previous stage or not. br/&#xA;        vunit, sets env variables in .envrc and runs python3 run.py --ghdl -v inside sim directory br/&#xA;        Note that here, we are using ghdl-clang and python-vunit packages, provided respectively by guix-science and the electronics channel. br/&#xA;        guix\shelltest, used by shell1, make use of time-machine (no former guix pull, then), to create a shell container, where to install project dependencies. Then, if calls inmediately run.sh to run the unit tests br/&#xA;        &#xA;                guix time-machine --channels=&#34;$channels&#34; -- shell -C --substitute-urls=&#34;$srv&#34; -m &#34;$manifest&#34; -- ./.builds/run.sh&#xA;        &#xA;&#xA;comments&#xA;&#xA;You may check the logs of profile1, profile2 and shell1 manifests, including a section with logs per task, to better understand what’s going on here. Remember that #sourcehut gives ssh access to the builds by connecting to the runners in case of failures, which provides a practical way of debugging the manifest files. br/&#xA;You may see how, using the remove guix image, it is possible to deploy a series of tasks to test our #modernhw design as we develop it: we will get an email in case of failure to pass the tests. 
Here, I present three approaches: guix pulling to set the repositories revisions on use; time-machine, to achieve the same, and guix shell to create an isolated container. These three alternatives are not necessary here, of course, but are given as a simple and practical demo of what can be achieved with #guix, #sourcehut and #ci. br/&#xA;To conclude this long post, it is important to stress once again that the point on using #guix resides in its reproducibility capabilities. By keeping a couple of #plaintext files, namely the manifest.scm and channels.scm, one can obtain #determinism in the execution of the tests. Even if the guix image is upgraded and rebuilt daily (and so it changes), by fixing the revision of our channels (remember, guix pull or guix time-machine) we obtain always the same products out of our tests, as we run the same (project and tests) code, within exactly the same environment. br/&#xA;&#xA;---&#xA;&#xA;[1] It is also possible to automatically submit builds when a patch to a repo with build manifests is sent to a mailing list. This is achieved by appending the project name as a prefix to the subject of the message, for example [PATCH project-name]. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/sourcehut.png" alt="img"> <br/>
Remote <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> is the <a href="https://infosec.press/csantosb/tag:ciseries" rel="nofollow">way to go</a> in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> digital design testing. In this <a href="/csantosb/tag:ciseries" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ciseries</span></a>, let’s see how to implement it with detail using <a href="https://sourcehut.org/" rel="nofollow">sourcehut</a> and a real world example.  <br/>
<a href="https://infosec.press/csantosb/sourcehut-crash-course" rel="nofollow">Sourcehut</a> is a lightweight <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a> where I host my <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> repositories. Not only is it <a href="https://infosec.press/csantosb/git-forges#sourcehut" rel="nofollow">based on a paradigm</a> perfectly adapted to <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>, but its <a href="https://infosec.press/csantosb/sourcehut-crash-course#builds" rel="nofollow">builds</a> service also includes support for <a href="https://man.sr.ht/builds.sr.ht/compatibility.md#guix-system" rel="nofollow">guix</a> (x86_64) images. This means that we will be able to execute all of our testing online inside <a href="https://infosec.press/csantosb/guix-crash-course#profiles-and-generations" rel="nofollow">guix profiles</a>, <a href="https://infosec.press/csantosb/guix-crash-course#shell-containers" rel="nofollow">shells</a> or natively on top of the bare-bones image. <br/></p>

<h1 id="open-logic">Open logic</h1>

<p>Let’s see how in detail using the <a href="https://man.sr.ht/~whereiseveryone/builds.sr.ht-guix-cookbook/" rel="nofollow">cookbook</a> as a starting point, and taking as a complete example the <code>fw-open-logic</code> <a href="/csantosb/tag:openlogic" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">openlogic</span></a> firmware package which comes with the <a href="https://infosec.press/csantosb/guix-channels#electronics-channel" rel="nofollow">electronics guix channel</a>. <br/>
Get it with: <br/></p>

<pre><code class="language-sh">guix install fw-open-logic:out
</code></pre>

<p><a href="https://github.com/open-logic/open-logic" rel="nofollow">Open logic</a> is a useful <a href="/csantosb/tag:vhdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">vhdl</span></a> library of commonly used components, implemented in a reusable and vendor/tool-independent way. As any other <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> library, it includes test sets for any of its components, using the <a href="https://infosec.press/csantosb/on-testing#vunit" rel="nofollow">vunit</a> utility in this case. <br/>
To run the full test suite user-wide (using the default <code>$GUIX_PROFILE</code>), install its dependencies, defined in a <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/manifest.scm" rel="nofollow">manifest.scm</a> file (<code>ghdl-clang</code> and <code>python-vunit</code> in this case): <br/></p>

<pre><code class="language-sh">cd open-logic
guix install -m .builds/manifest.scm
cd sim
python3 run.py --ghdl -v
</code></pre>

<p>or local to the project, using <a href="https://infosec.press/csantosb/guix-crash-course#profiles-and-generations" rel="nofollow">a profile</a> <br/></p>

<pre><code class="language-sh">cd open-logic
mkdir _deps
export GUIX_PROFILE=open-logic/_deps
guix install -P $GUIX_PROFILE -m .builds/manifest.scm
. $GUIX_PROFILE/etc/profile
cd sim
python3 run.py --ghdl -v
</code></pre>

<h2 id="go-remote">go remote</h2>

<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/ci2.png" alt="img"> <br/>
Now, how do we proceed online using the <a href="/csantosb/tag:sourcehut" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">sourcehut</span></a> <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> <code>builds</code> facility ? <a href="https://infosec.press/csantosb/sourcehut-crash-course#builds" rel="nofollow">Builds</a> will spin up a new environment based on an up-to-date <code>guix-system</code> image when we push a commit to <code>git.sr.ht</code>, provided we include a <code>.build.yml</code> build manifest file, or a <code>.builds</code> folder with up to 4 build manifest files, at the root of the git project [1]. Be careful: this image is <a href="https://git.sr.ht/~sircmpwn/builds.sr.ht/tree/master/item/images/guix" rel="nofollow">built daily</a> using a <a href="https://git.sr.ht/~sircmpwn/builds.sr.ht/tree/master/item/contrib/crontab" rel="nofollow">crontab</a> job, which is both a good and a bad thing. On one side, you won’t be using the same environment for your tests, which breaks <a href="/csantosb/tag:reproducibility" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproducibility</span></a> (see <strong>comments</strong> section below). On the other, <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> is a rolling release, and fancy new features and fixes are added every day. Keep this in mind. <br/>
Let’s create a <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds" rel="nofollow">.builds</a> folder in a topic <code>test branch</code>, with the following contents: <br/></p>

<p>    <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/manifest.scm" rel="nofollow">manifest.scm</a>, list of dependencies in our project <br/>
    <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/guix.scm" rel="nofollow">guix.scm</a>, the default guix repository; redundant, but included here for convenience <br/>
    <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/channels.scm" rel="nofollow">channels.scm</a>, list of <a href="https://infosec.press/csantosb/guix-channels" rel="nofollow">guix channels</a> remote repositories, in addition to the default guix repository, from where we pull packages <br/>
    We will be using here my own <a href="https://infosec.press/csantosb/guix-channels#electronics-channel" rel="nofollow">electronics channel</a> (no substitutes), as well as the <a href="https://codeberg.org/guix-science/guix-science" rel="nofollow">guix science</a> channel (which provides substitutes). <br/>
    (note how here we load the local <code>guix.scm</code> file, instead of making use of the <code>%default-channels</code> global variable) <br/></p>

<p>    <code>scheme
    (load &#34;guix.scm&#34;)
    ;;; %default-channels
</code>
    <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/key.pub" rel="nofollow">key.pub</a>, <a href="https://man.sr.ht/~whereiseveryone/builds.sr.ht-guix-cookbook/" rel="nofollow">auth key</a> to access <a href="https://infosec.press/csantosb/guix-crash-course#packages" rel="nofollow">substitutes</a> of packages in guix channels <br/></p>

<h3 id="build-manifests">build manifests</h3>

<p>From now on, every new push to the <code>test</code> <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> branch will trigger the execution of the tasks defined in the three <a href="https://man.sr.ht/builds.sr.ht/#build-manifests" rel="nofollow">build manifest files</a> <br/></p>

<p>    <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/profile1.yml" rel="nofollow">profile1</a> <br/>
    <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/profile2.yml" rel="nofollow">profile2</a> <br/>
    <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/shell1.yml" rel="nofollow">shell1</a> <br/></p>

<p>The two profile build manifest files use a slightly different approach, and are given here for comparison purposes only. The shell build manifest uses an isolated shell container <em>within</em> the image itself to illustrate this feature. <br/>
Inside the manifests, I declare the image to use, <code>guix</code>, and the global environment variables sourced before each task is run: <code>prj</code> (project name), <code>srv</code> (list of servers with substitutes), <code>manifest</code> and <code>channels</code> (pointing to the corresponding files) and <code>key</code> (same). It is important to declare a trigger action, to receive an email with all relevant information in case of failure (log, id, commit, etc.). <br/></p>
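<p>Putting the above together, a build manifest has roughly the following shape (an illustrative sketch only, with placeholder values; refer to the linked profile1, profile2 and shell1 files for the actual contents): <br/></p>

<pre><code class="language-yaml"># sketch of a .builds/*.yml build manifest (all values are placeholders)
image: guix
environment:
  prj: open-logic
  srv: https://substitutes.example.org
  manifest: .builds/manifest.scm
  channels: .builds/channels.scm
  key: .builds/key.pub
triggers:
  - action: email
    condition: failure
    to: you@example.org
tasks:
  - env: |
      env   # dump the environment, for debugging
</code></pre>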

<h3 id="tasks">tasks</h3>

<p>What’s interesting here is the list of tasks. Some of them are common to all three manifests <br/></p>

<p>    <strong>env</strong>, useful only for debugging <br/>
    <strong>guix__update__channels</strong>, replaces the default project-local <code>guix.scm</code> file with the output of <br/></p>

<p>    <code>sh
    guix describe --format=channels
</code></p>

<p>    The goal here is to avoid pulling the latest guix upstream, which is useless as well as CPU- and time-consuming, and to use the local version instead. Remember that the guix system image we are using here is <a href="https://git.sr.ht/~sircmpwn/builds.sr.ht/tree/master/item/images/guix" rel="nofollow">updated daily</a>. <br/></p>
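<p>    For reference, the output of <code>guix describe --format=channels</code> that replaces <code>guix.scm</code> is a scheme channels list of roughly this shape (url and commit below are placeholders; authenticated channels also carry an introduction field): <br/></p>

<pre><code class="language-scheme">(list (channel
        (name 'guix)
        (url "https://git.savannah.gnu.org/git/guix.git")
        (commit "0123456789abcdef...")))
</code></pre>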

<p>        <strong>guix__auth</strong>, runs the authorize command to add the <code>key.pub</code> file to guix, so that we will be able to download package substitutes when necessary <br/></p>

<p>        <code>sh
        sudo guix archive --authorize &lt; &#34;$key&#34;
</code></p>

<p>        Here, one may opt instead for doing a <br/></p>

<p>        <code>sh
        guix pull --channels=&#34;$channels&#34;
</code></p>

<p>        as in <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/profile2.yml" rel="nofollow">profile2</a>, to set the revision of the guix channels we are using (remember channels are nothing but git repositories). <br/>
        Note how in <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/profile1.yml" rel="nofollow">profile1</a> and <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/shell1.yml" rel="nofollow">shell1</a> we opt for a different approach. <br/>
        <strong>guix__update__profile</strong>, where we create a <code>_deps</code> folder to be used as a local <code>$GUIX_PROFILE</code> (defined in <code>.envrc</code>). <br/>
        Then, one of <br/></p>

<p>        <code>sh
        # profile1
        guix time-machine --channels=&#34;$channels&#34; -- \
             package -p &#34;$GUIX_PROFILE&#34; \
             --substitute-urls=&#34;$srv&#34; \
             -m &#34;$manifest&#34;
</code></p>

<p>        or <br/></p>

<p>        <code>sh
        # profile2
        guix \
            package -p &#34;$GUIX_PROFILE&#34; \
            --substitute-urls=&#34;$srv&#34; \
            -m &#34;$manifest&#34;
</code></p>

<p>        will install packages in <code>$manifest</code> into the <code>$GUIX_PROFILE</code>. I’m using here the <a href="https://infosec.press/csantosb/guix-crash-course#time-machine" rel="nofollow">time-machine</a> mechanism to set the revision of the guix channels, depending if <code>guix pull</code> was run in the previous stage or not. <br/>
        <strong>vunit</strong>, sets env variables in <code>.envrc</code> and runs <code>python3 run.py --ghdl -v</code> inside <code>sim</code> directory <br/>
        Note that here, we are using <code>ghdl-clang</code> and <code>python-vunit</code> packages, provided respectively by <code>guix-science</code> and the <code>electronics</code> channel. <br/>
        <strong>guix__shell__test</strong>, used by <a href="https://git.sr.ht/~csantosb/ip.open-logic/tree/test/item/.builds/shell1.yml" rel="nofollow">shell1</a>, makes use of <code>time-machine</code> (so no former <code>guix pull</code>) to create a <a href="https://infosec.press/csantosb/guix-crash-course#time-machine%23shell-containers" rel="nofollow">shell container</a> in which to install the project dependencies. Then, it immediately calls <code>run.sh</code> to run the unit tests <br/></p>

<p>        <code>sh
        guix time-machine --channels=&#34;$channels&#34; -- shell -C --substitute-urls=&#34;$srv&#34; -m &#34;$manifest&#34; -- ./.builds/run.sh
</code></p>

<h2 id="comments">comments</h2>

<p>You may check the logs of <a href="https://builds.sr.ht/~csantosb/job/1384658" rel="nofollow">profile1</a>, <a href="https://builds.sr.ht/~csantosb/job/1384659" rel="nofollow">profile2</a> and <a href="https://builds.sr.ht/~csantosb/job/1384660" rel="nofollow">shell1</a> manifests, including a section with logs per task, to better understand what’s going on here. Remember that <a href="/csantosb/tag:sourcehut" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">sourcehut</span></a> gives <a href="https://man.sr.ht/builds.sr.ht/build-ssh.md" rel="nofollow">ssh access</a> to the builds by connecting to the runners in case of failures, which provides a practical way of debugging the manifest files. <br/>
You may see how, using the remote guix image, it is possible to deploy a series of tasks to test our <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> design as we develop it: we will get an email in case of failure to pass the tests. Here, I present three approaches: <code>guix pull</code> to set the revision of the repositories in use; <code>time-machine</code>, to achieve the same; and <code>guix shell</code> to create an isolated container. These three alternatives are not all necessary here, of course, but are given as a simple and practical demo of what can be achieved with <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a>, <a href="/csantosb/tag:sourcehut" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">sourcehut</span></a> and <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a>. <br/>
To conclude this long post, it is important to stress once again that the point of using <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> resides in its <a href="https://infosec.press/csantosb/use-guix#reproducibility" rel="nofollow">reproducibility</a> capabilities. By keeping a couple of <a href="/csantosb/tag:plaintext" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">plaintext</span></a> files, namely the <a href="https://infosec.press/csantosb/guix-crash-course#manifest-channels" rel="nofollow">manifest.scm and channels.scm</a>, one can obtain <a href="/csantosb/tag:determinism" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">determinism</span></a> in the execution of the tests. Even if the guix image is upgraded and rebuilt daily (and so it changes), by fixing the revision of our channels (remember, <code>guix pull</code> or <code>guix time-machine</code>) we always obtain the same products out of our tests, as we run the same (project and tests) code within exactly the same environment. <br/></p>

<hr>

<p>[1] It is also possible to automatically submit builds when a patch to a repo with build manifests is sent to a mailing list. This is achieved by appending the project name as a prefix to the subject of the message, for example [PATCH project-name]. <br/></p>
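<p>The subject prefix from [1] can be produced directly by git when formatting patches (git send-email accepts the same option); a minimal sketch in a throwaway repository, with <code>project-name</code> as a placeholder: <br/></p>

<pre><code class="language-sh"># throwaway repository with a single empty commit
cd "$(mktemp -d)"
git init -q .
git -c user.name=me -c user.email=me@example.org \
    commit -q --allow-empty -m "initial"
# prefix the subject with the project name, as expected by the mailing list
git format-patch -1 --subject-prefix="PATCH project-name"
grep Subject: 0001-initial.patch
# Subject: [PATCH project-name] initial
</code></pre>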
]]></content:encoded>
      <guid>https://infosec.press/csantosb/ci-sourcehut</guid>
      <pubDate>Fri, 13 Dec 2024 10:18:11 +0000</pubDate>
    </item>
    <item>
      <title>ci (gitlab/hub)</title>
      <link>https://infosec.press/csantosb/ci-gitlab-hub</link>
      <description>&lt;![CDATA[img br/&#xA;Remote #ci is the way to go in #modernhw digital design testing. In this #ciseries, let’s see it in practice with some detail using two of the most popular forges out there. !--more-- br/&#xA;&#xA;Gitlab&#xA;&#xA;The gitlab #gitforge includes tones of features. Among these, a facility called the container registry, which stores per project container images. Guix pack allows the creation of custom #reproductible environments as images. In particular, it is possible to create a docker image out of our manifest and channels files with br/&#xA;&#xA;guix time-machine -C channels.scm -- pack --compression=xz --save-provenance -f docker -m manifest.scm&#xA;&#xA;Check the documentation for options. br/&#xA;Remember that there are obviously alternative methods to produce docker images. The point on using guix resides on its reproducibility capabilities: you’ll be able to create a new, identical docker image, out of the manifest and channels files at any point in time. Even more: you’ll have the capacity to retrieve your manifest file out of the binary image in case your manifest file gets lost. br/&#xA;Then, this image must be loaded into the local docker store with br/&#xA;&#xA;docker load &lt; IMAGE&#xA;&#xA;and renamed to something meaningful br/&#xA;&#xA;docker tag IMAGE:latest gitlab-registry.whatever.fr/domain/group/NAME:TAG&#xA;&#xA;go remote&#xA;&#xA;img br/&#xA;Finally, pushed to the remote container registry of your project with br/&#xA;&#xA;docker push gitlab-registry.whatever.fr/domain/group/NAME:TAG&#xA;&#xA;At this point, you have an environment where you’ll run your tests using gitlab&#39;s ci features. You’ll set up your gitlab’s runners and manifest files to use this container to execute your jobs. br/&#xA;As an alternative, you could use a ssh executor running on your own fast and powerful hardware resources (dedicated machine, shared cluster, etc.). 
In this case, you’d rather produce an apptainer  container image with: br/&#xA;&#xA;guix time-machine -C channels.scm -- pack -f squashfs ...&#xA;&#xA;scp this container file to your computing resources and call it from the #gitlab runner. br/&#xA;&#xA;Github&#xA;&#xA;The github is probably the most popular #gitforge out there. It follows a similar to #gitlab in its conception (pull requests and merge requests, you catch the idea ?). It also includes a container registry, and the set of features if offers may be exchanged with ease with any other #gitforge following the same paradigm. No need to go into more details. br/&#xA;There is a couple of interesting tips about using #github, though. It happens more usually than not that users encounter frequently problems of #reproducibility when using container images hosted on ghcr.io, the hosting service for user images. These images are usually employed for running #ci testing pipelines, and they usually break as upstream changes happen: updates, image definition changes, image packages upgrades, etc. If you read my dependencies hell post, this should ring a bell. br/&#xA;What can be done about in what concerns #modernhw ? Well, we have #guix. Let’s try a differente approach: building an image locally, and pushing it to #github registry. Let’s see how. br/&#xA;&#xA;in practice&#xA;&#xA;An example repository shows tha way to proceed. Its contents allow to create a docker container image to be hosted remotely. It includes all that’s necessary to perform remote #ci testing of a #modernhw #vhdl design. br/&#xA;&#xA;docker pull ghcr.io/csantosb/hdl&#xA;docker images # check $ID&#xA;docker run -ti $ID bash&#xA;&#xA;It includes a couple of #plaintext files to produce a #deterministic container. First, the channels.scm file with the list of guix chanels to use to pull packages from. Then, a manifest.scm, with the list of packages to be install within the container. 
br/&#xA;The image container may be build with br/&#xA;&#xA;image=$(guix time-machine --channels=channels.scm -- \&#xA;             pack -f docker \&#xA;             -S /bin=bin \&#xA;             --save-provenance \&#xA;             -m manifest.scm)&#xA;&#xA;At this point, it is to be load to the docker store with br/&#xA;&#xA;docker load &lt; $image&#xA;docker images&#xA;&#xA;Now it is time to tag the image br/&#xA;&#xA;docker tag IMID ghcr.io/USER/REPO:RELEASE&#xA;&#xA;and login to ghcr.io br/&#xA;&#xA;docker login -u USER -p PASSWORD ghcr.io&#xA;&#xA;Finally, the image is to be push remotely br/&#xA;&#xA;docker push ghcr.io/USER/HDL:RELEASE&#xA;&#xA;test&#xA;&#xA;You’ll may test this image using the neorv32 project, for example, with: br/&#xA;&#xA;docker pull ghcr.io/csantosb/hdl&#xA;docker run -ti ID bash&#xA;git clone --depth=1 https://github.com/stnolting/neorv32&#xA;cd neorv32&#xA;git clone --depth=1 https://github.com/stnolting/neorv32-vunit test&#xA;cd test&#xA;rm -rf neorv32&#xA;ln -sf ../../neorv32 neorv32&#xA;python3 sim/run.py --ci-mode -v&#xA;`]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/gitlab.png" alt="img"> <br/>
Remote <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> is the <a href="https://infosec.press/csantosb/tag:ciseries" rel="nofollow">way to go</a> in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> digital design testing. In this <a href="/csantosb/tag:ciseries" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ciseries</span></a>, let’s see it in practice with some detail using two of the most popular forges out there.  <br/></p>

<h1 id="gitlab">Gitlab</h1>

<p>The <a href="https://gitlab.com/" rel="nofollow">gitlab</a> <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a> includes tons of features. Among these, a facility called the <a href="https://docs.gitlab.com/ee/user/packages/container_registry/" rel="nofollow">container registry</a>, which stores per project container images. <a href="https://infosec.press/csantosb/guix-crash-course#packs" rel="nofollow">Guix pack</a> allows the creation of custom <a href="/csantosb/tag:reproductible" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproductible</span></a> environments as images. In particular, it is possible to create a docker image out of our <a href="https://infosec.press/csantosb/guix-crash-course#manifest-channels" rel="nofollow">manifest and channels files</a> with <br/></p>

<pre><code class="language-sh">guix time-machine -C channels.scm -- pack --compression=xz --save-provenance -f docker -m manifest.scm
</code></pre>

<p>Check the <a href="https://guix.gnu.org/manual/en/html_node/Invoking-guix-pack.html" rel="nofollow">documentation</a> for options. <br/>
Remember that there are obviously alternative methods to produce docker images. The point of using guix resides in its <a href="https://infosec.press/csantosb/use-guix#reproducibility" rel="nofollow">reproducibility</a> capabilities: you’ll be able to create a new, identical docker image out of the <a href="https://infosec.press/csantosb/guix-crash-course#manifest-channels" rel="nofollow">manifest and channels files</a> at any point in time. Even more: you’ll have the capacity to retrieve your manifest file out of the binary image in case your manifest file gets lost. <br/>
Then, this image must be loaded into the local docker store with <br/></p>

<pre><code class="language-shell">docker load &lt; IMAGE
</code></pre>

<p>and renamed to something meaningful <br/></p>

<pre><code class="language-shell">docker tag IMAGE:latest gitlab-registry.whatever.fr/domain/group/NAME:TAG
</code></pre>

<h2 id="go-remote">go remote</h2>

<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/ci2.png" alt="img"> <br/>
Finally, push it to the remote container registry of your project with <br/></p>

<pre><code class="language-shell">docker push gitlab-registry.whatever.fr/domain/group/NAME:TAG
</code></pre>

<p>At this point, you have an environment where you’ll run your tests using <a href="https://docs.gitlab.com/ee/ci/" rel="nofollow">gitlab&#39;s ci</a> features. You’ll set up your gitlab’s <a href="https://docs.gitlab.com/runner/" rel="nofollow">runners</a> and <a href="https://docs.gitlab.com/ee/ci/#step-1-create-a-gitlab-ciyml-file" rel="nofollow">manifest files</a> to use this container to execute your jobs. <br/>
As an alternative, you could use a <a href="https://docs.gitlab.com/runner/executors/ssh.html" rel="nofollow">ssh executor</a> running on your own fast and powerful hardware resources (dedicated machine, shared cluster, etc.). In this case, you’d rather produce an apptainer container image with: <br/></p>

<pre><code class="language-sh">guix time-machine -C channels.scm -- pack -f squashfs ...
</code></pre>

<p><code>scp</code> this container file to your computing resources and call it from the <a href="/csantosb/tag:gitlab" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitlab</span></a> runner. <br/></p>

<h1 id="github">Github</h1>

<p><a href="https://github.com/" rel="nofollow">Github</a> is probably the most popular <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a> out there. It is similar to <a href="/csantosb/tag:gitlab" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitlab</span></a> in its conception (pull requests and merge requests, you catch the idea ?). It also includes a <a href="https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry" rel="nofollow">container registry</a>, and the set of features it offers may be exchanged with ease with any other <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a> following the same paradigm. No need to go into more details. <br/>
There are a couple of interesting tips about using <a href="/csantosb/tag:github" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">github</span></a>, though. More often than not, users encounter <a href="/csantosb/tag:reproducibility" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproducibility</span></a> problems when using container images hosted on <code>ghcr.io</code>, the hosting service for user images. These images are usually employed for running <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> testing pipelines, and they <a href="https://github.com/stnolting/neorv32/issues/1116#issuecomment-2532796271" rel="nofollow">usually break</a> as upstream changes happen: updates, image definition changes, image packages upgrades, etc. If you read my <a href="https://infosec.press/csantosb/on-dependencies" rel="nofollow">dependencies hell</a> post, this should ring a bell. <br/>
What can be done about it in what concerns <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> ? Well, we have <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a>. Let’s try a different approach: building an image locally, and pushing it to the <a href="/csantosb/tag:github" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">github</span></a> registry. Let’s see how. <br/></p>

<h2 id="in-practice">in practice</h2>

<p>An <a href="https://github.com/csantosb/hdl-image.git" rel="nofollow">example repository</a> shows the way to proceed. Its contents allow creating a docker container image to be hosted remotely. It includes <a href="https://raw.githubusercontent.com/csantosb/hdl-image/refs/heads/master/manifest.scm" rel="nofollow">all that’s necessary</a> to perform remote <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> testing of a <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> <a href="/csantosb/tag:vhdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">vhdl</span></a> design. <br/></p>

<pre><code class="language-sh">docker pull ghcr.io/csantosb/hdl
docker images # check $ID
docker run -ti $ID bash
</code></pre>

<p>It includes a couple of <a href="/csantosb/tag:plaintext" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">plaintext</span></a> <a href="https://infosec.press/csantosb/guix-crash-course#manifest-channels" rel="nofollow">files</a> to produce a <a href="/csantosb/tag:deterministic" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">deterministic</span></a> container. First, the <a href="https://github.com/csantosb/hdl-image/blob/master/channels.scm" rel="nofollow">channels.scm</a> file with the list of guix channels to pull packages from. Then, a <a href="https://github.com/csantosb/hdl-image/blob/master/manifest.scm" rel="nofollow">manifest.scm</a>, with the list of packages to be installed within the container. <br/>
The container image may be <a href="https://git.sr.ht/~csantosb/hdl-image/tree/b1ab9a56802e56e3326c8985bd1b61c93173c5ab/readme.org#L3" rel="nofollow">built</a> with <br/></p>

<pre><code class="language-sh">image=$(guix time-machine --channels=channels.scm -- \
             pack -f docker \
             -S /bin=bin \
             --save-provenance \
             -m manifest.scm)
</code></pre>
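
<p>For reference, the two files consumed by this command might look as follows; this is a minimal sketch, where the pinned commit and the package list are purely illustrative placeholders, not the actual contents of the example repository: <br/></p>

<pre><code class="language-scheme">;; channels.scm -- pin guix to an exact revision
;; (the commit below is a placeholder; use a real one)
(list (channel
       (name 'guix)
       (url "https://git.savannah.gnu.org/git/guix.git")
       (commit "0123456789abcdef0123456789abcdef01234567")))

;; manifest.scm -- packages to install inside the image (illustrative list)
(specifications->manifest
 (list "bash" "coreutils" "ghdl" "python"))
</code></pre>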

<p>At this point, the image is to be loaded into the docker store with <br/></p>

<pre><code class="language-sh">docker load &lt; $image
# docker images
</code></pre>

<p>Now it is time to tag the image <br/></p>

<pre><code class="language-sh">docker tag IMID ghcr.io/USER/REPO:RELEASE
</code></pre>

<p>and login to <code>ghcr.io</code> <br/></p>

<pre><code class="language-sh">docker login -u USER -p PASSWORD ghcr.io
</code></pre>

<p>Finally, the image is to be pushed to the remote registry <br/></p>

<pre><code class="language-sh">docker push ghcr.io/USER/REPO:RELEASE
</code></pre>
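
<p>The tagging and pushing steps can be grouped in a small script; this is a sketch, where <code>ghcr_ref</code> is a hypothetical helper, <code>USER</code>, <code>REPO</code> and <code>RELEASE</code> are placeholders to adapt, and the docker calls are left commented out: <br/></p>

<pre><code class="language-sh"># Hypothetical helper: compose the ghcr.io image reference from its parts.
ghcr_ref () {
  printf 'ghcr.io/%s/%s:%s\n' "$1" "$2" "$3"
}

ref=$(ghcr_ref USER REPO RELEASE)
# docker tag "$IMID" "$ref"
# docker push "$ref"
echo "$ref"
</code></pre>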

<h2 id="test">test</h2>

<p>You may test this image using the <a href="https://github.com/stnolting/neorv32" rel="nofollow">neorv32</a> project, for example, with: <br/></p>

<pre><code class="language-sh">docker pull ghcr.io/csantosb/hdl
docker images # check $ID
docker run -ti $ID bash
git clone --depth=1 https://github.com/stnolting/neorv32
cd neorv32
git clone --depth=1 https://github.com/stnolting/neorv32-vunit test
cd test
rm -rf neorv32
ln -sf ../../neorv32 neorv32
python3 sim/run.py --ci-mode -v
</code></pre>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/ci-gitlab-hub</guid>
      <pubDate>Wed, 11 Dec 2024 12:50:04 +0000</pubDate>
    </item>
    <item>
      <title>git forges</title>
      <link>https://infosec.press/csantosb/git-forges</link>
      <description>&lt;![CDATA[img br/&#xA;Using #git is not the whole picture on #modernhw version control landscape. Git is great when one decides to locally follow changes, take diffs, create branches and so on. When it comes to collaboration with other people or to create a community around a common project, the need for extra tooling arises, and it becomes evident that git alone is not enough. A #gitforge fills this gap. !--more-- br/&#xA;Git bare repositories are a means of sharing the local git history remotely. Bares doesn’t show the worktree, as they are used solely as a common exchange place. This might be a remote server accessible through ssh, for example. Several different users may collaborate this way, provided they agree on a common workflow. Bares are more than enough for some needs. A front end on top of it may help to get an overview of what is going on and to take a look at branches, users and the like. All it takes to make this workflow useful is a little management, as git was designed with a fully distributed architecture in mind. Check the docs for more details. br/&#xA;Now, this approach is a bit too bare bones for most people. On top of bare git repositories, some decided to add extra functionality to ease using git remotely, calling for contributors attracted by buttons, colors, menus and most generally, being used to web frontends. Web forges include all usual suspects (project creation and configutation, markup rendering, user account and authorizations, project overview, etc.), as well as more advanced features (continuous integration, #ci, for testing and deployment with git hooks, wikis, code linters, built in actions, issue tracking, etc.). They abstract the use of git showing diffs, logs, issues threads, etc. As any other web gui tool, they come with its own set of inconvenients in what concern user freedom. br/&#xA;Popular examples are all around. 
#Gitlab may be deployed as a custom (not federated) instance, and is commonly found in research and public institutions; codeberg, based on forgejo, is a great example of how to deploy a lightweight #freesoftware instance of a collaborative forge (and the promise to federate on the fediverse). Many others exist, which more or less features, bells and whistles. You always have the choice. br/&#xA;&#xA;sourcehut&#xA;&#xA;#Sourcehut, as a collaborative platform, deserves special attention. It departs from mainstream forges, following a different paradigm based on the most robust, distributed and flexible technology at our hands since decades, plain text #email. Git, since its origins, includes a close integration with email, as they both share a distributed philosophy, avoiding central point of failure silos (surprising how mosft git forges tend to concentrate in silos). Sourcehut core architecture is based on mail exchange, patches and #maillists, which turns out to be a much more flexible approach than that of what most forges propose. Their concept of project goes well beyond that of usual workflows, integrating nicely git with email, wikis, bug trackers and build features. They’re still in an alpha stage, so expect the best still to come. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/forge.png" alt="img"> <br/>
Using <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> is not the whole picture of the <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> version control landscape. Git is great when one decides to locally follow changes, take diffs, create branches and so on. When it comes to collaborating with other people or to creating a community around a common project, the need for extra tooling arises, and it becomes evident that git alone is not enough. A <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a> fills this gap. <br/>
<a href="https://git-scm.com/book/en/v2/Git-on-the-Server-Getting-Git-on-a-Server" rel="nofollow">Git bare repositories</a> are a means of sharing the local git history remotely. Bare repositories don’t show the worktree, as they are used solely as a common exchange place. This might be a remote server accessible through ssh, for example. Several different users may collaborate this way, provided they agree on a common workflow. Bare repositories are more than enough for some needs. A front end on top of them may help to get an overview of what is going on and to take a look at branches, users and the like. All it takes to make this workflow useful is a little management, as git was designed with a fully distributed architecture in mind. Check the docs for more details. <br/>
Now, this approach is a bit too bare bones for most people. On top of bare git repositories, some decided to add extra functionality to ease using git remotely, calling for contributors attracted by buttons, colors, menus and, most generally, used to web frontends. Web forges include all the usual suspects (project creation and configuration, markup rendering, user accounts and authorizations, project overview, etc.), as well as more advanced features (continuous integration, <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a>, for testing and deployment with git hooks, wikis, code linters, built-in actions, issue tracking, etc.). They abstract the use of git, showing diffs, logs, issue threads, etc. As any other web gui tool, they come with their own set of inconveniences where user freedom is concerned. <br/>
Popular examples are all around. <a href="/csantosb/tag:Gitlab" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Gitlab</span></a> may be deployed as a custom (not federated) instance, and is <a href="https://about.gitlab.com/" rel="nofollow">commonly found</a> in research and public institutions; <a href="https://codeberg.org/" rel="nofollow">codeberg</a>, based on <a href="https://forgejo.org/" rel="nofollow">forgejo</a>, is a great example of how to deploy a lightweight <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a> instance of a collaborative forge (and the promise to federate on the <a href="https://www.fediverse.to/" rel="nofollow">fediverse</a>). Many others exist, with more or less features, bells and whistles. You always <a href="https://drewdevault.com/2022/03/29/free-software-free-infrastructure.html" rel="nofollow">have the choice</a>. <br/></p>
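
<p>The bare-repository workflow described earlier can be tried locally in a few commands; this is a sketch, where a scratch directory stands in for a server path that would normally be reached over ssh: <br/></p>

<pre><code class="language-sh"># Scratch directory standing in for a remote server path.
dir=$(mktemp -d)

# The bare repository has no worktree: it only serves as the exchange point.
git init --bare "$dir/project.git"

# A collaborator clones it, commits locally, then pushes back.
git clone "$dir/project.git" "$dir/work"
cd "$dir/work"
echo "hello" > readme
git add readme
git -c user.name=me -c user.email=me@example.org commit -m "Add readme"
git push origin HEAD
</code></pre>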

<h1 id="sourcehut">sourcehut</h1>

<p><a href="/csantosb/tag:Sourcehut" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Sourcehut</span></a>, as a collaborative platform, deserves special attention. It departs from mainstream forges, following a <a href="https://begriffs.com/posts/2018-06-05-mailing-list-vs-github.html" rel="nofollow">different paradigm</a> based on the most robust, distributed and flexible technology at our hands for decades, <a href="https://useplaintext.email/" rel="nofollow">plain text</a> <a href="/csantosb/tag:email" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">email</span></a>. Git, since its origins, includes a close integration with email, as they both share a distributed philosophy, avoiding central points of failure (surprising how most git forges tend to concentrate in silos). <a href="https://drewdevault.com/2018/07/02/Email-driven-git.html" rel="nofollow">Sourcehut</a> core architecture is based on mail exchange, patches and <a href="/csantosb/tag:maillists" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">maillists</span></a>, which turns out to be a much more flexible approach than what most forges propose. Their concept of project goes well beyond that of usual workflows, nicely integrating git with email, wikis, bug trackers and build features. They’re still in an alpha stage, so expect the best still to come. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/git-forges</guid>
      <pubDate>Sun, 08 Dec 2024 22:36:11 +0000</pubDate>
    </item>
    <item>
      <title>ci (intro)</title>
      <link>https://infosec.press/csantosb/ci-intro</link>
      <description>&lt;![CDATA[img br/&#xA;How to seek, detect, be notified, analyze logs, understand and react to the different possible kind of issues one may encounter in a digital design is a vast topic of research, well beyond the scope of this modest post. But there are a couple of things we may state about here, though: automatizing issue detection is the way to go. Continuous integration (#ci) testing is a practice to adopt in #modernhw as a way to ensure that our design complies with its constraints. Let’s see this in more detail. !--more-- br/&#xA;&#xA;git&#xA;&#xA;We said #git, then, as mandatory when tracking changes (in documentation, project development, taking notes, etc.). Meaningful changes imply new commits (and good commit messages, for what it takes), but this comes along with a risk of introducing issues. Some kind of mechanism is necessary to automatize the execution of a checkout list to be run per new commit. The list is project aware, for sure, but may also be different following the git branch, and even the kind of commit (merges are to be considered differently to regular commits in topic branches, for example). We need to consider what an issue exactly is, and then you’ll need to adopt a different perspective on kinds of checkout lists. br/&#xA;&#xA;verification&#xA;&#xA;First (ideally), one starts with clear specifications about the goals of current development effort (in practice this never happens in research, and if you ever have it, they’ll evolve with time). These specifications (you’ll figure out where to find them somehow) will define the tests to run. For example, if you need to implement in firmware a deep neural network, you’ll probably have access to a test data set to verify the outcomes are correct. You may tune, improve or even completely change the architecture of your network, at the very end, you’ll have to verify your design with help of the test data set. 
Additionally, you may define more sophisticated tests: consumption, area, resources, etc. These all fall into the category of verification testing. br/&#xA;&#xA;unit tests&#xA;&#xA;Secondly, you’ll be running unit tests during your whole design cycle (and they’ll evolve along with it), and target tests (the one we mentioned just before). Does this addition perform correctly ? What if we stress a module with random inputs ? Are we going through all code in a given design unit ? Do we cover all values of some input/output signal in this important module ? These are all unit testing checkouts, and they’ll help us to detect issues in an early stage of design. br/&#xA;&#xA;codesign&#xA;&#xA;Codesign falls somewhere in between the two previous: as a testing methodology, it includes concepts of verification and unit testing (and can be combined with them). It is way more ambitious and complex, but also more powerful. No matter your testing strategy, the point here is that you’ll be running these tests (fully or partially) automatically at the several different stages of your development cycle. If they fail, you’ll have to be warned. br/&#xA;&#xA;guix&#xA;&#xA;img br/&#xA;Guix, as a package manager, provides all necessary software to deploy our tests (and can be extended with additional tooling). It also includes all that&#39;s necessary to create a running environment where we will execute our tests. Most importantly, #guix does so in a #deterministic and #reproductible way: we will be able to reproduce our tests in the future under exactly the same conditions. Shell containers, profiles and the time machine mechanism allow the degree of #reproducibility we need here. All it takes is a couple of text files. br/&#xA;&#xA;---&#xA;&#xA;Most usually, we will focus on two strategies to seek for issues: local, and remote. Local strategies are greatly based on git hooks, and will be topic of another post. 
Let’s see now in practice what can be done with help of remote tools, based on #ci, understood as a methodology consisting on automatically executing a set of tests procedures on a digital design. br/&#xA;ciseries br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/ci.png" alt="img"> <br/>
How to seek, detect, be notified, analyze logs, understand and react to the <a href="https://infosec.press/csantosb/on-testing" rel="nofollow">different possible kinds of issues</a> one may encounter in a digital design is a vast topic of research, well beyond the scope of this modest post. There are a couple of things we may state here, though: automating issue detection is the way to go. <a href="https://en.wikipedia.org/wiki/Continuous_integration" rel="nofollow">Continuous integration</a> (<a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a>) testing is a practice to adopt in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> as a way to ensure that our design complies with its constraints. Let’s see this in more detail. <br/></p>

<h1 id="git">git</h1>

<p>We said <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a>, then, as mandatory when <a href="https://infosec.press/csantosb/on-dependencies" rel="nofollow">tracking changes</a> (in documentation, project development, taking notes, etc.). Meaningful changes imply new commits (and good <a href="https://www.freecodecamp.org/news/how-to-write-better-git-commit-messages/" rel="nofollow">commit messages</a>, for what it takes), but this comes along with a risk of introducing issues. Some kind of mechanism is necessary to automate the execution of a checkout list per new commit. The list is project aware, for sure, but may also differ depending on the git branch, and even on the kind of commit (merges are to be considered differently from regular commits in topic branches, for example). We need to consider what exactly an issue is, and then adopt a different perspective on the kinds of checkout lists. <br/></p>
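
<p>As a minimal, local flavour of such a per-commit checkout list, a git pre-commit hook can run the checks automatically; this is a sketch in a scratch repository, where the single check (whitespace errors in the staged changes) stands in for the project’s own lint and test commands: <br/></p>

<pre><code class="language-sh"># Sketch: a per-commit checkout list as a pre-commit hook, in a scratch repository.
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.name=me -c user.email=me@example.org commit -q --allow-empty -m "Initial commit"

# The hook aborts any commit whose staged changes contain whitespace errors;
# a real hook would run the project's own checks instead.
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
exec git diff --cached --check
EOF
chmod +x .git/hooks/pre-commit

# A clean change passes the hook and gets committed as usual.
printf 'clean line\n' > ok.txt
git add ok.txt
git -c user.name=me -c user.email=me@example.org commit -q -m "Add ok.txt"
</code></pre>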

<h1 id="verification">verification</h1>

<p>First (ideally), one starts with clear specifications about the goals of the current development effort (in practice this never happens in research, and if you ever have them, they’ll evolve with time). These specifications (you’ll figure out where to find them somehow) will define the tests to run. For example, if you need to implement a deep neural network in firmware, you’ll probably have access to a test data set to verify the outcomes are correct. You may tune, improve or even completely change the architecture of your network; at the very end, you’ll <a href="https://infosec.press/csantosb/on-testing#osvvm" rel="nofollow">have to verify your design</a> with the help of the test data set. Additionally, you may define more sophisticated tests: consumption, area, resources, etc. These all fall into the category of <strong>verification testing</strong>. <br/></p>

<h1 id="unit-tests">unit tests</h1>

<p>Secondly, you’ll be running <a href="https://infosec.press/csantosb/on-testing#vunit" rel="nofollow">unit tests</a> during your whole design cycle (and they’ll evolve along with it), and target tests (the ones we mentioned just before). Does this addition perform correctly ? What if we stress a module with random inputs ? Are we going through all code in a given design unit ? Do we cover all values of some input/output signal in this important module ? These are all <strong>unit testing</strong> checkouts, and they’ll help us to detect issues at an early stage of design. <br/></p>

<h1 id="codesign">codesign</h1>

<p><a href="https://infosec.press/csantosb/on-testing#cocotb" rel="nofollow">Codesign</a> falls somewhere in between the two previous ones: as a testing methodology, it includes concepts of verification and unit testing (and can be combined with them). It is way more ambitious and complex, but also more powerful. No matter your testing strategy, the point here is that you’ll be running these tests (fully or partially) automatically at several different stages of your development cycle. If they fail, you’ll want to be warned. <br/></p>

<h1 id="guix">guix</h1>

<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/guix.png" alt="img"> <br/>
<a href="https://infosec.press/csantosb/use-guix" rel="nofollow">Guix</a>, as a package manager, provides all necessary software to deploy our tests (and can be <a href="https://infosec.press/csantosb/guix-channels" rel="nofollow">extended</a> with additional tooling). It also includes <a href="https://infosec.press/csantosb/guix-crash-course" rel="nofollow">all that&#39;s necessary</a> to create a running environment where we will execute our tests. Most importantly, <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> does so in a <a href="/csantosb/tag:deterministic" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">deterministic</span></a> and <a href="/csantosb/tag:reproductible" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproductible</span></a> way: we will be able to reproduce our tests in the future under exactly the same conditions. <a href="https://infosec.press/csantosb/guix-crash-course#shell-containers" rel="nofollow">Shell containers</a>, <a href="https://infosec.press/csantosb/guix-crash-course#profiles-and-generations" rel="nofollow">profiles</a> and the <a href="https://infosec.press/csantosb/guix-crash-course#time-machine" rel="nofollow">time machine mechanism</a> allow the degree of <a href="/csantosb/tag:reproducibility" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproducibility</span></a> we need here. All it takes is <a href="https://infosec.press/csantosb/guix-crash-course#manifest-channels" rel="nofollow">a couple of text files</a>. <br/></p>
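
<p>In practice, those couple of text files boil the whole testing environment down to a single command; this is a sketch, assuming <code>channels.scm</code> and <code>manifest.scm</code> sit at the project root, with <code>make check</code> standing in for whatever test entry point the project actually uses: <br/></p>

<pre><code class="language-sh"># Sketch: re-create the exact same environment at any point in the future,
# then run the project tests inside an isolated container (requires guix).
run_tests () {
  guix time-machine -C channels.scm -- \
       shell -C -m manifest.scm -- \
       make check
}
# run_tests
</code></pre>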

<hr>

<p>Most usually, we will focus on two strategies to seek for issues: local, and remote. Local strategies are largely based on <a href="https://git-scm.com/book/ms/v2/Customizing-Git-Git-Hooks" rel="nofollow">git hooks</a>, and will be the topic of another post. <a href="https://infosec.press/csantosb/tag:ciseries" rel="nofollow">Let’s see now in practice</a> what can be done with the help of remote tools, based on <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a>, understood as a methodology consisting in automatically executing a set of test procedures on a digital design. <br/>
<a href="/csantosb/tag:ciseries" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ciseries</span></a> <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/ci-intro</guid>
      <pubDate>Sun, 08 Dec 2024 21:19:43 +0000</pubDate>
    </item>
    <item>
      <title>on testing</title>
      <link>https://infosec.press/csantosb/on-testing</link>
      <description>&lt;![CDATA[img br/&#xA;Creating something new from scratch implies a certain ratio of unpredictable issues (loosely defined in the scope of this post: new errors, regressions, warnings, ... any unexpected behavior one may encounter).  Most important, a digital design developer needs to define somehow what he considers to be a project issue, before even thinking about how to react to it. Luckily, in #modernhw a few usual tools are available to ease the process as a whole. Let’s overview some of them. !--more-- br/&#xA;Here on the electronics digital design side of life, we have mainly three #freesoftware fine tools (among many others) to perform code checking to a large extent: osvvm, cocotb and vunit. They are all compatible with the ghdl compiler, and they are all available from my own #guix electronics channel (cocotb and vunit will hopefully get merged on guix upstream at some point). Each departs from the rest, adopting a different paradigm about how digital design testing should be understood: verification, cosimulation and unit testing are master keywords here. br/&#xA;They are all complementary, so you’ll be able to combine them to test your designs. However, you’ll need to be careful and check twice what you’re doing, as some of their features overlap (random treatment, for example). You’ve been warned. br/&#xA;&#xA;osvvm&#xA;&#xA;First, we have osvvm. #Osvvm is a modern verification #vhdl library using most up-to-date language constructs (by the main contributor to the vhdl standard), and I’ll mention it frequently in this #modernhw posts series. Well documented and being continuously improved, it provides a large set of features for natively verifying advanced designs, among them, a constrained random facility, transactions, logging, functional coverage, scoreboards, FIFOs, sophisticated memory models, etc. Even some co-simulation capabilities are included here. Refer to the documentation repository for up-to-date details about osvvm. 
br/&#xA;You’ll be able to install osvvm with br/&#xA;&#xA;guix search osvvm&#xA;guix install osvvm-uart osvvm-scripts&#xA;&#xA;You have a simple use of the osvvm vhdl library in the #aludesign, where the random feature is used to inject inputs to a dut unit. Testing runs for as long as every combination of two variables hasn’t been fully covered. This provides a means to be sure that all cases have been tested, regardless of random inputs. You’ll see an example simulation log here, using the remote ci builds facility of sourcehut. br/&#xA;&#xA;vunit&#xA;&#xA;Then, we have Vunit as a complete single point of failure framework. It complements traditional test benches with a software oriented approach, based on the &#34;test early and test often&#34; paradigm, a.k.a. unit testing.  Here, a pre-built library layer on top of the vhdl design scans, runs and logs unit test cases embedded in user test benches. This approach seeks for an early way to detect as soon as possible conception errors. It performs random testing, advanced checking, logging, advanced communication and an advanced api to access the whole from python. It may be called from the command line, adding custom flags, and configured from a python script file where one defines libraries, sources and test parameters. Simple, elegant and efficient as a testing framework, if you want my opinion. Check the documentation for details. br/&#xA;Install it as usual with br/&#xA;&#xA;guix install python-vunit&#xA;&#xA;A clever example of its use is provided by the fw-open-logic firmware package (also included in the electronics channel). When you install it, you’ll need to build the package once, which gets installed in the guix store for you to use. During the process, the whole testing of its constituent modules is performed. You may have an overview of how it goes with: br/&#xA;&#xA;guix build fw-open-logic:out&#xA;&#xA;By the way, if you need the simulation libraries, they are available too. 
br/&#xA;&#xA;guix install fw-open-logic:out&#xA;# guix install fw-open-logic:sim  # sim libraries&#xA;&#xA;Additionnaly, #vunit is compatible with running a testing #ci pipeline online, as explained here. br/&#xA;&#xA;cocotb&#xA;&#xA;Finally, we have the interesting and original cocotb. It groups several construct providing a set of facilities to implement coroutine-based cosimulation of vhdl designs. Cosimulation, you say ? Yes. It requests on demand #ghdl simulation time from software (python, in this case), dispatching actions as the time advances. Afterward, based on events’ triggers, you’ll stop simulation coming back to software. This forth and back dance goes on, giving access to advanced testing and verification capabilities. Flexible and customizable as much as needed, in my opinion. Go read the documentation to understand how powerful cosumulation approach can reveal. By the way, install it with br/&#xA;&#xA;guix install python-cocotb&#xA;&#xA;---&#xA;&#xA;From the previous, you’ll have understood that having access to verification, unit testing and cosimulation libraries is paramount in #modernhw digital design. Independly or combined (be careful!), they provide powerful tools to detect issues (of any kind) in your design. And yet, this is not enough, as the question arises about where, and when do we run these tests ? From the previous logs in the examples, you’ll have noticed that tests run online in #ci infrastructure. How it goes ? This is the topic of the ci posts in this series. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/testing.png" alt="img"> <br/>
Creating something new from scratch implies a certain ratio of unpredictable issues (loosely defined in the scope of this post: new errors, regressions, warnings, ... any unexpected behavior one may encounter). Most importantly, a digital design developer needs to somehow define what they consider to be a project issue, before even thinking about how to react to it. Luckily, in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> a few useful tools are available to ease the process as a whole. Let’s overview some of them. <br/>
Here on the electronics digital design side of life, we have mainly three <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a> fine tools (among many others) to perform code checking to a large extent: <strong>osvvm</strong>, <strong>cocotb</strong> and <strong>vunit</strong>. They are all compatible with the <a href="https://infosec.press/csantosb/ghdl" rel="nofollow">ghdl compiler</a>, and they are all available from my own <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> <a href="https://infosec.press/csantosb/guix-channels#electronics-channel" rel="nofollow">electronics channel</a> (<a href="https://issues.guix.gnu.org/68153" rel="nofollow">cocotb</a> and <a href="https://issues.guix.gnu.org/74242" rel="nofollow">vunit</a> will hopefully get merged on <a href="https://infosec.press/csantosb/guix" rel="nofollow">guix upstream</a> at some point). Each departs from the rest, adopting a different paradigm about how digital design testing should be understood: verification, cosimulation and unit testing are master keywords here. <br/>
They are all complementary, so you’ll be able to combine them to test your designs. However, you’ll need to be careful and check twice what you’re doing, as some of their features overlap (random treatment, for example). You’ve been warned. <br/></p>

<h1 id="osvvm">osvvm</h1>

<p>First, we have <a href="https://github.com/OSVVM" rel="nofollow">osvvm</a>. <a href="/csantosb/tag:Osvvm" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Osvvm</span></a> is a modern verification <a href="/csantosb/tag:vhdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">vhdl</span></a> library using the most up-to-date language constructs (by the <a href="https://www.linkedin.com/in/jimwilliamlewis" rel="nofollow">main contributor</a> to the <a href="https://gitlab.com/IEEE-P1076" rel="nofollow">vhdl standard</a>), and I’ll mention it frequently in this <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> post series. Well documented and continuously improved, it provides a large set of features for natively verifying advanced designs, among them a constrained random facility, transactions, logging, functional coverage, scoreboards, FIFOs, sophisticated memory models, etc. Even some co-simulation capabilities are included here. Refer to the <a href="https://github.com/OSVVM/Documentation#readme" rel="nofollow">documentation repository</a> for up-to-date details about osvvm. <br/>
You’ll be able to install osvvm with <br/></p>

<pre><code class="language-sh"># guix search osvvm
guix install osvvm-uart osvvm-scripts
</code></pre>

<p>You <a href="https://git.sr.ht/~csantosb/ip.alu/tree/test/sim/alu_tb.vhd#L30" rel="nofollow">have a simple use</a> of the osvvm vhdl library in the <a href="/csantosb/tag:aludesign" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">aludesign</span></a>, where the random feature is used to inject inputs to a dut unit. Testing runs until every combination of two variables has been fully covered. This provides a means to be sure that all cases have been tested, regardless of random inputs. You’ll see an example simulation log <a href="https://builds.sr.ht/query/log/1380968/test_profile/log" rel="nofollow">here</a>, using the <a href="https://infosec.press/csantosb/ci-sourcehut" rel="nofollow">remote ci</a> <a href="https://infosec.press/csantosb/sourcehut-crash-course#builds" rel="nofollow">builds facility</a> of <a href="https://infosec.press/csantosb/sourcehut-crash-course" rel="nofollow">sourcehut</a>. <br/></p>

<h1 id="vunit">vunit</h1>

<p>Then, we have <a href="https://github.com/VUnit/vunit" rel="nofollow">Vunit</a>, a complete unit testing framework. It complements traditional test benches with a software-oriented approach based on the “test early and test often” paradigm, a.k.a. unit testing. Here, a pre-built library layered on top of the vhdl design scans, runs and logs unit test cases embedded in user test benches. This approach aims to detect design errors as early as possible. It provides random testing, advanced checking, logging, inter-testbench communication, and an api to drive the whole from python. It may be called from the command line, adding custom flags, and configured from a python script file where one defines libraries, sources and test parameters. Simple, elegant and efficient as a testing framework, if you want my opinion. Check the <a href="https://vunit.github.io/" rel="nofollow">documentation</a> for details. <br/>
Install it as usual with <br/></p>

<pre><code class="language-sh">guix install python-vunit
</code></pre>
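<p>To make the configuration side concrete, here is a minimal sketch of such a python script (the <code>run.py</code> convention from the VUnit documentation; file layout and library name below are illustrative assumptions, not taken from any project in this series). It is written to disk so you can inspect it before running <code>python run.py</code> in a design of your own: <br/></p>

<pre><code class="language-sh"># Write an illustrative VUnit run.py (adapt the paths to your layout)
printf '%s\n' \
  'from vunit import VUnit' \
  '' \
  'vu = VUnit.from_argv()          # parse command line flags' \
  'lib = vu.add_library("lib")     # one logical vhdl library' \
  'lib.add_source_files("src/*.vhd")' \
  'lib.add_source_files("tb/*.vhd")' \
  'vu.main()                       # scan, run and log test cases' \
  > run.py
# then, typically: python run.py --list   (enumerate test cases)
</code></pre>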

<p>A clever example of its use is provided by the <code>fw-open-logic</code> firmware package (also included in the <a href="https://infosec.press/csantosb/guix-channels#electronics-channel" rel="nofollow">electronics channel</a>). When you install it, you’ll need to <a href="https://infosec.press/csantosb/guix-crash-course#packages" rel="nofollow">build the package</a> once; it then lands in the guix store for you to use. During the build, the whole test suite of its constituent modules is run. You may get an overview of how it goes with: <br/></p>

<pre><code class="language-sh">guix build fw-open-logic:out
</code></pre>

<p>By the way, if you need the simulation libraries, they are available too. <br/></p>

<pre><code class="language-sh">guix install fw-open-logic:out
# guix install fw-open-logic:sim  # sim libraries
</code></pre>

<p>Additionally, <a href="/csantosb/tag:vunit" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">vunit</span></a> is compatible with running a testing <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> pipeline online, as explained <a href="https://infosec.press/csantosb/ci-sourcehut" rel="nofollow">here</a>. <br/></p>

<h1 id="cocotb">cocotb</h1>

<p>Finally, we have the interesting and original <a href="https://www.cocotb.org/" rel="nofollow">cocotb</a>. It groups several constructs providing a set of facilities to implement coroutine-based cosimulation of vhdl designs. Cosimulation, you say? Yes. Software (python, in this case) requests <a href="/csantosb/tag:ghdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ghdl</span></a> simulation time on demand, dispatching actions as time advances. Then, based on event triggers, simulation stops and control comes back to software. This back and forth dance goes on, giving access to advanced testing and verification capabilities. Flexible and customizable as much as needed, in my opinion. Go read <a href="https://docs.cocotb.org/en/stable/index.html" rel="nofollow">the documentation</a> to understand how powerful the cosimulation approach can be. By the way, install it with <br/></p>

<pre><code class="language-sh">guix install python-cocotb
</code></pre>
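<p>To give a feel of the back and forth dance, here is a minimal sketch of a cocotb test module (the entity and signal names below are pure assumptions for illustration). cocotb collects every coroutine marked with <code>@cocotb.test()</code>; a standard cocotb Makefile then points the simulator at your sources via <code>SIM</code>, <code>VHDL_SOURCES</code>, <code>TOPLEVEL</code> and <code>MODULE</code>: <br/></p>

<pre><code class="language-sh"># Write an illustrative cocotb test module (names are assumptions)
printf '%s\n' \
  'import cocotb' \
  'from cocotb.triggers import Timer' \
  '' \
  '@cocotb.test()' \
  'async def reset_holds_zero(dut):' \
  '    dut.rst.value = 1' \
  '    await Timer(100, units="ns")   # request simulation time' \
  '    assert dut.count.value == 0    # back in software, check' \
  > test_counter.py
# a Makefile then sets SIM=ghdl, VHDL_SOURCES, TOPLEVEL and MODULE
</code></pre>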

<hr/>

<p>From the previous, you’ll have understood that having access to verification, unit testing and cosimulation libraries is paramount in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> digital design. Independently or combined (be careful!), they provide powerful tools to detect issues of any kind in your design. And yet, this is not enough, as the question arises of where and when to run these tests. From the logs in the previous examples, you’ll have noticed that tests run online on <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> infrastructure. How does it work? This is the topic of the <a href="https://infosec.press/csantosb/ci" rel="nofollow">ci posts</a> in this series. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/on-testing</guid>
      <pubDate>Fri, 06 Dec 2024 09:32:14 +0000</pubDate>
    </item>
    <item>
      <title>git best practices</title>
      <link>https://infosec.press/csantosb/git-best-practices</link>
      <description>&lt;![CDATA[img br/&#xA;We said #git, then. How to use git as efficiently as possible in #modernhw ? We know the answer, using a front end. Right, but then ? Following a set of simple principles and best practices that will make your life simpler. Follow me on this trip. !--more-- br/&#xA;&#xA;read a couple of good references&#xA;&#xA;Pragmatic Version Control Using Git, Version Control with Git, 3rd Edition and Pragmatic guide to git are good examples, but there are many more around. Use them as a reference and as a starting point, and try to go beyond following your needs. br/&#xA;Check the official doc. And remember you have man git-log, man git-config, etc. at your disposal. br/&#xA;&#xA;use a front end&#xA;&#xA;Yes, again. br/&#xA;Avoid the #cli. And try to make your text editor and your front end as good friends as possible. br/&#xA;&#xA;coding&#xA;&#xA;Format your code properly, otherwise, diffing becomes useless, and your code diffs will be hidden by formatting diffs. Even worst, people you collaborate with will be unable to read your history. Comply to language standards. br/&#xA;&#xA;changes&#xA;&#xA;If possible, ask your text editor to have some kind of visual hints on what you have changed, added or removed. br/&#xA;Learn how to inspect diffs between working copy and staging area, between working copy and last commit, and the contents of a commit. Learn how to discard changes. br/&#xA;&#xA;commits&#xA;&#xA;Commits are lightweight diffs. A commit has one or two parent commits, and is identified by a unique hash. br/&#xA;Stage your changes first, commit them then. Learn how to stage chunks of changes, not all of them. br/&#xA;Remember to commit early, commit often: git is a CVS, not an archival, not a backup system. Never commit binaries (except artifacts: pdf, etc.). br/&#xA;Authentify who authors your developments and gpg-sign your commits. 
br/&#xA;Group changes in meaningful commits, and remember git history must be read as a novel: write meaningful commit messages. Consider that people spend much more time reading git history than writing it. br/&#xA;&#xA;branches, tags and releases&#xA;&#xA;A branch is a pointer to a commit, and if your remove the pointer to a commit, you won’t be able to access it anymore. Branches are free (as in beer !), so branch as much as your need. br/&#xA;Tags are fix pointers (labels) to commits (aliases) They identify stages, or important hints in development. br/&#xA;Remember to always store your work hash/tag along with your results, you’ll know what you’re doing, you’ll know which version of your submodules you’re using, and you’ll be able to compare your results. br/&#xA;Releases are numbered tags, identifying accomplishments. Be familiar with semantic versioning br/&#xA;Before merging branches, understand the differences between fast-forward (advance the pointer) and non fast-forward (keep branch history in a feature branch). And learn how to resolve merge conflicts with your frontend ! br/&#xA;&#xA;logs&#xA;&#xA;Check frequently where you are in the log history, you may get backwards in history by just moving a pointer. br/&#xA;Learn to search (and filter searches) in the log messages br/&#xA;&#xA;workflows&#xA;&#xA;Remember local is decoupled from remote, and that git doesn’t impose any workflow, so everything is possible. br/&#xA;Learn the advised workflow in collaborative development: gitflow, and consider merge / pull requests are just artificial standards of a #gitforge. br/&#xA;At a minimum, use: br/&#xA;&#xA;    main, stable branch (releases only) br/&#xA;    devel, working branch (commit here) br/&#xA;    feature, topic-specific branch (spin-off) br/&#xA;&#xA;locally&#xA;&#xA;Everything may be fixed while working locally. br/&#xA;Use .gitignore., locally at $GITDIR/info/exclude and at global level at ./.gitignore. 
For lazy people you have gitignore.io br/&#xA;git-config your environment before anything else, globally at ~/.gitconfig, at project local at $GITDIR/config. br/&#xA;You’ll find more details about all the previous here. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/blog.csantosb/blob/master/pics/gitbest.png" alt="img"> <br/>
We said <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a>, then. How to use git as efficiently as possible in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> ? We know the answer, <a href="https://infosec.press/csantosb/git-ytbn" rel="nofollow">using a front end</a>. Right, but then ? Following a set of simple principles and <a href="https://l2it.pages.in2p3.fr/cad/git-best-practices/" rel="nofollow">best practices</a> that will make your life simpler. Follow me on this trip.  <br/></p>

<h1 id="read-a-couple-of-good-references">read a couple of good references</h1>

<p><a href="https://pragprog.com/titles/tsgit/pragmatic-version-control-using-git/" rel="nofollow">Pragmatic Version Control Using Git</a>, <a href="https://www.oreilly.com/library/view/version-control-with/9781492091189/" rel="nofollow">Version Control with Git, 3rd Edition</a> and <a href="https://github.com/saladinreborn/latihangit/blob/master/Buku%20GIT/Pragmatic%20Guide%20to%20Git.pdf" rel="nofollow">Pragmatic guide to git</a> are good examples, but there are many more around. Use them as a reference and a starting point, then go beyond them as your needs evolve. <br/>
Check the official <a href="https://git-scm.com/book/en/v2" rel="nofollow">doc</a>. And remember you have <code>man git-log</code>, <code>man git-config</code>, etc. at your disposal. <br/></p>

<h1 id="use-a-front-end">use a front end</h1>

<p>Yes, again. <br/>
<strong>Avoid the <a href="/csantosb/tag:cli" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">cli</span></a></strong>. And try to make your text editor and your front end as good friends as possible. <br/></p>

<h1 id="coding">coding</h1>

<p>Format your code properly; otherwise, diffing becomes useless, and your code diffs will be hidden by formatting diffs. Even worse, people you collaborate with will be unable to read your history. Comply with language standards. <br/></p>

<h1 id="changes">changes</h1>

<p>If possible, ask your text editor to show visual hints on what you have changed, added or removed. <br/>
Learn how to inspect diffs between working copy and staging area, between working copy and last commit, and the contents of a commit. Learn how to discard changes. <br/></p>
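<p>As a quick illustration (the repository and file names below are throwaway examples), the three inspections and the discard operation look like this on the command line, even if you’ll normally drive them from your front end: <br/></p>

<pre><code class="language-sh"># Inspecting diffs in a throwaway repo (names are illustrative)
git init -q demo
git -C demo -c user.email=you@example.org -c user.name=you \
    commit -q --allow-empty -m "init"
echo "one" > demo/file.txt
git -C demo add file.txt
echo "two" >> demo/file.txt
git -C demo diff              # working copy vs staging area
git -C demo diff --staged     # staging area vs last commit
git -C demo show HEAD         # contents of a commit
git -C demo restore file.txt  # discard unstaged changes
</code></pre>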

<h1 id="commits">commits</h1>

<p>Commits are lightweight snapshots. A commit has one or more parent commits (none for the initial one), and is identified by a unique hash. <br/>
Stage your changes first, then commit them. Learn how to stage chunks of changes, not all of them. <br/>
Remember to commit early, commit often: git is a VCS, not an archival or backup system. Never commit binaries (except artifacts: pdf, etc.). <br/>
Authenticate who authors your developments and gpg-sign your commits. <br/>
Group changes in meaningful commits, and remember git history must be read as a novel: write meaningful commit messages. Consider that people spend much more time reading git history than writing it. <br/></p>
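<p>The stage-then-commit discipline above can be sketched as follows (repository and file names are throwaway examples; interactively, <code>git add -p</code> lets you pick individual hunks rather than whole files): <br/></p>

<pre><code class="language-sh"># Stage only what belongs together, then commit it with a message
git init -q repo
printf 'feature\n' > repo/a.txt
printf 'notes\n' > repo/b.txt
git -C repo add a.txt   # b.txt stays untracked, for a later commit
git -C repo -c user.email=you@example.org -c user.name=you \
    commit -q -m "add feature skeleton"
git -C repo log --oneline
</code></pre>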

<h1 id="branches-tags-and-releases">branches, tags and releases</h1>

<p>A branch is a pointer to a commit; if you remove the pointer to a commit, you won’t be able to access it anymore. Branches are free (as in beer!), so branch as much as you need. <br/>
Tags are fixed pointers (labels, aliases) to commits. They identify stages or important milestones in development. <br/>
Remember to always store your work hash/tag along with your results: you’ll know what you’re doing, you’ll know which version of your submodules you’re using, and you’ll be able to compare your results. <br/>
Releases are numbered tags, identifying accomplishments. Be familiar with semantic versioning. <br/>
Before merging branches, understand the difference between fast-forward (advance the pointer) and non fast-forward (keep branch history in a feature branch) merges. And learn how to resolve merge conflicts with your front end! <br/></p>
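<p>The two merge modes and a semver tag can be replayed anywhere git is installed (repository and branch names below are throwaway examples): <br/></p>

<pre><code class="language-sh"># Fast-forward vs non-fast-forward, plus a semver release tag
git init -q proj
git -C proj config user.email you@example.org
git -C proj config user.name you
git -C proj commit -q --allow-empty -m "init"
main=$(git -C proj branch --show-current)  # master or main
git -C proj switch -q -c feature
git -C proj commit -q --allow-empty -m "work on feature"
git -C proj switch -q "$main"
# --no-ff records a merge commit, keeping the branch history;
# without it, git would simply fast-forward the main pointer
git -C proj merge -q --no-ff -m "merge feature" feature
git -C proj tag -a v1.0.0 -m "first release"
git -C proj log --oneline --graph
</code></pre>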

<h1 id="logs">logs</h1>

<p>Check frequently where you are in the log history; you may go back in history just by moving a pointer. <br/>
Learn to search (and filter searches) in the log messages. <br/></p>
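<p>Two handy search filters, shown in a throwaway repo: <code>--grep</code> matches commit messages, while <code>-S</code> (the pickaxe) finds commits that change the number of occurrences of a string: <br/></p>

<pre><code class="language-sh"># Searching history: by message (--grep) and by content (-S)
git init -q hist
git -C hist config user.email you@example.org
git -C hist config user.name you
echo "alpha" > hist/f.txt
git -C hist add f.txt
git -C hist commit -q -m "add alpha marker"
git -C hist log --oneline --grep="alpha"  # filter commit messages
git -C hist log --oneline -S alpha        # commits touching "alpha"
</code></pre>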

<h1 id="workflows">workflows</h1>

<p>Remember local is decoupled from remote, and that git doesn’t impose any workflow, so everything is possible. <br/>
Learn the advised workflow in collaborative development: <a href="https://nvie.com/posts/a-successful-git-branching-model/" rel="nofollow">gitflow</a>, and consider that merge / pull requests are just conventions of a <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a>. <br/>
At a minimum, use: <br/></p>

<ul>
<li><code>main</code>, stable branch (releases only)</li>
<li><code>devel</code>, working branch (commit here)</li>
<li><code>feature</code>, topic-specific branch (spin-off)</li>
</ul>
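<p>Setting up that minimal layout takes a few commands (repository name below is a throwaway example): <br/></p>

<pre><code class="language-sh"># A minimal gitflow-like layout in a fresh repository
git init -q flow
git -C flow config user.email you@example.org
git -C flow config user.name you
git -C flow commit -q --allow-empty -m "init"  # on the stable branch
git -C flow branch devel          # working branch, commit here
git -C flow branch feature devel  # topic branch, spun off devel
git -C flow branch --list
</code></pre>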

<h1 id="locally">locally</h1>

<p>Everything may be fixed while working locally. <br/>
Use <a href="https://git-scm.com/docs/gitignore" rel="nofollow">.gitignore</a> at the repository level, <code>$GIT_DIR/info/exclude</code> for local-only patterns, and a global ignore file at the user level (see <code>core.excludesFile</code>). For lazy people there is <code>gitignore.io</code>. <br/>
<a href="https://git-scm.com/docs/git-config/en" rel="nofollow">git-config</a> your environment before anything else: globally at <code>~/.gitconfig</code>, per project at <code>$GIT_DIR/config</code>. <br/>
You’ll find more details about all the previous <a href="https://l2it.pages.in2p3.fr/cad/git-best-practices/" rel="nofollow">here</a>. <br/></p>
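<p>As a closing illustration of ignore rules and local configuration (repository and file names below are throwaway examples): <br/></p>

<pre><code class="language-sh"># Ignore rules and local configuration in action
git init -q cfg
printf '*.log\n' > cfg/.gitignore
touch cfg/build.log cfg/main.vhd
git -C cfg check-ignore build.log   # matched by .gitignore
git -C cfg status --porcelain       # build.log does not show up
git -C cfg config user.name "you"   # written to $GIT_DIR/config
git -C cfg config user.name         # read it back
</code></pre>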
]]></content:encoded>
      <guid>https://infosec.press/csantosb/git-best-practices</guid>
      <pubDate>Mon, 02 Dec 2024 20:27:56 +0000</pubDate>
    </item>
    <item>
      <title>guix crash course</title>
      <link>https://infosec.press/csantosb/guix-crash-course</link>
      <description>&lt;![CDATA[img br/&#xA;Guix reveals as a practical means of handling dependencies. However, the amount of information available to start using it may appear as a bit overwhelming for a beginner, letting the feeling of a tool reserved to a reduced community or experts. Far from that. Here you’ll find everything you need to get started with guix, with a light touch on using it for #modernhw. !--more-- br/&#xA;We will concentrate in the use of #guix as an external package manager on top of a #linux distribution based on #systemd. We’ll let aside using and referring to #guixsystem as a full operating system by itself (which I never used anyway). This way, in the context of #modernhw, we may keep on using our favorite tools, environment and workflow. As an addition, we have everything that guix provides at our disposal, without affecting our local packages configuration: guix acts as an extra layer on top of our current OS, without any interference with it. You’ll have the possibility to install any guix software, remove it afterward or make use of the fancy features guix has to offer, without your host OS ever noticing what’s going on. br/&#xA;All what follows is roughly based on the guix reference manual and the guix cookbook, so refer to them for more on depth explanations. This article is strongly influenced by my personal experience as a daily driver, so next topics are necessarily biased towards my own needs. br/&#xA;There is much more to say about guix, but this is just an introductory crash course, right ? br/&#xA;&#xA;install&#xA;&#xA;First things first. You need to be root to proceed to a binary guix installation. Just download the installer, and follow the instructions. br/&#xA;After that, you’ll be using guix as a regular user, and all what follows must be run without any special rights beyond accessing to your home directory. 
Behind the curtains, guix computes what’s necessary through the running guix daemon, handled by your host’s systemd. br/&#xA;&#xA;packages&#xA;&#xA;Packages are built definitions. At no surprise, they are installed (or removed) with br/&#xA;&#xA;guix search synthesis&#xA;guix install yosys&#xA;guix remove python&#xA;&#xA;Definitions, in turn, are guile descriptions on how and where to obtain code source, a precise and unambiguous reference to it, how to process and how to install it, along with the necessary input dependencies, its kind, and how to use them. Definitions may be seen as customized default build templates, which avoids complicated package definitions, simplifying its design. Thus, a default build template called python-build-system exist, for example, for producing python packages. A package definition customizes the way this template is used, modifying its default package, source, etc. fields. br/&#xA;Definitions are built on isolated, minimalistic environments. Once built, packages are deposit in the guix store under /gnu/store. Each package is given a unique hash: changing the definition, or any of its inputs, produces a different package and hash. This is what usually is referred to as functional package management of #dependencies. br/&#xA;A package may have multiple outputs (out, by default, but also doc, etc.). Packages are built locally following the package definition. To avoid long run times and wasting cpu cycles, guix introduces substitutes or pre-built packages available on remote (substitute) servers. When available, substitutes are downloaded, which avoids having to built packages locally. Otherwise, your local computing resources will be put to contribution, which is far from ideal, so better configure your substitute servers before anything else (check your systemd guix-daemon file). 
It is possible to verify substitute availability with br/&#xA;&#xA;guix weather ghdl-clang&#xA;&#xA;  https://guix.bordeaux.inria.fr ☀&#xA;--  100.0 % des substituts sont disponibles (1 sur 1)  &lt;--&#xA;    6,2 Mio de fichiers nar (compressés)&#xA;    39,6 Mio sur le disque (décompressé)&#xA;    0,777 secondes par requête (0,8 secondes en tout)&#xA;    1,3 requêtes par seconde&#xA;&#xA;It is crucial to understand that a given package built will be identical to any other build of this same package, regardless of the host computer, which is what holds the validity of the very idea of substitutes, and guarantees #reproducibility. This holds for any guix construction, including shell containers (see below). br/&#xA;Keep that in mind. br/&#xA;&#xA;profiles and generations&#xA;&#xA;After first install, guix will create on your behalf a default profile under ~/.guix-profile. All operations (install, remove) will affect this profile, unless you decide to point somewhere else (with the modifier -p $GUIXPROFILE). 
br/&#xA;&#xA;guix package -p $GUIXPROFILE --list-installed&#xA;&#xA;coreutils       9.1     out     /gnu/store/fk39d3y3zyr6ajyzy8d6ghd0sj524cs5-coreutils-9.1&#xA;git             2.46.0  out     /gnu/store/wyhw9f49kvc7qvbsbfgm09lj0cpz1wlb-git-2.46.0&#xA;fw-open-logic   3.0.1   out     /gnu/store/hrgdvswmvqcyai4pqmr7df0kpyyak94j-fw-open-logic-3.0.1&#xA;osvvm-scripts   2024.09 out     /gnu/store/xhxr3y1k8838my6mfk992kn392pwszjm-osvvm-scripts-2024.09&#xA;osvvm-uart      2024.09 out     /gnu/store/x3pjf95h8p3mbcx4zxb6948xfq3y3vg8-osvvm-uart-2024.09&#xA;fd              9.0.0   out     /gnu/store/nx0hz1y3g7iyi4snyza7rl5600z73xyn-fd-9.0.0&#xA;make            4.4.1   out     /gnu/store/963iman5zw7zdf128mqhklihvjh6habm-make-4.4.1&#xA;tcllib          1.19    out     /gnu/store/443vgrmwac1mvipyhin5jblsml9lplxf-tcllib-1.19&#xA;tcl             8.6.12  out     /gnu/store/w2icygvc0h294bzak0dyfafq649sdqvn-tcl-8.6.12&#xA;ghdl-clang      4.1.0   out     /gnu/store/sy0ryysxwbkzj6gpfka20fs27knmgmkd-ghdl-clang-4.1.0&#xA;&#xA;Each profile generation will consist on a set of symbolic links pointing to /gnu/store. A new generation is produced when you install or remove something. This will only redefine your profile’s links, and so the status of the profile (and the packages you have access to). Generations are roughly the equivalent of #git commits, if this helps. They are nothing but collections of links pointing to the store, where packages are installed. Each collection defines a generation and so the current status of a guix profile. br/&#xA;You may roll back to previous generations, or move forward, but only linear generation histories are allowed. In you go back n generations, and then create a new one, your previous history is lost. br/&#xA;&#xA;guix package -p $GUIXPROFILE --list-generations&#xA;&#xA;In addition to creating links, a profile redefines the environment variables (following the profle contents), appending, prepending or replacing the current ones. 
This way, the user enters an augmented context, having access to the packages in the profile. br/&#xA;Note that, inside a profile, the user still have access to the external system: for example, the PATH env variable is augmented with the profile bin directory, but former binaries are still there. To get a higher degree of isolation, we need shell containers (see below). br/&#xA;&#xA;clean&#xA;&#xA;From time to time, don’t forget to clean the store, removing stuff no profile and generation is pointing to, with br/&#xA;&#xA;guix gc&#xA;&#xA;Before that, remove old generations from your profiles, unless you plan to make use of them at some point. br/&#xA;&#xA;upgrade&#xA;&#xA;Upgrade guix current profile with br/&#xA;&#xA;guix pull &amp;&amp; guix upgrade&#xA;&#xA;This will create a new generation in your default profile, including updates to all your packages in current profile. Pulling syncs local guix with remote guix repository, fetching updates locally. Upgrade will deploy these updates to your profile. br/&#xA;Remember also to br/&#xA;&#xA;sudo -i guix pull&#xA;sudo systemctl daemon-reload&#xA;sudo systemctl restart guix-daemon&#xA;&#xA;to upgrade the system daemon, so that it is never too delayed with respect to your guix in use. br/&#xA;&#xA;manifest, channels&#xA;&#xA;If not already clear from the previous, remember that it is possible to replicate environments (contexts, profiles, dependencies) using a couple of #plaintext files. br/&#xA;First, the #manifest.scm, which includes the list of packages in the environment. As an example, export your current profile with br/&#xA;&#xA;guix package -p $GUIXPROFILE --export-manifest   manifest.scm&#xA;&#xA;Put it somewhere under version control, and replicate your environment somewhere else with br/&#xA;&#xA;guix package -p $GUIXPROFILE -m manifest.scm&#xA;&#xA;That’s all it takes to get exactly the same development context in another host, for example. br/&#xA;But you’re right, I see you follow. This is not enough. 
You also need to freeze which guix version you’re using (guix, as any other package manager, not always installs the same version of some package). You need also a #channels.scm file. It may be produced with br/&#xA;&#xA;guix describe --format=channels -p $GUIXPROFILE   channels.scm&#xA;&#xA;and includes the list of channels in use, along with a hash to identify which version of the channels to use, among the whole history of channel revisions (the #git commit of the channel repository). Then, import it somewhere else with br/&#xA;&#xA;guix pull -C channels.scm&#xA;&#xA;examples&#xA;&#xA;Fixing your channels, its revision and the list of packages is all you need to eliminate any ambiguity, achieving #reproducibility and #determinism. Remember that this is the best advantage of guix, after all. Say you publish something (report, article, paper, blog ticket). If you provide a git repository with these two files, anyone else will be br/&#xA;able, hopefully, to replicate your asserts. br/&#xA;As a typical example, when you have a complex #vhdl design, including a large set of dependencies, you need a means to handle them. Here, we assume the #dependencies may be vhdl compilers, tcl shells, python interpreters, unit testing libraries, verification frameworks. etc. ... but also other vhdl modules, each in its own git repository. br/&#xA;At this point, you’ll need, first, to fix your channels.scm to &#34;freeze&#34; the repositories status. Here we are using, for example, the electronics, guix-science and guix channels, each in a fix release given by a commit hash. 
br/&#xA;&#xA;(list (channel&#xA;       (name &#39;electronics)&#xA;       (url &#34;https://git.sr.ht/~csantosb/guix.channel-electronics&#34;)&#xA;       (branch &#34;main&#34;)&#xA;       (commit&#xA;        &#34;2cad57b4bb35cc9250a7391d879345b75af4ee0a&#34;)&#xA;       (introduction&#xA;        (make-channel-introduction&#xA;         &#34;ba1a85b31202a711d3e3ed2f4adca6743e0ecce2&#34;&#xA;         (openpgp-fingerprint&#xA;          &#34;DA15 A1FC 975E 5AA4 0B07  EF76 F1B4 CAD1 F94E E99A&#34;))))&#xA;      (channel&#xA;       (name &#39;guix-science)&#xA;       (url &#34;https://codeberg.org/guix-science/guix-science.git&#34;)&#xA;       (branch &#34;master&#34;)&#xA;       (commit&#xA;        &#34;1ced1b3b913b181e274ca7ed2239d6661c5154c9&#34;)&#xA;       (introduction&#xA;        (make-channel-introduction&#xA;         &#34;b1fe5aaff3ab48e798a4cce02f0212bc91f423dc&#34;&#xA;         (openpgp-fingerprint&#xA;          &#34;CA4F 8CF4 37D7 478F DA05  5FD4 4213 7701 1A37 8446&#34;))))&#xA;      (channel&#xA;       (name &#39;guix)&#xA;       (url &#34;https://git.savannah.gnu.org/git/guix.git&#34;)&#xA;       (branch &#34;master&#34;)&#xA;       (commit&#xA;        &#34;3e2442de5268782213b04048463fcbc5d76accd7&#34;)&#xA;       (introduction&#xA;        (make-channel-introduction&#xA;         &#34;9edb3f66fd807b096b48283debdcddccfea34bad&#34;&#xA;         (openpgp-fingerprint&#xA;          &#34;BBB0 2DDF 2CEA F6A8 0D1D  E643 A2A0 6DF2 A33A 54FA&#34;)))))&#xA;&#xA;Then, you need the list of dependencies necessary to your design. 
These are provided in a manifest.scm file, as for example in br/&#xA;&#xA;(specifications-  manifest&#xA; (list &#34;ghdl-clang&#34;&#xA;       &#34;tcl&#34;&#xA;       &#34;tcllib&#34;&#xA;       &#34;make&#34;&#xA;       &#34;python-vunit&#34;&#xA;       &#34;osvvm-uart&#34;&#xA;       &#34;osvvm-scripts&#34;&#xA;       &#34;fw-open-logic&#34;&#xA;       &#34;git&#34;&#xA;       &#34;which&#34;&#xA;       &#34;findutils&#34;&#xA;       &#34;coreutils&#34;))&#xA;&#xA;One may include these two files in the design, in a different testing #git branch, for example. Then, all it takes to run your design in a reproducible way is cloning the design git repository, checking out the testing branch, and running #guix time machine (see next) to replicate a local profile containing all the design’s dependencies. Remember that here we include also all third party firmware modules instantiated in our design. br/&#xA;&#xA;time machine&#xA;&#xA;How guix guarantees that it is possible to reproduce a profile in the future ? The trick consist on asking current guix version to call a previous guix version (the one we define), to deploy the profile with the packages we need. br/&#xA;For example: let&#39;s ask guix-5 to make use of guix-4 to install emacs-30 package, which is only available in the guix-4 repositories, whereas guix-5 only provides emacs-32. br/&#xA;This mechanism is called time-machine. It is used as, for example: br/&#xA;&#xA;guix time-machine --channels=channels.scm -- package -p $GUIXPROFILE -m manifest.scm&#xA;&#xA;Here, up-to-date guix uses time machine to roll back to the former guix version defined in channels.scm. Then, former guix calls the package command to install under $GUIXPROFILE the list of packages defined in the manifest.scm file. br/&#xA;What’s important to understand here is that this will produce exactly the same output regardless of the host and the point in time when we run this command. The profile we produce is always the same, by design. 
And this is what is relevant for #modernhw. br/&#xA;&#xA;shell containers&#xA;&#xA;Guix includes a command to create independent environments from the rest of our host system. This provides an increased degree of isolation when compared to profiles, as the later lie on top of, and only augment, our current shell. Shell containers create a new, almost empty by default, minimalistic context for us to install packages. br/&#xA;&#xA;guix shell --container --link-profile --emulate-fhs coreutils which python-vunit osvvm-uart&#xA;guix shell --container --link-profile --emulate-fhs -m manifest.scm&#xA;&#xA;or, if one needs determinism br/&#xA;&#xA;guix time-machine --channels=channels.scm -- shell --container --link-profile --emulate-fhs -m manifest.scm&#xA;&#xA;The --link-profile flag will link the contents of $GUIXPROFILE under /gnu/store to ~/.guix-profile. br/&#xA;The --emulate-fs will, well, reproduce the standard file system under /, as some packages expect this layout and fail otherwise. br/&#xA;coreutils and which packages will be helpful, otherwise, not even ls command is present within the container. Minimalistic, I said. I should have use isolated instead. br/&#xA;&#xA;packs&#xA;&#xA;Great. But. What if guix is not around ? How do I use it in a cluster, or in another host where guix is not yet available ? How do I distribute my dependencies, environments, etc. to a non-yet-guix-user ? No problem, guix pack is intended to be used as a simple way of &#34;packaging&#34; guix contexts (understood as a set of packages), deploying them afterward in a target host. This is next step after profiles and shell containers. br/&#xA;Guix pack comes equipped with several different backends, producing contexts in the most habitual and useful formats. For example, the following command will pack #emacs, #ghdl and #yosys for you to use where you need it. 
br/&#xA;&#xA;guix pack -f docker emacs ghdl-clang yosys&#xA;&#xA;In the context of #modernhw, #docker images may be used for #ci tests, uploading the image to a remote #gitforge registry; #apptainer containers can be sent and run in a #hpc cluster; .tar.gz compresses files are a clean may of installing non-existing software in a remote machine. Furthermore, one has the possibility of packaging all the project #dependencies in a manifest.scm file, and distribute it along with the source code to anyone willing to use it. No instructions about the proper environment to run the project, no complicated installation of dependencies. Stop asking third party users in your README to handle your dependencies for you. br/&#xA;A simple docker pull pointing to a #forge image repository is enough when guix is not locally available. Long run times ? Use a #forge #ci custom runner in your own hardware with your #singularity image. Remote work to an #ssh server with obsolete software ? Pack, send and untar your favorite development tools and create a custom profile, no admin rights needed. The possibilities are endless. br/&#xA;And most important: the advantage of this approach over a classical docker or singularity files for producing the images is #reproducibility: every single time you build the image, you’ll get the exact same binary product. Use the --save-provenance flag to store in the image itself the manifest you used to create it. br/&#xA;Good luck trying to achieve the same with a docker file. br/&#xA;&#xA;importing&#xA;&#xA;Now, guix is not a universal tool for installing anything around. What about this obscure #python package no one uses but you ? You’d absolutely need this #emacs package you just found on #codeberg which guix doesn’t provide. Rust, go ... There are plenty of pieces of code around not being yet packaged along with guix. No problem. Guix incorporates a simple and elegant way of extending the amount of packages you’ll be able to install. 
Just import them. br/&#xA;&#xA;guix import pypi itsdangerous&#xA;guix import crate becareful&#xA;guix import gem wow&#xA;&#xA;Previous commands will issue a new package definition corresponding to a package already handled by the language own package manager. #Dependencies, you say ? Use the --recursive flag. Once you have the definition, you’ll be able to build, install and use the corresponding package. br/&#xA;Check in the documentation the surprising amount of backends available, you’ll be gratefully surprised. br/&#xA;&#xA;software heritage&#xA;&#xA;Last, but not least. br/&#xA;You have guix, its package definitions and all the fancy tools which come along. But. What if you don’t have access to the source code ? In this case, all the previous becomes meaningless: remember that guix is a fully bootstrapped distribution, being built from the very bottom up. Building a package from its source means having access to the source, which most of the time is hosted in a #gitforge. But #forges disappear, especially proprietary ones, repositories relocate or are just obsoleted and get replaced. br/&#xA;In this case, guix gets you covered by falling back to Software Heritage (#SH). This initiative, with support of the UNESCO, collects, preserves, and shares the source code of all software that is publicly available, including its full development history. The collection is trigger automatically by a crawler, manually with a browser plugin or by guix itself when developers lint package definitions. br/&#xA;If, in ten years, you try to replicate one of your papers, and you plan to recreate your environment to run your code and reproduce your plots, you won’t be bothered by nowadays python 3.10 having being obsoleted, abandon and buried in history of computers by #guile. SH keeps a copy for you to sleep better at night. br/&#xA;&#xA;channels&#xA;&#xA;Channels, as a feature to extend guix repository of definitions, deserve its own chapter. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/blog.csantosb/blob/master/pics/guix-crash-course.png" alt="img"> <br/>
Guix proves to be a <a href="https://infosec.press/csantosb/use-guix" rel="nofollow">practical means</a> of handling <a href="https://infosec.press/csantosb/on-dependencies" rel="nofollow">dependencies</a>. However, the amount of information available may seem a bit overwhelming to a beginner, leaving the impression of a tool reserved for a small community of experts. Far from it. Here you’ll find everything you need to get started with <a href="https://guix.gnu.org/" rel="nofollow">guix</a>, with a light touch on using it for <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>.  <br/>
We will concentrate on the use of <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> as an external package manager on top of a <a href="/csantosb/tag:linux" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">linux</span></a> distribution based on <a href="/csantosb/tag:systemd" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">systemd</span></a>. We’ll leave aside using and referring to <a href="/csantosb/tag:guixsystem" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guixsystem</span></a> as a full operating system by itself (which I never used anyway). This way, in the context of <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>, we may keep on using our favorite tools, environment and workflow. In addition, we have everything that guix provides at our disposal, without affecting our local package configuration: guix acts as an extra layer on top of our current OS, without any interference with it. You’ll be able to install any guix software, remove it afterward or make use of the fancy features guix has to offer, without your host OS ever noticing what’s going on. <br/>
All that follows is roughly based on the <a href="https://guix.gnu.org/manual/en/html_node/index.html" rel="nofollow">guix reference manual</a> and the <a href="https://guix.gnu.org/cookbook/en/guix-cookbook.html" rel="nofollow">guix cookbook</a>, so refer to them for more in-depth explanations. This article is strongly influenced by my personal experience as a daily driver, so the next topics are necessarily biased towards my own needs. <br/>
There is <a href="https://www.futurile.net/resources/guix/" rel="nofollow">much more to say</a> about guix, but this is just an introductory crash course, right ? <br/></p>

<h1 id="install">install</h1>

<p>First things first. You need root privileges to perform a <a href="https://guix.gnu.org/manual/en/html_node/Binary-Installation.html" rel="nofollow">binary guix installation</a>. Just download the installer and follow the instructions. <br/>
After that, you’ll be using guix as a regular user, and everything that follows runs without any special rights beyond access to your home directory. Behind the scenes, guix computes what’s necessary through the running guix daemon, handled by your host’s systemd. <br/></p>
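
<p>Once the install completes, your login shell needs to source the default profile so that guix-installed binaries become visible. A minimal sketch of what to add to <code>~/.bashrc</code> (the installer usually proposes these lines; paths assume the standard binary installation): <br/></p>

<pre><code class="language-sh"># make the default guix profile visible to the current shell
export GUIX_PROFILE="$HOME/.guix-profile"
. "$GUIX_PROFILE/etc/profile"
</code></pre>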

<h1 id="packages">packages</h1>

<p>Packages are built definitions. Unsurprisingly, they are searched for, installed or removed with <br/></p>

<pre><code class="language-sh">guix search synthesis
guix install yosys
guix remove python
</code></pre>

<p><a href="https://guix.gnu.org/cookbook/en/html_node/A-_0060_0060Hello-World_0027_0027-package.html" rel="nofollow">Definitions</a>, in turn, are <a href="https://guix.gnu.org/cookbook/en/html_node/A-Scheme-Crash-Course.html" rel="nofollow">guile</a> descriptions of how and where to obtain the source code (a precise and unambiguous reference to it), how to build and install it, and which input dependencies are needed. Definitions may be seen as customizations of default <a href="https://guix.gnu.org/manual/en/html_node/Build-Systems.html" rel="nofollow">build templates</a>, which avoids complicated package definitions and simplifies their design. Thus, a default build template called <code>python-build-system</code> exists, for example, for producing python packages. A package definition customizes the way this template is used, overriding its default fields (name, source, etc.). <br/>
Definitions are built in isolated, minimalistic environments. Once built, packages are deposited in the guix store under <code>/gnu/store</code>. Each package is given a unique hash: changing the definition, or any of its inputs, produces a different package and hash. This is what is usually referred to as <a href="https://arxiv.org/abs/1305.4584" rel="nofollow">functional package management</a> of <a href="/csantosb/tag:dependencies" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">dependencies</span></a>. <br/>
A package may have multiple outputs (<code>out</code>, by default, but also <code>doc</code>, etc.). Packages are built locally following the package definition. To avoid long run times and wasted cpu cycles, guix introduces <a href="https://guix.gnu.org/manual/en/html_node/Substitutes.html" rel="nofollow">substitutes</a>, pre-built packages available on remote (substitute) servers. When available, substitutes are downloaded, which avoids having to build packages locally. Otherwise, your local computing resources will be put to work, which is far from ideal, so better configure your substitute servers before anything else (check your systemd <code>guix-daemon</code> file). It is possible to verify substitute availability with <br/></p>

<pre><code class="language-sh">guix weather ghdl-clang
</code></pre>

<pre><code class="language-sh">  https://guix.bordeaux.inria.fr ☀
--&gt; 100.0 % des substituts sont disponibles (1 sur 1)  &lt;--
    6,2 Mio de fichiers nar (compressés)
    39,6 Mio sur le disque (décompressé)
    0,777 secondes par requête (0,8 secondes en tout)
    1,3 requêtes par seconde
</code></pre>

<p>It is <strong>crucial</strong> to understand that <em>a given package build will be identical to any other build of this same package</em>, regardless of the host computer, which is what makes the very idea of substitutes sound, and guarantees <a href="/csantosb/tag:reproducibility" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproducibility</span></a>. This holds for any guix construction, including shell containers (see below). <br/>
Keep that in mind. <br/></p>
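
<p>To make the idea of definitions concrete, here is a sketch of a minimal definition using the <code>gnu-build-system</code> template, loosely following the cookbook’s hello example (the source hash below is a placeholder; obtain the real one with <code>guix download</code>): <br/></p>

<pre><code class="language-scheme">(use-modules (guix packages)
             (guix download)
             (guix build-system gnu)
             (guix licenses))

(package
  (name "hello")
  (version "2.12.1")
  (source (origin
            (method url-fetch)
            (uri (string-append "mirror://gnu/hello/hello-"
                                version ".tar.gz"))
            ;; placeholder hash: run `guix download` to get the real one
            (sha256 (base32 "0000000000000000000000000000000000000000000000000000"))))
  (build-system gnu-build-system)   ; the default build template
  (synopsis "Hello, GNU world: an example GNU package")
  (description "GNU Hello prints a friendly greeting.")
  (home-page "https://www.gnu.org/software/hello/")
  (license gpl3+))
</code></pre>

<p>Saved as <code>hello.scm</code>, such a file may be built with <code>guix build -f hello.scm</code>. <br/></p>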

<h1 id="profiles-and-generations">profiles and generations</h1>

<p>After first install, guix will create on your behalf a default profile under <code>~/.guix-profile</code>. All operations (install, remove) will affect this profile, unless you decide to point somewhere else (with the modifier <code>-p $GUIX_PROFILE</code>). <br/></p>

<pre><code class="language-sh">guix package -p $GUIX_PROFILE --list-installed
</code></pre>

<pre><code class="language-sh">coreutils       9.1     out     /gnu/store/fk39d3y3zyr6ajyzy8d6ghd0sj524cs5-coreutils-9.1
git             2.46.0  out     /gnu/store/wyhw9f49kvc7qvbsbfgm09lj0cpz1wlb-git-2.46.0
fw-open-logic   3.0.1   out     /gnu/store/hrgdvswmvqcyai4pqmr7df0kpyyak94j-fw-open-logic-3.0.1
osvvm-scripts   2024.09 out     /gnu/store/xhxr3y1k8838my6mfk992kn392pwszjm-osvvm-scripts-2024.09
osvvm-uart      2024.09 out     /gnu/store/x3pjf95h8p3mbcx4zxb6948xfq3y3vg8-osvvm-uart-2024.09
fd              9.0.0   out     /gnu/store/nx0hz1y3g7iyi4snyza7rl5600z73xyn-fd-9.0.0
make            4.4.1   out     /gnu/store/963iman5zw7zdf128mqhklihvjh6habm-make-4.4.1
tcllib          1.19    out     /gnu/store/443vgrmwac1mvipyhin5jblsml9lplxf-tcllib-1.19
tcl             8.6.12  out     /gnu/store/w2icygvc0h294bzak0dyfafq649sdqvn-tcl-8.6.12
ghdl-clang      4.1.0   out     /gnu/store/sy0ryysxwbkzj6gpfka20fs27knmgmkd-ghdl-clang-4.1.0
</code></pre>

<p>Each profile generation consists of a set of symbolic links pointing to <code>/gnu/store</code>. A new generation is produced when you install or remove something. This only redefines your profile’s links, and so the status of the profile (and the packages you have access to). Generations are roughly the equivalent of <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> commits, if this helps. They are nothing but collections of links pointing to the store, where packages are installed. Each collection defines a generation and so the current status of a guix profile. <br/>
You may roll back to previous generations, or move forward, but only linear generation histories are allowed. If you go back n generations, and then create a new one, your previous history is lost. <br/></p>
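
<p>Navigating the generation history is done with dedicated options of <code>guix package</code> (the generation number below is just an example): <br/></p>

<pre><code class="language-sh"># go back one generation in the given profile
guix package -p $GUIX_PROFILE --roll-back
# or jump directly to a specific generation, here number 3
guix package -p $GUIX_PROFILE --switch-generation=3
</code></pre>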

<pre><code class="language-sh">guix package -p $GUIX_PROFILE --list-generations
</code></pre>

<p>In addition to creating links, a profile redefines the environment variables (following the profile contents), appending, prepending or replacing the current ones. This way, the user enters an augmented context, having access to the packages in the profile. <br/>
Note that, inside a profile, the user still has access to the external system: for example, the <code>PATH</code> env variable is augmented with the profile bin directory, but former binaries are still there. To get a higher degree of isolation, we need shell containers (see below). <br/></p>
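
<p>To see (or apply) the environment variable definitions a profile implies, guix can print them as shell code: <br/></p>

<pre><code class="language-sh"># show the env variable definitions implied by the profile
guix package -p $GUIX_PROFILE --search-paths
# apply them to the current shell
eval "$(guix package -p $GUIX_PROFILE --search-paths)"
</code></pre>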

<h1 id="clean">clean</h1>

<p>From time to time, don’t forget to clean the store, removing anything no profile or generation points to, with <br/></p>

<pre><code class="language-sh">guix gc
</code></pre>

<p>Before that, remove old generations from your profiles, unless you plan to make use of them at some point. <br/></p>
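
<p>As a sketch, old generations may be dropped by age before collecting garbage (the one-month threshold is just an example): <br/></p>

<pre><code class="language-sh"># delete generations older than one month in the default profile
guix package --delete-generations=1m
# then remove everything no longer reachable from any generation
guix gc
</code></pre>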

<h1 id="upgrade">upgrade</h1>

<p>Upgrade guix current profile with <br/></p>

<pre><code class="language-sh">guix pull &amp;&amp; guix upgrade
</code></pre>

<p>This will create a new generation in your default profile, including updates to all the packages in your current profile. <code>Pulling</code> syncs your local guix with the remote guix repository, fetching updates locally. <code>Upgrade</code> deploys these updates to your profile. <br/>
Remember also to <br/></p>

<pre><code class="language-sh">sudo -i guix pull
sudo systemctl daemon-reload
sudo systemctl restart guix-daemon
</code></pre>

<p>to upgrade the system daemon, so that it never lags too far behind the guix you use. <br/></p>

<h1 id="manifest-channels">manifest, channels</h1>

<p>If not already clear from the previous, remember that it is possible to <a href="https://guix.gnu.org/manual/en/html_node/Replicating-Guix.html" rel="nofollow">replicate</a> environments (contexts, profiles, dependencies) using a couple of <a href="/csantosb/tag:plaintext" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">plaintext</span></a> files. <br/>
First, the <code>manifest.scm</code> file, which includes the list of packages in the environment. As an example, export your current profile with <br/></p>

<pre><code class="language-sh">guix package -p $GUIX_PROFILE --export-manifest &gt; manifest.scm
</code></pre>

<p>Put it somewhere under version control, and replicate your environment somewhere else with <br/></p>

<pre><code class="language-sh">guix package -p $GUIX_PROFILE -m manifest.scm
</code></pre>

<p>That’s all it takes to get exactly the same development context in another host, for example. <br/>
But you’re right, I see you follow. This is not enough. You also need to freeze which guix version you’re using (guix, like any other package manager, does not always install the same version of a given package). For that, you need a <code>channels.scm</code> file. It may be produced with <br/></p>

<pre><code class="language-sh">guix describe --format=channels -p $GUIX_PROFILE &gt; channels.scm
</code></pre>

<p>and includes the list of <a href="https://infosec.press/csantosb/guix-channels" rel="nofollow">channels</a> in use, along with a hash to identify <em>which version</em> of the channels to use, among the whole history of channel revisions (the <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> commit of the channel repository). Then, import it somewhere else with <br/></p>

<pre><code class="language-sh">guix pull -C channels.scm
</code></pre>

<h2 id="examples">examples</h2>

<p>Fixing your channels, their revision and the list of packages is all you need to eliminate any ambiguity, achieving <a href="/csantosb/tag:reproducibility" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproducibility</span></a> and <a href="/csantosb/tag:determinism" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">determinism</span></a>. Remember that this is the main advantage of guix, after all. Say you publish something (report, article, paper, blog post). If you provide a git repository with these two files, anyone else will, hopefully, be able to replicate your results. <br/>
As a typical example, when you have a complex <a href="/csantosb/tag:vhdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">vhdl</span></a> design, including <a href="https://infosec.press/csantosb/on-dependencies" rel="nofollow">a large set of dependencies</a>, you need a means to handle them. Here, we assume the <a href="/csantosb/tag:dependencies" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">dependencies</span></a> may be vhdl compilers, tcl shells, python interpreters, unit testing libraries, verification frameworks, etc., but also other vhdl modules, each in its own git repository. <br/>
At this point, you’ll need, first, to fix your <code>channels.scm</code> to “freeze” the repositories’ status. Here we are using, for example, the <a href="https://git.sr.ht/~csantosb/guix.channel-electronics" rel="nofollow">electronics</a>, <a href="https://codeberg.org/guix-science/guix-science.git" rel="nofollow">guix-science</a> and guix channels, each at a fixed revision given by a commit hash. <br/></p>

<pre><code class="language-scheme">(list (channel
       (name &#39;electronics)
       (url &#34;https://git.sr.ht/~csantosb/guix.channel-electronics&#34;)
       (branch &#34;main&#34;)
       (commit
        &#34;2cad57b4bb35cc9250a7391d879345b75af4ee0a&#34;)
       (introduction
        (make-channel-introduction
         &#34;ba1a85b31202a711d3e3ed2f4adca6743e0ecce2&#34;
         (openpgp-fingerprint
          &#34;DA15 A1FC 975E 5AA4 0B07  EF76 F1B4 CAD1 F94E E99A&#34;))))
      (channel
       (name &#39;guix-science)
       (url &#34;https://codeberg.org/guix-science/guix-science.git&#34;)
       (branch &#34;master&#34;)
       (commit
        &#34;1ced1b3b913b181e274ca7ed2239d6661c5154c9&#34;)
       (introduction
        (make-channel-introduction
         &#34;b1fe5aaff3ab48e798a4cce02f0212bc91f423dc&#34;
         (openpgp-fingerprint
          &#34;CA4F 8CF4 37D7 478F DA05  5FD4 4213 7701 1A37 8446&#34;))))
      (channel
       (name &#39;guix)
       (url &#34;https://git.savannah.gnu.org/git/guix.git&#34;)
       (branch &#34;master&#34;)
       (commit
        &#34;3e2442de5268782213b04048463fcbc5d76accd7&#34;)
       (introduction
        (make-channel-introduction
         &#34;9edb3f66fd807b096b48283debdcddccfea34bad&#34;
         (openpgp-fingerprint
          &#34;BBB0 2DDF 2CEA F6A8 0D1D  E643 A2A0 6DF2 A33A 54FA&#34;)))))

</code></pre>

<p>Then, you need the list of dependencies necessary to your design. These are provided in a <code>manifest.scm</code> file, as for example in <br/></p>

<pre><code class="language-scheme">(specifications-&gt;manifest
 (list &#34;ghdl-clang&#34;
       &#34;tcl&#34;
       &#34;tcllib&#34;
       &#34;make&#34;
       &#34;python-vunit&#34;
       &#34;osvvm-uart&#34;
       &#34;osvvm-scripts&#34;
       &#34;fw-open-logic&#34;
       &#34;git&#34;
       &#34;which&#34;
       &#34;findutils&#34;
       &#34;coreutils&#34;))
</code></pre>

<p>One may include these two files in the design, in a different <code>testing</code> <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> branch, for example. Then, all it takes to run your design in a reproducible way is cloning the design git repository, checking out the <code>testing</code> branch, and running <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> time machine (see next) to replicate a local profile containing all the design’s dependencies. Remember that here we include also all third party firmware modules instantiated in our design. <br/></p>
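
<p>The workflow just described may be sketched as follows (the repository URL is hypothetical): <br/></p>

<pre><code class="language-sh"># fetch the design and the branch holding the two files
git clone https://example.org/my-vhdl-design.git
cd my-vhdl-design
git checkout testing
# replicate the exact profile used by the authors
guix time-machine --channels=channels.scm -- \
     package -p $GUIX_PROFILE -m manifest.scm
</code></pre>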

<h1 id="time-machine">time machine</h1>

<p>How does guix guarantee that it is possible to reproduce a profile in the future ? The trick consists in asking the current guix version to call a previous guix version (the one we specify) to deploy the profile with the packages we need. <br/>
For example: let’s ask guix-5 to make use of guix-4 to install the emacs-30 package, which is only available in the guix-4 repositories, whereas guix-5 only provides emacs-32. <br/>
This mechanism is called time-machine. It is used as, for example: <br/></p>

<pre><code class="language-sh">guix time-machine --channels=channels.scm -- package -p $GUIX_PROFILE -m manifest.scm
</code></pre>

<p>Here, up-to-date guix uses time machine to roll back to the former guix version defined in <code>channels.scm</code>. Then, former guix calls the <code>package</code> command to install under <code>$GUIX_PROFILE</code> the list of packages defined in the <code>manifest.scm</code> file. <br/>
What’s important to understand here is that this will produce exactly the same output regardless of the host and the point in time when we run this command. The profile we produce is always the same, by design. And <em>this is what is relevant for <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a></em>. <br/></p>

<h1 id="shell-containers">shell containers</h1>

<p>Guix includes a command to create environments independent from the rest of our host system. This provides an increased degree of isolation when compared to profiles, as the latter lie on top of, and only augment, our current shell. Shell containers create a new, almost empty by default, minimalistic context for us to install packages. <br/></p>

<pre><code class="language-sh">guix shell --container --link-profile --emulate-fhs coreutils which python-vunit osvvm-uart
guix shell --container --link-profile --emulate-fhs -m manifest.scm
</code></pre>

<p>or, if one needs determinism <br/></p>

<pre><code class="language-sh">guix time-machine --channels=channels.scm -- shell --container --link-profile --emulate-fhs -m manifest.scm
</code></pre>

<p>The <code>--link-profile</code> flag will link the contents of <code>$GUIX_PROFILE</code> under <code>/gnu/store</code> to <code>~/.guix-profile</code>. <br/>
The <code>--emulate-fhs</code> flag will reproduce the standard filesystem hierarchy (FHS) under <code>/</code>, as some packages expect this layout and fail otherwise. <br/>
The <code>coreutils</code> and <code>which</code> packages will be helpful; otherwise, not even the <code>ls</code> command is present within the container. Minimalistic, I said. I should have said isolated instead. <br/></p>
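
<p>By default, a container sees neither the network nor directories outside the current one; both may be selectively reopened with standard <code>guix shell</code> flags (the shared directory below is an example): <br/></p>

<pre><code class="language-sh"># allow network access and share a host directory inside the container
guix shell --container --network \
     --share=$HOME/project=/project \
     coreutils which git
</code></pre>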

<h1 id="packs">packs</h1>

<p>Great. But. What if guix is not around ? How do I use it in a cluster, or on another host where guix is not yet available ? How do I distribute my dependencies, environments, etc. to a not-yet-guix-user ? No problem, <code>guix pack</code> is intended as a <a href="https://guix.gnu.org/manual/en/html_node/Invoking-guix-pack.html" rel="nofollow">simple way</a> of “packaging” guix contexts (understood as a set of packages), to be deployed afterward on a target host. This is the next step after profiles and shell containers. <br/>
Guix pack comes equipped with several different backends, producing contexts in the most common and useful formats. For example, the following command will pack <a href="/csantosb/tag:emacs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">emacs</span></a>, <a href="/csantosb/tag:ghdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ghdl</span></a> and <a href="/csantosb/tag:yosys" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">yosys</span></a> for you to use wherever you need them. <br/></p>

<pre><code class="language-sh">guix pack -f docker emacs ghdl-clang yosys
</code></pre>

<p>In the context of <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>, <a href="/csantosb/tag:docker" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">docker</span></a> images may be used for <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> tests, uploading the image to a remote <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a> registry; <a href="/csantosb/tag:apptainer" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">apptainer</span></a> containers can be sent to and run on a <a href="/csantosb/tag:hpc" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">hpc</span></a> cluster; .tar.gz compressed archives are a clean way of installing missing software on a remote machine. Furthermore, one has the possibility of packaging all the project <a href="/csantosb/tag:dependencies" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">dependencies</span></a> in a <code>manifest.scm</code> file, and distributing it along with the source code to anyone willing to use it. No instructions about the proper environment to run the project, no complicated installation of dependencies. <em>Stop asking third party users in your README to handle <strong>your dependencies</strong> for you</em>. <br/>
A simple <code>docker pull</code> pointing to a <a href="/csantosb/tag:forge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">forge</span></a> image repository is enough when guix is not locally available. Long run times ? Use a <a href="/csantosb/tag:forge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">forge</span></a> <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> custom runner on your own hardware with your <a href="/csantosb/tag:singularity" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">singularity</span></a> image. Working remotely on an <a href="/csantosb/tag:ssh" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ssh</span></a> server with obsolete software ? Pack, send and untar your favorite development tools and create a custom profile, no admin rights needed. The possibilities are endless. <br/>
And most important: the advantage of this approach over classical docker or singularity files for producing the images is <a href="/csantosb/tag:reproducibility" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproducibility</span></a>: every single time you build the image, you’ll get the exact same binary product. Use the <code>--save-provenance</code> flag to store in the image itself the manifest you used to create it. <br/>
Good luck trying to achieve the same with a Dockerfile. <br/></p>
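
<p><code>guix pack</code> prints the store path of the resulting archive; for the docker backend, a sketch of loading it into a local docker daemon: <br/></p>

<pre><code class="language-sh"># build the image and load it into the local docker daemon
docker load &lt; "$(guix pack -f docker emacs ghdl-clang yosys)"
# then list images to find the freshly loaded tag
docker images
</code></pre>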

<h1 id="importing">importing</h1>

<p>Now, guix is not a universal tool for installing everything out there. What about this obscure <a href="/csantosb/tag:python" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">python</span></a> package no one uses but you ? What if you absolutely need this <a href="/csantosb/tag:emacs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">emacs</span></a> package you just found on <a href="/csantosb/tag:codeberg" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">codeberg</span></a> which guix doesn’t provide ? Rust, go... There are plenty of pieces of code around not yet packaged in guix. No problem. Guix incorporates a simple and elegant way of extending the set of packages you’ll be able to install. Just <a href="https://guix.gnu.org/manual/en/html_node/Invoking-guix-import.html" rel="nofollow">import them</a>. <br/></p>

<pre><code class="language-sh">guix import pypi itsdangerous
guix import crate becareful
guix import gem wow
</code></pre>

<p>The previous commands will emit a new package definition corresponding to a package already handled by the language’s own package manager. <a href="/csantosb/tag:Dependencies" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Dependencies</span></a>, you say ? Use the <code>--recursive</code> flag. Once you have the definition, you’ll be able to build, install and use the corresponding package. <br/>
Check the <a href="https://guix.gnu.org/manual/en/html_node/Invoking-guix-import.html" rel="nofollow">documentation</a> for the number of backends available; you’ll be pleasantly surprised. <br/></p>
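
<p>In practice, one redirects the emitted definition to a file and builds from it (the emitted code may need light editing, such as adding module imports, before it builds): <br/></p>

<pre><code class="language-sh"># import a pypi package together with its dependencies
guix import pypi --recursive itsdangerous &gt; itsdangerous.scm
# then build from the (possibly edited) definition
guix build -f itsdangerous.scm
</code></pre>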

<h1 id="software-heritage">software heritage</h1>

<p>Last, but not least. <br/>
You have guix, its package definitions and all the fancy tools which come along. But. What if you don’t have access to the source code ? In this case, all of the above becomes meaningless: remember that guix is a fully bootstrapped distribution, <a href="https://infosec.press/csantosb/use-guix" rel="nofollow">being built from the very bottom up</a>. Building a package from source means having access to the source, which most of the time is hosted on a <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a>. But <a href="/csantosb/tag:forges" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">forges</span></a> disappear, especially proprietary ones; repositories relocate, or are just obsoleted and get replaced. <br/>
In this case, guix has you covered by falling back to <a href="https://www.softwareheritage.org/2019/04/18/software-heritage-and-gnu-guix-join-forces-to-enable-long-term-reproducibility/" rel="nofollow">Software Heritage</a> (<a href="/csantosb/tag:SH" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">SH</span></a>). This initiative, <a href="https://www.unesco.org/en/articles/celebrating-software-source-code-digital-heritage" rel="nofollow">with the support of UNESCO</a>, collects, preserves, and shares the source code of all software that is publicly available, including its full development history. Collection is triggered automatically by a crawler, manually with a <a href="https://www.softwareheritage.org/browser-extensions/" rel="nofollow">browser plugin</a>, or by guix itself when developers lint package definitions. <br/>
If, in ten years, you try to replicate one of your papers, and you plan to recreate your environment to run your code and reproduce your plots, you won’t be bothered by today’s python 3.10 having been obsoleted, abandoned and buried in computing history by <a href="/csantosb/tag:guile" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guile</span></a>. SH keeps a copy for you to sleep better at night. <br/></p>

<h1 id="channels">channels</h1>

<p><a href="https://guix.gnu.org/manual/en/html_node/Channels.html" rel="nofollow">Channels</a>, as a feature to extend the guix repository of definitions, deserve <a href="https://infosec.press/csantosb/guix-channels" rel="nofollow">their own chapter</a>. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/guix-crash-course</guid>
      <pubDate>Fri, 29 Nov 2024 15:25:48 +0000</pubDate>
    </item>
    <item>
      <title>sourcehut crash course</title>
      <link>https://infosec.press/csantosb/sourcehut-crash-course</link>
      <description>&lt;![CDATA[img br/&#xA;Everything you need to get started with sourcehut, an original and interesting #gitforge . !--more-- br/&#xA;#Sourcehut is organized in independent, closely related services. #Git repositories, #maillists, #wiki pages, #bug trackers, etc. belong to different domains, see below. They cooperate to provide a pleasant and practical experience when doing #modernhw. br/&#xA;Check the official documentation for details in all that follows. br/&#xA;Each user is given a ~alias. Each user’s service will appear under the SERVICE.sr.ht/~user domain, and are accessible in the top panels of the web interface. br/&#xA;Main services are: br/&#xA;&#xA;git&#xA;&#xA;git.sr.ht/~user/project, #git repositories. br/&#xA;There are no groups to combine them. I use dot notation to group my projects git.sr.ht/~user/group1.prj1. br/&#xA;&#xA;builds&#xA;&#xA;builds.sr.ht/~user, task building service, with the list of builds by the user, with user’s recent activity. br/&#xA;builds.sr.ht, similar to the previous. br/&#xA;Builds are trigger by a .build.yml manifest, or by a .build folder folder with up to 4 manifest files, at the root of a project. br/&#xA;It is also possible to automatically submit builds when a patch to a repo with build manifests is sent to a mailing list. This is achieved by appending the project name as a prefix to the subject of the message, for example [PATCH project-name]. br/&#xA;Check doc for details. br/&#xA;&#xA;hub&#xA;&#xA;The main tab, sourcehut, gives access to the hub hub.sr.ht/~user (identical to sr.ht/~user). br/&#xA;The hub displays user’s projects. br/&#xA;Projects are groups of git repositories, #maillists and bug trackers. There may be any of them in a project. br/&#xA;Sourcehut itself is organised as a project here https://sr.ht/~sircmpwn/sourcehut, and may be used as an example. 
br/&#xA;&#xA;todo&#xA;&#xA;todo.sr.ht/~user, the ticket tracking service, listing the trackers created by the user and their recent activity. br/&#xA;todo.sr.ht, similar to the previous. br/&#xA;&#xA;man&#xA;&#xA;man.sr.ht/~user/NAME, the #wiki service for writing documentation, listing the wikis created by the user br/&#xA;man.sr.ht, sourcehut documentation br/&#xA;man.sr.ht/builds.sr.ht, builds service documentation br/&#xA;man.sr.ht/hub.sr.ht, hub service documentation, etc. br/&#xA;&#xA;lists&#xA;&#xA;lists.sr.ht/~user/NAME, the #email #maillists service, listing the lists created by the user and their recent activity. br/&#xA;lists.sr.ht, the lists the user follows, with recent activity. br/&#xA;&#xA;Extra services are also provided: br/&#xA;&#xA;chat&#xA;&#xA;chat.sr.ht, without the user alias, is a paid service. It provides a bouncer that saves the history of IRC channels while you are not connected, and may be accessed from any IRC client. br/&#xA;&#xA;paste.sr.ht&#xA;&#xA;Paste hosting service. br/&#xA;&#xA;srht.site&#xA;&#xA;Static website hosting service. br/&#xA;Once again, the official documentation gives in-depth details about all of the previous. And remember, there is also hut to operate from the #cli. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/sourcehut.png" alt="img"> <br/>
Everything you need to get started with <a href="https://sr.ht" rel="nofollow">sourcehut</a>, an <a href="https://infosec.press/csantosb/git-forges#sourcehut" rel="nofollow">original and interesting</a> <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a>. <br/>
<a href="/csantosb/tag:Sourcehut" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Sourcehut</span></a> is organized into independent, closely related services. <a href="/csantosb/tag:Git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Git</span></a> repositories, <a href="/csantosb/tag:maillists" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">maillists</span></a>, <a href="/csantosb/tag:wiki" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">wiki</span></a> pages, <a href="/csantosb/tag:bug" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">bug</span></a> trackers, etc. belong to different domains; see below. They cooperate to provide a pleasant and practical experience when doing <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>. <br/>
Check the <a href="https://man.sr.ht" rel="nofollow">official documentation</a> for details on everything that follows. <br/>
Each user is given a <code>~alias</code>. A user’s services appear under the corresponding <code>SERVICE.sr.ht/~user</code> domains, and are accessible from the top panel of the web interface. <br/>
Main services are: <br/></p>

<h1 id="git">git</h1>

<p><code>git.sr.ht/~user/project</code>, <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> repositories. <br/>
There are no groups to organize repositories, so I use dot notation instead, as in <code>git.sr.ht/~user/group1.prj1</code>. <br/></p>

<h1 id="builds">builds</h1>

<p><code>builds.sr.ht/~user</code>, the build service, listing the user’s builds and recent activity. <br/>
<code>builds.sr.ht</code>, similar to the previous. <br/>
Builds are triggered by a <code>.build.yml</code> manifest, or by a <code>.builds</code> folder with up to 4 manifest files, at the root of a project. <br/>
It is also possible to automatically submit builds when a patch to a repo carrying build manifests is sent to a mailing list. This is done by prepending the project name to the subject of the message, for example <code>[PATCH project-name]</code>. <br/>
Check <a href="https://man.sr.ht/builds.sr.ht/" rel="nofollow">doc</a> for details. <br/></p>
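<p>For reference, a minimal manifest could look like the sketch below; the image, package and repository names are illustrative, so check the builds documentation for the fields your job actually needs. <br/></p>

<pre><code>image: alpine/latest        # build image to boot
packages:
  - git                     # extra packages installed into the image
sources:
  - https://git.sr.ht/~user/project   # repositories cloned into the build
tasks:                      # shell snippets, run in order
  - build: |
      cd project
      make
  - test: |
      cd project
      make check
</code></pre>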

<h1 id="hub">hub</h1>

<p>The main tab, <code>sourcehut</code>, gives access to the hub <code>hub.sr.ht/~user</code> (identical to <code>sr.ht/~user</code>). <br/>
The hub displays the user’s projects. <br/>
Projects are groups of git repositories, <a href="/csantosb/tag:maillists" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">maillists</span></a> and bug trackers; a project may contain any combination of them. <br/>
Sourcehut itself is organised as a project, <a href="https://sr.ht/~sircmpwn/sourcehut" rel="nofollow">https://sr.ht/~sircmpwn/sourcehut</a>, and may be used as an example. <br/></p>

<h1 id="todo">todo</h1>

<p><code>todo.sr.ht/~user</code>, the ticket tracking service, listing the trackers created by the user and their recent activity. <br/>
<code>todo.sr.ht</code>, similar to the previous. <br/></p>

<h1 id="man">man</h1>

<p><code>man.sr.ht/~user/NAME</code>, the <a href="/csantosb/tag:wiki" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">wiki</span></a> service for writing documentation, listing the wikis created by the user. <br/>
<code>man.sr.ht</code>, <code>sourcehut</code> documentation <br/>
<code>man.sr.ht/builds.sr.ht</code>, builds service documentation <br/>
<code>man.sr.ht/hub.sr.ht</code>, hub service documentation, etc. <br/></p>

<h1 id="lists">lists</h1>

<p><code>lists.sr.ht/~user/NAME</code>, the <a href="/csantosb/tag:email" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">email</span></a> <a href="/csantosb/tag:maillists" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">maillists</span></a> service, listing the lists created by the user and their recent activity. <br/>
<code>lists.sr.ht</code>, the lists the user follows, with recent activity. <br/></p>
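<p>Patches reach a list through plain <code>git send-email</code>. As a sketch, assuming a hypothetical list named <code>~user/project-devel</code> (check the list page for the real address): <br/></p>

<pre><code># store the list address in the repository configuration
git config sendemail.to "~user/project-devel@lists.sr.ht"
# send the last commit as a patch to the list
git send-email HEAD^
</code></pre>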

<p><strong>Extra services</strong> are also provided: <br/></p>

<h1 id="chat">chat</h1>

<p><code>chat.sr.ht</code>, without the user alias, is a paid service. It provides a <a href="https://sourcehut.org/blog/2021-11-29-announcing-the-chat.sr.ht-public-beta/" rel="nofollow">bouncer that saves the history of IRC channels</a> while you are not connected, and may be accessed from any IRC client. <br/></p>

<h1 id="paste-sr-ht">paste.sr.ht</h1>

<p>Paste hosting service. <br/></p>

<h1 id="srht-site">srht.site</h1>

<p>Static website hosting service. <br/>
Once again, the <a href="https://man.sr.ht" rel="nofollow">official documentation</a> gives in-depth details about all of the previous. And remember, there is also <a href="https://sr.ht/~emersion/hut/" rel="nofollow">hut</a> to operate from the <a href="/csantosb/tag:cli" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">cli</span></a>. <br/></p>
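<p>As a taste of <code>hut</code>, a few typical invocations are sketched below; subcommand names may vary between releases, so run <code>hut --help</code> to confirm them. <br/></p>

<pre><code>hut init                      # interactive setup, stores an OAuth2 token
hut git list                  # enumerate your git repositories
hut builds submit .build.yml  # submit a manifest to builds.sr.ht
hut paste create notes.txt    # upload a file to paste.sr.ht
</code></pre>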
]]></content:encoded>
      <guid>https://infosec.press/csantosb/sourcehut-crash-course</guid>
      <pubDate>Fri, 29 Nov 2024 14:33:48 +0000</pubDate>
    </item>
  </channel>
</rss>