<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>freesoftware &#8212; csantosb</title>
    <link>https://infosec.press/csantosb/tag:freesoftware</link>
    <description>Random thoughts</description>
    <pubDate>Wed, 13 May 2026 21:28:17 +0000</pubDate>
    <item>
      <title>sourcehut as guix test farm</title>
      <link>https://infosec.press/csantosb/sourcehut-as-guix-test-farm</link>
      <description>&lt;![CDATA[img br/&#xA;It is possible to contribute to improving #guix as the need for new functionalities, packages, fixes or upgrades arise. This is one of the strongest points in open communities: the possibility to participate on the development and continuous improvement of the tool. Let’s see how it goes when it comes to guix.!--more-- br/&#xA;Guix is a huge project which follows closely the #freesoftware paradigm, and collaboration works in two directions. You take advantage of other developers contributions to guix, while you participate yourself to improving guix repositories with your fixes, updates or new features, once they have been tested. In a first approach, from my own experience, one may create a personal local repository of package definitions, for a personal use. As a second step, it is possible to create a public guix channel, in parallel to contributing upstream. br/&#xA;Contributing your code to guix comes to sending #email with your patches attached, it’s that simple. Don&#39;t be intimidated by the details (this is used by lots of open communities, after all). Once your patches are submitted, a review of your code follows, see details. Some tools, like mumi, are helpful to that purpose. 
br/&#xA;&#xA;In detail&#xA;&#xA;Following the kind of contribution (new additions, fixes or upgrades), these simple steps will allow you to start contributing to guix: br/&#xA;&#xA;    git clone guix itselft br/&#xA;    from the guix repository, do: br/&#xA;    &#xA;        guix shell -D guix -CPW&#xA;    ./bootstrap&#xA;    ./configure&#xA;    make -j$(nproc)&#xA;    ./pre-inst-env guix build hello&#xA;        add and commit your changes, watch the commit message br/&#xA;    beware your synopses and descriptions br/&#xA;    remember to run the package tests, if relevant br/&#xA;    check the license br/&#xA;    use an alphabetical order in input lists br/&#xA;    no sign off your commits br/&#xA;    don’t forget to use lint/style/refresh -l/dependents to check your code br/&#xA;&#xA;Boring and routinary, right ? br/&#xA;&#xA;Use sourcehut&#xA;&#xA;img br/&#xA;Most of all the of the previous can be run automatically with help of sourcehut build farm #ci capabilities. Just simply, push the guix repository to sr.ht. 
At this point, it is possible to use this manifest file to run the lint/style/refresh -l/dependents testing stages on the yosys package definition, por example: br/&#xA;&#xA;image: guix&#xA;shell: true&#xA;environment:&#xA;  prj: guix.guix&#xA;  cmd: &#34;guix shell -D guix -CPWN git nss-certs -- ./pre-inst-env guix&#34;&#xA;sources:&#xA;  https://git.sr.ht/~csantosb/guix.guix&#xA;tasks:&#xA;  defpkg: |&#xA;      cd &#34;$prj&#34;&#xA;      pkg=$(git log -1 --oneline | cut -d&#39;:&#39; -f 2 | xargs)&#xA;      echo &#34;export pkg=$pkg&#34;     &#34;$HOME/.buildenv&#34;&#xA;  setup: |&#xA;      cd &#34;$prj&#34;&#xA;      guix shell -D guix -CPW -- ./bootstrap&#xA;      guix shell -D guix -CPW -- ./configure&#xA;      guix shell -D guix -CPW -- make -j $(nproc)&#xA;  build: |&#xA;      cd &#34;$prj&#34;&#xA;      eval &#34;$cmd build --rounds=5 $pkg&#34;&#xA;  lint: |&#xA;      cd &#34;$prj&#34;&#xA;      eval &#34;$cmd lint $pkg&#34;&#xA;  style: |&#xA;      cd &#34;$prj&#34;&#xA;      eval &#34;$cmd style $pkg --dry-run&#34;&#xA;  refresh: |&#xA;      cd &#34;$prj&#34;&#xA;      eval &#34;$cmd refresh -l $pkg&#34;&#xA;  dependents: |&#xA;      cd &#34;$prj&#34;&#xA;      eval &#34;$cmd build --dependents $pkg&#34;&#xA;triggers:&#xA;  condition: failure&#xA;    action: email&#xA;    to: builds.sr.ht@csantosb.mozmail.com&#xA;&#xA;Submit the manifest with br/&#xA;&#xA;hut builds submit # --edit&#xA;&#xA;You’ll be able to log into the build farm to follow the build process or to debug it with br/&#xA;&#xA;hut builds ssh ID&#xA;&#xA;Check the log here. As you can see, it fails: building of yosys succeeds, but building of packages which depend on it (--dependents) fails. br/&#xA;&#xA;Advanced&#xA;&#xA;Sourcehut provides a facility to automatize patch submission and testing. Using its hub integrator, one may just send an email to the email list related to your project (guix in this case), which mimics guix behavior for accepting patches. 
br/&#xA;The trick here consists on appending the project name as a prefix to the subject of the message, for example PATCH project-name], which will trigger the build of previous [.build.yml manifest file at the root of the project, after applying the patch. Neat, right ? br/&#xA;If you followed right here, you’ll notice that previous build manifest file is monolithic, affecting always the same package (yosys), which is kind of useless, as we are here interested in testing our patch. Thus, the question on how to trigger a custom build containing an updated $pkg variable related to the patch to test remains open. br/&#xA;To update the contents of the $pkg variable in the build manifest, one has to parse the commit message in the patch, extracting from there the package name. This is not a problem, as guix imposes clear commit messages in patches, so typically something like br/&#xA;&#xA;gnu: gnunet: Update to 0.23.0&#xA;&#xA;or br/&#xA;&#xA;gnu: texmacs: Add qtwayland-5&#xA;&#xA;Hopefully, parsing these messages to get the package name, and so the value of $pkg is trivial. br/&#xA;Then, it remains to include in our build manifest a first task which updates the contents of &#34;$HOME/.buildenv&#34;. This file is automatically populated using the environment variables in the manifest, and its contents are sourced at the beginning of all tasks. This mechanism allows passing variables between tasks. br/&#xA;&#xA;echo &#34;export pkg=value&#34;     &#34;$HOME/.buildenv&#34;&#xA;&#xA;Send your contribution&#xA;&#xA;Finally, once your changes go through all the tests, br/&#xA;&#xA;    use git send-email to create and send a patch br/&#xA;    consider reviews, if any, updating your patch accordingly with git ammend br/&#xA;    resend a new patch including a patch version (v1, v2 ...) br/&#xA;&#xA;Interested ? Consult the documentation for details, you’ll learn a lot about how to contribute to a common good and collaboration with other people. 
br/&#xA;ciseries br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/blog.csantosb/blob/master/pics/guix.png" alt="img"> <br/>
It is possible to contribute to improving <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> as the need for new functionalities, packages, fixes or upgrades arises. This is one of the strongest points of open communities: the possibility to participate in the development and continuous improvement of the tool. Let’s see how it goes when it comes to <a href="https://guix.gnu.org/" rel="nofollow">guix</a>. <br/>
Guix is a huge project which closely follows the <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a> paradigm, and collaboration works in two directions. You take advantage of other developers’ contributions to guix, while contributing yourself to improving the guix repositories with your fixes, updates or new features, once they have been tested. As a first approach, from my own experience, one may create a personal local repository of package definitions for personal use. As a second step, it is possible to create a public <a href="https://infosec.press/csantosb/guix-channels" rel="nofollow">guix channel</a>, in parallel to <a href="https://infosec.press/csantosb/guix-channels#contributing" rel="nofollow">contributing</a> upstream. <br/>
<a href="https://guix.gnu.org/manual/en/html_node/Contributing.html" rel="nofollow">Contributing</a> your code to guix comes down to <a href="https://git-send-email.io/" rel="nofollow">sending <a href="/csantosb/tag:email" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">email</span></a></a> <a href="https://www.futurile.net/2022/03/07/git-patches-email-workflow/" rel="nofollow">with your patches</a> attached; it’s that simple. Don&#39;t be intimidated by the details (this workflow is used by lots of open communities, after all). Once your patches are submitted, a review of your code follows, see <a href="https://libreplanet.org/wiki?title=Group:Guix/PatchReviewSessions2024" rel="nofollow">details</a>. Some tools, like <a href="https://www.youtube.com/watch?v=8m8igXrKaqU" rel="nofollow">mumi</a>, are helpful for that purpose. <br/></p>

<h1 id="in-detail">In detail</h1>

<p>Depending on the kind of contribution (new additions, fixes or upgrades), these simple steps will allow you to start contributing to guix: <br/></p>

<p>    git clone <a href="https://git.savannah.gnu.org/git/guix.git" rel="nofollow">guix itself</a> <br/>
    from the guix repository, run: <br/></p>

<p>    <code>sh
    guix shell -D guix -CPW
    ./bootstrap
    ./configure
    make -j$(nproc)
    ./pre-inst-env guix build hello
</code>
    add and commit your changes, minding the commit message format <br/>
    beware your <a href="https://guix.gnu.org/manual/en/html_node/Synopses-and-Descriptions.html" rel="nofollow">synopses and descriptions</a> <br/>
    remember to run the package tests, if relevant <br/>
    check the license <br/>
    use alphabetical order in input lists <br/>
    do not sign off your commits <br/>
    don’t forget to use <code>lint/style/refresh -l/dependents</code> to check your code <br/></p>

<p>Boring and routine, right? <br/></p>

<h1 id="use-sourcehut">Use sourcehut</h1>

<p><img src="https://git.sr.ht/~csantosb/blog.csantosb/blob/master/pics/sourcehut.png" alt="img"> <br/>
Most of all the of the previous can be run automatically with help of <a href="https://infosec.press/csantosb/tag:ciseries" rel="nofollow">sourcehut</a> build farm <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> capabilities. Just simply, push the guix repository to <a href="https://git.sr.ht/~csantosb/guix.guix" rel="nofollow">sr.ht</a>. At this point, it is possible to use <a href="https://builds.sr.ht/~csantosb/job/1391146/manifest" rel="nofollow">this manifest</a> file to run the <code>lint/style/refresh -l/dependents</code> testing stages on the <code>yosys</code> package definition, por example: <br/></p>

<pre><code class="language-yaml">image: guix
shell: true
environment:
  prj: guix.guix
  cmd: &#34;guix shell -D guix -CPWN git nss-certs -- ./pre-inst-env guix&#34;
sources:
  - https://git.sr.ht/~csantosb/guix.guix
tasks:
  - def_pkg: |
      cd &#34;$prj&#34;
      _pkg=$(git log -1 --oneline | cut -d&#39;:&#39; -f 2 | xargs)
      echo &#34;export pkg=$_pkg&#34; &gt;&gt; &#34;$HOME/.buildenv&#34;
  - setup: |
      cd &#34;$prj&#34;
      guix shell -D guix -CPW -- ./bootstrap
      guix shell -D guix -CPW -- ./configure
      guix shell -D guix -CPW -- make -j $(nproc)
  - build: |
      cd &#34;$prj&#34;
      eval &#34;$cmd build --rounds=5 $pkg&#34;
  - lint: |
      cd &#34;$prj&#34;
      eval &#34;$cmd lint $pkg&#34;
  - style: |
      cd &#34;$prj&#34;
      eval &#34;$cmd style $pkg --dry-run&#34;
  - refresh: |
      cd &#34;$prj&#34;
      eval &#34;$cmd refresh -l $pkg&#34;
  - dependents: |
      cd &#34;$prj&#34;
      eval &#34;$cmd build --dependents $pkg&#34;
triggers:
  - condition: failure
    action: email
    to: builds.sr.ht@csantosb.mozmail.com
</code></pre>

<p>Submit the manifest with <br/></p>

<pre><code class="language-sh">hut builds submit # --edit
</code></pre>

<p>You’ll be able to log into the build farm to follow the build process or to debug it with <br/></p>

<pre><code class="language-sh">hut builds ssh ID
</code></pre>

<p>Check the log <a href="https://builds.sr.ht/~csantosb/job/1391146" rel="nofollow">here</a>. As you can see, it fails: building <code>yosys</code> succeeds, but building the packages which depend on it (<code>--dependents</code>) <a href="https://builds.sr.ht/~csantosb/job/1391146#task-dependents" rel="nofollow">fails</a>. <br/></p>

<h1 id="advanced">Advanced</h1>

<p>Sourcehut provides a facility to automate <a href="https://man.sr.ht/builds.sr.ht/#integrations" rel="nofollow">patch submission and testing</a>. Using its <code>hub</code> integrator, one may simply send an email to the mailing list related to your project (guix in this case), which mimics guix’s behavior for accepting patches. <br/>
The trick here consists of prepending the project name to the subject of the message, for example <code>[PATCH project-name]</code>, which will trigger a build of the <a href="https://builds.sr.ht/~csantosb/job/1391146/manifest" rel="nofollow">.build.yml</a> manifest file at the root of the project, after applying the patch. Neat, right? <br/>
If you have followed along, you’ll notice that the previous build manifest is monolithic, always targeting the same package (yosys), which is of little use here, as we are interested in testing our patch. Thus, the question of how to trigger a custom build containing an updated <code>$pkg</code> variable related to the patch under test remains open. <br/>
To update the contents of the <code>$pkg</code> variable in the build manifest, one has to parse the commit message in the patch, extracting the package name from it. This is not a problem, as guix imposes clear commit messages in patches, typically something like <br/></p>

<pre><code class="language-sh">gnu: gnunet: Update to 0.23.0
</code></pre>

<p>or <br/></p>

<pre><code class="language-sh">gnu: texmacs: Add qtwayland-5
</code></pre>

<p>Fortunately, parsing these messages to get the package name, and hence the value of <code>$pkg</code>, is trivial. <br/>
It then remains to include in our build manifest a first task which updates the contents of <code>&#34;$HOME/.buildenv&#34;</code>. This file is automatically populated from the environment variables in the manifest, and its contents are sourced at the beginning of every task. This mechanism allows passing variables between tasks. <br/></p>

<pre><code class="language-sh">echo &#34;export pkg=value&#34; &gt;&gt; &#34;$HOME/.buildenv&#34;
</code></pre>
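<p>As a sketch (the helper name <code>parse_pkg</code> is mine, not part of the manifest), the parsing step can be reproduced with the same <code>cut</code> pipeline the manifest already uses: <br/></p>

<pre><code class="language-sh"># Hypothetical helper: extract the package name from a guix-style
# commit subject of the form "gnu: NAME: ...".
parse_pkg () {
  # field 2 sits between the first and second colon; xargs trims spaces
  echo "$1" | cut -d':' -f2 | xargs
}

pkg=$(parse_pkg "gnu: gnunet: Update to 0.23.0")
# pass the value on to the next tasks via the sourced buildenv file
echo "export pkg=$pkg" >> "$HOME/.buildenv"
</code></pre>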

<h1 id="send-your-contribution">Send your contribution</h1>

<p>Finally, once your changes go through all the tests, <br/></p>

<p>    use <a href="https://git-send-email.io/" rel="nofollow">git send-email</a> to create and <a href="https://guix.gnu.org/manual/en/html_node/Submitting-Patches.html" rel="nofollow">send a patch</a> <br/>
    consider reviews, if any, updating your patch accordingly with <code>git commit --amend</code> <br/>
    resend a new patch with a bumped patch version (v2, v3 ...) <br/></p>
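<p>A self-contained sketch of the revision step, using a throwaway repository and an illustrative commit (the <code>guix-patches@gnu.org</code> address is the one documented in the guix manual): <br/></p>

<pre><code class="language-sh"># disposable repository, just to demonstrate patch versioning
tmp=$(mktemp -d) ; cd "$tmp"
git init -q .
git config user.email you@example.org ; git config user.name you
echo demo > file ; git add file
git commit -qm "gnu: hello: Update to 2.12.2."
# --reroll-count (-v2) names the patch file v2-0001-...patch
git format-patch -1 -v2 --output-directory=outgoing
# it would then go out for review with:
#   git send-email --to=guix-patches@gnu.org outgoing/v2-0001-*.patch
</code></pre>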

<p>Interested? Consult <a href="https://guix.gnu.org/manual/en/html_node/Contributing.html" rel="nofollow">the documentation</a> for details; you’ll learn a lot about contributing to a common good and collaborating with other people. <br/>
<a href="/csantosb/tag:ciseries" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ciseries</span></a> <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/sourcehut-as-guix-test-farm</guid>
      <pubDate>Tue, 17 Dec 2024 16:57:05 +0000</pubDate>
    </item>
    <item>
      <title>git forges</title>
      <link>https://infosec.press/csantosb/git-forges</link>
      <description>&lt;![CDATA[img br/&#xA;Using #git is not the whole picture on #modernhw version control landscape. Git is great when one decides to locally follow changes, take diffs, create branches and so on. When it comes to collaboration with other people or to create a community around a common project, the need for extra tooling arises, and it becomes evident that git alone is not enough. A #gitforge fills this gap. !--more-- br/&#xA;Git bare repositories are a means of sharing the local git history remotely. Bares doesn’t show the worktree, as they are used solely as a common exchange place. This might be a remote server accessible through ssh, for example. Several different users may collaborate this way, provided they agree on a common workflow. Bares are more than enough for some needs. A front end on top of it may help to get an overview of what is going on and to take a look at branches, users and the like. All it takes to make this workflow useful is a little management, as git was designed with a fully distributed architecture in mind. Check the docs for more details. br/&#xA;Now, this approach is a bit too bare bones for most people. On top of bare git repositories, some decided to add extra functionality to ease using git remotely, calling for contributors attracted by buttons, colors, menus and most generally, being used to web frontends. Web forges include all usual suspects (project creation and configutation, markup rendering, user account and authorizations, project overview, etc.), as well as more advanced features (continuous integration, #ci, for testing and deployment with git hooks, wikis, code linters, built in actions, issue tracking, etc.). They abstract the use of git showing diffs, logs, issues threads, etc. As any other web gui tool, they come with its own set of inconvenients in what concern user freedom. br/&#xA;Popular examples are all around. 
#Gitlab may be deployed as a custom (not federated) instance, and is commonly found in research and public institutions; codeberg, based on forgejo, is a great example of how to deploy a lightweight #freesoftware instance of a collaborative forge (and the promise to federate on the fediverse). Many others exist, which more or less features, bells and whistles. You always have the choice. br/&#xA;&#xA;sourcehut&#xA;&#xA;#Sourcehut, as a collaborative platform, deserves special attention. It departs from mainstream forges, following a different paradigm based on the most robust, distributed and flexible technology at our hands since decades, plain text #email. Git, since its origins, includes a close integration with email, as they both share a distributed philosophy, avoiding central point of failure silos (surprising how mosft git forges tend to concentrate in silos). Sourcehut core architecture is based on mail exchange, patches and #maillists, which turns out to be a much more flexible approach than that of what most forges propose. Their concept of project goes well beyond that of usual workflows, integrating nicely git with email, wikis, bug trackers and build features. They’re still in an alpha stage, so expect the best still to come. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/forge.png" alt="img"> <br/>
Using <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> is not the whole picture of the <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> version control landscape. Git is great when one decides to locally follow changes, take diffs, create branches and so on. When it comes to collaborating with other people or creating a community around a common project, the need for extra tooling arises, and it becomes evident that git alone is not enough. A <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a> fills this gap. <br/>
<a href="https://git-scm.com/book/en/v2/Git-on-the-Server-Getting-Git-on-a-Server" rel="nofollow">Git bare repositories</a> are a means of sharing the local git history remotely. Bare repositories don’t carry a worktree, as they are used solely as a common exchange place. This might be a remote server accessible through ssh, for example. Several users may collaborate this way, provided they agree on a common workflow. Bare repositories are more than enough for some needs. A front end on top of them may help to get an overview of what is going on and to take a look at branches, users and the like. All it takes to make this workflow useful is a little management, as git was designed with a fully distributed architecture in mind. Check the docs for more details. <br/>
Now, this approach is a bit too bare-bones for most people. On top of bare git repositories, some decided to add extra functionality to ease using git remotely, appealing to contributors attracted by buttons, colors and menus and, more generally, used to web frontends. Web forges include all the usual suspects (project creation and configuration, markup rendering, user accounts and authorization, project overview, etc.), as well as more advanced features (continuous integration, <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a>, for testing and deployment with git hooks, wikis, code linters, built-in actions, issue tracking, etc.). They abstract away the use of git, showing diffs, logs, issue threads, etc. As with any other web gui tool, they come with their own set of inconveniences where user freedom is concerned. <br/>
Popular examples are all around. <a href="/csantosb/tag:Gitlab" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Gitlab</span></a> may be deployed as a custom (not federated) instance, and is <a href="https://about.gitlab.com/" rel="nofollow">commonly found</a> in research and public institutions; <a href="https://codeberg.org/" rel="nofollow">codeberg</a>, based on <a href="https://forgejo.org/" rel="nofollow">forgejo</a>, is a great example of how to deploy a lightweight <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a> instance of a collaborative forge (with the promise of federating in the <a href="https://www.fediverse.to/" rel="nofollow">fediverse</a>). Many others exist, with more or fewer features, bells and whistles. You always <a href="https://drewdevault.com/2022/03/29/free-software-free-infrastructure.html" rel="nofollow">have the choice</a>. <br/></p>
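<p>The bare-repository workflow described above can be sketched in a few commands, with local paths standing in for a remote ssh server: <br/></p>

<pre><code class="language-sh"># "server" side: a bare repository, history only, no worktree
srv=$(mktemp -d)
git init -q --bare "$srv/project.git"
# contributor side: clone it, commit, and push back
git clone -q "$srv/project.git" "$srv/work" 2>/dev/null
cd "$srv/work"
git config user.email you@example.org ; git config user.name you
echo hello > README ; git add README
git commit -qm "Add README."
git push -q origin HEAD
</code></pre>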

<h1 id="sourcehut">sourcehut</h1>

<p><a href="/csantosb/tag:Sourcehut" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Sourcehut</span></a>, as a collaborative platform, deserves special attention. It departs from mainstream forges, following a <a href="https://begriffs.com/posts/2018-06-05-mailing-list-vs-github.html" rel="nofollow">different paradigm</a> based on the most robust, distributed and flexible technology we have had at hand for decades: <a href="https://useplaintext.email/" rel="nofollow">plain text</a> <a href="/csantosb/tag:email" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">email</span></a>. Git, since its origins, has included close integration with email, as they both share a distributed philosophy, avoiding central-point-of-failure silos (surprising how most git forges tend to concentrate into silos). <a href="https://drewdevault.com/2018/07/02/Email-driven-git.html" rel="nofollow">Sourcehut</a>’s core architecture is based on mail exchange, patches and <a href="/csantosb/tag:maillists" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">maillists</span></a>, which turns out to be a much more flexible approach than what most forges propose. Their concept of a project goes well beyond that of usual workflows, nicely integrating git with email, wikis, bug trackers and build features. They’re still in an alpha stage, so expect the best still to come. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/git-forges</guid>
      <pubDate>Sun, 08 Dec 2024 22:36:11 +0000</pubDate>
    </item>
    <item>
      <title>on writting freely</title>
      <link>https://infosec.press/csantosb/on-writting-freely</link>
      <description>&lt;![CDATA[img br/&#xA;Putting new ideas in community, exchanging opinions, replying to someone&#39;s else impressions, sharing public experiences, showing feelings about modern way of living, writing down notes on what’s going on from one’s side ... So many interesting and useful content around to share. The question is, how to do so simply and without complications ? How not to expend way too much time messing with tooling ? Is it yet possible to concentrate on what really matters, contents ? Here I summarize the way I’ve found to contribute to this blog, which fits best with my workflow. !--more-- br/&#xA;&#xA;the what&#xA;&#xA;First and foremost, for the requirements. br/&#xA;In my case, the requisites are simple, even if hard to get when one thinks about. br/&#xA;I need a distracting free environment to concentrate on what really matters: the content I’m willing to share. For sure, I do need to remain within my working environment, that’s to say, #emacs. I need to switch context quickly between any current activity and writing prose when something comes up; while writing, I need to stay focus. Similarly, I want to switch back to previous context when the writing is complete. No doubt, I need to complete previous posts when I have something new to include or to correct, so I need a means of retrieving previous posts quickly. Needless to say, I need a #freesoftware tool I may tune to my needs, fixing issues or including new features. br/&#xA;Last, but not least, I privilege a way to push remotely without complicated compilations of anything at all: a couple of keystrokes, and the post is sent online under its right form, including some markup and images. Updating previous versions if necessary should be that simple too, and I must be able to check the rendering with any web browser, included eww under emacs, so no javascript or fancy stuff involved. That’s it by now. br/&#xA;Easy, right ? 
br/&#xA;&#xA;the how&#xA;&#xA;My current choice goes for writefreely as an open, decentralized and free alternative to web publishing on the web, which concentrates on providing a simple reading experience. This solution may be self-hosted, but there are also some friendly communities around helping out. Infosec.press is one of them. Blog posts show up in the #fediverse under the (platform) user account, so that they are easy to follow. Server side, this is more than what I need. br/&#xA;But, client side ?, you must be asking. To write text, I’m using #orgmode, with all of its facilities, and not the markdown supported by default by the platform. Then, writefreely.el takes care of exporting contents, handling the data exchange with the server through the provided api. I had to fix a couple of issues before, mostly trivial side effects. This is one of the biggest advantages of #freesoftware, having the possibility to contribute to bug fixing, improving a common. br/&#xA;As for the question on how to access the blog contents locally, I opt for a different #plaintext file by blog ticket, that I manipulate as any other #orgroam node. Orgroam allows to quickly retrieve, manipulate and insert as links previous notes. When I type the name of a non-existing note, it creates a new one for me, based on a custom template which incorporates the necessary headings, title, tags and the like. It resuls in something like: br/&#xA;&#xA;:PROPERTIES:&#xA;:ID:       ID-6dd1-45d7-a70e-ae5c99c2797a&#xA;:END:&#xA;+TITLE: on writting freely&#xA;+OPTIONS: toc:nil -:nil \n:t&#xA;+LINK: srht https://repo/pics/%s&#xA;+filetags: :tag1:tag2:&#xA;a comment&#xA;[[srht:image.png]]&#xA;Donec neque quam, dignissim in, mollis nec, sagittis eu, wisi ... !--more--&#xA;Nunc eleifend leo vitae magna.&#xA;&#xA;You’ll figure it out. br/&#xA;Remains the question of images: just simply, they are hosted on an unlisted git repository, from where they are fetched. 
Now, with my working environment and with a couple of keys, I may pop up a new buffer, write some content, then publish, delete or update a new article within seconds, checking the results with #eww, all without leaving #emacs. You’ll get a set of local variables append to you buffer when you publish for the first time, something like br/&#xA;&#xA;Local Variables:&#xA;writefreely-post-id: &#34;ID&#34;&#xA;writefreely-post-token: nil&#xA;End:&#xA;&#xA;which allows to retrieve the post online afterwards. br/&#xA;Put the whole under #git control, and the perfect blogging setup is ready for you to enjoy writing ! Simple, elegant and efficient. br/&#xA;Finally, I have packaged writefreely.el and sent a patch to #guix so that it will hopefully get merged upstream soon. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/writefreely.png" alt="img"> <br/>
Putting new ideas to the community, exchanging opinions, replying to someone else&#39;s impressions, sharing public experiences, showing feelings about the modern way of living, writing down notes on what’s going on from one’s side ... So much interesting and useful content around to share. The question is, how to do so simply and without complications? How not to spend way too much time messing with tooling? Is it even possible to concentrate on what really matters, content? Here I summarize the way I’ve found to contribute to this blog, which fits best with my workflow. <br/></p>

<h1 id="the-what">the what</h1>

<p>First and foremost, the requirements. <br/>
In my case, the requisites are simple, even if hard to meet when one thinks about it. <br/>
I need a distraction-free environment to concentrate on what really matters: the content I’m willing to share. For sure, I do need to remain within my working environment, that’s to say, <a href="https://infosec.press/csantosb/use-emacs" rel="nofollow"><a href="/csantosb/tag:emacs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">emacs</span></a></a>. I need to switch context quickly between any current activity and writing prose when something comes up; while writing, I need to stay focused. Similarly, I want to switch back to the previous context when the writing is complete. No doubt, I need to complete previous posts when I have something new to include or to correct, so I need a means of retrieving previous posts quickly. Needless to say, I need a <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a> tool I may tune to my needs, fixing issues or adding new features. <br/>
Last, but not least, I favor a way to <a href="https://infosec.press/csantosb" rel="nofollow">push remotely</a> without complicated compilation of anything at all: a couple of keystrokes, and the post is sent online in its proper form, including some markup and images. Updating previous versions if necessary should be just as simple, and I must be able to check the rendering with any web browser, including <a href="https://www.gnu.org/software/emacs/manual/html_mono/eww.html" rel="nofollow">eww</a> under emacs, so no javascript or fancy stuff involved. That’s it for now. <br/>
Easy, right? <br/></p>

<h1 id="the-how">the how</h1>

<p>My current choice goes to <a href="https://github.com/dangom/writefreely.el.git" rel="nofollow">writefreely</a> as an open, decentralized and free alternative for publishing on the web, one which concentrates on providing a simple reading experience. This solution may be self-hosted, but there are also some friendly <a href="https://writefreely.org/instances" rel="nofollow">communities</a> around helping out. <a href="https://infosec.press/about" rel="nofollow">Infosec.press</a> is one of them. Blog posts show up in the <a href="/csantosb/tag:fediverse" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">fediverse</span></a> under the (platform) user account, so they are easy to follow. Server side, this is more than what I need. <br/>
But, client side ?, you must be asking. To write text, I’m using <a href="/csantosb/tag:orgmode" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">orgmode</span></a>, with all of its facilities, rather than the markdown supported by default by the platform. Then, <a href="https://github.com/dangom/writefreely.el.git" rel="nofollow">writefreely.el</a> takes care of exporting contents, handling the data exchange with the server through the provided api. I had to fix a couple of <a href="https://github.com/dangom/writefreely.el/commits/master/" rel="nofollow">issues</a> first, mostly trivial side effects. This is one of the biggest advantages of <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a>: the possibility to contribute bug fixes, improving a commons. <br/>
As for the question of how to access the blog contents locally, I opt for a separate <a href="/csantosb/tag:plaintext" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">plaintext</span></a> file per blog post, which I manipulate as any other <a href="/csantosb/tag:orgroam" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">orgroam</span></a> node. <a href="https://www.orgroam.com/" rel="nofollow">Orgroam</a> allows me to quickly retrieve and manipulate previous notes, and to insert them as links. When I type the name of a non-existing note, it creates a new one for me, based on a custom template which incorporates the necessary headings, title, tags and the like. It results in something like: <br/></p>

<pre><code class="language-text">:PROPERTIES:
:ID:       ID-6dd1-45d7-a70e-ae5c99c2797a
:END:
#+TITLE: on writting freely
#+OPTIONS: toc:nil -:nil \n:t
#+LINK: srht https://repo/pics/%s
#+filetags: :tag1:tag2:
# a comment
[[srht:image.png]]
Donec neque quam, dignissim in, mollis nec, sagittis eu, wisi ... &lt;!--more--&gt;
Nunc eleifend leo vitae magna.
</code></pre>

<p>You’ll figure it out. <br/>
That leaves the question of images: they are simply hosted on an unlisted git repository, from where they are fetched. Now, within my working environment and with a couple of keystrokes, I can pop up a new buffer, write some content, then publish, update or delete an article within seconds, checking the results with <a href="/csantosb/tag:eww" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">eww</span></a>, all without leaving <a href="/csantosb/tag:emacs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">emacs</span></a>. You’ll get a set of local variables appended to your buffer when you publish for the first time, something like <br/></p>

<pre><code class="language-text"># Local Variables:
# writefreely-post-id: &#34;ID&#34;
# writefreely-post-token: nil
# End:
</code></pre>

<p>which allows retrieving the post online afterwards. <br/>
Put the whole under <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> control, and the perfect blogging setup is ready for you to enjoy writing ! Simple, elegant and efficient. <br/>
Finally, I have <a href="https://issues.guix.gnu.org/74704" rel="nofollow">packaged writefreely.el</a> and sent a patch to <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> so that it will hopefully get merged upstream soon. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/on-writting-freely</guid>
      <pubDate>Sun, 08 Dec 2024 19:12:30 +0000</pubDate>
    </item>
    <item>
      <title>on testing</title>
      <link>https://infosec.press/csantosb/on-testing</link>
      <description>&lt;![CDATA[img br/&#xA;Creating something new from scratch implies a certain ratio of unpredictable issues (loosely defined in the scope of this post: new errors, regressions, warnings, ... any unexpected behavior one may encounter).  Most important, a digital design developer needs to define somehow what he considers to be a project issue, before even thinking about how to react to it. Luckily, in #modernhw a few usual tools are available to ease the process as a whole. Let’s overview some of them. !--more-- br/&#xA;Here on the electronics digital design side of life, we have mainly three #freesoftware fine tools (among many others) to perform code checking to a large extent: osvvm, cocotb and vunit. They are all compatible with the ghdl compiler, and they are all available from my own #guix electronics channel (cocotb and vunit will hopefully get merged on guix upstream at some point). Each departs from the rest, adopting a different paradigm about how digital design testing should be understood: verification, cosimulation and unit testing are master keywords here. br/&#xA;They are all complementary, so you’ll be able to combine them to test your designs. However, you’ll need to be careful and check twice what you’re doing, as some of their features overlap (random treatment, for example). You’ve been warned. br/&#xA;&#xA;osvvm&#xA;&#xA;First, we have osvvm. #Osvvm is a modern verification #vhdl library using most up-to-date language constructs (by the main contributor to the vhdl standard), and I’ll mention it frequently in this #modernhw posts series. Well documented and being continuously improved, it provides a large set of features for natively verifying advanced designs, among them, a constrained random facility, transactions, logging, functional coverage, scoreboards, FIFOs, sophisticated memory models, etc. Even some co-simulation capabilities are included here. Refer to the documentation repository for up-to-date details about osvvm. 
br/&#xA;You’ll be able to install osvvm with br/&#xA;&#xA;guix search osvvm&#xA;guix install osvvm-uart osvvm-scripts&#xA;&#xA;You have a simple use of the osvvm vhdl library in the #aludesign, where the random feature is used to inject inputs to a dut unit. Testing runs for as long as every combination of two variables hasn’t been fully covered. This provides a means to be sure that all cases have been tested, regardless of random inputs. You’ll see an example simulation log here, using the remote ci builds facility of sourcehut. br/&#xA;&#xA;vunit&#xA;&#xA;Then, we have Vunit as a complete single point of failure framework. It complements traditional test benches with a software oriented approach, based on the &#34;test early and test often&#34; paradigm, a.k.a. unit testing.  Here, a pre-built library layer on top of the vhdl design scans, runs and logs unit test cases embedded in user test benches. This approach seeks for an early way to detect as soon as possible conception errors. It performs random testing, advanced checking, logging, advanced communication and an advanced api to access the whole from python. It may be called from the command line, adding custom flags, and configured from a python script file where one defines libraries, sources and test parameters. Simple, elegant and efficient as a testing framework, if you want my opinion. Check the documentation for details. br/&#xA;Install it as usual with br/&#xA;&#xA;guix install python-vunit&#xA;&#xA;A clever example of its use is provided by the fw-open-logic firmware package (also included in the electronics channel). When you install it, you’ll need to build the package once, which gets installed in the guix store for you to use. During the process, the whole testing of its constituent modules is performed. You may have an overview of how it goes with: br/&#xA;&#xA;guix build fw-open-logic:out&#xA;&#xA;By the way, if you need the simulation libraries, they are available too. 
br/&#xA;&#xA;guix install fw-open-logic:out&#xA;# guix install fw-open-logic:sim  # sim libraries&#xA;&#xA;Additionnaly, #vunit is compatible with running a testing #ci pipeline online, as explained here. br/&#xA;&#xA;cocotb&#xA;&#xA;Finally, we have the interesting and original cocotb. It groups several construct providing a set of facilities to implement coroutine-based cosimulation of vhdl designs. Cosimulation, you say ? Yes. It requests on demand #ghdl simulation time from software (python, in this case), dispatching actions as the time advances. Afterward, based on events’ triggers, you’ll stop simulation coming back to software. This forth and back dance goes on, giving access to advanced testing and verification capabilities. Flexible and customizable as much as needed, in my opinion. Go read the documentation to understand how powerful cosumulation approach can reveal. By the way, install it with br/&#xA;&#xA;guix install python-cocotb&#xA;&#xA;---&#xA;&#xA;From the previous, you’ll have understood that having access to verification, unit testing and cosimulation libraries is paramount in #modernhw digital design. Independly or combined (be careful!), they provide powerful tools to detect issues (of any kind) in your design. And yet, this is not enough, as the question arises about where, and when do we run these tests ? From the previous logs in the examples, you’ll have noticed that tests run online in #ci infrastructure. How it goes ? This is the topic of the ci posts in this series. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/testing.png" alt="img"> <br/>
Creating something new from scratch implies a certain ratio of unpredictable issues (loosely defined in the scope of this post as new errors, regressions, warnings, ... any unexpected behavior one may encounter). Most importantly, a digital design developer needs to define what counts as a project issue before even thinking about how to react to it. Luckily, in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> a few common tools are available to ease the process as a whole. Let’s overview some of them. <br/>
Here on the electronics digital design side of life, we have mainly three <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a> fine tools (among many others) to perform code checking to a large extent: <strong>osvvm</strong>, <strong>cocotb</strong> and <strong>vunit</strong>. They are all compatible with the <a href="https://infosec.press/csantosb/ghdl" rel="nofollow">ghdl compiler</a>, and they are all available from my own <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> <a href="https://infosec.press/csantosb/guix-channels#electronics-channel" rel="nofollow">electronics channel</a> (<a href="https://issues.guix.gnu.org/68153" rel="nofollow">cocotb</a> and <a href="https://issues.guix.gnu.org/74242" rel="nofollow">vunit</a> will hopefully get merged on <a href="https://infosec.press/csantosb/guix" rel="nofollow">guix upstream</a> at some point). Each departs from the rest, adopting a different paradigm about how digital design testing should be understood: verification, cosimulation and unit testing are master keywords here. <br/>
They are all complementary, so you’ll be able to combine them to test your designs. However, you’ll need to be careful and check twice what you’re doing, as some of their features overlap (random treatment, for example). You’ve been warned. <br/></p>

<h1 id="osvvm">osvvm</h1>

<p>First, we have <a href="https://github.com/OSVVM" rel="nofollow">osvvm</a>. <a href="/csantosb/tag:Osvvm" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Osvvm</span></a> is a modern verification <a href="/csantosb/tag:vhdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">vhdl</span></a> library using the most up-to-date language constructs (by the <a href="https://www.linkedin.com/in/jimwilliamlewis" rel="nofollow">main contributor</a> to the <a href="https://gitlab.com/IEEE-P1076" rel="nofollow">vhdl standard</a>), and I’ll mention it frequently in this <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> post series. Well documented and continuously improved, it provides a large set of features for natively verifying advanced designs, among them a constrained random facility, transactions, logging, functional coverage, scoreboards, FIFOs, sophisticated memory models, etc. Even some co-simulation capabilities are included. Refer to the <a href="https://github.com/OSVVM/Documentation#readme" rel="nofollow">documentation repository</a> for up-to-date details about osvvm. <br/>
You’ll be able to install osvvm with <br/></p>

<pre><code class="language-sh"># guix search osvvm
guix install osvvm-uart osvvm-scripts
</code></pre>

<p>You <a href="https://git.sr.ht/~csantosb/ip.alu/tree/test/sim/alu_tb.vhd#L30" rel="nofollow">have a simple use</a> of the osvvm vhdl library in the <a href="/csantosb/tag:aludesign" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">aludesign</span></a>, where the random feature is used to inject inputs to a dut unit. Testing runs until every combination of two variables has been fully covered. This provides a means to be sure that all cases have been tested, regardless of the random inputs. You’ll see an example simulation log <a href="https://builds.sr.ht/query/log/1380968/test_profile/log" rel="nofollow">here</a>, using the <a href="https://infosec.press/csantosb/ci-sourcehut" rel="nofollow">remote ci</a> <a href="https://infosec.press/csantosb/sourcehut-crash-course#builds" rel="nofollow">builds facility</a> of <a href="https://infosec.press/csantosb/sourcehut-crash-course" rel="nofollow">sourcehut</a>. <br/></p>
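<p>The coverage-closure loop described above can be sketched in a few lines of plain python. This is only an illustrative stand-in for osvvm’s functional coverage, not its actual api: random stimuli are drawn until every combination of two made-up variables has been observed. <br/></p>

<pre><code class="language-python">import itertools
import random

# Illustrative sketch (plain python, not osvvm): keep drawing random
# stimuli until every (opcode, carry) combination has been covered.
def run_until_covered(opcodes=(0, 1, 2, 3), carries=(0, 1), seed=0):
    rng = random.Random(seed)
    goal = set(itertools.product(opcodes, carries))
    covered = set()
    stimuli = []
    while covered != goal:
        item = (rng.choice(opcodes), rng.choice(carries))
        covered.add(item)
        stimuli.append(item)  # a real test bench would drive the dut here
    return stimuli

stimuli = run_until_covered()
# All 8 combinations were exercised, regardless of the random ordering.
assert set(stimuli) == set(itertools.product((0, 1, 2, 3), (0, 1)))
</code></pre>

<p>The point is the stopping condition: the test ends when coverage is complete, not after a fixed number of random draws. <br/></p>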

<h1 id="vunit">vunit</h1>

<p>Then, we have <a href="https://github.com/VUnit/vunit" rel="nofollow">Vunit</a>, a complete unit testing framework. It complements traditional test benches with a software-oriented approach, based on the “test early and test often” paradigm, a.k.a. unit testing. Here, a pre-built library layer on top of the vhdl design scans, runs and logs unit test cases embedded in user test benches. The goal is to detect design errors as early as possible. It provides random testing, advanced checking, logging, communication libraries and an api to access the whole from python. It may be called from the command line, adding custom flags, and configured from a python script file where one defines libraries, sources and test parameters. Simple, elegant and efficient as a testing framework, if you want my opinion. Check the <a href="https://vunit.github.io/" rel="nofollow">documentation</a> for details. <br/>
Install it as usual with <br/></p>

<pre><code class="language-sh">guix install python-vunit
</code></pre>
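<p>The python script file mentioned above is typically a small run script. Here is a minimal sketch, assuming a vunit installation plus a supported simulator (ghdl, for instance) on the path; the library name and source globs are placeholders, not part of any real project. <br/></p>

<pre><code class="language-python"># Minimal VUnit run script (sketch). "lib" and the globs below are
# hypothetical; adapt them to your own project tree.
from vunit import VUnit

vu = VUnit.from_argv()                 # parse command-line flags (--list, -v, ...)
lib = vu.add_library("lib")            # vhdl library the sources compile into
lib.add_source_files("src/*.vhd")      # design under test
lib.add_source_files("test/tb_*.vhd")  # test benches with embedded test cases
vu.main()                              # scan, run and report all test cases
</code></pre>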

<p>A clever example of its use is provided by the <code>fw-open-logic</code> firmware package (also included in the <a href="https://infosec.press/csantosb/guix-channels#electronics-channel" rel="nofollow">electronics channel</a>). When you install it, you’ll need to <a href="https://infosec.press/csantosb/guix-crash-course#packages" rel="nofollow">build the package</a> once, which gets installed in the guix store for you to use. During the process, the whole testing of its constituent modules is performed. You may have an overview of how it goes with: <br/></p>

<pre><code class="language-sh">guix build fw-open-logic:out
</code></pre>

<p>By the way, if you need the simulation libraries, they are available too. <br/></p>

<pre><code class="language-sh">guix install fw-open-logic:out
# guix install fw-open-logic:sim  # sim libraries
</code></pre>

<p>Additionally, <a href="/csantosb/tag:vunit" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">vunit</span></a> is compatible with running a testing <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> pipeline online, as explained <a href="https://infosec.press/csantosb/ci-sourcehut" rel="nofollow">here</a>. <br/></p>

<h1 id="cocotb">cocotb</h1>

<p>Finally, we have the interesting and original <a href="https://www.cocotb.org/" rel="nofollow">cocotb</a>. It groups several constructs providing a set of facilities to implement coroutine-based cosimulation of vhdl designs. Cosimulation, you say ? Yes. It requests <a href="/csantosb/tag:ghdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ghdl</span></a> simulation time on demand from software (python, in this case), dispatching actions as time advances. Then, based on event triggers, simulation stops and control comes back to software. This back and forth dance goes on, giving access to advanced testing and verification capabilities. Flexible and customizable as much as needed, in my opinion. Go read <a href="https://docs.cocotb.org/en/stable/index.html" rel="nofollow">the documentation</a> to understand how powerful the cosimulation approach can be. By the way, install it with <br/></p>

<pre><code class="language-sh">guix install python-cocotb
</code></pre>
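<p>To give the flavour of this back and forth dance, here is a conceptual sketch built on python’s own asyncio. The fake simulator, the signal and the timings are all invented for illustration; none of this is cocotb’s real api. <br/></p>

<pre><code class="language-python">import asyncio

# Conceptual sketch (asyncio, not cocotb): the test coroutine requests
# simulation time, the fake simulator advances, then control returns
# to the software side, which checks the dut state.
class FakeSimulator:
    def __init__(self):
        self.now = 0      # simulated time, in ns
        self.signal = 0   # a dut output that rises at t = 20 ns

    def advance(self, ns):
        self.now += ns
        self.signal = 1 if self.now >= 20 else 0

async def timer(sim, ns):
    sim.advance(ns)          # hand control to the "simulator"
    await asyncio.sleep(0)   # then yield back to the event loop

async def test(sim):
    await timer(sim, 10)     # request 10 ns of simulation time
    assert sim.signal == 0   # too early: the output is still low
    await timer(sim, 10)     # another 10 ns
    assert sim.signal == 1   # at 20 ns the output has risen
    return sim.now

elapsed = asyncio.run(test(FakeSimulator()))
assert elapsed == 20
</code></pre>

<p>In real cocotb, the awaited objects are simulator triggers (edges, timers) and the dut handle maps onto the actual hdl hierarchy; the control flow, however, is exactly this alternation. <br/></p>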

<hr>

<p>From the previous, you’ll have understood that having access to verification, unit testing and cosimulation libraries is paramount in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> digital design. Independently or combined (be careful!), they provide powerful tools to detect issues (of any kind) in your design. And yet, this is not enough: the question arises of where and when we run these tests. From the logs in the previous examples, you’ll have noticed that tests run online on <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> infrastructure. How does that work ? This is the topic of the <a href="https://infosec.press/csantosb/ci" rel="nofollow">ci posts</a> in this series. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/on-testing</guid>
      <pubDate>Fri, 06 Dec 2024 09:32:14 +0000</pubDate>
    </item>
    <item>
      <title>about</title>
      <link>https://infosec.press/csantosb/about</link>
      <description>&lt;![CDATA[---&#xA;&#xA;Curious about everything br/&#xA;#FreeSoftware and computer science #freedom as a basic right br/&#xA;Doing public #research in France br/&#xA;Using #emacs, what else ? br/&#xA;&#xA;---&#xA;&#xA;sr.ht ◦ cv-fr ◦ cv-hal ◦ orcid ◦ bibtex ◦ gitlab.com ◦ perso br/]]&gt;</description>
      <content:encoded><![CDATA[<hr>

<p>Curious about everything <br/>
<a href="/csantosb/tag:FreeSoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">FreeSoftware</span></a> and computer science <a href="/csantosb/tag:freedom" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freedom</span></a> as a basic right <br/>
Doing public <a href="/csantosb/tag:research" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">research</span></a> in France <br/>
Using <a href="/csantosb/tag:emacs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">emacs</span></a>, what else ? <br/></p>

<hr>

<p><a href="https://sr.ht/~csantosb" rel="nofollow">sr.ht</a> ◦ <a href="https://csantosb-cv.gitlab.io/cv-fr" rel="nofollow">cv-fr</a> ◦ <a href="https://cv.hal.science/csantosb" rel="nofollow">cv-hal</a> ◦ <a href="https://orcid.org/0000-0003-3565-4234" rel="nofollow">orcid</a> ◦ <a href="https://git.sr.ht/~csantosb/referenceslibrary/blob/master/csb.bib" rel="nofollow">bibtex</a> ◦ <a href="https://gitlab.com/csantosb" rel="nofollow">gitlab.com</a> ◦ <a href="https://infosec.press/ideas/acerca-de-a-propos" rel="nofollow">perso</a> <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/about</guid>
      <pubDate>Sun, 01 Dec 2024 20:15:20 +0000</pubDate>
    </item>
    <item>
      <title>about #modernhw</title>
      <link>https://infosec.press/csantosb/about-modernhw-tools-and-techniques-towards-modern-digital-electronics-design</link>
      <description>&lt;![CDATA[Tools and techniques towards modern digital electronics design br/&#xA;&#xA;img br/&#xA;Modern digital hardware design (#modernhw) in research implies using the right tools. But this is not necessarily always the case. !--more-- br/&#xA;An engineer usually has to deal with heavyweight tooling, propitiatory GUIs, buggy software and more generally, a handful of rudimentary toolchains developed just to pretend they might be of some utility. But they are not. Most usually, they contribute to pollute the heart of what it’s relevant: ideas, concepts and how they translate to hardware. br/&#xA;Weird languages, obsolete codes, old-fashioned standards, mainstream OSs, ... the list goes long. On top of that, industry imposes wrong habits and closed code, impossible to track, debug or to share. Engineers are most times forced to use some tool just because that’s the way it is, and everyone’s else is doing the same. No choice. br/&#xA;Hardware digital design lacks all the usual development utilities used by software engineers since ages. No control version, no online forges, no patches, no reproducibility, no documentation, no code comments, no scripting, no code review. Just a set of random zip files in a USB key, including a huge amount of useless data files no one cares about. br/&#xA;Is it possible to afford things differently ? Let’s see. Do we have better tools for the task at hand ? I think so. Could we envision to design, review and publish in such a way as it reveals useful in research ? Let’s try. br/&#xA;This series of #modernhw posts are an attempt to explore a somehow original path towards a reproducible, lightweight way of understanding digital hardware design, based on #freesoftware. Here, I’ll try to describe a handful of tools I find useful, why they&#39;re helpful in this context, and how to use them to achieve our goal. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><strong>Tools and techniques towards modern digital electronics design</strong> <br/></p>

<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/modernhw.about.png" alt="img"> <br/>
Modern digital hardware design (<a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>) in research implies using the right tools. But this is not always the case. <br/>
An engineer usually has to deal with heavyweight tooling, proprietary GUIs, buggy software and, more generally, a handful of rudimentary toolchains developed just to pretend they might be of some utility. But they are not. Most of the time, they pollute the heart of what is relevant: ideas, concepts and how they translate to hardware. <br/>
Weird languages, obsolete code, old-fashioned standards, mainstream OSs, ... the list goes on. On top of that, industry imposes bad habits and closed code, impossible to track, debug or share. Engineers are most times forced to use some tool just because that’s the way it is, and everyone else is doing the same. No choice. <br/>
Hardware digital design lacks all the usual development utilities software engineers have used for ages. No version control, no online forges, no patches, no reproducibility, no documentation, no code comments, no scripting, no code review. Just a set of random zip files on a USB key, including a huge amount of useless data files no one cares about. <br/>
Is it possible to approach things differently ? Let’s see. Do we have better tools for the task at hand ? I think so. Could we envision designing, reviewing and publishing in such a way that it proves useful in research ? Let’s try. <br/>
This series of <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> posts is an attempt to explore a somewhat original path towards a reproducible, lightweight way of understanding digital hardware design, based on <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a>. Here, I’ll try to describe a handful of tools I find useful, why they&#39;re helpful in this context, and how to use them to achieve our goal. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/about-modernhw-tools-and-techniques-towards-modern-digital-electronics-design</guid>
      <pubDate>Sun, 24 Nov 2024 23:00:00 +0000</pubDate>
    </item>
    <item>
      <title>git</title>
      <link>https://infosec.press/csantosb/git-ytbn</link>
      <description>&lt;![CDATA[img br/&#xA;Once upon a time, in digital electronics design, we were used to moving huge zip files around including project data, code and all ancillary stuff along with it. We were doing backups, snapshots and versioning our code with help of compressed files. That’s the way it was, I knew that time, and you, young reader, would not believe me if I told you about the whole picture. This is not acceptable anymore. !--more-- br/&#xA;Today we have (well, we already did 20 years ago, but, you know, we were electronic engineers, not coding experts after all) tools to handle the situation. We have control version, we have git, and this more than we need to follow changes in our code (and our documentation, bibliography, etc.). Sure, we need a bit of discipline to use it in meaningful way, writing clear commit messages, committing correctly, creating clean histories, using the right branching model and learning how to use #git in a collaborative way. Following best practices is something that, as for today, should be accessible to any engineer, provided he is using the right tools. br/&#xA;The point here is how to interface git. In my experience, most hardware engineers (but not only) just do basic clone, push and pull, creating linear histories. Simply put, they use git as a backup system, with huge unrelated commits. They use git because they must use git, as anyone else is doing, but without any of the associated benefits. No history read, no diffs, no topic branches, no rebasing, no merging, no nothing. The reason is they’re using the command line interface (#cli) to git. Using git in the command line is fine, provided you’re fluent with clis, with a good shell and its plugins, and you have a very good long term memory. No one remembers how to interactively rebase 10 last commits on top of another branch, squashing history. Full stop. And no one will commit every 30 seconds if it takes one minute. Forget about. 
This brings to the clone/pull/push/that’s it step slope most people is faced to when they use git. And, unfortunately, this is the end of using git proper. br/&#xA;Now, what to do about ? #modernhw requires using a GUI to manipulate your git repositories. This will provide you with a lot of benefits, among them, a clear overview of where you are in your git history, your current status, and most importantly, where you’re going from this point. Which GUI to use ? Don’t expect an answer from me. The one you feel more comfortable with is the right choice. No matter what, provided it is #freesoftware. Spend as many time as you need mastering its usage, you’ll be rewarded afterward. br/&#xA;Personally, as I’m using #emacs, my not that original choice goes to using #magit. Magit is a front end application to the command line git, closely embedded within the rest of my workflow. As usual, once one gets used to its way of doing things, it’s really fast to perform any git operation in a few seconds. Within a few keystrokes, it is possible to get the status, log, remotes, branches and doing the difference between to different versions. It gives access to complete much more sophisticated manipulations in less time that it takes to tell about. Try to achieve the same using the command line (you will, but it will take ages). The key here is the time it takes: if you need longer to git than what it takes to think about, you’re using git the wrong way. You’ve been advised. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/git.png" alt="img"> <br/>
Once upon a time, in digital electronics design, we were used to moving huge zip files around, including project data, code and all the ancillary stuff along with them. We did backups, snapshots and versioned our code with the help of compressed files. That’s the way it was; I knew that time, and you, young reader, would not believe me if I told you the whole picture. This is not acceptable anymore. <br/>
Today we have (well, we already did 20 years ago, but, you know, we were electronic engineers, not coding experts after all) tools to handle the situation. We have version control, we have <a href="https://git-scm.com/" rel="nofollow">git</a>, and this is more than we need to follow changes in our code (and our documentation, bibliography, etc.). Sure, we need a bit of discipline to use it in a meaningful way: writing clear commit messages, committing correctly, creating clean histories, using the right branching model and learning how to use <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> in a collaborative way. Following <a href="https://l2it.pages.in2p3.fr/cad/git-best-practices/" rel="nofollow">best practices</a> is something that, as of today, should be accessible to any engineer, provided they are using the right tools. <br/>
The point here is how to interface git. In my experience, most hardware engineers (but not only) just do basic clone, push and pull, creating linear histories. Simply put, they use git as a backup system, with huge unrelated commits. They use git because they must use git, as everyone else does, but without any of the associated benefits. No history reading, no diffs, no topic branches, no rebasing, no merging, no nothing. The reason is that they’re using the command line interface (<a href="/csantosb/tag:cli" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">cli</span></a>) to git. Using git on the command line is fine, provided you’re fluent with clis, with a good shell and its plugins, and you have a very good long term memory. No one remembers how to interactively rebase the last 10 commits on top of another branch, squashing history. Full stop. And no one will commit every 30 seconds if it takes one minute. Forget about it. This brings most people to the clone/pull/push/that’s-it plateau when they use git. And, unfortunately, this is the end of using git properly. <br/>
Now, what to do about it ? <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> requires using a GUI to manipulate your git repositories. This will provide you with a lot of benefits, among them a clear overview of where you are in your git history, your current status and, most importantly, where you’re going from this point. Which GUI to use ? Don’t expect an answer from me: the one you feel most comfortable with is the right choice. No matter which, provided it is <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a>. Spend as much time as you need mastering its usage; you’ll be rewarded afterwards. <br/>
Personally, as I’m using <a href="/csantosb/tag:emacs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">emacs</span></a>, my not-that-original choice goes to <a href="/csantosb/tag:magit" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">magit</span></a>. <a href="https://magit.vc/" rel="nofollow">Magit</a> is a front end to command line git, closely embedded within the rest of my workflow. As usual, once one gets used to its way of doing things, it’s really fast to perform any git operation in a few seconds. Within a few keystrokes, it is possible to get the status, log, remotes and branches, and to diff between two different versions. It gives access to much more sophisticated manipulations in less time than it takes to tell about them. Try to achieve the same using the command line (you will, but it will take ages). The key here is the time it takes: if using git takes you longer than thinking about what you want from it, you’re using git the wrong way. You’ve been advised. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/git-ytbn</guid>
      <pubDate>Tue, 31 Jan 2023 23:00:00 +0000</pubDate>
    </item>
    <item>
      <title>guix channels</title>
      <link>https://infosec.press/csantosb/guix-channels</link>
      <description>&lt;![CDATA[img br/&#xA;guix includes the concept of channels, or git repositories of package definitions. This gives anyone a chance to complete the collection of packages provided by the distribution, in case it is not enough. Keep reading for an intro on how to create your own channels, or how to use the most popular third-party guix channels around. !--more-- br/&#xA;Guix itself only provides #freesoftware and #reproductible packages (no pre-compiled binary blobs, then), which should be enough for most needs. Unfortunately, this is not always the case. Sometimes one needs to package #nonfree software including binary files to make a graphics or network card work; sometimes the source code of the compiler necessary to produce a package got lost somewhere in a dusty office, and cannot be found other than in pre-compiled form, impossible to bootstrap from its root. br/&#xA;Additionally, guix is not perfect (but getting closer). Even if it keeps improving thanks to contributions from the community, there are rough edges and lots of issues. One of the most frequent complaints is the time it takes for submitted patches to be reviewed and accepted, if ever. This inconvenience is usually circumvented by creating a personal collection of package definitions. br/&#xA;In such situations, as in many others, package definitions may be written with help from the community (or using the guix packager!) and used normally, even if they have little chance of ever reaching the main upstream guix repository (none in the case of nonfree stuff). Where do these definitions live? In topic channels. Users can develop their own channels and use custom packages, extending guix, while submitting patches upstream in parallel. Once a patch is merged, the package may be removed from the channel. 
br/&#xA;&#xA;usual suspects&#xA;&#xA;Other than the guix channel itself, you may get a feeling of what a guix channel looks like by taking a look at the guix-science organization. There, you’ll find collections of package definitions intended for specific research domains, in both free and nonfree forms. For one reason or another, they have no place in the main guix repository, and they live their life here. Interestingly enough, the guix-past channel brings old-fashioned software to the present, which proves useful for reproducing results older than guix itself. Other interesting examples are guixrus, by the ~whereiseveryone collective, providing packages not yet fully tested, nightly releases or alpha-quality software; the guix-hpc channel, with packages related to high performance computing; and guix-bioc, containing bioconductor packages, etc. You get the idea. br/&#xA;To include additional channels, other than the default ones, in your ~/.config/guix/channels.scm file, follow the documentation, and check with guix describe: you’ll get a list of all the channels in use after your next guix pull. At this point, you’ll have plenty of new packages to install at hand! Just remember the discussion about substitutes: some channels provide substitute servers, some don’t. This may or may not be an issue, depending on the computing resources necessary to build certain packages. br/&#xA;&#xA;your first personal channel&#xA;&#xA;Guix channels are simple to create and use. Just put your #guile modules, containing your package definitions, in a git repository. To use them, complete the default guix load path as in br/&#xA;&#xA;guix install -L ~/guix-channel PACKAGE&#xA;&#xA;You may opt to include this local channel in your ~/.config/guix/channels.scm file, or include the -L flag in your commands. As an alternative, guix searches for package definitions in $GUIX_PACKAGE_PATH, so you may put your channel path in this variable. 
Remember that guix builds packages by compiling source code: no substitute server will do the job for you in this case, so plan according to your local resources and patience. br/&#xA;&#xA;electronics channel&#xA;&#xA;This is about #modernhw after all, so let’s see an example in more detail: my own guix-electronics channel, which complements the ./gnu/packages/electronics.scm guix module. This is the channel I’m using daily, where I include all the packages I need that don’t yet exist upstream. When they get merged somewhere else, I remove them from here. In the meantime, I’m a happy user of my custom packages. br/&#xA;I classify all my package definitions in #guile modules under ./electronics/packages/MODULE.scm. This is the reason why the compilers module, for example, starts by br/&#xA;&#xA;(define-module (electronics packages compilers)&#xA;&#xA;Then, proceed as usual (check the guix repo itself for inspiration), putting your packages below br/&#xA;&#xA;(define-public mypackage&#xA;  (package&#xA;    (name &#34;mypackage&#34;)&#xA;    ...&#xA;    ))&#xA;&#xA;I create a git repo, commit my changes and push it online to a #gitforge. br/&#xA;I need a .guix-channel file with the url of my channel and a version number. If the code in the channel depends on channels other than the main guix channel, this information needs to be specified in this file too. In this case, for example, several of the package definitions depend on ghdl-clang, provided by the guix-science channel, so I need to declare it here too. br/&#xA;A .guix-authorizations file is also necessary, with the (GPG key) list of developers with the right to push to this channel. This is how guix authenticates channels. Finally, it is important to create a readme file or similar introducing the channel (first commit, signing key fingerprint). That’s it. Include the channel introduction in your ~/.config/guix/channels.scm file, and you’ll be pulling from your online guix channel. 
br/&#xA;&#xA;contributing&#xA;&#xA;The advantage of using a custom channel is an increased degree of freedom when using guix, without depending on someone else’s availability to consider your particular needs. Remember, guix is a huge project which closely follows the #freesoftware paradigm, and collaboration works in two directions. You take advantage of other developers’ contributions to guix, while contributing yourself to improving guix repositories with your package definitions, once they have been tested. br/&#xA;Take for example the list of packages in my electronics channel. I have included utilities for #vhdl code editing under #emacs, specific to the electronic digital design field, which are not yet available in the upstream guix channel. There is also a cosimulation library, #cocotb, which allows coupling #python and #ghdl to simulate a design. I have submitted a patch to include it in guix itself, similarly to python-vunit; once merged, I’ll remove them from here. br/&#xA;Some packages, like open fpga loader, are now part of guix. Some others, like the gnat ada compiler and stable ghdl-clang, have already been merged on guix-science, as the gnat package is obtained from binaries (and so ghdl-clang is also affected, from guix’s perspective). I include here a more recent ghdl-clang, not yet released, as it carries some fixes and improvements I am interested in, without having to wait until guix incorporates a new release. br/&#xA;Catch the idea? br/&#xA;Contributing your code to guix comes down to sending #email with your patches attached, it’s that simple. Don’t be intimidated by the workflow (this is used by the linux project, for example). Once your patches are submitted, a review of your code follows, see details. Some tools, like mumi, are helpful for that purpose. 
br/&#xA;These simple steps will allow you to start contributing to guix: br/&#xA;&#xA;git clone guix itself br/&#xA;add and commit your changes, minding the commit message format br/&#xA;don’t forget to use guix lint/style/refresh -l to check your code br/&#xA;use git send-email to create and send a patch br/&#xA;consider reviews, if any, updating your patch accordingly with git commit --amend br/&#xA;resend a new patch including a patch version (v1, v2 ...) br/&#xA;&#xA;Interested? Consult the doc for details; you’ll learn a lot about contributing to a common good and collaborating with other people. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/guix-crash-course.png" alt="img"> <br/>
<a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> includes the concept of <a href="https://guix.gnu.org/manual/en/html_node/Channels.html" rel="nofollow">channels</a>, or git repositories of package definitions. This gives anyone a chance to complete the collection of <a href="https://packages.guix.gnu.org/" rel="nofollow">packages</a> provided by the distribution, in case it is not enough. Keep reading for an intro on how to create your own channels, or how to use the most popular third-party guix channels around.  <br/>
Guix itself only provides <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a> and <a href="/csantosb/tag:reproductible" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproductible</span></a> packages (no pre-compiled binary blobs, then), which should be enough for most needs. Unfortunately, this is not always the case. Sometimes one needs to package <a href="/csantosb/tag:nonfree" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">nonfree</span></a> software including binary files to make a graphics or network card work; sometimes the source code of the compiler necessary to produce a package got lost somewhere in a dusty office, and cannot be found other than in pre-compiled form, impossible to bootstrap from its root. <br/>
Additionally, guix is not perfect (but <a href="https://infosec.press/csantosb/guix-crash-course" rel="nofollow">getting closer</a>). Even if it keeps improving thanks to contributions from the <a href="https://guix.gnu.org/en/contact/" rel="nofollow">community</a>, there are rough edges and <a href="https://issues.guix.gnu.org/" rel="nofollow">lots of issues</a>. One of the most frequent complaints is the time it takes for submitted patches to be reviewed and accepted, if ever. This inconvenience is usually circumvented by creating a personal collection of package definitions. <br/>
In such situations, as in many others, package definitions may be written with help from the community (or using the <a href="https://guix-hpc.gitlabpages.inria.fr/guix-packager/" rel="nofollow">guix packager</a>!) and used normally, even if they have little chance of ever reaching the main upstream guix repository (none in the case of nonfree stuff). Where do these definitions live? In topic channels. Users can develop their own channels and use custom packages, extending guix, while <a href="https://guix.gnu.org/manual/en/html_node/Submitting-Patches.html" rel="nofollow">submitting patches upstream</a> in parallel. Once a patch is merged, the package may be removed from the channel. <br/></p>

<h1 id="usual-suspects">usual suspects</h1>

<p>Other than the <a href="https://hpc.guix.info/channel/guix" rel="nofollow">guix channel</a> itself, you may get a feeling of what a guix channel looks like by taking a look at the <a href="https://codeberg.org/guix-science" rel="nofollow">guix-science</a> organization. There, you’ll find collections of package definitions intended for specific research domains, in both <a href="https://codeberg.org/guix-science/guix-science" rel="nofollow">free</a> and <a href="https://codeberg.org/guix-science/guix-science-nonfree" rel="nofollow">nonfree</a> forms. For one reason or another, they have no place in the main guix repository, and they live their life here. Interestingly enough, the <a href="https://codeberg.org/guix-science/guix-past" rel="nofollow">guix-past</a> channel brings old-fashioned software to the present, which proves useful for reproducing results older than guix itself. Other interesting examples are <a href="https://sr.ht/~whereiseveryone/guixrus/" rel="nofollow">guixrus</a>, by the <a href="https://sr.ht/~whereiseveryone/" rel="nofollow">~whereiseveryone</a> collective, providing packages not yet fully tested, nightly releases or alpha-quality software; the <a href="https://hpc.guix.info/channel/guix-hpc" rel="nofollow">guix-hpc</a> channel, with packages related to high performance computing; and <a href="https://hpc.guix.info/channel/guix-bioc" rel="nofollow">guix-bioc</a>, containing bioconductor packages, etc. You get the idea. <br/>
To include additional channels, other than the default ones, in your <code>~/.config/guix/channels.scm</code> file, follow the <a href="https://guix.gnu.org/manual/en/html_node/Specifying-Additional-Channels.html" rel="nofollow">documentation</a>, and check with <code>guix describe</code>: you’ll get a list of all the channels in use after your next <code>guix pull</code>. At this point, you’ll have plenty of new packages to install at hand! Just remember the <a href="https://infosec.press/csantosb/guix-crash-course#packages" rel="nofollow">discussion</a> about substitutes: some channels provide substitute servers, some don’t. This may or may not be an issue, depending on the computing resources necessary to build certain packages. <br/></p>
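<p>As a sketch, a <code>~/.config/guix/channels.scm</code> adding the guix-science channel on top of the defaults could look as follows; the commit and key fingerprint below are deliberate placeholders, the real channel introduction is published in the channel’s readme: <br/></p>

<pre><code class="language-scheme">;; Sketch of ~/.config/guix/channels.scm (placeholder introduction).
(cons (channel
        (name 'guix-science)
        (url "https://codeberg.org/guix-science/guix-science.git")
        (introduction
         (make-channel-introduction
          "0000000000000000000000000000000000000000"  ; placeholder commit
          (openpgp-fingerprint                        ; placeholder key
           "AAAA AAAA AAAA AAAA AAAA  AAAA AAAA AAAA AAAA AAAA"))))
      %default-channels)
</code></pre>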

<h1 id="your-first-personal-channel">your first personal channel</h1>

<p>Guix channels are <a href="https://guix.gnu.org/cookbook/en/html_node/Channels.html" rel="nofollow">simple to create</a> and use. Just put your <a href="/csantosb/tag:guile" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guile</span></a> modules, containing your package definitions, in a git repository. To use them, complete the default guix load path as in <br/></p>

<pre><code class="language-sh">guix install -L ~/guix-channel PACKAGE
</code></pre>

<p>You may opt to include this local channel in your <code>~/.config/guix/channels.scm</code> file, or include the <code>-L</code> flag in your commands. As an alternative, guix searches for package definitions in <code>$GUIX_PACKAGE_PATH</code>, so you may put your channel path in this variable. Remember that guix builds packages by compiling source code: no substitute server will do the job for you in this case, so plan according to your local resources and patience. <br/></p>

<h1 id="electronics-channel">electronics channel</h1>

<p>This is about <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> after all, so let’s see an example in more detail: my own <a href="https://git.sr.ht/~csantosb/guix.channel-electronics" rel="nofollow">guix-electronics</a> channel, which complements the <a href="https://codeberg.org/guix/guix/src/master/gnu/packages/electronics.scm" rel="nofollow">./gnu/packages/electronics.scm</a> guix module. This is the channel I’m using daily, where I include all the packages I need that don’t yet exist upstream. When they <a href="https://codeberg.org/csantosb/guix-science/commit/d33b748" rel="nofollow">get merged somewhere else</a>, I remove them from here. In the meantime, I’m a happy user of my custom packages. <br/>
I classify all my package definitions in <a href="/csantosb/tag:guile" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guile</span></a> modules under <code>./electronics/packages/MODULE.scm</code>. This is the reason why the compilers module, for example, <a href="https://git.sr.ht/~csantosb/guix.channel-electronics/tree/main/electronics/packages/compilers.scm#L20" rel="nofollow">starts by</a> <br/></p>

<pre><code class="language-scheme">(define-module (electronics packages compilers)
</code></pre>

<p>Then, proceed as usual (check the guix repo itself for inspiration), putting your packages below <br/></p>

<pre><code class="language-scheme">(define-public mypackage
  (package
    (name &#34;mypackage&#34;)
    ...
    ))
</code></pre>
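<p>Fleshed out, a minimal definition in such a module could look like the sketch below. Every field is illustrative (hypothetical name, url and hash), and the module header is assumed to pull in the usual guix modules (<code>guix packages</code>, <code>guix git-download</code>, <code>guix build-system gnu</code>, and <code>guix licenses</code> under the <code>license:</code> prefix): <br/></p>

<pre><code class="language-scheme">;; Sketch only: a hypothetical package, not a real upstream project.
(define-public my-hdl-tool
  (package
    (name "my-hdl-tool")
    (version "0.1.0")
    (source (origin
              (method git-fetch)
              (uri (git-reference
                    (url "https://example.org/my-hdl-tool")  ; placeholder url
                    (commit (string-append "v" version))))
              (file-name (git-file-name name version))
              (sha256
               (base32  ; placeholder hash, guix will print the real one
                "0000000000000000000000000000000000000000000000000000"))))
    (build-system gnu-build-system)
    (home-page "https://example.org/my-hdl-tool")
    (synopsis "One-line synopsis, no trailing period")
    (description "Complete sentences describing the tool.")
    (license license:gpl3+)))
</code></pre>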

<p>I create a git repo, commit my changes and push it online to a <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a>. <br/>
I need a <code>.guix-channel</code> <a href="https://git.sr.ht/~csantosb/guix.channel-electronics/tree/main/.guix-channel" rel="nofollow">file</a> with the url of my channel and a version number. If the code in the channel <a href="https://guix.gnu.org/manual/en/html_node/Declaring-Channel-Dependencies.html" rel="nofollow">depends on channels</a> other than the main guix channel, this information needs to be specified in this file too. In this case, for example, several of the package definitions depend on <code>ghdl-clang</code>, provided by the <code>guix-science</code> channel, so I need to declare it here too. <br/>
A <code>.guix-authorizations</code> <a href="https://git.sr.ht/~csantosb/guix.channel-electronics/tree/main/.guix-authorizations" rel="nofollow">file</a> is also necessary, with the (GPG key) list of developers <a href="https://guix.gnu.org/manual/en/html_node/Specifying-Channel-Authorizations.html" rel="nofollow">with the right to push to this channel</a>. This is how guix <a href="https://guix.gnu.org/manual/en/html_node/Channel-Authentication.html" rel="nofollow">authenticates</a> channels. Finally, it is important to create a <a href="https://git.sr.ht/~csantosb/guix.channel-electronics/tree/main/readme.org#L50" rel="nofollow">readme</a> file or similar introducing the channel (first commit, signing key fingerprint). That’s it. Include the channel introduction in your <code>~/.config/guix/channels.scm</code> file, and you’ll be pulling from your online guix channel. <br/></p>
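<p>For reference, following the channel dependency format from the manual, a <code>.guix-channel</code> declaring a dependency on the guix-science channel can be sketched as: <br/></p>

<pre><code class="language-scheme">;; Sketch of a .guix-channel file at the repository root.
(channel
 (version 0)
 (dependencies
  (channel
   (name guix-science)
   (url "https://codeberg.org/guix-science/guix-science.git"))))
</code></pre>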

<h1 id="contributing">contributing</h1>

<p>The advantage of using a custom channel is an increased degree of freedom when using guix, without depending on someone else’s availability to consider your particular needs. Remember, guix is a huge project which closely follows the <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a> paradigm, and collaboration works in two directions. You take advantage of other developers’ contributions to guix, while contributing yourself to improving guix repositories with your package definitions, once they have been tested. <br/>
Take for example the list of <a href="https://gitlab.com/csantosb/guix/channel-electronics/-/blob/main/packages.org" rel="nofollow">packages</a> in my electronics channel. I have included utilities for <a href="/csantosb/tag:vhdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">vhdl</span></a> code editing under <a href="/csantosb/tag:emacs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">emacs</span></a>, specific to the electronic digital design field, which are not yet available in the upstream guix channel. There is also a cosimulation library, <a href="/csantosb/tag:cocotb" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">cocotb</span></a>, which allows coupling <a href="/csantosb/tag:python" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">python</span></a> and <a href="/csantosb/tag:ghdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ghdl</span></a> to simulate a design. I have submitted a <a href="https://issues.guix.gnu.org/68153" rel="nofollow">patch</a> to include it in guix itself, similarly to <a href="https://issues.guix.gnu.org/74242" rel="nofollow">python-vunit</a>; once merged, I’ll remove them from here. <br/>
Some packages, like <a href="https://git.savannah.gnu.org/cgit/guix.git/commit/?id=9fb7333fc9" rel="nofollow">open fpga loader</a>, are now part of guix. Some others, like the <a href="https://codeberg.org/guix-science/guix-science/pulls/46" rel="nofollow">gnat ada compiler</a> and stable <code>ghdl-clang</code>, have already been merged on <code>guix-science</code>, as the <code>gnat</code> package is obtained from binaries (and so <code>ghdl-clang</code> is also affected, from guix’s perspective). I include here a more recent <code>ghdl-clang</code>, not yet released, as it carries some fixes and improvements I am interested in, without having to wait until guix incorporates a new release. <br/>
Catch the idea? <br/>
<a href="https://guix.gnu.org/manual/en/html_node/Contributing.html" rel="nofollow">Contributing</a> your code to guix comes down to <a href="https://git-send-email.io/" rel="nofollow">sending <a href="/csantosb/tag:email" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">email</span></a></a> <a href="https://www.futurile.net/2022/03/07/git-patches-email-workflow/" rel="nofollow">with your patches</a> attached, it’s that simple. Don’t be intimidated by the workflow (this is used by the linux project, for example). Once your patches are submitted, a review of your code follows, see <a href="https://libreplanet.org/wiki?title=Group:Guix/PatchReviewSessions2024" rel="nofollow">details</a>. Some tools, like <a href="https://www.youtube.com/watch?v=8m8igXrKaqU" rel="nofollow">mumi</a>, are helpful for that purpose. <br/>
These simple steps will allow you to start contributing to guix: <br/></p>
<ul><li>git clone guix itself <br/></li>
<li>add and commit your changes, minding the commit message format <br/></li>
<li>don’t forget to use <code>guix lint/style/refresh -l</code> to check your code <br/></li>
<li>use <code>git send-email</code> to create and send a patch <br/></li>
<li>consider reviews, if any, updating your patch accordingly with <code>git commit --amend</code> <br/></li>
<li>resend a new patch including a patch version (v1, v2 ...) <br/></li></ul>
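<p>The patch-producing part of those steps can be sketched on a throwaway repository (the commit message and reroll count are illustrative; an actual submission would run <code>git send-email</code> towards the guix mailing list instead of just formatting the patch): <br/></p>

<pre><code class="language-sh"># Sketch only: turn the last commit into a v2 patch, ready for send-email.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=me -c user.email=me@example.org \
    commit -q --allow-empty -m "gnu: hello: Update to 2.12.1."
# After addressing review comments, resend as a v2 of the series:
git format-patch -1 --reroll-count=2 -o patches HEAD
ls patches
</code></pre>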

<p>Interested? Consult <a href="https://guix.gnu.org/manual/en/html_node/Contributing.html" rel="nofollow">the doc</a> for details; you’ll learn a lot about contributing to a common good and collaborating with other people. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/guix-channels</guid>
      <pubDate>Tue, 31 Jan 2023 23:00:00 +0000</pubDate>
    </item>
    <item>
      <title>guix</title>
      <link>https://infosec.press/csantosb/use-guix</link>
      <description>&lt;![CDATA[img br/&#xA;As one understands with time and a bit of experience, keeping track of the whole bunch of #dependencies necessary to handle daily when doing digital hardware design may prove to be an error-prone task. And that is not to speak of regressions, incompatibilities and, most importantly, #reproducibility of results. Luckily enough, this is precisely the problem that #guix intends to solve, in an elegant, minimalistic and open way, using only #freesoftware. !--more-- br/&#xA;&#xA;Nix&#xA;&#xA;Functional package management is a paradigm pioneered by nix and developed by Eelco Dolstra in his influential PhD Thesis. It builds each node of the dependency graph based solely on its inputs, contents and node definition, producing a new node. The process repeats for every single node in the graph. Where operating systems and software management are concerned, this comes down to a radically different approach to classical package management. Simply put, every new build lives its own life, regardless of the remaining builds. This makes it possible to access the kind of advanced utilities which ease our lives: declarative configurations, profiles, rollbacks, generations, etc. Forget the dependency hell. br/&#xA;Guix is founded on a similar approach, keeping its own set of rules as it only packages #freesoftware. There are around 30,000 of them as of today, including all the usual suspects. Within the context of #modernhw, guix is to be understood as a dependency management tool with advanced capabilities. Sure, it handles software, but there is no reason to use it exclusively as a software manager. It may handle IP blocks, documentation, bibliographic references and, more generally, everything concerning #plaintext files in a #gitforge. It understands versions, licences, dependencies, repositories and all kinds of relationships between them. 
Furthermore, it embeds a pragmatic language, guile, to script package definitions, declaring the behavior of nodes in our graph of dependencies. br/&#xA;&#xA;Reproducibility&#xA;&#xA;The most relevant feature of guix turns out to be its bootstrapping capabilities. Full source bootstrap comes down to building the whole dependency graph right from the bottom, based on a core minimum of trusted binary seeds. From that point upwards the whole distribution is self-contained, as everything it builds is included in guix itself. Any available package is founded on a package definition included in guix, its source code available online in a #git repository, and its dependencies. Each of the latter follows the same rules, down to the bottom of the graph where a trust seed is necessary. br/&#xA;Why is this necessary and useful for #modernhw? Because it provides #reproducibility for free, as reproducible builds are guaranteed here. It turns out that this is at the very heart of guix and produces #determinism, meaning that the same operations will produce the same outputs, no matter when, no matter what, no matter where. Game over for ambiguity. Determinism, coupled with guix’s declarative nature, proves a simple means of tracking our dependency history without ambiguity. br/&#xA;Let’s take an example, and say we have a Vivado project in the form of a set of #tcl files. To build the logic of our favourite #fpga, we also require a couple of external firmware dependencies as IP blocks in their own git repositories, with tagged revisions, mutually dependent and incompatible depending on the tag in use. Not all of them are compliant with our project. Each of the firmware modules incorporates its own set of versioned VHDL dependencies, along with its associated documentation. We need to provide a python testing framework (you guessed it), along with its verification libraries. 
We need to create a static web site with the instructions on how to download, instantiate, compile and deploy each different version of our project for a couple of thousand users out there, each with a different #gnulinux system, version and configuration of installed software and libraries. Take it for granted, each user needs a different version of our project, as they need to guarantee compatibility with their own internal developments. And we need to provide the correct version of every tool, compatible with our code and scripts. The right version, I do mean. Not any random version. br/&#xA;Can you imagine the pain? Now, suppose that you could describe the status of your project in a couple of #plaintext manifest files. End of story. Users willing to reproduce your project and dependencies only need to git clone the manifest files and install them locally, regardless of their system, their installed libraries or their abilities. No problem if the host OS doesn’t provide the necessary software, guix handles the situation. Users just run make and the whole project is deployed, tested and simulated, using the right version of each node in the graph. This is Guix at its best. br/&#xA;Not yet convinced? Take a look here. br/&#xA;Feeling tempted? Start with a crash course to guix. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/guix.png" alt="img"> <br/>
As one understands with time and a bit of experience, keeping track of the whole bunch of <a href="/csantosb/tag:dependencies" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">dependencies</span></a> necessary to handle daily when doing digital hardware design may prove to be an error-prone task. And that is not to speak of regressions, incompatibilities and, most importantly, <a href="/csantosb/tag:reproducibility" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproducibility</span></a> of results. Luckily enough, this is precisely the problem that <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> intends to solve, in an elegant, minimalistic and open way, using only <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a>.  <br/></p>

<h1 id="nix">Nix</h1>

<p>Functional package management is a paradigm pioneered by <a href="https://nixos.org/" rel="nofollow">nix</a> and developed by <a href="https://edolstra.github.io/" rel="nofollow">Eelco Dolstra</a> in his influential <a href="https://edolstra.github.io/pubs/phd-thesis.pdf" rel="nofollow">PhD Thesis</a>. It builds each node of the dependency graph based solely on its inputs, contents and node definition, producing a new node. The process repeats for every single node in the graph. Where operating systems and software management are concerned, this comes down to a radically different approach to classical package management. Simply put, every new build lives its own life, regardless of the remaining builds. This makes it possible to access the kind of advanced utilities which ease our lives: declarative configurations, profiles, rollbacks, generations, etc. Forget the <a href="https://en.wikipedia.org/wiki/Dependency_hell" rel="nofollow">dependency hell</a>. <br/>
<a href="https://guix.gnu.org/" rel="nofollow">Guix</a> is founded on a similar approach, keeping its own set of rules as it only packages <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a>. There are around <a href="https://packages.guix.gnu.org/" rel="nofollow">30,000 of them</a> as of today, including all the usual suspects. Within the context of <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>, guix is to be understood as a dependency management tool with advanced capabilities. Sure, it handles software, but there is no reason to use it exclusively as a software manager. It may handle IP blocks, documentation, bibliographic references and, more generally, everything concerning <a href="/csantosb/tag:plaintext" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">plaintext</span></a> files in a <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a>. It understands versions, licences, dependencies, repositories and all kinds of relationships between them. Furthermore, it embeds a pragmatic language, <a href="https://www.gnu.org/software/guile/" rel="nofollow">guile</a>, to script package definitions, declaring the behavior of nodes in our graph of dependencies. <br/></p>

<h1 id="reproducibility">Reproducibility</h1>

<p>The most relevant feature of guix turns out to be its <a href="https://guix.gnu.org/manual/en/html_node/Bootstrapping.html" rel="nofollow">bootstrapping</a> capabilities. <a href="https://guix.gnu.org/en/blog/2023/the-full-source-bootstrap-building-from-source-all-the-way-down/" rel="nofollow">Full source bootstrap</a> comes down to building the whole dependency graph right from the bottom, based on a core minimum of trusted binary seeds. From that point upwards the whole distribution is self-contained, as everything it builds is included in guix itself. Any available package is founded on a package definition included in guix, its source code available online in a <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> repository, and its dependencies. Each of the latter follows the same rules, down to the bottom of the graph where a trust seed is necessary. <br/>
Why is this necessary and useful for <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> ? Because it provides <a href="/csantosb/tag:reproducibility" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproducibility</span></a> for free, as <a href="https://reproducible-builds.org/" rel="nofollow">reproducible builds</a> are guaranteed here. It turns out that this is at the very heart of guix and produces <a href="/csantosb/tag:determinism" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">determinism</span></a>, meaning that the same operations will produce the same outputs, no matter when, no matter what, no matter where. Game over for ambiguity. Determinism, coupled with guix’s declarative nature, provides a simple means to track our dependency history without ambiguity. <br/>
Let’s take an example, and say we have a <a href="https://www.amd.com/es/products/software/adaptive-socs-and-fpgas/vivado.html" rel="nofollow">Vivado</a> project in the form of a set of <a href="/csantosb/tag:tcl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">tcl</span></a> files. To build the logic of our favourite <a href="/csantosb/tag:fpga" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">fpga</span></a>, we also require a couple of external firmware dependencies, IP blocks living in their own git repositories with tagged revisions, mutually dependent and, depending on the tag in use, mutually incompatible. Not all of them are compatible with our project. Each of the firmware modules incorporates its own set of versioned VHDL dependencies, along with its associated documentation. We need to provide a python testing framework (you guessed it), along with its verification libraries. We need to create a static web site with instructions on how to download, instantiate, compile and deploy each different version of our project for a couple of thousand users out there, each with a different <a href="/csantosb/tag:gnulinux" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gnulinux</span></a> system, version and configuration of installed software and libraries. Take it for granted, each user needs a different version of our project, as they need to guarantee compatibility with their own internal developments. And we need to provide the correct version of every tool, compatible with our code and scripts. <strong>The right version</strong>, I do mean. <em>Not any random version</em>. <br/>
Can you imagine the pain ? Now, suppose that you could describe the status of your project in a couple of <a href="/csantosb/tag:plaintext" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">plaintext</span></a> <a href="https://infosec.press/csantosb/guix-crash-course#examples" rel="nofollow">manifest files</a>. End of the story. Users willing to reproduce your project and its dependencies only need to git clone the manifest files and install them locally, regardless of their system, their existing libraries or their abilities. No problem if the host OS doesn’t provide the necessary software: guix handles the situation. Users just run <code>make</code> and the whole project is deployed, tested and simulated, using the right version of each node in the graph. This is Guix at its best. <br/>
Not yet convinced ? Take a look <a href="https://csantosb.gitlab.io/ip/talks/hdl-lib_proposal/" rel="nofollow">here</a>. <br/>
Feeling tempted ? Start with a <a href="https://infosec.press/csantosb/guix-crash-course" rel="nofollow">crash course</a> on guix. <br/></p>
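<p>The manifest-based workflow described above can be sketched with two short files. A minimal sketch follows; the commit hash is a placeholder and the package names are illustrative, to be adapted to the actual project. A <code>channels.scm</code> pins the exact guix revision, and a <code>manifest.scm</code> lists the tools: <br/></p>
<pre><code>;; channels.scm — pin guix to an exact revision (placeholder commit).
(list (channel
       (name 'guix)
       (url "https://git.savannah.gnu.org/git/guix.git")
       (commit "[pinned commit hash]")))  ; illustrative placeholder

;; manifest.scm — the tools the project needs (illustrative names).
(specifications-&gt;manifest
 '("ghdl" "python" "make"))
</code></pre>
<p>Users then reproduce the environment, regardless of their host system, with something along the lines of <code>guix time-machine -C channels.scm -- shell -m manifest.scm -- make</code>. <br/></p>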
]]></content:encoded>
      <guid>https://infosec.press/csantosb/use-guix</guid>
      <pubDate>Tue, 31 Jan 2023 23:00:00 +0000</pubDate>
    </item>
    <item>
      <title>emacs</title>
      <link>https://infosec.press/csantosb/use-emacs</link>
      <description>&lt;![CDATA[img br/&#xA;#modernhw, and in particular digital electronics design, mostly implies writing #plaintext files. Creating code, scripts, configurations, documentation, emails, taking notes, handling bibliographic references, etc., all of these tasks involve writing in plain text files. Whether these files are created or modified, editing plain text is a must. And, when it comes to editing text, it is really worth investing some time in learning a bit more than the basics of a good text editor. !--more-- br/&#xA;And once one decides to opt for a good tool, one may as well take the best of them all. In my case, I decided to start a long journey towards mastering emacs, even if GNU #emacs is much more than a text editor: it is a fully customizable environment, and it is #freesoftware. Emacs has been around longer than me, and benefits from the accumulated experience of all those having used it well before I discovered its existence. Inside of it one will have access to a plethora of existing applications, make use of killer tools such as #magit and #orgroam, handle email with #mu4e, browse the web with #eww, manipulate windows with #exwm, run terminals, compile, develop non-existing utilities, modify emacs’s default behavior, change options, default bindings, etc. It is closer to a customizable working environment (a full OS, some claim !) that you’ll take the time to build, than just a text editor. br/&#xA;Orgmode deserves a special chapter in here: #orgmode brings plain text editing to the next level of productivity. Check the quickstart if you’re not familiar with it. Don’t feel overwhelmed by its large list of features: links, tables, markup, attachments, agenda, tags, export, publishing ... the list is endless. If you ever feel tempted to use it, proceed step by step. As with emacs, just pick some appealing feature you think you need, and start making use of it. The more you practice, the more comfortable you’ll feel with it. br/&#xA;You’ll manage, with time and some experience, to build a working environment suited to your particular needs, to a point you never imagined was possible. And most important: you’ll be able to make it evolve with your own needs. It takes time, a lot of it, and the learning curve is quite steep, so this is probably not the best choice for everyone. However, once you really learn how to use it (and get used to forgetting about your mouse !), the benefits in productivity are really impressive. You’ll never look back. br/&#xA;If you ever opt to follow the same path as I did, start with the tutorial, and read the documentation. Then, just use it, little by little, no hurry. You’ll be learning something new about emacs every day during the next 20 years anyway. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/emacslogo.png" alt="img"> <br/>
<a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>, and in particular digital electronics design implies, for the most of it, writing <a href="/csantosb/tag:plaintext" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">plaintext</span></a> files. Creating code, scripts, configurations, documentation, emails, taking notes, handling bibliographic references, etc., all of these tasks involve writing in plain text files. Whether these files are created or modified, editing plain text is a must. An, when it comes to editing text, it is really worth investing some time on learning a bit more than the basics of a good text editor.  <br/>
And once one decides to opt for a good tool, one may as well take the best of them all. In my case, I decided to start a long journey towards <a href="https://www.masteringemacs.org/" rel="nofollow">mastering emacs</a>, even if GNU <a href="/csantosb/tag:emacs" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">emacs</span></a> is much more than a text editor: it is a fully customizable environment, and it is <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a>. Emacs has been around longer than me, and benefits from the accumulated experience of all those having used it well before I discovered its existence. Inside of it one will have access to a plethora of existing applications, make use of killer tools such as <a href="/csantosb/tag:magit" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">magit</span></a> and <a href="/csantosb/tag:orgroam" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">orgroam</span></a>, handle email with <a href="/csantosb/tag:mu4e" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">mu4e</span></a>, browse the web with <a href="/csantosb/tag:eww" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">eww</span></a>, manipulate windows with <a href="/csantosb/tag:exwm" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">exwm</span></a>, run terminals, compile, develop non-existing utilities, modify emacs’s default behavior, change options, default bindings, etc. It is closer to a customizable working environment (a full OS, some claim !) that you’ll take the time to build, than just a text editor. <br/>
<a href="https://orgmode.org/" rel="nofollow">Orgmode</a> deserves a special chapter in here: <a href="/csantosb/tag:orgmode" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">orgmode</span></a> brings plain text editing to next level of productivity.  Check the <a href="https://orgmode.org/quickstart.html" rel="nofollow">quickstart</a> if you’re not familiar with it. Don’t feel overwhelmed by the its large list of features: links, tables, markup, attachments, agenda, tags, export, publishing ... the list is endless. If you ever feel like tempted to use it, proceed step by step. As with emacs, just pick some appealing feature you think you’ need, and start making use of it. The more you practice, the more you’ll feel more comfortable with it. <br/>
You’ll manage, with time and some experience, to build a working environment suited to your particular needs, to a point you never imagined was possible. And most important: you’ll be able to make it evolve with your own needs. It takes time, a lot of it, and the learning curve is quite steep, so this is probably not the best choice for everyone. However, once you really learn how to use it (and get used to forgetting about your mouse !), the benefits in productivity are really impressive. You’ll never look back. <br/>
If you ever opt to follow the same path as I did, start with the tutorial, and read the documentation. Then, just use it, little by little, no hurry. You’ll be learning something new about emacs every day during the next 20 years anyway. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/use-emacs</guid>
      <pubDate>Tue, 31 Jan 2023 23:00:00 +0000</pubDate>
    </item>
  </channel>
</rss>