<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>osvvm &#8212; csantosb</title>
    <link>https://infosec.press/csantosb/tag:osvvm</link>
    <description>Random thoughts</description>
    <pubDate>Wed, 13 May 2026 23:37:17 +0000</pubDate>
    <item>
      <title>ci (sourcehut): alu</title>
      <link>https://infosec.press/csantosb/ci-sourcehut-alu</link>
      <description>&lt;![CDATA[img br/&#xA;Remote #ci is the way to go in #modernhw digital design testing. In this #ciseries, let’s see how to implement it with detail using sourcehut and a real world example. !--more-- br/&#xA;Sourcehut is a lightweight #gitforge where I host my #git repositories. Not only it is based on a paradigm perfectly adapted to #modernhw, but also its builds service includes support for guix (x8664) images. This means that we will be able to execute all of our testing online inside guix profiles, shells or natively on top of the bare-bones image. br/&#xA;&#xA;Alu&#xA;&#xA;Let’s consider now a variant of the previous example with open-logic. Here, we concentrate on a toy design only for demonstration purposes, a dummy alu emulator, which uses #osvvm as verification framework and relies on a few #openlogic blocs. In this case, its dependencies are defined in a manifest.scm file, including both fw-open-logic and osvvm, among other dependencies. br/&#xA;Install dependencies locally, in a new profile with br/&#xA;&#xA;cd alu&#xA;mkdir deps&#xA;export GUIXPROFILE=open-logic/deps&#xA;guix install -P $GUIXPROFILE -m .builds/manifest.scm&#xA;. $GUIXPROFILE/etc/profile&#xA;&#xA;In this case, we will test the design using, first, a custom made makefile. Secondly, we will use hdlmake to automatically produce our makefile. Similarly to previous #openlogic example, two build manifest are used: br/&#xA;&#xA;    profile1 br/&#xA;    profile2 br/&#xA;&#xA;You’ll realise how some of the tasks are common with the case of previous #openlogic example (update channels, auth and update profile). br/&#xA;&#xA;osvvm&#xA;&#xA;In this case, we also need to compile osvvm libraries br/&#xA;&#xA;    compile\_osvvm, produce a compiled version of #osvvm verification libraries; this is necessary as we are using here the tcl  scripts included in the library itself to follow the correct order of compilation. 
Libraries will appear within the local profile under $GUIXPROFILE/VHDLLIBS/GHDL-X.Y.Z br/&#xA;&#xA;test&#xA;&#xA;    test, for a fully custom made testing pipeline; in this case, using a Makefile br/&#xA;    Just simply, source the .envrc file where the local $GUIXPROFILE variable is defined, cd to the ghdl directory and call make to compile the design and run the simulation in two steps: first, clean all and include sources in its corresponding libraries with br/&#xA;    &#xA;        make cleanall include&#xA;        &#xA;    Then, produce a new Makefile using ghdl. br/&#xA;    &#xA;        ./makefile.sh # ghdl --gen-makefile ...&#xA;        &#xA;    Finally, run the simulation with br/&#xA;    &#xA;        make GHDLRUNFLAGS=&#34;--stop-time=4us --disp-time --ieee-asserts=enable&#34; run&#xA;        &#xA;    This will produce a executable file before running it with the provided parameters. br/&#xA;    You may notice that, in this case, you need to produce somehow your own Makefile, or equivalent pipeline, right ? br/&#xA;&#xA;hdlmake&#xA;&#xA;Wouldn’t it be nice if we had a tool to deploy online which produces makefiles for us ? It exists, and its name is #hdlmake. br/&#xA;&#xA;    test\hdlmake br/&#xA;    Source the .envrc file where the local $GUIXPROFILE variable is defined, cd to the .builds/hdlmake directory where all Manifest.py files are located, and call hdlmake to produce the Makefile. Finally, just run make to compile the design, produce an executable and run it. br/&#xA;&#xA;Check the resulting logs inline here, for example. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/sourcehut.png" alt="img"> <br/>
Remote <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> is the <a href="https://infosec.press/csantosb/tag:ciseries" rel="nofollow">way to go</a> in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> digital design testing. In this <a href="/csantosb/tag:ciseries" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ciseries</span></a>, let’s see how to implement it in detail using <a href="https://sourcehut.org/" rel="nofollow">sourcehut</a> and a real-world example. <br/>
<a href="https://infosec.press/csantosb/sourcehut-crash-course" rel="nofollow">Sourcehut</a> is a lightweight <a href="/csantosb/tag:gitforge" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">gitforge</span></a> where I host my <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> repositories. Not only is it <a href="https://infosec.press/csantosb/git-forges#sourcehut" rel="nofollow">based on a paradigm</a> perfectly adapted to <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a>, but its <a href="https://infosec.press/csantosb/sourcehut-crash-course#builds" rel="nofollow">builds</a> service also includes support for <a href="https://man.sr.ht/builds.sr.ht/compatibility.md#guix-system" rel="nofollow">guix</a> (x86_64) images. This means we will be able to execute all of our testing online inside <a href="https://infosec.press/csantosb/guix-crash-course#profiles-and-generations" rel="nofollow">guix profiles</a>, <a href="https://infosec.press/csantosb/guix-crash-course#shell-containers" rel="nofollow">shells</a>, or natively on top of the bare-bones image. <br/></p>
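<p>A sourcehut build job is described by a small yaml manifest. As a rough sketch (the repository url, package names and task contents below are illustrative, not copied from any actual job), such a manifest might look like:</p>

<pre><code class="language-yaml"># Hypothetical .build.yml sketch: a guix image running one test task.
image: guix
packages:
  - ghdl
sources:
  - https://git.sr.ht/~user/some-design
tasks:
  - test: |
      cd some-design
      make check
</code></pre>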

<h1 id="alu">Alu</h1>

<p>Let’s now consider a variant of the <a href="https://infosec.press/csantosb/ci-sourcehut" rel="nofollow">previous example with open-logic</a>. Here, we concentrate on a <a href="https://git.sr.ht/~csantosb/ip.alu/tree" rel="nofollow">toy design</a> for demonstration purposes only, a <a href="https://git.sr.ht/~csantosb/ip.alu/tree/master/item/src/alu.vhd" rel="nofollow">dummy alu emulator</a>, which uses <a href="/csantosb/tag:osvvm" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">osvvm</span></a> as its verification framework and relies on a few <a href="/csantosb/tag:openlogic" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">openlogic</span></a> blocks. Its dependencies are defined in a <a href="https://git.sr.ht/~csantosb/ip.alu/tree/test/item/.builds/manifest.scm" rel="nofollow">manifest.scm</a> file, including both <code>fw-open-logic</code> and <code>osvvm</code>, among others. <br/>
Install the dependencies locally, in a new <a href="https://infosec.press/csantosb/guix-crash-course#profiles-and-generations" rel="nofollow">profile</a>, with <br/></p>

<pre><code class="language-sh">cd alu
mkdir _deps
export GUIX_PROFILE=open-logic/_deps
guix install -P $GUIX_PROFILE -m .builds/manifest.scm
. $GUIX_PROFILE/etc/profile
</code></pre>

<p>We will test the design in two ways: first with a custom-made makefile, then using <a href="https://hdlmake.readthedocs.io/en/master/" rel="nofollow">hdlmake</a> to produce the makefile automatically. As in the <a href="https://infosec.press/csantosb/ci-sourcehut" rel="nofollow">previous</a> <a href="/csantosb/tag:openlogic" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">openlogic</span></a> example, two build manifests are used: <br/></p>

<p>    <a href="https://git.sr.ht/~csantosb/ip.alu/tree/test/item/.builds/profile1.yml" rel="nofollow">profile1</a> <br/>
    <a href="https://git.sr.ht/~csantosb/ip.alu/tree/test/item/.builds/profile2.yml" rel="nofollow">profile2</a> <br/></p>

<p>You’ll notice that some of the tasks are shared with the previous <a href="/csantosb/tag:openlogic" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">openlogic</span></a> example (update channels, auth and update profile). <br/></p>
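<p>Those shared tasks boil down to a couple of guix invocations. A minimal sketch (file names and flags shown here are illustrative, not lifted from the actual manifests) could be:</p>

<pre><code class="language-sh"># Hypothetical sketch of the shared tasks (paths are illustrative).
guix pull -C .builds/channels.scm                       # update channels
guix package -P $GUIX_PROFILE -m .builds/manifest.scm   # update profile
</code></pre>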

<h2 id="osvvm">osvvm</h2>

<p>In this case, we also need to compile the osvvm libraries: <br/></p>

<p>    <strong>compile_osvvm</strong>, <a href="https://builds.sr.ht/~csantosb/job/1389146#task-compile_osvvm" rel="nofollow">produce a compiled version</a> of the <a href="/csantosb/tag:osvvm" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">osvvm</span></a> verification libraries; this is necessary because we use the <code>tcl</code> scripts included in the library itself, which follow the correct compilation order. The libraries will appear within the local profile under <code>$GUIX_PROFILE/VHDL_LIBS/GHDL-X.Y.Z</code> <br/></p>
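<p>As a very rough sketch of what such a task could run (the script and project-file paths below are assumptions, not taken from the actual job), the osvvm <code>tcl</code> scripts are driven from <code>tclsh</code>:</p>

<pre><code class="language-sh"># Hypothetical sketch: drive the osvvm compilation scripts from tclsh
# (the StartUp.tcl and .pro paths are illustrative).
echo '
source $::env(GUIX_PROFILE)/share/OsvvmLibraries/Scripts/StartUp.tcl
build  $::env(GUIX_PROFILE)/share/OsvvmLibraries/OsvvmLibraries.pro
' | tclsh
</code></pre>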

<h2 id="test">test</h2>

<p>    <strong>test</strong>, for a fully custom-made testing pipeline; in this case, using a <code>Makefile</code> <br/>
    Simply source the <code>.envrc</code> file, where the local <code>$GUIX_PROFILE</code> variable is defined, cd to the <code>ghdl</code> directory and call <code>make</code> to compile the design and run the simulation. First, clean everything and include the sources into their corresponding libraries with <br/></p>

<pre><code class="language-sh">make _clean_all _include
</code></pre>

<p>    Then, produce a new <code>Makefile</code> using <code>ghdl</code>. <br/></p>

<pre><code class="language-sh">./makefile.sh # ghdl --gen-makefile ...
</code></pre>

<p>    Finally, run the simulation with <br/></p>

<pre><code class="language-sh">make GHDLRUNFLAGS=&#34;--stop-time=4us --disp-time --ieee-asserts=enable&#34; run
</code></pre>

<p>    This will produce an executable file before <a href="https://builds.sr.ht/~csantosb/job/1389146#task-test" rel="nofollow">running it</a> with the provided parameters. <br/>
    You may notice that, in this case, you somehow need to produce your own <code>Makefile</code>, or an equivalent pipeline, right? <br/></p>

<h2 id="hdlmake">hdlmake</h2>

<p>Wouldn’t it be nice if we had a tool, deployable online, that produces makefiles for us? It exists, and its name is <a href="/csantosb/tag:hdlmake" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">hdlmake</span></a>. <br/></p>

<p>    <strong>test_hdlmake</strong> <br/>
    <a href="https://git.sr.ht/~csantosb/ip.alu/tree/8324cd0fcb838cfb8303aae9e668b6831a329cbb/.builds/profile1.yml#L39" rel="nofollow">Source</a> the <code>.envrc</code> file, where the local <code>$GUIX_PROFILE</code> variable is defined, cd to the <code>.builds/hdlmake</code> directory, where all the <code>Manifest.py</code> files are located, and call <code>hdlmake</code> to produce the <code>Makefile</code>. Finally, just run <code>make</code> to compile the design, produce an executable and run it. <br/></p>
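<p>Put together, the whole pipeline described above fits in four commands:</p>

<pre><code class="language-sh">source .envrc        # defines the local $GUIX_PROFILE
cd .builds/hdlmake   # where the Manifest.py files live
hdlmake              # produce the Makefile
make                 # compile the design, build the executable, run it
</code></pre>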

<p>Check the resulting logs inline <a href="https://builds.sr.ht/~csantosb/job/1389146#task-test_hdlmake" rel="nofollow">here</a>, for example. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/ci-sourcehut-alu</guid>
      <pubDate>Fri, 13 Dec 2024 12:38:24 +0000</pubDate>
    </item>
    <item>
      <title>ci (intro)</title>
      <link>https://infosec.press/csantosb/ci-intro</link>
      <description>&lt;![CDATA[img br/&#xA;How to seek, detect, be notified, analyze logs, understand and react to the different possible kind of issues one may encounter in a digital design is a vast topic of research, well beyond the scope of this modest post. But there are a couple of things we may state about here, though: automatizing issue detection is the way to go. Continuous integration (#ci) testing is a practice to adopt in #modernhw as a way to ensure that our design complies with its constraints. Let’s see this in more detail. !--more-- br/&#xA;&#xA;git&#xA;&#xA;We said #git, then, as mandatory when tracking changes (in documentation, project development, taking notes, etc.). Meaningful changes imply new commits (and good commit messages, for what it takes), but this comes along with a risk of introducing issues. Some kind of mechanism is necessary to automatize the execution of a checkout list to be run per new commit. The list is project aware, for sure, but may also be different following the git branch, and even the kind of commit (merges are to be considered differently to regular commits in topic branches, for example). We need to consider what an issue exactly is, and then you’ll need to adopt a different perspective on kinds of checkout lists. br/&#xA;&#xA;verification&#xA;&#xA;First (ideally), one starts with clear specifications about the goals of current development effort (in practice this never happens in research, and if you ever have it, they’ll evolve with time). These specifications (you’ll figure out where to find them somehow) will define the tests to run. For example, if you need to implement in firmware a deep neural network, you’ll probably have access to a test data set to verify the outcomes are correct. You may tune, improve or even completely change the architecture of your network, at the very end, you’ll have to verify your design with help of the test data set. 
Additionally, you may define more sophisticated tests: consumption, area, resources, etc. These all fall into the category of verification testing. br/&#xA;&#xA;unit tests&#xA;&#xA;Secondly, you’ll be running unit tests during your whole design cycle (and they’ll evolve along with it), and target tests (the one we mentioned just before). Does this addition perform correctly ? What if we stress a module with random inputs ? Are we going through all code in a given design unit ? Do we cover all values of some input/output signal in this important module ? These are all unit testing checkouts, and they’ll help us to detect issues in an early stage of design. br/&#xA;&#xA;codesign&#xA;&#xA;Codesign falls somewhere in between the two previous: as a testing methodology, it includes concepts of verification and unit testing (and can be combined with them). It is way more ambitious and complex, but also more powerful. No matter your testing strategy, the point here is that you’ll be running these tests (fully or partially) automatically at the several different stages of your development cycle. If they fail, you’ll have to be warned. br/&#xA;&#xA;guix&#xA;&#xA;img br/&#xA;Guix, as a package manager, provides all necessary software to deploy our tests (and can be extended with additional tooling). It also includes all that&#39;s necessary to create a running environment where we will execute our tests. Most importantly, #guix does so in a #deterministic and #reproductible way: we will be able to reproduce our tests in the future under exactly the same conditions. Shell containers, profiles and the time machine mechanism allow the degree of #reproducibility we need here. All it takes is a couple of text files. br/&#xA;&#xA;---&#xA;&#xA;Most usually, we will focus on two strategies to seek for issues: local, and remote. Local strategies are greatly based on git hooks, and will be topic of another post. 
Let’s see now in practice what can be done with help of remote tools, based on #ci, understood as a methodology consisting on automatically executing a set of tests procedures on a digital design. br/&#xA;ciseries br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/ci.png" alt="img"> <br/>
How to seek out, detect, get notified of, analyze the logs of, understand and react to the <a href="https://infosec.press/csantosb/on-testing" rel="nofollow">different possible kinds of issues</a> one may encounter in a digital design is a vast research topic, well beyond the scope of this modest post. There are a couple of things we can state here, though: automating issue detection is the way to go. <a href="https://en.wikipedia.org/wiki/Continuous_integration" rel="nofollow">Continuous integration</a> (<a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a>) testing is a practice to adopt in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> as a way to ensure that our design complies with its constraints. Let’s see this in more detail. <br/></p>

<h1 id="git">git</h1>

<p>We already named <a href="/csantosb/tag:git" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">git</span></a> as mandatory for <a href="https://infosec.press/csantosb/on-dependencies" rel="nofollow">tracking changes</a> (in documentation, project development, note taking, etc.). Meaningful changes imply new commits (and good <a href="https://www.freecodecamp.org/news/how-to-write-better-git-commit-messages/" rel="nofollow">commit messages</a>, for that matter), but they come with the risk of introducing issues. Some mechanism is necessary to automate the execution of a checklist on every new commit. The list is project aware, for sure, but may also differ by git branch, and even by kind of commit (merges are to be treated differently from regular commits on topic branches, for example). We first need to consider what exactly an issue is, and then adopt the right perspective on each kind of checklist. <br/></p>

<h1 id="verification">verification</h1>

<p>First (ideally), one starts with clear specifications about the goals of the current development effort (in practice this never happens in research, and if you ever get them, they’ll evolve with time). These specifications (you’ll figure out where to find them somehow) will define the tests to run. For example, if you need to implement a deep neural network in firmware, you’ll probably have access to a test data set to verify that the outcomes are correct. You may tune, improve or even completely change the architecture of your network; at the very end, you’ll <a href="https://infosec.press/csantosb/on-testing#osvvm" rel="nofollow">have to verify your design</a> with the help of the test data set. Additionally, you may define more sophisticated tests: consumption, area, resources, etc. These all fall into the category of <strong>verification testing</strong>. <br/></p>

<h1 id="unit-tests">unit tests</h1>

<p>Secondly, you’ll be running <a href="https://infosec.press/csantosb/on-testing#vunit" rel="nofollow">unit tests</a> during your whole design cycle (and they’ll evolve along with it), besides the target tests (the ones we mentioned just before). Does this addition perform correctly? What if we stress a module with random inputs? Are we going through all the code in a given design unit? Do we cover all values of some input/output signal in this important module? These are all <strong>unit testing</strong> checks, and they’ll help us detect issues at an early stage of the design. <br/></p>

<h1 id="codesign">codesign</h1>

<p><a href="https://infosec.press/csantosb/on-testing#cocotb" rel="nofollow">Codesign</a> falls somewhere in between the two previous approaches: as a testing methodology, it includes concepts from both verification and unit testing (and can be combined with them). It is far more ambitious and complex, but also more powerful. Whatever your testing strategy, the point is that you’ll be running these tests (fully or partially) automatically at the several stages of your development cycle. If they fail, you’ll want to be warned. <br/></p>

<h1 id="guix">guix</h1>

<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/guix.png" alt="img"> <br/>
<a href="https://infosec.press/csantosb/use-guix" rel="nofollow">Guix</a>, as a package manager, provides all necessary software to deploy our tests (and can be <a href="https://infosec.press/csantosb/guix-channels" rel="nofollow">extended</a> with additional tooling). It also includes <a href="https://infosec.press/csantosb/guix-crash-course" rel="nofollow">all that&#39;s necessary</a> to create a running environment where we will execute our tests. Most importantly, <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> does so in a <a href="/csantosb/tag:deterministic" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">deterministic</span></a> and <a href="/csantosb/tag:reproductible" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproductible</span></a> way: we will be able to reproduce our tests in the future under exactly the same conditions. <a href="https://infosec.press/csantosb/guix-crash-course#shell-containers" rel="nofollow">Shell containers</a>, <a href="https://infosec.press/csantosb/guix-crash-course#profiles-and-generations" rel="nofollow">profiles</a> and the <a href="https://infosec.press/csantosb/guix-crash-course#time-machine" rel="nofollow">time machine mechanism</a> allow the degree of <a href="/csantosb/tag:reproducibility" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">reproducibility</span></a> we need here. All it takes is <a href="https://infosec.press/csantosb/guix-crash-course#manifest-channels" rel="nofollow">a couple of text files</a>. <br/></p>
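<p>As an illustration of those couple of text files at work (the file names used here are the conventional ones, not taken from a specific repository), pinning the channels and rebuilding the exact test environment is a one-liner:</p>

<pre><code class="language-sh"># Replay the exact channel revisions recorded in channels.scm, then
# build the environment described by manifest.scm and run the tests in it.
guix time-machine -C channels.scm -- shell -m manifest.scm -- make test
</code></pre>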

<hr>

<p>Most often, we will focus on two strategies for seeking out issues: local and remote. Local strategies are largely based on <a href="https://git-scm.com/book/ms/v2/Customizing-Git-Git-Hooks" rel="nofollow">git hooks</a>, and will be the topic of another post. <a href="https://infosec.press/csantosb/tag:ciseries" rel="nofollow">Let’s now see in practice</a> what can be done with the help of remote tools, based on <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a>, understood as a methodology consisting of automatically executing a set of test procedures on a digital design. <br/>
<a href="/csantosb/tag:ciseries" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ciseries</span></a> <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/ci-intro</guid>
      <pubDate>Sun, 08 Dec 2024 21:19:43 +0000</pubDate>
    </item>
    <item>
      <title>on testing</title>
      <link>https://infosec.press/csantosb/on-testing</link>
      <description>&lt;![CDATA[img br/&#xA;Creating something new from scratch implies a certain ratio of unpredictable issues (loosely defined in the scope of this post: new errors, regressions, warnings, ... any unexpected behavior one may encounter).  Most important, a digital design developer needs to define somehow what he considers to be a project issue, before even thinking about how to react to it. Luckily, in #modernhw a few usual tools are available to ease the process as a whole. Let’s overview some of them. !--more-- br/&#xA;Here on the electronics digital design side of life, we have mainly three #freesoftware fine tools (among many others) to perform code checking to a large extent: osvvm, cocotb and vunit. They are all compatible with the ghdl compiler, and they are all available from my own #guix electronics channel (cocotb and vunit will hopefully get merged on guix upstream at some point). Each departs from the rest, adopting a different paradigm about how digital design testing should be understood: verification, cosimulation and unit testing are master keywords here. br/&#xA;They are all complementary, so you’ll be able to combine them to test your designs. However, you’ll need to be careful and check twice what you’re doing, as some of their features overlap (random treatment, for example). You’ve been warned. br/&#xA;&#xA;osvvm&#xA;&#xA;First, we have osvvm. #Osvvm is a modern verification #vhdl library using most up-to-date language constructs (by the main contributor to the vhdl standard), and I’ll mention it frequently in this #modernhw posts series. Well documented and being continuously improved, it provides a large set of features for natively verifying advanced designs, among them, a constrained random facility, transactions, logging, functional coverage, scoreboards, FIFOs, sophisticated memory models, etc. Even some co-simulation capabilities are included here. Refer to the documentation repository for up-to-date details about osvvm. 
br/&#xA;You’ll be able to install osvvm with br/&#xA;&#xA;guix search osvvm&#xA;guix install osvvm-uart osvvm-scripts&#xA;&#xA;You have a simple use of the osvvm vhdl library in the #aludesign, where the random feature is used to inject inputs to a dut unit. Testing runs for as long as every combination of two variables hasn’t been fully covered. This provides a means to be sure that all cases have been tested, regardless of random inputs. You’ll see an example simulation log here, using the remote ci builds facility of sourcehut. br/&#xA;&#xA;vunit&#xA;&#xA;Then, we have Vunit as a complete single point of failure framework. It complements traditional test benches with a software oriented approach, based on the &#34;test early and test often&#34; paradigm, a.k.a. unit testing.  Here, a pre-built library layer on top of the vhdl design scans, runs and logs unit test cases embedded in user test benches. This approach seeks for an early way to detect as soon as possible conception errors. It performs random testing, advanced checking, logging, advanced communication and an advanced api to access the whole from python. It may be called from the command line, adding custom flags, and configured from a python script file where one defines libraries, sources and test parameters. Simple, elegant and efficient as a testing framework, if you want my opinion. Check the documentation for details. br/&#xA;Install it as usual with br/&#xA;&#xA;guix install python-vunit&#xA;&#xA;A clever example of its use is provided by the fw-open-logic firmware package (also included in the electronics channel). When you install it, you’ll need to build the package once, which gets installed in the guix store for you to use. During the process, the whole testing of its constituent modules is performed. You may have an overview of how it goes with: br/&#xA;&#xA;guix build fw-open-logic:out&#xA;&#xA;By the way, if you need the simulation libraries, they are available too. 
br/&#xA;&#xA;guix install fw-open-logic:out&#xA;# guix install fw-open-logic:sim  # sim libraries&#xA;&#xA;Additionnaly, #vunit is compatible with running a testing #ci pipeline online, as explained here. br/&#xA;&#xA;cocotb&#xA;&#xA;Finally, we have the interesting and original cocotb. It groups several construct providing a set of facilities to implement coroutine-based cosimulation of vhdl designs. Cosimulation, you say ? Yes. It requests on demand #ghdl simulation time from software (python, in this case), dispatching actions as the time advances. Afterward, based on events’ triggers, you’ll stop simulation coming back to software. This forth and back dance goes on, giving access to advanced testing and verification capabilities. Flexible and customizable as much as needed, in my opinion. Go read the documentation to understand how powerful cosumulation approach can reveal. By the way, install it with br/&#xA;&#xA;guix install python-cocotb&#xA;&#xA;---&#xA;&#xA;From the previous, you’ll have understood that having access to verification, unit testing and cosimulation libraries is paramount in #modernhw digital design. Independly or combined (be careful!), they provide powerful tools to detect issues (of any kind) in your design. And yet, this is not enough, as the question arises about where, and when do we run these tests ? From the previous logs in the examples, you’ll have noticed that tests run online in #ci infrastructure. How it goes ? This is the topic of the ci posts in this series. br/]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://git.sr.ht/~csantosb/csbwiki/blob/master/pics/testing.png" alt="img"> <br/>
Creating something new from scratch implies a certain amount of unpredictable issues (loosely defined in the scope of this post: new errors, regressions, warnings, ... any unexpected behavior one may encounter). Most importantly, a digital design developer somehow needs to define what they consider to be a project issue, before even thinking about how to react to it. Luckily, in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> a few common tools are available to ease the process as a whole. Let’s overview some of them. <br/>
Here, on the electronics digital design side of life, we have mainly three fine <a href="/csantosb/tag:freesoftware" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">freesoftware</span></a> tools (among many others) to perform code checking to a large extent: <strong>osvvm</strong>, <strong>cocotb</strong> and <strong>vunit</strong>. They are all compatible with the <a href="https://infosec.press/csantosb/ghdl" rel="nofollow">ghdl compiler</a>, and they are all available from my own <a href="/csantosb/tag:guix" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">guix</span></a> <a href="https://infosec.press/csantosb/guix-channels#electronics-channel" rel="nofollow">electronics channel</a> (<a href="https://issues.guix.gnu.org/68153" rel="nofollow">cocotb</a> and <a href="https://issues.guix.gnu.org/74242" rel="nofollow">vunit</a> will hopefully get merged into <a href="https://infosec.press/csantosb/guix" rel="nofollow">guix upstream</a> at some point). Each departs from the rest by adopting a different paradigm about how digital design testing should be understood: verification, cosimulation and unit testing are the master keywords here. <br/>
They are all complementary, so you’ll be able to combine them to test your designs. However, you’ll need to be careful and check twice what you’re doing, as some of their features overlap (random treatment, for example). You’ve been warned. <br/></p>

<h1 id="osvvm">osvvm</h1>

<p>First, we have <a href="https://github.com/OSVVM" rel="nofollow">osvvm</a>. <a href="/csantosb/tag:Osvvm" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Osvvm</span></a> is a modern <a href="/csantosb/tag:vhdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">vhdl</span></a> verification library using the most up-to-date language constructs (written by the <a href="https://www.linkedin.com/in/jimwilliamlewis" rel="nofollow">main contributor</a> to the <a href="https://gitlab.com/IEEE-P1076" rel="nofollow">vhdl standard</a>), and I’ll mention it frequently in this <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> post series. Well documented and continuously improved, it provides a large set of features for natively verifying advanced designs, among them a constrained random facility, transactions, logging, functional coverage, scoreboards, FIFOs, sophisticated memory models, etc. Even some co-simulation capabilities are included. Refer to the <a href="https://github.com/OSVVM/Documentation#readme" rel="nofollow">documentation repository</a> for up-to-date details about osvvm. <br/>
You’ll be able to install osvvm with <br/></p>

<pre><code class="language-sh"># guix search osvvm
guix install osvvm-uart osvvm-scripts
</code></pre>

<p>You <a href="https://git.sr.ht/~csantosb/ip.alu/tree/test/sim/alu_tb.vhd#L30" rel="nofollow">have a simple use</a> of the osvvm vhdl library in the <a href="/csantosb/tag:aludesign" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">aludesign</span></a>, where the random feature is used to inject inputs into a dut unit. Testing runs until every combination of two variables has been fully covered. This ensures that all cases have been tested, regardless of the random inputs. You’ll see an example simulation log <a href="https://builds.sr.ht/query/log/1380968/test_profile/log" rel="nofollow">here</a>, using the <a href="https://infosec.press/csantosb/ci-sourcehut" rel="nofollow">remote ci</a> <a href="https://infosec.press/csantosb/sourcehut-crash-course#builds" rel="nofollow">builds facility</a> of <a href="https://infosec.press/csantosb/sourcehut-crash-course" rel="nofollow">sourcehut</a>. <br/></p>

<h1 id="vunit">vunit</h1>

<p>Then, we have <a href="https://github.com/VUnit/vunit" rel="nofollow">VUnit</a>, a complete, self-contained testing framework. It complements traditional test benches with a software-oriented approach based on the “test early and test often” paradigm, a.k.a. unit testing. Here, a pre-built library layer on top of the vhdl design scans, runs and logs unit test cases embedded in user test benches, aiming to detect design errors as early as possible. It provides randomized testing, advanced checking, logging, communication mechanisms, and a Python API to drive the whole flow. It may be called from the command line with custom flags, and is configured from a Python script where one defines libraries, sources and test parameters. Simple, elegant and efficient as a testing framework, if you want my opinion. Check the <a href="https://vunit.github.io/" rel="nofollow">documentation</a> for details. <br/>
Install it as usual with <br/></p>

<pre><code class="language-sh">guix install python-vunit
</code></pre>
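<p>The scan-run-log mechanics are easy to picture with a small Python analogy (names are purely illustrative, not VUnit’s actual API): a runner discovers the test cases a test bench declares, executes each one in isolation, and records pass or fail. <br/></p>

```python
def run_testbench(testbench):
    """Toy analogy of VUnit's flow: scan for embedded test cases,
    run each one, and log its outcome."""
    results = {}
    # Scan: every attribute whose name starts with "test_" is a case.
    cases = [name for name in dir(testbench) if name.startswith("test_")]
    for name in sorted(cases):
        try:
            getattr(testbench, name)()  # run the test case
            results[name] = "pass"
        except AssertionError:
            results[name] = "fail"
    return results

class AluTb:
    """Stand-in for a test bench with embedded test cases."""
    def test_add(self):
        assert 2 + 3 == 5
    def test_sub(self):
        assert 5 - 3 == 2

results = run_testbench(AluTb())
```

<p>In the real framework, the test cases live inside the vhdl test bench and the Python layer drives the simulator, but the discover-then-run pattern is the same. <br/></p>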

<p>A clever example of its use is provided by the <code>fw-open-logic</code> firmware package (also included in the <a href="https://infosec.press/csantosb/guix-channels#electronics-channel" rel="nofollow">electronics channel</a>). When you install it, the package is <a href="https://infosec.press/csantosb/guix-crash-course#packages" rel="nofollow">built</a> once and placed in the guix store for you to use; during the build, the full test suite of its constituent modules is run. You may get an overview of how it goes with: <br/></p>

<pre><code class="language-sh">guix build fw-open-logic:out
</code></pre>

<p>By the way, if you need the simulation libraries, they are available too. <br/></p>

<pre><code class="language-sh">guix install fw-open-logic:out
# guix install fw-open-logic:sim  # sim libraries
</code></pre>

<p>Additionally, <a href="/csantosb/tag:vunit" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">vunit</span></a> is well suited to running a testing <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> pipeline online, as explained <a href="https://infosec.press/csantosb/ci-sourcehut" rel="nofollow">here</a>. <br/></p>

<h1 id="cocotb">cocotb</h1>

<p>Finally, we have the interesting and original <a href="https://www.cocotb.org/" rel="nofollow">cocotb</a>. It groups several constructs that provide coroutine-based cosimulation of vhdl designs. Cosimulation, you say? Yes. Software (Python, in this case) requests <a href="/csantosb/tag:ghdl" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ghdl</span></a> simulation time on demand, dispatching actions as time advances; then, when events trigger, the simulation stops and control returns to software. This back-and-forth dance goes on, giving access to advanced testing and verification capabilities, flexible and customizable as much as needed, in my opinion. Go read <a href="https://docs.cocotb.org/en/stable/index.html" rel="nofollow">the documentation</a> to understand how powerful the cosimulation approach can be. By the way, install it with <br/></p>

<pre><code class="language-sh">guix install python-cocotb
</code></pre>
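<p>The back-and-forth dance can be sketched with plain Python generators (a toy scheduler, not cocotb’s real kernel): the test coroutine yields requests for simulation time, and a scheduler advances the clock before resuming it. <br/></p>

```python
def test(log):
    """Toy cocotb-style test: yield a request for simulation time,
    then resume once the scheduler has granted it."""
    log.append("reset released")
    yield 10  # request 10 time units of simulation
    log.append("stimulus applied")
    yield 5   # request 5 more
    log.append("outputs checked")

def scheduler(coro):
    """Toy simulator kernel: advance time on each request, then hand
    control back to the coroutine until the test finishes."""
    now = 0
    try:
        delay = next(coro)          # enter the test
        while True:
            now += delay            # the simulator advances time...
            delay = coro.send(now)  # ...then control returns to software
    except StopIteration:
        return now                  # final simulation time

log = []
final_time = scheduler(test(log))
```

<p>In real cocotb the yields are triggers (timers, clock edges, signal changes) and the time requests go to ghdl, but the control-flow handover is exactly this. <br/></p>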

<hr>

<p>From the above, you’ll have understood that access to verification, unit testing and cosimulation libraries is paramount in <a href="/csantosb/tag:modernhw" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">modernhw</span></a> digital design. Independently or combined (be careful!), they provide powerful tools to detect issues of any kind in your design. And yet, this is not enough: the question remains of where and when to run these tests. From the logs in the examples above, you’ll have noticed that they run online on <a href="/csantosb/tag:ci" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">ci</span></a> infrastructure. How does that work? This is the topic of the <a href="https://infosec.press/csantosb/ci" rel="nofollow">ci posts</a> in this series. <br/></p>
]]></content:encoded>
      <guid>https://infosec.press/csantosb/on-testing</guid>
      <pubDate>Fri, 06 Dec 2024 09:32:14 +0000</pubDate>
    </item>
  </channel>
</rss>