ci (intro)

How to seek, detect, get notified about, analyze, understand and react to the different kinds of issues one may encounter in a digital design is a vast research topic, well beyond the scope of this modest post. But there is one thing we can state here: automating issue detection is the way to go. Continuous integration (#ci) testing is a practice to adopt in #modernhw as a way to ensure that our design complies with its constraints. Let’s see this in more detail.

git

We already said #git is mandatory for tracking changes (in documentation, project development, note taking, etc.). Meaningful changes imply new commits (and good commit messages, for what it’s worth), but they come with a risk of introducing issues. Some mechanism is necessary to automate the execution of a checklist for each new commit. The checklist is project specific, of course, but it may also differ depending on the git branch, and even on the kind of commit (merges deserve different treatment than regular commits on topic branches, for example; see the sketch below). This forces us to consider what exactly constitutes an issue, and to distinguish between different kinds of checklists.
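
As an illustration, here is a minimal Python sketch of such a branch-aware checklist runner. The branch names and `make` targets are hypothetical stand-ins, not part of any real project; a real setup would also dispatch on the kind of commit.

```python
#!/usr/bin/env python3
"""Sketch: select and run a per-commit checklist based on the git branch."""
import subprocess
import sys

# Hypothetical checklists: "main" gets the full suite, topic branches
# only the fast checks. Adapt the commands to your own design flow.
CHECKLISTS = {
    "main": ["make lint", "make unit-tests", "make verification"],
    "default": ["make lint", "make unit-tests"],
}

def current_branch() -> str:
    """Return the name of the currently checked-out git branch."""
    return subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def main() -> int:
    branch = current_branch()
    for cmd in CHECKLISTS.get(branch, CHECKLISTS["default"]):
        print(f"[{branch}] running: {cmd}")
        if subprocess.run(cmd, shell=True).returncode != 0:
            print(f"check failed: {cmd}", file=sys.stderr)
            return 1  # a failing check blocks the commit/pipeline
    return 0

if __name__ == "__main__":
    sys.exit(main())
```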

verification

First (ideally), one starts with clear specifications of the goals of the current development effort (in practice this never happens in research, and if you ever do have them, they’ll evolve over time). These specifications (you’ll figure out where to find them somehow) define the tests to run. For example, if you need to implement a deep neural network in firmware, you’ll probably have access to a test data set to verify that the outcomes are correct. You may tune, improve or even completely change the architecture of your network; at the very end, you’ll still have to verify your design with the help of the test data set. Additionally, you may define more sophisticated tests: power consumption, area, resource usage, etc. These all fall into the category of verification testing.
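
A verification test for the neural-network example could look like the following sketch. The `predict` wrapper around the firmware under test, the file name and the accuracy threshold are all hypothetical stand-ins for your actual interface and specifications.

```python
import csv

# Hypothetical wrapper around the firmware under test (e.g. driving a
# simulator or hardware-in-the-loop setup). Not a real library.
from my_design import predict

def test_network_against_dataset():
    """Verification test: the firmware must match the reference labels
    on the whole test data set, up to an assumed accuracy threshold."""
    hits, total = 0, 0
    with open("test_set.csv") as f:
        for row in csv.reader(f):
            *features, label = row
            if predict([float(x) for x in features]) == int(label):
                hits += 1
            total += 1
    assert hits / total >= 0.97  # threshold taken from the (hypothetical) specs
```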

unit tests

Secondly, you’ll be running unit tests during your whole design cycle (and they’ll evolve along with it), alongside the target tests (the ones we just mentioned). Does this addition perform correctly? What if we stress a module with random inputs? Do we go through every line of code in a given design unit? Do we cover all values of some input/output signal in this important module? These are all unit testing checks, and they’ll help us detect issues at an early stage of the design.
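
For HDL designs, such unit tests are commonly written with a framework like cocotb. Below is a minimal sketch stressing a hypothetical adder module with random inputs; the signal names `a`, `b` and `sum` are assumptions about the design under test.

```python
import random

import cocotb
from cocotb.triggers import Timer

@cocotb.test()
async def adder_random_test(dut):
    """Unit test: drive random inputs into the adder and check each result."""
    for _ in range(100):
        a, b = random.randint(0, 255), random.randint(0, 255)
        dut.a.value = a
        dut.b.value = b
        await Timer(1, units="ns")  # let the combinational logic settle
        assert dut.sum.value == a + b, f"{a} + {b} gave {dut.sum.value}"
```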

codesign

Codesign falls somewhere in between the two previous approaches: as a testing methodology, it includes concepts from both verification and unit testing (and can be combined with them). It is far more ambitious and complex, but also more powerful. Whatever your testing strategy, the point is that you’ll be running these tests (fully or partially) automatically at several different stages of your development cycle. If they fail, you need to be warned.
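
To give a minimal flavor of the codesign idea, here is a Python sketch: the same computation is kept as an executable software model, which serves as the reference the hardware implementation is checked against at every stage. The `rtl_fir` simulation wrapper and the FIR example are hypothetical.

```python
# Hypothetical wrapper that runs the RTL simulation of the filter.
from my_design import rtl_fir

def golden_fir(samples, coeffs):
    """Executable software model of the FIR filter: the single source
    of truth the hardware implementation is verified against."""
    out = []
    for i in range(len(samples)):
        acc = 0
        for j, c in enumerate(coeffs):
            if i - j >= 0:
                acc += c * samples[i - j]
        out.append(acc)
    return out

def test_rtl_matches_model():
    samples = list(range(16))
    coeffs = [1, 2, 3]
    assert rtl_fir(samples, coeffs) == golden_fir(samples, coeffs)
```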

guix

Guix, as a package manager, provides all the software necessary to deploy our tests (and can be extended with additional tooling). It also includes everything needed to create the running environment where we will execute them. Most importantly, #guix does so in a #deterministic and #reproducible way: we will be able to reproduce our tests in the future under exactly the same conditions. Shell containers, profiles and the time machine mechanism provide the degree of #reproducibility we need here. All it takes is a couple of text files.
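
One of those text files is the manifest listing the test environment’s packages; the package names below are just examples, to be replaced by whatever your flow needs.

```scheme
;; manifest.scm -- a sketch of one of the "couple of text files":
;; the packages needed to build and run the tests.
(specifications->manifest
 (list "python" "iverilog" "make"))
```

Paired with a `channels.scm` file pinning the exact Guix revision, something like `guix time-machine -C channels.scm -- shell --container -m manifest.scm -- make check` (with `make check` standing in for your test entry point) re-creates the very same isolated environment, today or years from now.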


Usually, we focus on two strategies for seeking out issues: local and remote. Local strategies are largely based on git hooks, and will be the topic of another post. Let’s now see in practice what can be done with remote tools, based on #ci, understood here as a methodology consisting of automatically executing a set of test procedures on a digital design.
#ciseries