- 18 Jan, 2022 2 commits
Ben Avison authored
Just about every C module in the source tree uses CMHG to interface with the kernel. However, cppcheck doesn't understand CMHG files, so it will incorrectly identify any C entry points from the CMHG object file as unused functions. The main command-line suppression options for cppcheck are:

* `--suppress=unusedFunction`: blanket suppression of the warning for all files. This is undesirable because it would mean we miss many examples of dead code; ironically, since modules commonly feature in ROMs, which are space-constrained, it's particularly valuable to identify dead code in these cases.
* `--suppress=unusedFunction:[filename]`: better, but a pain to implement the CI job for (we'd need to pass it the relevant filename(s) somehow), a pain to maintain (for every module, we'd need to identify the relevant file(s)), and could still miss some dead code.
* `--suppress=unusedFunction:[filename]:[line]`: solves the dead code problem, but at the cost of being even more hassle to maintain, due to having to keep line numbers up to date.

Compared to these options, inline suppression markers look very attractive. However, objections have been raised to these also, so here we use a new feature of cppcheck 1.84 (available now that we have upgraded the GitLab runner machine to Ubuntu 20.04): the more verbose but more flexible option of passing a suppression specification to cppcheck in XML format. The XML file itself is generated during the `make standalone` command that is performed as part of the CI job; see the sketch below. For non-module components, the XML file is not generated, and the option to cppcheck is silently removed.

Requires RiscOS/BuildSys!44
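As an illustration only (the job name, stage, and suppressions file path are invented; the XML element names and the `--suppress-xml=` option follow cppcheck's documentation for the feature introduced in 1.84), the arrangement might look like this:

```yaml
# Hypothetical sketch: job name, stage, and file path are invented.
# "make standalone" is assumed to have emitted something like:
#   <?xml version="1.0"?>
#   <suppressions>
#     <suppress>
#       <id>unusedFunction</id>
#       <symbolName>module_swi_handler</symbolName>
#     </suppress>
#   </suppressions>
cppcheck:
  stage: analyse
  script:
    # For non-module components the file is never generated, so the
    # option is silently dropped, as described above.
    - |
      SUPPRESS=""
      if [ -f cppcheck_suppress.xml ]; then
        SUPPRESS="--suppress-xml=cppcheck_suppress.xml"
      fi
      cppcheck --enable=all $SUPPRESS .
```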
Ben Avison authored
The GitLab runner machine previously ran Ubuntu 18.04, which featured cppcheck 1.82. Some planned enhancements to the CI scripts required a newer version of cppcheck, so we have upgraded it to Ubuntu 20.04, which has cppcheck 1.90. However, the format of the diagnostics printed by cppcheck has changed in 1.90, so our code that parsed them needs adapting to match.
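For illustration (the exact templates here are from memory and should be treated as an assumption, not a quote from the cppcheck changelog), the change in default output that the parsing code had to follow is roughly:

```yaml
# Illustrative only: recalled default diagnostic formats.
#   1.82:  [c/module.c:123]: (error) Null pointer dereference
#   1.90:  c/module.c:123:10: error: Null pointer dereference [nullPointer]
check_log:
  script:
    # A pattern matching the newer file:line:column: severity: shape.
    - grep -E '^[^:]+:[0-9]+:[0-9]+: [a-z]+:' cppcheck.log || true
```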
- 17 Dec, 2021 1 commit
Ben Avison authored
When working on an unrelated issue, it became clear that the reason why previously only the `merge_log` and `merge_whitesp` jobs appeared in detached pipelines (the ones that relate to an open MR) was that they include a `rules` section. It is as though, in the absence of a `rules` section, a default one applies which adds the job to the pipeline only if that pipeline was due to an update of a branch or tag ref. When `rules` is present for a job, it overrides the default and is evaluated irrespective of the pipeline trigger.

Now, it's useful to have all the jobs present in detached pipelines; see the sketch below. The latest detached pipeline state, and its associated artifacts, are displayed at the top of the "Overview" tab of each MR page, and a new one can easily be triggered from the "Run pipeline" button at the top of the MR page's "Pipelines" tab, without having to navigate to the contributor's fork project (which may not even be public). It's also a problem that th...
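A minimal sketch of a `rules` section that opts a job into both branch/tag pipelines and detached (merge request) pipelines; the job name is hypothetical, but `$CI_PIPELINE_SOURCE`, `$CI_COMMIT_BRANCH` and `$CI_COMMIT_TAG` are standard GitLab CI variables:

```yaml
# Hypothetical job: without a rules section it would only appear in
# branch/tag pipelines; these rules add it to detached MR pipelines too.
build:
  script:
    - make all
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH'
    - if: '$CI_COMMIT_TAG'
```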
- 01 Sep, 2021 1 commit
Ben Avison authored
Unlike all other components with CrossCompilationSupport branches, HostFS had two, and the correct one needed to be selected in place of the HAL branch. Now that its MRs have been merged, this can be removed.
- 08 Jun, 2021 1 commit
Ben Avison authored
As of RiscOS/Env!15, we no longer use aliases, so there's no need to tell non-interactive shells to expand them.
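For context, the line being removed was presumably along these lines; the exact wording is a guess, but `shopt -s expand_aliases` is the standard way to make non-interactive bash expand aliases:

```yaml
# Guessed reconstruction of the removed setting, not a quote from the
# diff: non-interactive bash ignores aliases unless told otherwise.
before_script:
  - shopt -s expand_aliases    # redundant once aliases are gone
```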
- 03 Jun, 2021 2 commits
Ben Avison authored
Ben Avison authored
Various CI jobs run `make` on all the components within a project. `COMPONENT` and `TARGET` were correctly set, but we neglected to change into the appropriate subdirectory first (where applicable).
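A sketch of the corrected pattern (the loop, file name, and variable names are all invented; the real scripts may structure this differently):

```yaml
# Hypothetical: enter each component's subdirectory (when it has one)
# before invoking make with the already-correct COMPONENT and TARGET.
build_all:
  script:
    - |
      while read component dir target; do
        ( cd "${dir:-.}" && make COMPONENT="$component" TARGET="$target" )
      done < components.list
```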
- 01 Jun, 2021 2 commits
Ben Avison authored
These jobs were trying to fetch a build tree from a non-existent superproject `Products/IOMD32` when they should have referenced `Products/IOMDHAL`.
Ben Avison authored
On the Runner machine, each fork of each project gets its own directory, which is left in whatever state the most recent job of the most recently run pipeline left it. This typically includes a large number of object and binary files, which are of no use to anyone (anything of interest will already have been packaged up into an artifact and uploaded to the main GitLab server). Address this by adding an additional job to the end of each pipeline which does a `git clean` (the repositories themselves are worth leaving in place, to reduce the bandwidth requirement of the `git fetch` when a pipeline is next run for the fork).

The Runner machine also stores cache files for each fork of each project, at least for jobs that complete successfully (and there are an increasing number of these). The way our pipelines use caches, these are tarballs of pre-built source trees for each target platform. They take up less space than the temporary files noted above, but will now become the dominant user of disc space. To address this, abandon use of GitLab Runner's own cache facility, and take advantage of the fact that shell executors have visibility of the gitlab-runner user's whole home directory, to maintain a single cached version of each tarball, shared across all forks of all projects. This is stored within ~/cache, but namespaced under ~/cache/common to avoid collisions with any users of GitLab Runner's cache facility.
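A sketch combining both changes (the job and stage names are invented; `git clean -xfd` removes untracked and ignored files while leaving the repository itself intact):

```yaml
# Hypothetical final-stage job: discards build products but keeps the
# git repositories so the next pipeline's fetch stays cheap, and keeps
# the shared tarball cache under the namespaced path from the message.
tidy:
  stage: cleanup
  when: always                 # run even if earlier jobs failed
  script:
    - git clean -xfd
    - git submodule foreach --recursive git clean -xfd
    - mkdir -p ~/cache/common  # shared across all forks of all projects
```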
- 17 May, 2021 1 commit
Ben Avison authored
Once the following have been merged:

* Products/BCM2835!6
* Products/BuildHost!2
* Products/Disc!6
* Products/iMx6!2
* Products/IOMDHAL!3
* Products/OMAP3!3
* Products/OMAP4!3
* Products/OMAP5!3
* Products/Titanium!3
* Products/Tungsten!3

then we can have the pipelines for submodules fetch their source trees from the central projects rather than my forks.
- 14 May, 2021 1 commit
Ben Avison authored
For the git versions in question, `git submodule update --remote` will fail for any submodules that don't specify a branch in .gitmodules and which are currently checked out on a remote tracking branch that tracks a remote other than "origin". For our pipelines, this means almost any submodule with a CrossCompilationSupport branch that has not yet been merged. Work around it by first doing `git submodule update` without `--remote`, which puts all submodules into detached HEAD state.
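In script form, the workaround amounts to ordering the two updates like this (the job definition around them is invented):

```yaml
# Hypothetical job fragment: the plain update detaches every
# submodule's HEAD, after which the --remote update no longer trips
# over the tracking-branch bug described above.
update_submodules:
  script:
    - git submodule update --init    # detaches all submodule HEADs
    - git submodule update --remote  # now safe
```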
- 23 Dec, 2020 1 commit
Ben Avison authored
Resolves bug introduced in commit 9c61e2f0.
- 11 Dec, 2020 2 commits
Ben Avison authored
If the following happens:

* a GitLab runner recursively clones a superproject
* one or more submodules has changes upstream, and the submodule references in the superproject have been updated to point to them
* a new pipeline is launched for the new superproject revision

then, while the runner would fetch changes to the superproject, it wasn't doing so for the submodules. Ironically, we do both `git submodule update` and `git submodule update --remote` in different pipeline stages, and the latter includes an implicit submodule fetch - but that happens in the later stage, which we never reach. To fix this, while retaining the pipeline stage order, include an explicit fetch in the earlier stage (and remove the implicit fetch from the later one, since it now merely wastes time); see the sketch below.
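A sketch of the two stages after the fix (the stage and job names are invented; `--no-fetch` is the `git submodule update` option that suppresses the implicit fetch):

```yaml
# Hypothetical layout: the early stage now fetches submodule changes
# explicitly; the later stage skips its now-redundant implicit fetch.
checkout:
  stage: prepare
  script:
    - git submodule foreach --recursive git fetch
    - git submodule update --init
integrate:
  stage: update
  script:
    - git submodule update --remote --no-fetch
```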
Ben Avison authored
This one is a special case where the superproject name doesn't match that of the corresponding Env (and thus Components) file. Our own CI script also didn't previously support the *removal* of an autogenerated YAML file. This should be a rare occurrence, but it's best to automate it too. Also support force-pushing to the submodule; this will be required in most cases where the pipeline was triggered by a force-push to *this* project.
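The force-push handling might reduce to something like this (a sketch; the remote and branch names are placeholders):

```yaml
# Hypothetical fragment: when this project's pipeline was itself
# triggered by a force-push, the regenerated YAML usually has to be
# force-pushed to the submodule as well.
push_yaml:
  script:
    - git push --force origin HEAD:refs/heads/autogenerated
```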
- 25 Nov, 2020 1 commit
Ben Avison authored