
GitLab CI and OTRS6 Unit Tests

In this post, I’ll be taking a look at how to automate OTRS6 unit tests and integrate them into GitLab’s Continuous Integration. This post assumes that you have a bit of previous experience:

  1. Previous work with OTRS5 or OTRS6, at least as an administrator
  2. Experience with Perl 5 and unit tests
  3. Some experience with GitLab CI

I will touch on all of the above points as well as I can throughout the post, but I may not go into great detail everywhere.

Background

In OTRS6, add-ons are delivered as .opm packages. To build these add-ons, an internal function, Kernel::System::Package->PackageBuild(), is called. Alternatively, one can use the console command Dev::Package::Build, specifying the path to the .sopm file (which then calls the above function and builds the final .opm file). To make our lives as developers easier, our team has integrated the build process into GitLab’s Continuous Integration: after every commit, a package is built from the existing .sopm file and then pushed to an internal repository server. From there, we can fetch packages as needed.
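
For reference, the console variant boils down to a single command along these lines (the paths are illustrative, and the exact arguments can be checked via the command’s --help):

sudo -u otrs /opt/otrs/bin/otrs.Console.pl Dev::Package::Build /path/to/MyAddon.sopm /path/to/build/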

For reference, here is an example .sopm.
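
A minimal .sopm skeleton looks roughly like this (names, versions and paths are placeholders, not taken from a real add-on):

<?xml version="1.0" encoding="utf-8"?>
<otrs_package version="1.0">
    <Name>MyAddon</Name>
    <Version>1.0.1</Version>
    <Framework>6.0.x</Framework>
    <Vendor>Example Vendor</Vendor>
    <Description Lang="en">An example add-on.</Description>
    <Filelist>
        <File Permission="660" Location="Custom/Kernel/System/MyModule.pm"/>
        <File Permission="660" Location="Kernel/Config/Files/XML/MyAddon.xml"/>
        <File Permission="660" Location="scripts/test/MyAddon/MyModule.t"/>
    </Filelist>
</otrs_package>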

As you can see, the file contains a <Filelist> tag, which OTRS6 uses as a reference for the files included in the add-on. In the final package, the contents of the included files are simply written into the .opm file in base64, between the <File> tags.

Since we wanted to slowly integrate unit tests into our add-ons, it was time to update the existing CI to automatically execute every unit test inside an add-on and pass or fail the build depending on the result of these unit tests.

Modifying the CI

Fortunately, GitLab offers a test stage, where we can execute our script(s) and fail the entire job if necessary. Initially, I was absolutely stumped as to how I’d integrate the unit tests, execute them and return the results, never mind calculate test coverage and display it. For reference, here is a quick overview of a pretty standard .gitlab-ci.yml file which triggers the process:

image: image-name

stages:
  - build
  - deploy

before_script:
  -

build:
  stage: build
  script:
    - mkdir build
    - PackageBuild
  artifacts:
    expire_in: 1 week
    paths:
      - build/*.opm

deploy:
  stage: deploy
  script:
    - upload build/*.opm /$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME/$CI_BUILD_REF_NAME/

At the very top is the image name, which is the image used by every stage in the process. An individual stage can override this by specifying its own image via the image tag. That last part is relevant, because it is not something I had considered. The image we use internally is called otrs-build, which made me think it was a fresh OTRS install with nothing in it. However, when I actually tried to work with the standard OTRS directory (which is /opt/otrs), I never found it. As an alternative, I unsuccessfully tried to experiment with docker run inside the test stage to create a temporary OTRS container. There, I would copy over my test files via Net::OpenSSH and execute them via remote commands, feeding the results back.

If this already sounds way too complicated to you, it’s because it actually is. While I was working on modifying the CI, I was also working on modifying an actual OTRS image which we regularly push to our internal docker registry. I was absolutely convinced that this was the image specified at the top of the CI file, but none of my changes actually had any effect.

After talking to a colleague about it and getting another perspective, I realized that

a) the otrs-build image was actually just a naked CentOS 7 (and I never bothered to ask)
b) instead of fiddling around with docker run, the image tag was my savior.

Here, I could use the image that I had worked on for the past few days just for the test stage. This gave me a fresh, local OTRS where I did not have to work with any SSH shenanigans to execute my unit tests.

Writing and executing unit tests

Unit tests in OTRS6 are substantially different from the way they are handled in standard Perl. In the latter, tests are usually stored in the t/ directory, where they are then executed with the help of TAP::Harness or similar, maybe even during make.
By comparison, OTRS6 stores its unit tests under the otrs/scripts/test/ directory and executes them when needed via the console command Dev::UnitTest::Run or, alternatively, directly via Kernel::System::UnitTest->Run().

I will not cover unit tests in detail here, but instead link to a quasi reference on the helper methods of OTRS6 unit tests.

Referencing this rather lengthy unit test, we can see that one needed object is Kernel::System::User, as that is the object under test. As one would expect, this object actually has to be present within /opt/otrs/Kernel/System/ so that the unit test can call on it. In practice, this means that, to execute unit tests successfully, we need a fully functioning OTRS6 installation.
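
To give a rough idea of what such a test looks like, here is a minimal sketch. The file path and the specific checks are made up for illustration, but the object manager call and the $Self->True()/$Self->Is() helpers are the standard pattern:

# scripts/test/MyAddon/UserSmoke.t -- illustrative only
use strict;
use warnings;
use utf8;

use vars (qw($Self));

# Objects are fetched through the OTRS object manager.
my $UserObject = $Kernel::OM->Get('Kernel::System::User');

# root@localhost exists in every default installation.
my %User = $UserObject->GetUserData(
    User => 'root@localhost',
);

$Self->True(
    $User{UserID},
    'GetUserData() returns data for root@localhost',
);

$Self->Is(
    $User{UserLogin},
    'root@localhost',
    'UserLogin matches',
);

1;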

Setting up the test stage

This is extremely simple and straightforward. As I mentioned above, using the image tag is crucial here as it allows us to load in a fresh OTRS6 installation during the test stage only. To successfully add and execute a test stage in the CI, the following lines have to be added to the .gitlab-ci.yml:

stages:
  - test

test:
  image: otrs6
  stage: test
  script:
    - TestExecution

The script TestExecution is actually a .pl script pulled in from a shared library and then copied over to /usr/local/bin; I will cover this in more detail in a later section.

Quick reference: Structure of an OTRS6 add-on

.
├── otrs
│   ├── Custom
│   │   └── Kernel
│   ├── Kernel
│   │   └── Config
│   │       └── Files
│   │           └── XML
│   └── scripts
│       └── test
├── OTRS-addon.sopm
├── .gitignore
└── .gitlab-ci.yml
9 directories, 3 files

The typical tree structure of an OTRS6 add-on

By convention, new or modified files are placed inside the Custom/ directory. The structure of the original Kernel/ directory is mirrored there so that package names do not have to be changed when files are placed in the Custom/ directory. The majority of new or modified files end up inside Custom/Kernel/, which is why it appears in the tree structure above.

However, there are certain exceptions to this rule: OTRS6 cannot (or refuses to) read out certain files from the Custom/ directory. Most prominently, this includes the .xml configuration files; these always have to be placed within the Kernel/Config/Files/XML directory.
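
For example, a module placed under Custom/ keeps its original package name; the module chosen here is purely illustrative:

# otrs/Custom/Kernel/System/Ticket.pm
package Kernel::System::Ticket;    # unchanged, even though the file lives under Custom/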

Writing the test execution script

The test execution script TestExecution has to accomplish a few objectives:

  1. The contents of our add-on have to be copied over to /opt/otrs, the working directory of our OTRS6 installation.
  2. Iterating over our local otrs/scripts/test directory, we have to execute every unit test.
  3. The results of the unit tests have to be saved, failing the build if any tests fail.
  4. (Optional) Somehow calculate test coverage of the existing unit tests.

The root of the tree structure above is also the working directory when executing our test script TestExecution.pl. Our first job is going to be copying the contents of the “local” otrs/ directory over to /opt/otrs, so that we can execute the unit tests.
I experimented with using open to copy files over recursively, which would look like this:

# Read the command's output through a pipe.
open my $fh, '-|', 'cp', '-pR', 'otrs/', '/opt/otrs' or die "Can't open pipe: $!";
my @lines = <$fh>;
close $fh or die "Can't close pipe: $!";

However, this approach unfortunately does not work with cp. What I found was that CentOS7 defines the alias cp='cp -i' in its .bashrc, which forces a prompt before overwriting files and makes the above command fail (since it is unable to respond to the prompt).
Using either \cp or unalias cp before the above command had no effect, so switching over to File::Copy::Recursive proved to be a much better alternative. With that, it boils down to simply executing dircopy("otrs/", "/opt/otrs"), and the contents are copied without issue.
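
A minimal sketch of that call, with a basic error check added on top of what the final script below does:

use File::Copy::Recursive qw(dircopy);

# dircopy returns a false value if the copy fails.
dircopy( 'otrs', '/opt/otrs' )
    or die "Could not copy otrs/ to /opt/otrs: $!";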

To manage step 2, we first have to find the directory in which the unit tests are stored. Because the subdirectory under otrs/scripts/test is named after the add-on itself, we can read the .sopm, extract the <Name> tag and then iterate over the final directory otrs/scripts/test/$DirectoryName.

For internal reasons, the test stage automatically passes when there are no unit tests in the directory: we have legacy add-ons without any unit tests, and not every future add-on will require them.

Now that we can iterate over our unit tests, let’s take a look at how we can execute them using Dev::UnitTest::Run.

The command Dev::UnitTest::Run only needs the name of the .t file to run the unit test, completely ignoring the subdirectory structure. So if a unit test is stored at /opt/otrs/scripts/test/addon-folder/unitTest.t, the command to execute looks like this:

sudo -u otrs /opt/otrs/bin/otrs.Console.pl Dev::UnitTest::Run --test unitTest

In practice, we would loop over our local otrs/scripts/test/<dirname> subdirectory, extract each unit test’s name via regex, execute it and capture the result. The result determines whether a unit test was successful and, in turn, whether the job passes or fails.
Let’s take a look at a sample output of Dev::UnitTest::Run:

shell:/opt/otrs> bin/otrs.Console.pl Dev::UnitTest::Run --test Calendar
+-------------------------------------------------------------------+
/opt/otrs/scripts/test/Calendar.t:
+-------------------------------------------------------------------+
................................................
=====================================================================
yourhost ran tests in 2s for OTRS 6.0.x git
All 48 tests passed.

Disclaimer: Calendar.t was shortened for formatting reasons. Originally, it included 97 unit tests.

This is the non-verbose output; with the --verbose flag set, every output from helpers like True would be shown in detail. The problem with the above approach should become apparent from this output: how is our script going to know which unit tests failed or succeeded? The output does not boil down to a simple 0 or 1, which would be the easiest to work with.

Fortunately, Dev::UnitTest::Run has a flag called --post-test-script, which lets us execute a bash script after each run of the command. In addition, the command passes certain variables to this script, which we can then output; the most relevant ones are %TestOk% and %TestNotOk%. The resulting string output from the command can then simply be filtered via regex, and our variables $TestOK and $TestNotOK, which track successful and failed tests respectively, can be incremented depending on the result.
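
With the echo used later in TestExecution.pl, the line appended to the command output looks something like this (the numbers are illustrative):

Test_OK:48###Test_NOT_OK:0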

This is what the final TestExecution.pl would look like:

package TestExecution;

use strict;
use warnings;
use utf8;

use File::Copy::Recursive qw(dircopy);
use libs::Kernel::System::Main;

# This is for our shared library
use FindBin qw($Bin);
use lib "$Bin/libs";
use lib "$Bin/libs/Kernel/cpan-lib";
use lib "$Bin/libs/Kernel";
use lib "$Bin/libs/cpan";

# allocate new hash for object
my $Self = {};
bless($Self);

# Save successful and failed tests respectively
my $TestOK = 0;
my $TestNotOK = 0;

my $FileString;

# Find the .sopm file in our root node
my $SourcePath = glob("*.sopm");

# Read out the content from the .sopm file
my $ContentRef = Kernel::System::Main->FileRead(
    Location => $SourcePath,
    Mode     => 'utf8',        # optional - binmode|utf8
    Result   => 'SCALAR',      # optional - SCALAR|ARRAY
);

if ( !$ContentRef || ref $ContentRef ne 'SCALAR' ) {
    die "File $SourcePath is empty / could not be read!\n";
}

$FileString = ${$ContentRef};

my $DirName;

# Here we filter out the <Name> </Name> tag inside the .sopm to use as our directory name
if ($FileString =~ m/<Name>([a-zA-Z0-9üäö,\-\_]+)<\/Name>/gm) {
    $DirName = $1;
}

print STDOUT "otrs/scripts/test/$DirName\n";

# Read out the unit tests from our local subdirectory
my @Directory = Kernel::System::Main->DirectoryRead(
    Directory => "otrs/scripts/test/$DirName",
    Filter    => '*',
    Recursive => 1,
);

if (!@Directory) {
    print STDOUT "No unit tests found!\n";

    # Nothing to execute, so let the test stage pass.
    exit 0;
}

# Execute step 1, copying local to remote
dircopy("otrs", "/opt/otrs");

# Execute step 2
for my $ID (@Directory) {
    # Find the unit test name with regex
    if ($ID =~ m/([a-zA-Z0-9]*)\.t$/g) {
        # Execute the unit test, returning our --post-test-script output
        my $Result = `sudo -u otrs /opt/otrs/bin/otrs.Console.pl Dev::UnitTest::Run --test $1 --post-test-script 'echo Test_OK:%TestOk%###Test_NOT_OK:%TestNotOk%'`;
        
        # Increment our variables depending on result.
        if ($Result =~ m/Test_OK:(\d+)/gm) {
            $TestOK += $1;
        }
        if ($Result =~ m/Test_NOT_OK:(\d+)/gm) {
            $TestNotOK += $1;
        }
    }
}

print STDOUT "Test OK: $TestOK\n";
print STDOUT "Test NOT OK: $TestNotOK\n";

# If a test fails, fail build
if ($TestNotOK) {
    die "One ore more tests failed!";
}

1;

Notice that for executing Dev::UnitTest::Run, I use backticks instead of open. This is purely for readability, as open would require splitting every single parameter into an individual string.
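
For comparison, the list form would look roughly like this ($TestName stands in for the name captured by the regex above):

open my $fh, '-|',
    'sudo', '-u', 'otrs',
    '/opt/otrs/bin/otrs.Console.pl', 'Dev::UnitTest::Run',
    '--test', $TestName,
    '--post-test-script', 'echo Test_OK:%TestOk%###Test_NOT_OK:%TestNotOk%'
    or die "Can't run Dev::UnitTest::Run: $!";
my $Result = do { local $/; <$fh> };    # slurp the entire output
close $fh;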

Now, our test stage is able to pass or fail the build depending on whether or not the unit tests were successful.

Gathering test coverage in OTRS6

I have tried many times and in many different ways to calculate test coverage in OTRS6. In a regular Perl module, the file structure is completely different and, as such, tests work completely differently as well. For example, here is the tree structure of File::Copy::Recursive, the Perl module we used to copy our directory contents:

.
├── Changes
├── lib
│   └── File
│       └── Copy
│           └── Recursive.pm
├── Makefile.PL
├── MANIFEST
├── META.json
├── META.yml
├── README
├── README.md
└── t
    ├── 00.load.t
    ├── 01.legacy.t
    ├── 02.legacy-symtogsafe.t
    ├── 03.github-issue-5.t
    ├── 04.readonly-dir.t
    └── 05.legacy-pathmk_unc.t

4 directories, 14 files

As we can see, the test files are stored within the t/ directory (instead of otrs/scripts/test), the actual logic is stored in lib/ (instead of Kernel/System or Custom/Kernel/System) and there is a Makefile.PL, which does not exist in OTRS6. We also do not use modules like Test::More or Test::Harness when writing OTRS test files, since Kernel::System::UnitTest works differently under the hood.
An excellent module for code coverage is Devel::Cover. Given the above structure, it can find its way around and calculate code coverage quickly and fairly accurately. However, since OTRS6’s file structure is completely different, Devel::Cover does not work at all and there is no reasonable way to get it to work.
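
For a conventional module layout like the one above, a coverage run can be as simple as this (assuming Devel::Cover is installed):

cover -test

This runs the test suite with coverage enabled and prints a per-file coverage summary afterwards.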

So far, I’ve not come up with a way to accurately calculate test coverage in OTRS6.

Conclusion

In conclusion, we can see that implementing automatic unit test execution in combination with GitLab CI is fairly simple. First, we have to prepare our .gitlab-ci.yml test stage, where we specify a separate TestExecution.pl file to be executed. In this separate Perl file, we first extract the name of the OTRS add-on via the .sopm file’s <Name> tag, which we later use to execute all unit tests found inside the otrs/scripts/test/<dirname> subdirectory. Then, we copy all contents of the “local” otrs/ directory over to the “remote” /opt/otrs directory to be able to execute unit tests with the proper dependencies (e.g. Kernel::System::Valid) in place. The customized output from the unit tests is then returned to the CI, which passes or fails the job depending on the state of the unit tests.

One question remains: How can we make sure that all dependencies that are specified in the <ModuleRequired> tag are installed on the system? This is still something I need to work on, and I will update this post once I’ve found a solution. Though admittedly, it will probably be fairly straightforward, provided App::cpanminus is installed on the system.
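
A rough sketch of what that could look like, reusing $FileString from TestExecution.pl (untested, and the tag parsing is deliberately simplistic):

# Collect every <ModuleRequired> entry from the .sopm content ...
my @Modules = $FileString =~ m{<ModuleRequired[^>]*>([^<]+)</ModuleRequired>}g;

# ... and install each one via cpanminus before the tests run.
for my $Module (@Modules) {
    system( 'cpanm', '--notest', $Module ) == 0
        or die "Could not install dependency $Module!";
}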
