I've mentioned elsewhere on this blog that our core products are built using standard batch files, which are part of the product's source so they can be either built manually or from Jenkins. Over the last year I've been gradually converting our internal libraries into Nuget packages, hosted on private servers. These packages are also built with a simple batch file, although they currently aren't part of the CI processes and also usually need editing before they can be run again.
After recently discovering that my StartSSL code signing certificate was utterly useless, I spent the better part of a day rebuilding and publishing all the different packages with a new non-crippled certificate. After that work was done, I decided it was high time I built the packages using the CI server.
Rather than continue with the semi-manual batch files, I decided to make use of the pipeline functionality that was added to Jenkins, which to date I hadn't looked at.
I suppose to start with it would be helpful to see an existing build file for one of our libraries and then show how I created a pipeline to replace this file. The library in question is named Cyotek.Core and has nothing to do with .NET Core, but has been the backbone of our common functionality since 2009.
These are the steps involved for building one of our Nuget packages:

- Update the `AssemblyInfo.cs` file with a new version (manual)
- Build the solution and run the tests
- Create the Nuget package
- Update the publish command with the new package filename (manual)
- Publish the package to our Nuget server

A few inconvenient manual steps there, let's see how Jenkins will help.
As it turns out, due to the way my environment is set up and how projects are built, my scenario is a little bit more complicated than it might otherwise be.
Our SVN repository is laid out as follows:

- `/` - Contains a `nuget.config` file so that all projects share a single package folder, and also contains the strong name key used by internal libraries
- `/build` - Numerous batch scripts for performing build actions, and InnoSetup includes for product deployment
- `/lib` - Native libraries for which a Nuget package isn't (or wasn't) available
- `/resources` - Graphics and other media that can be linked by individual projects without having multiple copies of common images scattered everywhere
- `/source` - Source code
- `/tools` - Binaries for tools such as NUnit and internal deployment tools, so build agents have the resources they need to work correctly

Our full products check out a full copy of the entire repository and while that means there are generally no issues with missing files, it also means that new workspaces take a very long time to check out a large amount of data.
All of our public libraries (such as ImageBox) are self contained. For the most part the internal ones are too, except for the build processes and/or media resources. There are the odd exceptions however, one of them being Cyotek.Core - we use a number of Win32 API calls in our applications, normally defined in a single interop library. However, there's a couple of key libraries which I want dependency free, and Cyotek.Core is one of them. That doesn't mean I want to duplicate the interop declarations though. Our interop library groups calls by type (GDI, Resources, Find etc.) and has separate partial code files for each one. The libraries I want dependency free can then just link the necessary files, meaning no dependencies, no publicly exposed interop API, and no code duplication.
At the simplest level, a pipeline breaks your build down into a series of discrete tasks, which are then executed sequentially. If you've used Gulp or Grunt then the pattern should be familiar.
A pipeline is normally comprised of one or more nodes. Each node represents a build agent, and you can customise which agents are used (for example to limit some actions to being only performed on a Windows machine).
Nodes then contain one or more stages. A stage is a collection of actions to perform. If all actions in the stage complete successfully, the next stage in the current node is then executed. The Jenkins dashboard will show how long each stage took to execute and if the execution of the stage was successful. Jenkins will also break the log down into sections based on the stages, so when you click a stage in the dashboard, you can view only the log entries related to that stage, which can make it easier to diagnose some build failures (the full output log is of course still available).
The screenshot below shows a pipeline comprised of 3 stages.
Pipelines are written in a custom DSL based on a language named Groovy, which should be familiar to anyone used to C-family programming languages. The following snippet shows a sample job that does nothing but print a message into the log.
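A minimal example along these lines (the stage name and message are just placeholders):

```groovy
node {
  stage('Example') {
    // echo is a built-in pipeline step that writes to the build log
    echo 'Hello, world!'
  }
}
```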
Jenkins offers a number of built-in commands, but the real power of the pipeline (as with freestyle jobs) is the ability to call any installed plugin, even if it hasn't been explicitly designed with pipelines in mind.
To create a new pipeline, choose New Item from Jenkins, enter a name then select the Pipeline option. Click OK to create the pipeline ready for editing.
Compared to traditional freestyle jobs, there are very few configuration options, as you will be writing script to do most of the work.
Ignore all the options for now and scroll to the bottom of the page where you'll find the pipeline editor.
As the screenshot above shows, I divided the pipeline into 3 stages, each of which will perform some tasks:

- Build - check out the source code, clean up and recreate the artefact directories, restore Nuget packages, update the `AssemblyInfo.cs` version, and compile the solution
- Test - run the NUnit tests and publish the results to the dashboard
- Deploy - optionally sign the assemblies, then create and publish the Nuget package

Quite a list! Let's get started.
Jenkins recommends you create the pipeline script in a separate `Jenkinsfile` and check this into version control. This might be a good idea once you have finalised your script, but while developing it is probably a better idea to save it in-line.

With that said, I still recommend developing the script in a separate editor and then copying and pasting it into Jenkins. I don't know if it is the custom theme I use or something else, but the editor is really buggy and the cursor doesn't appear in the right place, making deleting or updating characters an interesting game of chance.
I want all the actions to occur in the same workspace / agent, so I'll define a single node containing my three stages. As a lot of my packages will be compiled the same way, I'm going to try and make it easier to copy and paste the script and adjust things in one place at the top of the file, so I'll declare some variables with these values.
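The start of my script looks something like the following sketch; `libName` and `testsName` are the values I change per-library, while the other names and paths are illustrative only:

```groovy
node {
  // per-library values - adjust these when copying the script
  def libName = 'Cyotek.Core'
  def testsName = 'Cyotek.Core.Tests'

  // double quoted strings are interpolated, so ${...} tokens (including
  // Jenkins environment variables such as env.WORKSPACE) are expanded;
  // backslashes have to be escaped
  def sourceDir = "${env.WORKSPACE}\\source\\${libName}"
  def slnFile = "${sourceDir}\\${libName}.sln"

  // ... stages follow ...
}
```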
In the above snippet, you may note I used a combination of single and double quoting for strings. Similar to PowerShell, Groovy does different things with strings depending on if they are single or double quoted. Single quoted strings are treated as-is, whereas double quoted strings will be interpolated - the `${TOKEN}` patterns will be automatically replaced with the appropriate value. In the example above, I'm interpolating both variables I've defined in the script and also standard Jenkins environment variables.

You'll also note the use of escape characters - if you're using backslashes you need to escape them. You also need to escape single/double quotes if they match the quote the string itself is using.
I hadn't noticed this previously given that I was always checking out the entire repository, but the `checkout` command lets you specify multiple locations, customising both the remote source and the local destination. This is perfect, as it means I can now grab the bits I need. I add a `checkout` command to the Build stage as follows.
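A sketch of the result - the repository URL and credentials ID are placeholders, though the parameter names (`locations`, `remote`, `local`, `depthOption`) are those used by the Subversion plugin:

```groovy
checkout([$class: 'SubversionSCM',
  locations: [
    // root level only, for nuget.config and the strong name key
    [remote: 'https://svn.example.com/trunk',
     local: '.', depthOption: 'files', credentialsId: 'svn-creds'],
    // the library itself, plus the linked interop files
    [remote: "https://svn.example.com/trunk/source/${libName}",
     local: "source\\${libName}", credentialsId: 'svn-creds'],
    [remote: 'https://svn.example.com/trunk/source/Cyotek.Win32',
     local: 'source\\Cyotek.Win32', credentialsId: 'svn-creds'],
    // build scripts and tools such as NUnit
    [remote: 'https://svn.example.com/trunk/build',
     local: 'build', credentialsId: 'svn-creds'],
    [remote: 'https://svn.example.com/trunk/tools',
     local: 'tools', credentialsId: 'svn-creds']
  ],
  workspaceUpdater: [$class: 'UpdateUpdater']])
```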
I didn't write the bulk of the `checkout` commands by hand; instead I used Jenkins' built-in Snippet Generator to set all the parameters using the familiar GUI and generate the required script from that, at which point I could start adding extra locations, tinkering with formatting etc.
As you can see, I have configured different `local` and `remote` attributes for each location to mimic the full repo. I've also set the root location to only get the files at the root level using the `depthOption` - otherwise it would check out the entire repository anyway!
If I now run the build, everything is swiftly checked out to the correct locations. Excellent start!
Well actually, it wasn't. While I was testing this pipeline, I was also checking in files elsewhere in the repository, and as I'd enabled polling for the pipeline, it kept needlessly triggering builds because I'd included the repository root for the strong name key. (After this blog post is complete I think I'll do a little spring cleaning on the repository!)
In freestyle projects, I configure patterns so that builds are only triggered when changes are made to the folders that actually contain the application files. However, I could not get the `checkout` command to honour either the `includedRegions` or `excludedRegions` properties. Fortunately, when I took another look at the built-in Snippet Generator, I noticed the command supported two new properties - `changelog` and `poll`, the latter of which controls if polling is enabled. So the solution seemed simple - break the `checkout` command into two different commands, one to do the main project checkout and another (with `poll` set to `false`) to checkout supporting files.
The Build stage now looks as follows. Note that I had to put the "support" checkout first, otherwise it would delete the results of the previous checkout (again, probably due to the root level location... sigh). You can always check the Subversion Polling Log for your job to see what SVN URIs it's looking for.
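Which ends up looking something like this (URLs again being placeholders):

```groovy
stage('Build') {
  // supporting files - changes here should not trigger a build
  checkout(changelog: false, poll: false,
    scm: [$class: 'SubversionSCM',
      locations: [
        [remote: 'https://svn.example.com/trunk',
         local: '.', depthOption: 'files', credentialsId: 'svn-creds'],
        [remote: 'https://svn.example.com/trunk/source/Cyotek.Win32',
         local: 'source\\Cyotek.Win32', credentialsId: 'svn-creds'],
        [remote: 'https://svn.example.com/trunk/build',
         local: 'build', credentialsId: 'svn-creds'],
        [remote: 'https://svn.example.com/trunk/tools',
         local: 'tools', credentialsId: 'svn-creds']
      ],
      workspaceUpdater: [$class: 'UpdateUpdater']])

  // the project itself - this checkout is polled
  checkout([$class: 'SubversionSCM',
    locations: [
      [remote: "https://svn.example.com/trunk/source/${libName}",
       local: "source\\${libName}", credentialsId: 'svn-creds']
    ],
    workspaceUpdater: [$class: 'UpdateUpdater']])
}
```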
A few minutes later I checked something else in... and wham, the pipeline built itself again (it behaved fine after that though). I had a theory that it was because Jenkins stored the repository poll data separately and only parsed it from the DSL when the pipeline was actually run rather than saved, but on checking the raw XML for the job there wasn't anything extra. So that will have to remain a mystery for now.
As I'm going to be generating Nuget packages and running tests, I'll need some folders to put the output into. I already know that NUnit won't run if the specified test results folder doesn't exist, and I don't want to clutter the root of the workspace with artefacts even if it is a temporary location.
For all its apparent power, the pipeline DSL also seems quite limiting at times. It provides a (semi-useless) remove directory command, but doesn't have a command for actually creating directories. Not to worry though, as it does have `bat` and `sh` commands for invoking either Windows batch or Unix shell files. As I'm writing this blog post from a Windows perspective, I'll be using ye-olde DOS commands.
But, before I create the directories, I'd better delete any existing ones to make sure any previous artefacts are removed. There's a built-in `deleteDir` command which recursively deletes a directory - the current directory, which is why I referred to it as semi-useless above; I would prefer to delete a directory by name.
Another built-in command is `dir`. Not synonymous with the DOS command, this helpful command changes directory, performs whatever actions you define, then restores the original directory - the equivalent of the `PUSHD`, `CD` and `POPD` commands in my batch file at the top of this post.
The following snippets will delete the `nuget` and `testresults` directories if they exist. If they don't, then nothing will happen. I found this a bit surprising - I would have expected it to crash given I told it to delete a directory that doesn't exist.
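In other words, something like:

```groovy
// dir runs the nested steps with the named directory as the current
// directory, restoring the original afterwards; deleteDir then
// recursively deletes the current directory
dir('nuget') {
  deleteDir()
}
dir('testresults') {
  deleteDir()
}
```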
We can then issue commands to create the directories. Normally I'd use `IF NOT EXIST <NAME> MKDIR <NAME>`, but as we have already deleted the folders we can just issue create commands.
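So the creation is just a couple of `bat` calls:

```groovy
bat 'MKDIR nuget'
bat 'MKDIR testresults'
```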
And now our environment is ready - time to build.
First thing to do is to restore packages by calling `nuget restore` along with the filename of our solution.
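Assuming `nuget.exe` is available on the PATH (ours actually lives in the repository's `tools` folder, so adjust the path to suit):

```groovy
bat "nuget restore \"source\\${libName}\\${libName}.sln\""
```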
Earlier I mentioned that I usually had to edit the projects before building a Nuget package - this is due to needing to update the version of the package, as by default Nuget servers don't allow you to overwrite packages with the same version number. Our `.nuspec` files are mostly set up to use the `$version$` token, which then pulls the true version from the `AssemblyInformationalVersion` attribute in the source project.
The core products run a batch command called `updateversioninfo3` which will replace part of that version with the contents of the Jenkins `BUILD_NUMBER` environment variable, so I'm going to call that here.
I don't want to get sidetracked as this post is already quite long, so I'll probably cover this command in a different blog post.
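The call ends up looking something like this (the exact arguments to `updateversioninfo3` are specific to our scripts, so treat this as illustrative):

```groovy
bat """CALL build\\initbuild.bat
CALL build\\updateversioninfo3.bat "source\\${libName}\\Properties\\AssemblyInfo.cs"
"""
```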
If you're paying attention, you'll see the string above looks different from previous commands. To make it easy to specify tool locations and other useful values our command scripts may need, we have a file named `initbuild.bat` that sets up these values in a single place.
However, each Jenkins `bat` call is a separate environment. Therefore if I call `initbuild` from one `bat`, the values will be lost in the second. Fortunately Groovy supports multi-line strings, denoted by wrapping them in triple quotes (single or double). As I'm using interpolation in the string as well, I need to use double.
All preparation is completed and it's now time to build the project. Although my `initbuild` script sets up a `msbuildexe` variable, I wanted to test Jenkins tool commands and so I defined a MSBuild tool named `MSBuild14`. The `tool` command returns that value, so I can then use it to execute a release build.
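For example (the MSBuild switches here are just typical ones):

```groovy
// tool returns the path configured for the named tool installation
def msbuild = tool 'MSBuild14'

bat "\"${msbuild}\" \"source\\${libName}\\${libName}.sln\" /p:Configuration=Release /verbosity:minimal /nologo"
```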
With our Build stage complete, we can now move onto the Test stage - which is a lot shorter and simpler.
I use NUnit to perform all of the testing of our library code. By combining that with the NUnit Plugin it means the test results are directly visible in the Jenkins dashboard, and I can see new tests, failed tests, or if the number of tests suddenly drops.
Note that the NUnit plugin hasn't been updated to support reports generated by NUnit version 3, so I am currently restricted to using NUnit 2.
After that's run, I call the publish. Note that this plugin doesn't participate with the Jenkins pipeline API and so it doesn't have a dedicated command. Instead, you can use the `step` command to execute the plugin.
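The Test stage therefore looks something like this (the NUnit console path follows our `tools` folder layout and is an assumption):

```groovy
stage('Test') {
  // run the NUnit 2 console, writing the report into testresults
  bat "tools\\nunit\\nunit-console.exe \"source\\${testsName}\\bin\\Release\\${testsName}.dll\" /xml:testresults\\${testsName}.xml"

  // publish the results via the NUnit plugin
  step([$class: 'NUnitPublisher',
        testResultsPattern: 'testresults/*.xml',
        debug: false,
        keepJUnitReports: true,
        skipJUnitArchiver: false])
}
```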
Rather unfortunately the Snippet Editor wouldn't work correctly for me when trying to generate the above step - it would always generate the code `<object of type hudson.plugins.nunit.NUnitPublisher>`. Fortunately Ola Eldøy had the answer.
However, there's actually a flaw with this sequence - if the `bat` command that executes NUnit returns a non-zero exit code (for example if the test run fails), the rest of the pipeline is skipped and you won't actually see the failed tests appear in the dashboard.
The solution is to wrap the `bat` call in a `try ... finally` block. If you aren't familiar with the try...catch pattern, basically you try an operation, catch any problems, and finally perform an action even if the initial operation failed. In our case, we don't care if any problems occur, but we do want to publish any available results.
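Applied to the Test stage, that gives:

```groovy
stage('Test') {
  try {
    bat "tools\\nunit\\nunit-console.exe \"source\\${testsName}\\bin\\Release\\${testsName}.dll\" /xml:testresults\\${testsName}.xml"
  } finally {
    // always publish whatever results were produced
    step([$class: 'NUnitPublisher',
          testResultsPattern: 'testresults/*.xml',
          debug: false,
          keepJUnitReports: true,
          skipJUnitArchiver: false])
  }
}
```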
Now even if tests fail, the publish step will still attempt to execute.
With building and testing out of the way, it's time to create the Nuget package. As all our libraries that are destined for packages have `.nuspec` files, then we just call `nuget pack` with the C# project filename.
Optionally, if you have an authenticode code signing certificate, now would be a good time to apply it.
I create a Deploy stage containing the appropriate commands for signing and packaging, as follows
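Something along these lines - the signtool arguments in particular will depend on your own certificate setup:

```groovy
stage('Deploy') {
  // signing is optional; /a automatically selects a suitable installed
  // certificate, and /t adds a timestamp to the signature
  bat "signtool sign /a /t http://timestamp.digicert.com \"source\\${libName}\\bin\\Release\\${libName}.dll\""

  // nuget pack picks up the .nuspec file sitting alongside the project
  bat "nuget pack \"source\\${libName}\\${libName}.csproj\" -Properties Configuration=Release -OutputDirectory nuget"
}
```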
Once the package has been built, then we can publish it. In my original batch files, I have to manually update the file to change the version. However, `NUGET.EXE` actually supports wildcards - and given that the first stage in our pipeline deletes previous artefacts from the build folder, then there can't be any existing packages. Therefore, assuming our `updateversioninfo3` did its job properly, and our `.nuspec` files use `$version$`, we shouldn't be creating packages with duplicate names and have no need to hard-code filenames.
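Which makes the push a one-liner (the feed URL is a placeholder, and in practice the API key should come from Jenkins credentials rather than being embedded in the script):

```groovy
bat "nuget push nuget\\*.nupkg -Source https://nuget.example.com/api/v2/package -ApiKey %NUGET_API_KEY%"
```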
And that seems to be it. With the above script in place, I can now build and publish Nuget packages for our common libraries automatically. Which should serve as a good incentive to get as much of our library code into packages as possible!
During the course of writing this post, I have tinkered and adapted the original build script multiple times. After finalising both the script and this blog post, I used the source script to create a further 3 pipelines. In each case all I had to do was change the `libName` and `testsName` variables, remove the unnecessary Cyotek.Win32 checkout location, and in one case add a new checkout location for the `libs` folder. There are now four pipelines happily building packages, so I'm going to class this as a success and continue migrating my Nuget builds into Jenkins.
My freestyle jobs have a step to email individuals when the builds are broken, but I haven't added this to the pipeline jobs yet. As subsequent stages don't execute if the previous stage has failed, that implies I'd need to add a `mail` command to each stage in another `try ... finally` block - something to investigate another day.
The complete script can be downloaded from a link at the end of this post.