Running multiple Nuget package builds from a single parameterised Jenkins pipeline script
A follow up post which describes how I adapted my original Jenkins pipeline script into a single parameterised script shared by all of our Nuget package build and publish jobs.
After my first experiment in building and publishing our Nuget packages using Jenkins, I wasn't actually anticipating writing a follow up post. As it transpires however, I was unhappy with the level of duplication - at the moment I have 19 packages for our internal libraries, and there are around 70 other non-product libraries that could be turned into packages. I don't really want 90+ copies of that script!
As I did mention originally, Jenkins does recommend that the build script is placed into source control, so I started looking at doing that. I wanted to have a single version that was capable of handling different configurations that some projects have and that would receive any required parameters directly from the Jenkins job.
Fortunately this is both possible and easy to do as you can add custom properties to a Jenkins job which the Groovy scripts can then access. This article will detail how I took my original script, and adapted it to handle 19 (and counting!) package compile and publish jobs.
Parameters are switched off and hidden by default, but it's easy enough to enable them. In the General properties for your job, find and tick the option marked This project is parameterised.
This will then show a button marked Add Parameter which, when clicked, will show a drop-down of the different parameter types available. For my script, I'm going to use single line string, multi-line string and boolean parameters.
The parameter name is used as an environment variable in batch jobs, so you should avoid common parameter names such as `PATH`, and also ensure that the name doesn't include special characters such as spaces.
By the time I'd added 19 pipeline projects (including converting the four I'd created earlier) into parameterised builds running from the same source script, I'd ended up with the following parameters:

| Type | Name | Example Value |
| --- | --- | --- |
| String | LIBNAME | Cyotek.Core |
| String | TESTLIBNAME | Cyotek.Core.Tests |
| String | LIBFOLDERNAME | src |
| String | TESTLIBFOLDERNAME | tests |
| Multi-line | EXTRACHECKOUTREMOTE | /source/Libraries/Cyotek.Win32 |
| Multi-line | EXTRACHECKOUTLOCAL | .\source\Libraries\Cyotek.Win32 |
| Boolean | SIGNONLY | false |
More parameters than I really wanted, but they cover the different scenarios I need. Note that with the exception of `LIBNAME`, all other parameters are optional and the build should still run even if they aren't actually defined.
There are at least three ways that I know of to access the parameters from your script:

* `env.<ParameterName>` - returns the string parameter from environment variables. (You can also use `env.` to get other environment variables, for example `env.ProgramFiles`)
* `params.<ParameterName>` - returns the strongly typed parameter
* `"${<ParameterName>}"` - returns the value via interpolation

Of the three approaches above, the first two return `null` if you request a parameter which doesn't exist - very helpful for when you decide to add a new parameter later and don't want to update all the existing projects!
The third, however, will crash the build. It'll be easy to diagnose if this happens, as the output log for the build will contain lines similar to the following:
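For example, referencing an undefined parameter via interpolation produces Groovy's standard missing property error, along these lines (the property name will match whichever parameter was missing):

```
groovy.lang.MissingPropertyException: No such property: LIBNAME for class: groovy.lang.Binding
```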
So my advice is to only use the interpolation versions when you can guarantee the parameters will exist.
In my first attempt at creating the pipeline job, I had a block of variables defined at the top of the script so I could easily edit them when creating the next pipeline. I'm now going to adapt that block to use parameters.
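As a rough sketch (assuming the parameter names from the table above, and the `combinePath` helper described in a moment), the adapted block looks something like this:

```groovy
// top-of-script variables, now driven by job parameters instead
// of per-project hard-coded values
def libName = params.LIBNAME
def testLibName = params.TESTLIBNAME

// the folder parameters are optional, so combinePath has to
// tolerate missing or empty values
def libFolder = combinePath(params.LIBFOLDERNAME, libName)
def testLibFolder = combinePath(params.TESTLIBFOLDERNAME, testLibName)
```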
I'm using `params` to access the parameters to avoid any interpolation crashes. As it's possible the path parameters could be missing or empty, I'm also using a `combinePath` helper function. This is a very naive implementation and should probably be made a little more robust. Although Java has a `File` object which we could use, it is blocked by default as Jenkins runs scripts in a sandbox. As I don't think turning off security features is particularly beneficial, this simple implementation will serve the requirements of my build jobs easily enough.
Note: The helper function must be placed outside `node` statements.
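My assumption of roughly what such a helper looks like - a naive sketch using plain string concatenation, with no normalisation of separators:

```groovy
// naive path combination; defined outside any node statement,
// per the note above
def combinePath(base, relative) {
  if (relative == null || relative.trim().length() == 0) {
    return base == null ? '' : base
  }

  if (base == null || base.trim().length() == 0) {
    return relative
  }

  return base.endsWith('\\') ? base + relative : base + '\\' + relative
}
```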
The multi-line string parameter is exactly the same as a normal string parameter; the difference simply seems to be the type of editor they use. So if you want to treat one as an array of values, you will need to build this yourself using the `split` function.
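For example, something along these lines turns the parameter into an array of non-blank lines:

```groovy
// a multi-line parameter arrives as a single string; split it on
// line breaks and discard any blank entries
def extraRemotes = params.EXTRACHECKOUTREMOTE.split('\r?\n').findAll { it.trim().length() > 0 }
```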
Some of my projects are slightly naughty and pull code files from outside their respective library folders. The previous version of the script had these extra checkout locations hard-coded, but that clearly will no longer suffice. Instead, by leveraging the multi-line string parameters, I have let each job define zero or more locations and check them out that way.
I chose to use two parameters, one for the remote source and one for the local destination, even though this complicates things slightly - but I felt it was better than trying to munge both values into a single line.
I simply parse the two parameters and issue a `checkout` command for each pair. It would possibly make more sense to do only a single `checkout` command with multiple locations, but this way got the command up and running with minimum fuss.
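A sketch of that loop; the `svnRoot` variable (holding the base repository URL) and the exact SCM configuration are illustrative assumptions:

```groovy
// pair up the remote and local lines and issue one checkout per pair
def extraRemotes = params.EXTRACHECKOUTREMOTE.split('\r?\n')
def extraLocals = params.EXTRACHECKOUTLOCAL.split('\r?\n')

for (int i = 0; i < extraRemotes.length; i++) {
  checkout([
    $class: 'SubversionSCM',
    locations: [[
      remote: svnRoot + extraRemotes[i].trim(),
      local: extraLocals[i].trim()
    ]]
  ])
}
```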
As not all my libraries have dedicated tests yet, I had defined a `hasTests` variable at the top of the script which will be true if the `TESTLIBNAME` parameter has a value. I could then use this to exclude the NUnit execution and publish steps from my earlier script, but that would still mean a Test stage would be present. Somewhat to my surprise, I found wrapping the `stage` statement in an `if` block works absolutely fine, although it has a bit of an odour. It does mean that empty test stages won't be displayed though.
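Condensed to its essentials, the pattern looks like this (the stage bodies are placeholders):

```groovy
// true only when a test library name was supplied
def hasTests = params.TESTLIBNAME != null && params.TESTLIBNAME.trim().length() > 0

node {
  stage('Build') {
    echo 'build and package steps here'
  }

  // wrapping the stage in an if block means jobs without tests
  // never show a Test stage, rather than showing an empty one
  if (hasTests) {
    stage('Test') {
      echo 'NUnit execution and publish steps here'
    }
  }
}
```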
Those were pretty much the only modifications I made to the existing script to convert it from something bound to a specific project to something I could use in multiple projects.
In my original article, I briefly mentioned one of the things I wanted the script to do was to archive the build artefacts but then never mentioned it again. That was simply because I couldn't get the command to work and I forgot to state that in the post. As it happens, I realised what was wrong while working on the improved version - I'd made all the paths in the script absolute, but this command requires them to be relative to the workspace.
The following command will archive the contents of the library's output folder along with the generated Nuget package.
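Something along these lines, with both patterns relative to the workspace; the exact paths are assumptions based on the folder parameters described earlier:

```groovy
// paths are relative to the workspace; absolute paths will fail
archiveArtifacts artifacts: "${params.LIBFOLDERNAME}/${params.LIBNAME}/bin/Release/*.*, *.nupkg"
```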
Now that I've got a (for the moment!) final version of the script, it's time to add it to SVN and then tell Jenkins where to find it. This way, all pipeline jobs can use the one script and automatically inherit any changes to it.
The steps below will configure an existing pipeline job to use a script file taken from SVN.
1. Open the configuration for the pipeline job and, in the Pipeline section, change the Definition field to Pipeline script from SCM
2. Select Subversion as the SCM and enter the URL of the repository folder containing the script
3. Enter the name of your script file in the Script Path field
4. Enter `.*` in the Excluded Regions field so that commits to the script's repository don't trigger unnecessary builds

Now instead of using an in-line script, the pipeline will pull the script right out of version control.
There are a couple of things to note however:

* The script is checked out into a folder named `workspace@script` inside the job's folder on the Jenkins server. In other words, it is checked out directly into your Jenkins installation. Originally I located the script in my `\build` folder along with all other build files, until I noted all the files were being checked out into multiple server paths, not the temporary work spaces. My advice therefore is to stick the script by itself in a folder so that it is the only file that is checked out, and perhaps change the Repository depth field to files.
* It is worth reiterating the point: the contents of this folder will be checked out onto the server where you have installed Jenkins, not slave work-spaces.
As it got a little tiresome creating the jobs manually over and over again, I ended up creating a dummy pipeline for testing. I created a new pipeline project, defined all the parameters and then populated these based on the requirements of one of my libraries. Then I'd try and build the project.
If (or once) the build was successful I'd clone that template project as the "official" pipeline, then update the template pipeline for the next project. Rinse and repeat!
To create a new pipeline based on an existing job:

1. Click New Item from the Jenkins main menu
2. Enter a name for the new pipeline
3. In the Copy from field, enter the name of the template job
4. Click OK to create the new job, then adjust its parameters to suit the target project
Using this approach saved me a ton of work setting up quite a few pipeline jobs.
Of course, as I was finalising the draft of this post it occurred to me that with a bit more work I could actually get rid of virtually all the parameters I'd just added.
* I could drop the `LIBNAME` parameter in favour of the built in `JOB_BASE_NAME` parameter
* As all my test projects are named `<ProjectName>.Tests`, I could auto generate that value and use the `fileExists` command to detect if a test project was present (see the sketch below)
* The `LIBFOLDERNAME` and `TESTLIBFOLDERNAME` parameters are required because not all my libraries are consistent with their paths - some are directly in `/src`, some are in `/src/<ProjectName>` and so on. Spending a little time reworking the file system to be consistent means I could drop another two parameters

Happily, thanks to having all the builds running from one script, this means when I get around to making these improvements there's only one script to update (excluding deleting the obsolete parameters of course).
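As a rough sketch of the second idea, assuming test projects live in a `tests` folder named after the library:

```groovy
// derive names from the job instead of parameters
def libName = env.JOB_BASE_NAME
def testLibName = libName + '.Tests'

// fileExists must run inside a node so a workspace is available
node {
  def hasTests = fileExists("tests\\${testLibName}\\${testLibName}.csproj")
  echo "Tests present: ${hasTests}"
}
```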
And this concludes my second article on Jenkins pipelines; as always, comments welcome.