If you’re writing Step templates, here are some things you may want to consider.
* Write a script you can run inside or outside of Octopus
There are two reasons why you're unlikely to write substantial scripts within the Octopus IDE itself. First, you don't get the IntelliSense and completion you get with the PowerShell ISE. Second, Octopus has no scratchpad for debugging scripts: the only way to test your code is to create a release, pick the step or steps you want to test, and deploy that release to an environment. There's nothing wrong with that, but in my experience it does tend to mean that Octopus finds itself the recipient of scripts you've already developed and debugged elsewhere, rather than the place your coding efforts begin.
While that's unlikely to change, I'd note parenthetically that there will always be a place in the market for third-party tools that offer some of the development and debugging niceties of your favourite IDE without tying you into a formal project structure. That's what makes LINQPad so great, and before it there were tools like The Regulator, Snippet Compiler, ScriptCS and CS-Script.
So how do you ensure your scripts work both inside and outside Octopus? All but the most trivial Step templates are going to be parameterised, which means you'll be working with the $OctopusParameters hashtable. That's not going to be available outside of Octopus, so what do you do? Daniel Little suggests one approach here. He adds a helper method, Get-Param, which checks for the existence of the $OctopusParameters object. If it finds it, it retrieves the parameter of your choice; if it doesn't, it calls Get-Variable to pull back a standard PowerShell variable. This is great, but if you structure your Step templates a certain way -- as I'll show you in the next section -- you may not need to do this after all.
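In outline, the helper looks something like this (a minimal sketch of the idea rather than Daniel Little's exact code; the parameter names are illustrative):

```powershell
function Get-Param {
    param([string]$Name, $Default = $null)

    # Inside Octopus, parameters arrive via the $OctopusParameters hashtable
    if (Test-Path Variable:OctopusParameters) {
        if ($OctopusParameters.ContainsKey($Name)) {
            return $OctopusParameters[$Name]
        }
    }

    # Outside Octopus, fall back to an ordinary PowerShell variable
    $variable = Get-Variable -Name $Name -ErrorAction SilentlyContinue
    if ($variable -ne $null) {
        return $variable.Value
    }

    return $Default
}

# In a local session this picks up the $WebsiteName variable; inside Octopus
# it picks up the step parameter of the same name
$WebsiteName = 'MyLocalTestSite'
$siteName = Get-Param 'WebsiteName' -Default 'DefaultWebSite'
```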
* Let Step templates call Library scripts; then you can use either
Now I have a particular way that I want to use Octopus, which goes back to the days when I used to use Nolio to deploy applications. I want reusable Step templates that expose nice pieces of modular functionality (e.g. creating websites, downloading artefacts from TeamCity, etc.), and I want to back those Step templates with reusable scripts inherited from the Script libraries I choose to include in my project. There's a three-fold reason for this. Firstly, it means I can EITHER use a Step template in my workflow, OR I can use a script (I'll go into the reasons why you might choose one approach over the other in a subsequent post). Secondly -- and more importantly -- I know that behind the scenes the same code is being called, so that if someone finds a bug in my implementation of (say) a script to copy files and folders, I only need to fix the code in one place. I think that's pretty critical. Thirdly and finally, you can develop and test your scripts outside of Octopus without having to worry about the $OctopusParameters hashtable, because that's now handled in your Step template; your script remains blissfully ignorant of the way in which it was invoked.
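To make that concrete, here's a sketch of the shape I mean (New-DeployedWebsite is a hypothetical library function, and the parameter names are illustrative). The Step template body does the Octopus-specific work, and nothing else:

```powershell
# Step template body: the only place that touches $OctopusParameters
$siteName     = $OctopusParameters['WebsiteName']
$physicalPath = $OctopusParameters['PhysicalPath']

# New-DeployedWebsite comes from a Script library included in the project;
# because it takes plain parameters, it can be developed, tested and run
# entirely outside Octopus
New-DeployedWebsite -Name $siteName -PhysicalPath $physicalPath
```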
* Writing scripts once gives you more workflow options
There's also a subtler reason why it helps to put the bulk of your deployment functionality into PowerShell scripts rather than Step templates: Octopus Deploy offers only the most rudimentary of workflows, which execute from beginning to end and then stop. You can't easily add decision points, and you certainly can't make your workflows re-entrant. And if you decide you want to refactor a workflow (and, trust me, you'll want to do this a lot), you can't compress a set of parent steps by turning them into child steps, expand a set of child steps back into parent steps, or shuffle child steps between different parents. You have to delete and re-code each step you want to change.
However, that's just how the UI works, and you don't have to do things that way. Think about it like this: Octopus provides ONE workflow through the deployment, but that doesn't mean you can't create a much more sophisticated workflow; you just need to code it within the comparatively unsophisticated workflow container you've been given. And what *that* means is that, if you build up a decent script library, there's no reason your Step templates need to perform single, trivial activities. Within the confines of any particular template, they can loop, branch, and do anything you like.

I find this particularly useful when I have, say, a common deployment script that I want to apply to half a dozen different deliverables. Rather than instantiate the same Step template six times, plugging in almost identical parameters, I loop around inside a single template, doing the things I need. This way, I have a few system-wide steps with names like "Shut down Websites", which are trivial refactorings of what were previously steps like "Shut down IIS on App server", "Shut down IIS on Web server", etc.

Note that I'm not advocating in general that you make your steps any less granular than you might otherwise have done. What I'm saying is that for a deployment tool to work effectively, you need to match the granularity of your deployment steps to that of the particular deployment activity you are trying to perform within them. With one project, for example, I used a Step template to mark up each key in my web.config file. Once I got to half a dozen keys, I realised that this was a waste of a Step template, so I replaced it with the more generic step "Mark up Config files" -- within which I pasted my half dozen calls to Markup-ConfigFile, along the lines of the sketch below.
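As a rough sketch of what I mean (the key names and the exact signature of Markup-ConfigFile are illustrative assumptions), the generic step body is just a handful of calls:

```powershell
# One "Mark up Config files" step instead of six single-key Step templates.
# Markup-ConfigFile is the library function mentioned above; its signature
# here is assumed for illustration.
$webConfig = $OctopusParameters['WebConfigPath']

Markup-ConfigFile -Path $webConfig -Key 'ConnectionString' -Value $OctopusParameters['ConnectionString']
Markup-ConfigFile -Path $webConfig -Key 'ServiceEndpoint'  -Value $OctopusParameters['ServiceEndpoint']
Markup-ConfigFile -Path $webConfig -Key 'LogLevel'         -Value $OctopusParameters['LogLevel']
# ...and so on for the remaining keys
```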
* Use naming conventions for your parameters
As I've mentioned in a previous post, Octopus Deploy has a few vagaries where parameter resolution is concerned. If you can come up with a naming convention for your parameters, and then stick to it, it'll save you a lot of confusion.
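One possible convention (an illustrative suggestion, not a prescription): prefix every Step template parameter with the template's name, so it can't collide with your project variables and its origin is obvious in the deployment log:

```powershell
# Parameters declared as CreateWebsite.SiteName, CreateWebsite.AppPool, etc.
$siteName = $OctopusParameters['CreateWebsite.SiteName']
$appPool  = $OctopusParameters['CreateWebsite.AppPool']
```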
* Scripts which don't quite succeed should fail
What do I mean by this paradoxical statement? I mean that you should set $ErrorActionPreference = 'Stop' at the top of every script, and then individually handle the errors you know about (either with a try/catch or a per-cmdlet -ErrorAction SilentlyContinue). Yes, initially this makes for whiny, temperamental scripts, but if you stick with it, they grow into mature, dependable adults. Pragmatically, I'd always rather have a script that complains about nothing than one that claims to have done something that in reality it hasn't. When something is deployed wrong, the deployment log is your first port of call, and if you can't trust the detail inside that log, you will be in a world of pain.
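In practice the pattern looks something like this sketch (the cmdlet choices and parameter names are illustrative):

```powershell
# Fail fast by default: any unhandled error stops the script, and the step
# (and hence the deployment) is reported as failed
$ErrorActionPreference = 'Stop'

Import-Module WebAdministration                  # for Stop-Website (IIS)
$siteName = $OctopusParameters['WebsiteName']    # illustrative parameter

# An error we know about and handle deliberately
try {
    Stop-Website -Name $siteName                 # throws if the site is missing
}
catch {
    Write-Host "Site '$siteName' not found; nothing to stop"
}

# An error we've decided we can live with, ignored on this one call only
Remove-Item 'C:\Temp\staging\*' -Recurse -ErrorAction SilentlyContinue

# Anything else fails loudly, so the deployment log never lies
```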