Best Practices for Custom Deployments in XL Deploy
(January 22, 2019, https://blog.xebialabs.com/2019/01/22/best-practices-for-custom-deployments-in-xl-deploy/)

Whenever I give a demo of, or a training class on, the XebiaLabs DevOps Platform, I mention that all actions in deploying an application can be broken down into two categories: moving data (files) and executing commands.

In a typical deployment, the XL Deploy module of the XebiaLabs DevOps Platform performs these two actions on a remote system, to which a connection is made via SSH for Unix/Linux operating systems and WinRM for Windows.

XL Deploy has a convenient control task called “Check Connection” that proves that these two actions can be performed successfully. It transfers a dummy data file and runs a command to list the contents of a temporary directory. It proves that the protocols are correctly configured, firewalls and ports are open, and the login credentials are valid.

So are we ready to deploy?

Some users are tempted at this point to structure a deployment using plugins that provide these exact two actions in their most basic form: the “command” plugin with its cmd.Command object type and the “file” plugin with its file.File object type.

A user goes into XL Deploy and configures a deployment in this fashion:

Commands that precede the file transfers

cmd.Command object with order = 45

cmd.Command object with order = 55

File transfers

file.File object for the first file with default order 60

file.File object for the second file with default order 60

file.File object for the third file with default order 60

file.File object for the fourth file with default order 60

Commands that follow the file transfers

cmd.Command object with order = 65

cmd.Command object with order = 75
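As a sketch of what this configuration amounts to in a deployment-package manifest (the application name, command lines, file names, and placeholder are illustrative, and the exact property spellings should be checked against the manifest reference):

```xml
<udm.DeploymentPackage version="1.0" application="MyApp">
  <deployables>
    <!-- a command that precedes the file transfers -->
    <cmd.Command name="make-target-dir">
      <order>45</order>
      <commandLine>mkdir -p /opt/myapp</commandLine>
    </cmd.Command>
    <!-- a file transfer, default order 60 -->
    <file.File name="first-file" file="files/first-file.txt">
      <targetPath>{{TARGET_DIR}}</targetPath>
    </file.File>
    <!-- a command that follows the file transfers -->
    <cmd.Command name="restart-service">
      <order>65</order>
      <commandLine>service myapp restart</commandLine>
    </cmd.Command>
  </deployables>
</udm.DeploymentPackage>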

What are the pros and cons of this approach?

As seen in the next two images, the objects involved in the deployment are:

Easy to configure

Easy to read and comprehend

For the command object, the user simply enters the command to be executed in the command line property. For the file object, the user uploads the file and indicates the target path, at minimum. The example below uses a placeholder for the latter.

On the other hand, this approach has some shortcomings:

Fragile. You’re working with an open command line susceptible to errors.

Incomplete. The commands don’t take rollbacks and reruns into account effectively.

Not portable. You’ll have to rewrite the commands for another OS such as Windows.

Doesn’t indicate that these items belong together when they are part of a package containing other deployables.

Doesn’t provide the benefits of XL Deploy’s object model. For example, if you wanted to “subclass” this configuration for behavior slightly different from this, you would have to rewrite it.

Best Practices

XebiaLabs recommends the following best practices for XL Deploy when it comes to deployments not already supported by an existing plugin.

Combine the file artifacts, both text and binary, into a single zip-style archive.

This follows the same rationale for bundling application files together as a jar, war, or ear file. They move together through your CI/CD pipelines as a single unit, developed and deployed together. Pack them in the build job, and unpack them when they reach their final home on the target system.

Make use of classpath resources when able.

These might be installation binaries or control templates that don’t change between deployed versions and therefore don’t have to be bundled into the actual application files.

Control the commands required for your deployment with xl-rules and classpath scripts.

The scripts are easily parameterized, and XL Deploy can send the Linux or Windows version of a script depending on the OS targeted.

A Plugin Example

Now we’ll begin constructing a plugin by working in the XL-DEPLOY-SERVER/ext directory. As a first step, add a definition to your synthetic.xml:
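The definition itself isn’t reproduced here; a minimal sketch, assuming the conventional deployed-type/generate-deployable pattern (the demo.* type names and property names are illustrative), could look like:

```xml
<type type="demo.DeployedBestPracticeArchive" extends="udm.BaseDeployedArtifact"
      deployable-type="demo.BestPracticeArchive" container-type="overthere.Host">
  <generate-deployable type="demo.BestPracticeArchive"
                       extends="udm.BaseDeployableArchiveArtifact"/>
  <!-- target directories for the files we unzip -->
  <property name="binTargetDir" default="/opt/myapp/bin"/>
  <property name="confTargetDir" default="/opt/myapp/conf"/>
</type>
```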

Adding a definition to our synthetic.xml gives us the ability to define a custom artifact, along with some properties for the deployment, in this case the target directories for each of the files when we unzip them. Of course, you can define any properties necessary for the deployment, making use of such data-types (kinds) as strings, integers, booleans, key-value maps, or even references to other objects.

We have replaced each of our four commands with an os-script section, pointing to a script in the XL Deploy classpath. Each os-script tag represents a step, and for each one there are four properties that will be applied to it: a description, a script path, an order number, and a boolean to tell XL Deploy whether or not to upload the artifact(s) in the object.
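The rules file itself isn’t reproduced here; a sketch of one such rule, assuming the standard xl-rules namespace and showing two of the os-script steps (parameter names, notably upload-artifacts, should be verified against the XL Rules reference):

```xml
<rules xmlns="http://www.xebialabs.com/xl-deploy/xl-rules">
  <rule name="demo.BestPracticeArchive.CREATE" scope="deployed">
    <conditions>
      <type>demo.DeployedBestPracticeArchive</type>
      <operation>CREATE</operation>
    </conditions>
    <steps>
      <os-script>
        <description>Upload archive and unpack it</description>
        <script>demo/createBestPracticeUpload</script>
        <order>60</order>
        <upload-artifacts>true</upload-artifacts>
      </os-script>
      <os-script>
        <description>Run post-install commands</description>
        <script>demo/postInstall</script>
        <order>65</order>
        <upload-artifacts>false</upload-artifacts>
      </os-script>
    </steps>
  </rule>
</rules>
```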

Here is the deployment output for this script, which illustrates how XL Deploy uploads the script and the artifact to a temporary directory, and then executes the script from there on the host’s operating system.

Notice that the rules file only specified demo/createBestPracticeUpload, without the .sh or .ftl extensions. Since XL Deploy knows this deployment is going to a Linux system, it will look for the version of the script having the .sh extension. The .ftl extension directs XL Deploy to process the script through FreeMarker. If we wanted to deploy to Windows, we would have included a .bat or .cmd version of the script.

So we end up with the same result as we had with the four command objects and the four file objects. And there are many more options available in XL Deploy to make this example fully functional:

This type can be subclassed for variations on the core behavior.

Rollback behavior can be controlled with additional rules. See the rules reference for all the options available with XL Rules.

Creating a Deployment Model for Scripted DB Updates
(January 24, 2017, https://blog.xebialabs.com/2017/01/24/ordering-scripted-database-updates-across-multiple-schemata-xl-deploy/)

It is common for enterprises to work with hundreds of applications, each of those calling for a number of database scripts to be executed in a particular order. Needless to say, the execution and organization of these scripts can become complex and tedious. Instead of creating new steps for every application release, modeling your scripted database updates will organize your script execution into an automated, repeatable process that allows for quick changes and stability within the release.

Your application may call for a number of scripts to be executed in a particular order, where the flow bounces from one schema to another, then back to the first, then to another one altogether. Take a close look at the example below:

Let’s look at how to model the above scenario in XL Deploy.

To begin, we’ll model the infrastructure.

Each schema is modeled by a database client that connects to that schema on the proper database. Each schema could be in the same database or in different databases. Although the database is one of the properties of this client object, the schema is not, so we’ll rely on tag-matching to get the correct scripts deployed to it.

Next, we’ll create the second part of our model, an Environment with the following containers as members:

Finally, the third part of our model is the application package. We are starting with a repo with a number of database script or SQL files, along with a mapping.txt file that gives us a mapping and ordering between file and schema.

Here the mapping tells us to run the 1*.sql files against schema A, then the 2*.sql files against schema B, and so on, to produce the deployment step list we started with.
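A mapping.txt in this spirit might look like the following; the exact file format from the post isn’t shown, so treat this layout (NUM, then schema tag, one bundle per line) as an assumption:

```
1 schema-A
2 schema-B
3 schema-A
4 schema-C
5 schema-B
```

Line order gives the execution sequence (SEQ), the leading number (NUM) selects the SQL files to bundle, and the second column is the schema tag used for mapping.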


To package up this repository into objects that XL Deploy can handle, we’re going to run a little packager script that interprets the mapping file and bundles the scripts appropriately. And, we’re going to run it all under Jenkins to take advantage of the XL Deploy post-build action to create a Deployment Package and publish it to XL Deploy’s repository. In the course of that, we’ll place tags on the deployable objects we create in order to facilitate the correct mappings. We can also invoke a deployment from Jenkins, but we’ll stop short of that so we can analyze the resulting mappings in detail.

Jenkins will invoke it with two parameters. $1 is the mapping.txt file. As we saw above, it is committed to the same source-control repo as the SQL files. And $2 is an output file, buildVariables.properties, which we’ll pass to the Jenkins Inject-Variables plugin to help tag our objects later.

Of course, we could get much more sophisticated here as to how we group the source *.sql files into bundles, perhaps using number ranges or regex matching, but this simple script conveys enough of the idea.
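The packager itself isn’t reproduced here; below is a minimal Python sketch of the same idea, assuming mapping lines of the form “NUM TAG” and matching on the leading characters of the SQL file names. The function name, the TAG_n property names, and the directory arguments are illustrative, not taken from the original script:

```python
import glob
import os
import zipfile

def package(mapping_file, props_file, src_dir=".", out_dir="."):
    """Read mapping lines of the form 'NUM TAG', bundle NUM*.sql files from
    src_dir into sql-obj-SEQ.zip, and write TAG_SEQ=TAG lines to props_file."""
    props = []
    with open(mapping_file) as f:
        lines = [ln.split() for ln in f if ln.strip()]
    for seq, (num, tag) in enumerate(lines, start=1):
        zip_path = os.path.join(out_dir, "sql-obj-%d.zip" % seq)
        with zipfile.ZipFile(zip_path, "w") as z:
            # bundle every SQL file whose name starts with the matching number
            for sql in sorted(glob.glob(os.path.join(src_dir, num + "*.sql"))):
                z.write(sql, os.path.basename(sql))
        props.append("TAG_%d=%s" % (seq, tag))
    with open(props_file, "w") as f:
        f.write("\n".join(props) + "\n")
```

Jenkins would call this with the mapping file and the output properties file as its two arguments, mirroring the $1/$2 convention described above.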

The result of this script is a zip file for each line of the mapping file, numbered sequentially: sql-obj-1.zip, sql-obj-2.zip, sql-obj-3.zip, sql-obj-4.zip, and sql-obj-5.zip. Each zip contains the SQL scripts we instructed the packager to include. In this case, the first argument on the mapping line is matched against the first character of the SQL file’s name, so we bundle scripts 10-13 into the first object, 20-23 into the second object, and so on.

Note that the matching-number here (NUM) and the sequence number (SEQ) are independent — for this example they happen to coincide, but I’ll show later an example where they don’t.

Also resulting from the script is the buildVariables.properties file, where the right side of each assignment is a tag for XL Deploy:
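The properties file content isn’t shown in the post; a hypothetical example, with variable names of my choosing, might read:

```
TAG_1=schema-A
TAG_2=schema-B
TAG_3=schema-A
TAG_4=schema-C
TAG_5=schema-B
```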

We use the EnvInject plugin (https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin) to set variables holding the tag values.

Finally, we come to the post-build action that creates an application package. We set the app name and version:

And then we have a sql.SqlScripts deployable for each line in the mapping.txt file that makes use of an injected variable for its tag. Here is the first one of the five needed for this example. Change the ‘1’ in the Name, Tags variable, and Location to ‘2’ for the next one, and then to ‘3’, ‘4’, and ‘5’ for the rest.

Key to our ordering scenario is that these artifacts will be executed in order by the Name field, sql-obj-1, sql-obj-2, etc., while the individual scripts within each one will be ordered by their file names, 10-select.sql, 11-select.sql, etc.

Note a limitation here: we have to have one deployable for each line in mapping.txt, and we don’t know how many we will need ahead of time. An easy workaround is to take a liberal guess as to your need and configure a fixed number of objects. The unused ones will be empty and tag-less so they won’t play a role in the deployment. A dynamic approach that adds a variable number of deployables is beyond the scope of this post.

When we run the Jenkins build, we will get the next package into XL Deploy's repo, then the drag-and-drop deployment action will give the mapping and ordering we expect:

Finally, let’s change the mapping like this to show the independence of the matching and sequencing numbers. This mapping yields the result shown, demonstrating just how flexible this approach is!

5 Reasons to Deploy Daily
(January 19, 2017, https://blog.xebialabs.com/2017/01/19/6-reasons-deploy-daily/)

Gary Gruver recently released his new book, Starting and Scaling DevOps in the Enterprise, in which he promotes the practice of releasing software on a daily basis.

1. Eliminates waterfall requirements inventory.

Waterfall planning involves a requirements inventory that’s done before the developer starts to work on a new feature. That inventory tends to slow down the flow of value through the deployment pipeline and creates waste and inefficiencies in the process. By the time the requirement gets to the developer, it might need to be updated due to questions from the developer, changes in the market, and so on.

2. Allows you to evolve your priorities with the market.

As the marketplace evolves, so should the priorities of the organization. Organizations that lock themselves into a large inventory of requirements tend to deliver lower value features because they aren’t able to keep up with market changes. This leads to the organization having to regularly reprioritize its requirements or forge ahead with features that don’t align with market needs.

3. Lets you find and fix repeating issues.

Infrequent, manual deployments don’t allow you to see issues repeating enough so that you can begin to identify a common cause and fix them. However, when you increase deployment frequency to daily, you start to see patterns of problems that may have been plaguing your organization for years. DevOps allows you to automate deployment so you can deploy every day.


4. Lets you focus on features that matter to the customer.

A full 50% of new software features are never used or don’t meet their business intent. Daily deployments allow you to get fast feedback about what’s in your release pipeline so you can identify where you need to reduce waste. This helps you deliver new features to customers faster so you can see which parts of the 50% are not meeting their business objective and stop devoting precious resources to them.

5. Promotes a just-in-time approach for requirements.

Deploying daily will help users move to a JIT approach because companies can limit their long-term commitments to less than 50% of capacity and use that capacity in shorter time horizons. This increases the speed of value through the system because new ideas can move into development, and ultimately production, more quickly, instead of waiting in a queue behind lower-priority ideas that were previously planned.

DevOps and the DBA: Best Practices in XL Deploy
(November 1, 2016, https://blog.xebialabs.com/2016/11/01/devops-dba-best-practices-xl-deploy/)


As Agile practices and the DevOps movement transform delivery pipelines throughout the world, let’s take a moment to reflect on the database administrator’s role in pushing the latest features out the door. Here are six best practices for users of XL Deploy:

1. Remember that the DBA is part of the application development team(s).

The DBA coaches the developers on database architecture, proper SQL standards, Explains, etc. as they design, build, and test their code. All team members operate under Agile principles.

2. Standard environments that include the database should be quick to stand up for testing the latest build.

In XL Deploy: Include a dedicated database in every Environment to which the application is deployed

3. Database configuration (DDL, SQL) is developed and saved as version-controlled code.

To paraphrase Gene Kim, Infrastructure-as-Code allows modern development practices to be applied to the entire development stream to enable fast deployment, continuous integration, continuous delivery, and continuous deployment.

4. Database changes are aligned with the application changes that need them.

In XL Deploy, an Application Package is a complete and environment-independent package, able to be deployed to any environment. By complete, we mean that it includes all the binary artifacts, configuration settings, startup commands, etc. to get the application properly configured and running. A dictionary/placeholder scheme is used to substitute environment-dependent values (port numbers, credentials, etc.) at deployment time.

For example, an application needs a new index on a table, so the DDL for the index is part of the application change package to ensure that both are always deployed together.

In XL Deploy: Include sql.SqlScripts element in the application package. This element is simply a zip file of lexicographically ordered scripts. See below for an example.
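The original example image isn’t reproduced here; a hypothetical layout of such a zip, with script names of my choosing, would be:

```
sql-scripts.zip
  010-create-tables.sql
  020-create-index.sql
  030-load-reference-data.sql
```

The numeric prefixes guarantee the lexicographic ordering the plugin relies on.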

Database configurations used by more than one application can be managed by using XL Deploy’s application dependencies set at the udm.DeploymentPackage object. See https://docs.xebialabs.com/xl-deploy/concept/application-dependencies-in-xl-deploy.html.

5. Database configuration changes are accumulated in new versions of the sql.SqlScripts object as they are developed.

At any point in time, the object can be deployed to a new or old environment and bring the database up to the state necessary for that version of the application. So, in a brand new environment with just the DBMS software, XL Deploy would run all the scripts. In an environment with a previous deployment(s), XL Deploy runs only scripts not run previously.

Here is a simple example of the evolution of the scripts in sql.SqlScripts.

Version 4: this iteration drops the bad index. It may seem redundant to create and drop the index in the same set of scripts, but remember that this package may update an Environment whose current state is at Version 2, and we want to make sure we drop the bad index.

6. Rollback scripts are included to allow backout and undeployment.

XL Deploy uses a regex pattern to recognize which rollback script corresponds to which forward-going script. Developers and DBAs should include these as they build the package of change. Each one undoes the forward-going action, e.g. the first one drops the tables created by the first script. With the rollbacks included, the package should look like this:
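The image of the final package isn’t reproduced here; a hypothetical listing, assuming a -rollback suffix convention (check the plugin’s rollback regex property for the exact pattern your installation expects), might look like:

```
sql-scripts.zip
  010-create-tables.sql
  010-create-tables-rollback.sql
  020-create-index.sql
  020-create-index-rollback.sql
```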

Dropping Artifacts into XL Deploy from Dropbox
(January 4, 2016, https://blog.xebialabs.com/2016/01/04/dropping-artifacts-xl-deploy-dropbox/)

XL Deploy can also support referencing a web-based file-hosting service such as Dropbox with a minimum of configuration. Let’s take a look at how to set up and deploy an artifact from Dropbox.

From Dropbox, get the url by pressing the Share button for your artifact and copy the link from the resulting popup.

Then enter the url in an XL Deploy artifact, changing the server from www.dropbox.com to dl.dropboxusercontent.com.
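That host substitution is mechanical enough to script; here is a small Python sketch of the rewrite (the function name is my own):

```python
from urllib.parse import urlparse, urlunparse

def direct_dropbox_url(share_url):
    """Rewrite a Dropbox share link so XL Deploy can fetch the raw file:
    swap the host www.dropbox.com for dl.dropboxusercontent.com."""
    parts = urlparse(share_url)
    return urlunparse(parts._replace(netloc="dl.dropboxusercontent.com"))
```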

In this example, we are deploying a file.Folder type, so the zip file will be expanded during the deployment. We could also use file.File for a simple file or file.Archive for a zip archive we don’t wish to expand.

Tracking the Dev/QA Cycle with XL Release
(December 14, 2015, https://blog.xebialabs.com/2015/12/14/tracking-devqa-cycle-xl-release/)

Let’s take some inspiration from this diagram from Gene Kim, author of The Phoenix Project, and consider how DevOps tooling can help you automate “The Third Way”: the continual experimentation and learning cycle.

Software development has shifted to a new paradigm: we build/deploy/test in short cycles. Our Third Way is one of continuous feedback and improvement, which improves productivity at a small and totally justifiable cost of managing all those cycles. How many cycles have we run? Where are we in the current cycle? How many more do we need to run? XL Release can help in this challenge.

Let’s consider a two-phase XL Release template that manages the Dev-QA cycle for the Acme Anvil Company. In the Development phase, some one-time initialization tasks are followed by development and build tasks, then a gate to mark the phase for approval. The QA phase is a simple provision/deploy/test.

Before we introduce automation, here is how we run the cycle manually: we proceed through the development tasks and into QA, and whenever the Test task fails, we restart at the Development task. This represents the iterative development process, and we repeat it until we can test successfully. With XL Release managing and tracking the cycle, this is accomplished by pressing the Restart Phase button in the GUI, and then choosing the phase and task at which we want to pick up again.

Now let’s perform some enhancements to promote automation: First, we’ll insert a restart marker — a simple python script that stores its own task id into a release variable. This will be our restart point; the variable will be used in a REST API call later.

Some coding comes into play here. The Restart Marker is not an out-of-the-box type, but it’s easy to extend the built-in xlrelease.PythonScript by adding this code to XL Release’s synthetic.xml file:
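The synthetic.xml addition isn’t reproduced here; a minimal sketch, using the script path and output variable from the post (the attribute spellings follow the XL Release type system as I recall it and should be verified against your version):

```xml
<type type="demo.RestartMarker" extends="xlrelease.PythonScript">
  <!-- point XL Release at the classpath script -->
  <property name="scriptLocation" default="demo/restartMarker.py" hidden="true"/>
  <!-- the task id saved by the script, exposed as an output property -->
  <property name="restartTaskId" category="output" required="false"/>
</type>
```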

We also provide this short script in demo/restartMarker.py so the task saves its own task id:

restartTaskId = '-'.join(getCurrentTask().id.split('/')[1:])

And we provide a variable denoted by the ${…} notation when we place this task in the release template, so our task id is accessible to any subsequent task within this release.

On the QA side, we replace the manual test task with an automated test; on failure it sets another release variable which conditionally runs or skips the subsequent task to create a Jira issue. The task id for our restart will appear in the Jira comment:

The developer who addresses the failure will include this task id in the commit message when committing to Git:

Finally, to close the loop, a post-commit Git hook invokes a short Python script to parse the commit message and take the appropriate action: either begin a new release or restart another cycle if a task id is present.
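The hook script itself isn’t shown; the decision logic can be sketched in Python as below. The task-id shape (XL Release id path segments joined with dashes, e.g. Release123-Phase456-Task789, matching the restartMarker.py snippet above) is an assumption, and the actual REST call to start or restart a release is omitted:

```python
import re

# Matches a dash-joined task id such as "Release123-Phase456-Task789".
TASK_ID_RE = re.compile(r"\b(Release\d+(?:-\w+?\d+)+)\b")

def action_for_commit(message):
    """Decide what the post-commit hook should do: restart the cycle at the
    stored task if the commit message carries a task id, otherwise begin a
    new release."""
    m = TASK_ID_RE.search(message)
    if m:
        return ("restart", m.group(1))
    return ("new-release", None)
```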

A Tale of Two Build Tools: Integrating XL Release with Jenkins and Bamboo
(March 24, 2015, https://blog.xebialabs.com/2015/03/24/tale-two-build-tools/)

I discovered that XL Release does not have built-in support to integrate with the Bamboo build tool as it does for Jenkins. But I also discovered that XL Release’s extensibility makes it easy to configure a type definition and a script to enable an interface with Bamboo.

Let’s look at the support for Jenkins. XL Release provides a task definition:

and an entry under the configuration tab:

With these we can define one or more Jenkins servers and use them to execute the build jobs defined on them. Nice.

Bamboo doesn’t have such out-of-the-box support, so let’s take a look at how we could configure the objects we need. Since Bamboo has a REST API, we can extend the HttpConnection object to provide us a Bamboo Server object by defining a type in xl-release-server/ext/synthetic.xml:

<type type="bamboo.Server" extends="configuration.HttpConnection" />

Now we need a task to call out to the API; let’s extend the PythonScript object for this so we can take advantage of Python’s tremendous versatility. The script will actually run under Jython, so we can utilize Java classes too if needed. Our input will be the project-plan key to identify the Bamboo plan, and let’s code a few output fields to return some information about the build after it completes.
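The task type definition isn’t reproduced here; a sketch of what it could look like in synthetic.xml (property names and the kind/category attributes are illustrative and should be checked against the XL Release type documentation):

```xml
<type type="bamboo.RunPlan" extends="xlrelease.PythonScript">
  <property name="bambooServer" category="input" kind="ci" referenced-type="bamboo.Server"/>
  <property name="planKey" category="input" description="Bamboo project-plan key"/>
  <property name="buildNumber" category="output" required="false"/>
  <property name="buildResultKey" category="output" required="false"/>
  <property name="succeeded" kind="boolean" category="output" required="false"/>
</type>
```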

Our next task is to write the Python script to call out to Bamboo. The namespace of the type, “bamboo” in this case, determines the script directory, and the type name determines the script name. So our script will be called RunPlan.py and will live in xl-release-server/ext/bamboo.

The script starts with some typical Python imports. We’ll use com.xhaus.jyson.JysonCodec for json since that’s included in the XL Release libraries. Next we set some variables for contentType and headers for all of our HTTP calls, and define some boolean and text fields from the build result.

Finally, the main body of the code makes a POST request to Bamboo’s URL using the built-in HttpRequest object. We supply the URL, content type, and headers. We could have added authentication here to override what’s defined on the Bamboo Server object, but let’s leave that for later. An empty set of curly braces is required for the JSON content body.

Then we use the JSON library module to parse two items out of the results: the build number and the build result key.

We store the latter in the variable brkey to keep it handy to pass to our helper methods. A simple while loop polls for a finished result every five seconds. That interval would be better as a configurable value; this is something else we’ll save for later.
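The polling loop can be sketched as follows. This is a standalone approximation rather than the actual RunPlan.py: the fetch_result callable stands in for the HttpRequest call, injecting it keeps the loop testable outside XL Release, and the Bamboo result field lifeCycleState is an assumption to verify against your Bamboo version:

```python
import time

def wait_for_build(fetch_result, interval=5, max_polls=120):
    """Poll Bamboo's result endpoint until the build reports finished.
    fetch_result() must return the parsed JSON of the build result."""
    for _ in range(max_polls):
        result = fetch_result()
        if result.get("lifeCycleState") == "Finished":
            return result
        time.sleep(interval)
    raise RuntimeError("Bamboo build did not finish within the polling window")
```

As noted above, the five-second interval would be better as a configurable task property.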

When the loop ends, we query the job’s results (note the change in the query string from “queue” to “result”), print some messages and set the two state output variables so XL Release can use them to control future actions. See the xlr-bamboo-plugin repo for future updates to the code.

Continuous Delivery With Bamboo And XL Deploy
(January 16, 2015)

One of the best things about the current generation of continuous delivery tools is their ease of integration and interoperability. Each tool offers a rich selection of plugins or other means of interfacing with other tools. At XebiaLabs, we offer plugins that allow popular build tools such as Bamboo, Jenkins and Maven, to communicate with XL Deploy.

Let’s take a look at Bamboo’s interface today by considering a four-step build job that extracts from a code repository, builds the modules to be deployed, publishes a deployment package to XL Deploy, and finally deploys this package to a target environment. We’ll use Ant’s plugin for Bamboo along with XL Deploy’s.

A few of the plugins offered for Bamboo

Bamboo has built-in support for a variety of Source Code Managers too, including CVS, Git, Perforce and Subversion. With Git, we can easily define a repository with the following structure.

Under the NerdDinner-Files directory are all the files that are part of the Nerd Dinner application. There are too many to go into detail here and their functions are beyond the scope of this article. But the other two XML files do interest us, as they tell Bamboo and XL Deploy how to build and deploy the application: the build.xml file directs the build of the application under Ant as a Bamboo task; and the deployment manifest file plays a role in the deployment under XL Deploy.

The first step in the Bamboo build job is the Git extract, which pulls this structure and its contents from the Git repository and places it all in Bamboo’s workspace.

Next is an Ant build step, which executes in the workspace using the resources placed there by Git. The relevant portion is the final step which packages the deployment manifest and the application files into a “Deployment Archive” or DAR within our workspace. This also illustrates that there is no magic formatting involved in XL Deploy — any tool that can assemble directories and files together and zip them up into an archive can be used.
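That final Ant step might look roughly like the sketch below; the target name, paths, and the deployit-manifest.xml filename are assumptions rather than the post’s actual build file:

```xml
<target name="package-dar" depends="build">
  <!-- a DAR is an ordinary zip: bundle the manifest with the application files -->
  <zip destfile="dist/NerdDinner-1.0.dar">
    <fileset dir=".">
      <include name="deployit-manifest.xml"/>
      <include name="NerdDinner-Files/**"/>
    </fileset>
  </zip>
</target>
```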

The third step, Publish-to-XL-Deploy, will search the workspace using the **/*.dar wildcard and copy the DAR to the XL Deploy instance using the connection and credentials given via the plugin. The application name, as coded in the Deployment Manifest, is used to place the package within the correct application.

Step 3: publish to XL Deploy

Finally, the Deploy step is configured similarly to the Publish step. We provide the connection info and credentials as before, and now we add the application name and the target environment. It conducts the actual deployment to the target environment, in this case a Windows/IIS server image running under VMWare.

Step 4: Deploy with XL Deploy

The execution of this job can be triggered by a check-in to our original Git repository, and the flow of code from developer to deployment demonstrates the efficiency of integrating these four tools together. No time is lost between steps and there is a single point of control. As the plugin configurations themselves were quite straightforward, this approach shows the benefits of investing even a little time in automation.

Some users of WebSphere Application Server prefer to upgrade their EAR/WAR applications using WAS’s “update-in-place” option instead of uninstalling it and then re-installing. The latter, of course, is XL Deploy’s default behavior, but it can be changed with a simple tweak to a type definition in an XML file. Here is a look at how the WAS Admin Console presents these options:

But before we describe the tweak, let’s take a closer look at XL Deploy’s analysis of a deployment. Each artifact or resource is compared against what is already out on the target server, if anything. Then XL Deploy assigns one of four possible actions: create, destroy, modify or noop. Noop, of course, is no-operation, for the case of no changes.

XL Deploy has pre-defined behavior for create and destroy when working with EAR/WAR files, and invokes a python script for each of these. Expanding the WAS plugin JAR file reveals these as deploy-application.py and undeploy-application.py. But the operation that pertains to this scenario is modify: the user wants to replace an EAR or WAR with an upgraded version. As there is no specific “modify” behavior defined, XL Deploy executes a destroy and then a create operation, first uninstalling the old module and then subsequently installing the new one, as in steps 2 and 4 in this deployment steplist:

The result is a “clean” installation: nothing is carried over from the previous installation of the app, and this is a good way to fight configuration drift. However, a user who wants complex application configuration settings carried forward will want to opt for update-in-place, even if this approach violates the spirit of XL Deploy’s design philosophy that any deployment package should be a complete incarnation of the application, deployable to any environment, whether as a first-time deployment or an update.

To override the modify behavior, we simply define a modifyScript property on the War or Ear type:
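The definition itself isn’t reproduced here; a sketch of the idea, where the was.War type name and the script path are illustrative and should be checked against your WAS plugin before relying on them:

```xml
<type-modification type="was.War">
  <!-- when a modify script is defined, XL Deploy runs it instead of
       destroy-then-create for changed artifacts -->
  <property name="modifyScript" default="was/update-application" hidden="true"/>
</type-modification>
```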

XL Deploy’s tagging mechanism provides a handy way to control the automated mapping of deployable objects to their containers. When your environment has multiple containers of a particular type, and multiple deployables that can map to them, XL Deploy will perform a deployment for each possible mapping. A good example is this: you have two jee.War files in your deployment package, and two GlassFish domains in your environment. This will result in a deployment plan of four steps, as each WAR file is deployed to each domain.

But if you intended that WAR “A” should go to domain1 and WAR “B” should go to domain2, then this is the perfect use case for tagging. Simply place a tag on each deployable and container, and then the artifacts will be deployed accordingly. A tag is nothing more than a text string: “A” and “B” would be perfectly functional in this respect, but of course you’d want something more descriptive. Wildcarding is also possible.

Within the XL Deploy GUI, tags are a set of strings under the “Deployment” tab on a container or a deployable. Inside the deployment manifest, tags appear as a series of values in this fashion:
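The manifest fragment isn’t reproduced here; a sketch for the two-WAR example, where the nested tags/value spelling follows the manifest convention as I recall it and the tag strings are my own:

```xml
<jee.War name="warA" file="warA.war">
  <tags>
    <value>domain1-apps</value>
  </tags>
</jee.War>
<jee.War name="warB" file="warB.war">
  <tags>
    <value>domain2-apps</value>
  </tags>
</jee.War>
```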

Tagging also comes in handy when working with xl-rules to contribute additional steps to your deployment.
Let’s use a simple example: we’re deploying a cmd.Command script object to two hosts in the same environment, and we want xl-rules to contribute a second script to only one of the hosts. Our hosts here are WAS85ND-host and WAS85SA-host.
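A sketch of such a rule, keyed off the host name from the example; the deployed-type name (given here as cmd.DeployedCommand), the expression syntax, and the script path are assumptions to verify against the rules reference:

```xml
<rules xmlns="http://www.xebialabs.com/xl-deploy/xl-rules">
  <rule name="demo.ExtraScriptOnSAHost" scope="deployed">
    <conditions>
      <type>cmd.DeployedCommand</type>
      <operation>CREATE</operation>
      <!-- contribute the extra step only when mapped to this host -->
      <expression>deployed.container.name == "WAS85SA-host"</expression>
    </conditions>
    <steps>
      <os-script>
        <description>Run the extra script on WAS85SA-host only</description>
        <script>demo/extraStep</script>
      </os-script>
    </steps>
  </rule>
</rules>
```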