AWS CloudFormation Template for IBM WebSphere MQ Cluster

Upload our template, create an AWS CloudFormation stack, and have your IBM MQ architecture ready

This guide provides step-by-step instructions on how to create an AWS CloudFormation stack containing an IBM MQ cluster, using the template we have prepared.

1. Overview

Being able to install IBM WebSphere MQ, licensed and ready to go, in the cloud is a great thing. Here you can find our IBM MQ image, which lets you do it with just a few clicks. But in a real-world scenario, running IBM MQ on just one Amazon EC2 instance is not the best solution. To improve high availability and enable workload distribution and dynamic routing of messages received by Queue Managers, you need the ability to have multiple instances of a queue: a cluster of Queue Managers running on different machines (EC2 instances in the cloud).

This is why we decided to prepare an AWS CloudFormation template to help you create your IBM MQ cluster architecture based on our IBM WebSphere MQ base image.

2. Solution


The AWS CloudFormation stack that you create with our template contains 4 AWS EC2 instances (all of them based on our IBM WebSphere MQ image) and 1 AWS S3 bucket. A Queue Manager runs on every EC2 instance (1 full repository Queue Manager and 3 Queue Managers participating in the cluster with the Q.Common queue). The AWS S3 bucket serves as shared storage, and all of the Queue Manager configuration is stored there.

Click the diagram to the right to better understand the proposed solution.

2.1 Step-by-step tutorial on how to use our template

  1. Download the template from here.
  2. Log in to your AWS account and choose AWS CloudFormation from the Management Tools section.
  3. You should now see a blue “Create Stack” button in the top left corner.
  4. Click “Create Stack”, then in the “Choose template” section use “Upload a template to Amazon S3”, select the downloaded template and click “Next”.
  5. You will now have to specify a few parameters to configure your IBM MQ Cluster.
    – “Stack name” – the name of your stack,
    – “AccessKeyID” and “AccessSecretKey” – credentials that are necessary to use the AWS S3 bucket as shared storage. You should be able to see them here. If not, please contact your AWS admin.
    – “BATCHHB”, “HBINT”, “LONGTMR”, “SHORTRTY”, “SHORTTMR” – these parameters are used to create the channels between Queue Managers. You can leave the default values or, to learn more about them, visit the IBM documentation page.
    – “ClusterName” – the name of your cluster,
    – “InstanceTypeParam” – the instance type that you want to assign to your AWS EC2 instances,
    – “KeyName” – select one of your existing keys,
    – “PortA”, “PortB”, “PortC”, “PortD” – the listener ports for your Queue Managers.
  6. After specifying the parameters, click the “Next” button.
  7. On the next panel, also click “Next”.
  8. Review your configuration and click “Create”. Note that during the creation of the stack, 4 Elastic IPs are created, so make sure that you can create 4 IP associations in this region and, of course, that it is possible to create an AWS S3 bucket.
  9. You will need to wait about 20 minutes for your stack to be ready.
  10. When the stack is created, you can go to your EC2 instances list and see that there are 4 new instances initialising: Full Repo QM, Cluster QM1, Cluster QM2, Cluster QM3.
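The console steps above can also be scripted. The following is a minimal sketch using the AWS CLI, assuming it is configured with your credentials; the template file name, stack name and parameter values are placeholders (the parameter keys themselves come from the template described above, and “AccessKeyID”/“AccessSecretKey” would be passed the same way):

```shell
# Hypothetical example: create the same stack from the AWS CLI.
# mq-cluster-template.json and all parameter values are placeholders.
aws cloudformation create-stack \
  --stack-name my-mq-cluster \
  --template-body file://mq-cluster-template.json \
  --parameters \
    ParameterKey=ClusterName,ParameterValue=DEMO.CLUSTER \
    ParameterKey=KeyName,ParameterValue=MidVisionUSMC \
    ParameterKey=InstanceTypeParam,ParameterValue=t2.medium \
    ParameterKey=PortA,ParameterValue=1414 \
    ParameterKey=PortB,ParameterValue=1415 \
    ParameterKey=PortC,ParameterValue=1416 \
    ParameterKey=PortD,ParameterValue=1417

# Block until the stack (and its 4 EC2 instances) has been created
aws cloudformation wait stack-create-complete --stack-name my-mq-cluster
```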

It is worth mentioning that even if your EC2 list shows the instances' Status Check as “2/2 checks passed”, the scripts that create the Queue Managers and channels may still be running. To be sure that your resources have been created, log in to the instances.

3. Initial login

Once the instance has started up (you can see it by having “2/2 checks passed” in the EC2 console):

  1. Log onto the instance from the EC2 console or via SSH as the ‘mqm’ user, using the key you selected above. For example:
    • From the EC2 console: click the “Connect to your instance” button with username “mqm”, using the previously selected .pem keyfile.
    • Via SSH from your desktop, for example
      ssh -i ./MidVisionUSMC.pem mqm@<instance-public-dns>
  2. You should see the MidVision banner, and then you are placed in a setup wizard.
    Welcome to                                                                                                                                             
     __  __ _     ___     ___     _                    ____ _                 _
    |  \/  (_) __| \ \   / (_)___(_) ___  _ __        / ___| | ___  _   _  __| |
    | |\/| | |/ _` |\ \ / /| / __| |/ _ \| '_ \ _____| |   | |/ _ \| | | |/ _` |
    | |  | | | (_| | \ V / | \__ \ | (_) | | | |_____| |___| | (_) | |_| | (_| |
    |_|  |_|_|\__,_|  \_/  |_|___/_|\___/|_| |_|      \____|_|\___/ \__,_|\__,_|
                                                           A MidVision Service
            * WebSite:
            * Support:
            * Forum:     
    Welcome, this is MidVisionCloud Websphere MQ image first run configuration
    Note that you can rerun this configuration wizard again by executing /home/midvision/ script
    Configuration steps
    1. Set RapidDeploy framework initial password
    2. Open ports on RHEL firewall
  3. Set the initial password for the RapidDeploy user “mvadmin”.
  4. Choose which ports to open on the Red Hat Linux firewall. You should open port 9090 for the RapidDeploy web console, if required.
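If you skip this wizard step, the port can also be opened later by hand. A minimal sketch, assuming the image uses firewalld (as on RHEL 7; on older releases with iptables the commands differ):

```shell
# Open port 9090 for the RapidDeploy web console (firewalld assumed)
sudo firewall-cmd --permanent --add-port=9090/tcp
sudo firewall-cmd --reload
```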
See the cloud-init.log file to make sure that the Queue Managers are defined and configured:
less /var/log/cloud-init.log
You should see logs similar to the ones shown to the right.
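Besides the cloud-init log, you can check the queue managers directly with the standard IBM MQ control commands. A sketch, assuming the queue manager names used in this article (MQA on Full Repo QM, MQB on Cluster QM1):

```shell
# List the local queue managers and their status
dspmq

# Check that the cluster channels are running
# (run this on the Cluster QM1 instance, whose queue manager is MQB)
echo "DISPLAY CHSTATUS(*)" | runmqsc MQB
```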

4. Maintaining your IBM MQ Cluster

As mentioned before, 4 EC2 instances are created.

All of them contain IBM MQ. Cluster QM1, Cluster QM2 and Cluster QM3 have a common queue: Q.Common.

This is a clustered queue with an instance on each of these Queue Managers, so when you put messages on it (on one of the Queue Managers), they are automatically distributed to the other instances.
You can check this by using, for example,

./amqsput Q.Common MQB

from the Cluster QM1 instance, and then use

./amqsbcg Q.Common MQ(B/C/D)

on Cluster QM1, Cluster QM2 and Cluster QM3 (queue managers MQB, MQC and MQD respectively) and see that the messages have been distributed.
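Putting several messages makes the round-robin distribution easier to observe. A sketch, assuming the MQ sample programs are installed under /opt/mqm/samp/bin (the default location) and the queue manager names above:

```shell
# From the Cluster QM1 instance: put six test messages on the clustered queue
for i in 1 2 3 4 5 6; do
  echo "test message $i" | /opt/mqm/samp/bin/amqsput Q.Common MQB
done

# Then browse the queue on each instance; the messages should be spread
# across MQB, MQC and MQD rather than all landing on one queue manager
/opt/mqm/samp/bin/amqsbcg Q.Common MQC
```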

This configuration also supports failover, which means that if one of your Queue Managers breaks, messages are not sent to that Queue Manager and are distributed to the others that work fine.
In order to check that your cluster is working fine, you can SSH to the Full Repo QM instance, run runmqsc MQA and then issue DISPLAY CLUSQMGR(*).
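For example, the cluster view from the full repository can be inspected in a single MQSC session (MQA is the full repository queue manager in this setup; DISPLAY QCLUSTER shows every instance of the clustered queue):

```shell
# Run on the Full Repo QM instance
runmqsc MQA <<'EOF'
DISPLAY CLUSQMGR(*)
DISPLAY QCLUSTER(Q.Common)
EOF
```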