DevOps and ITIL

Working with some ideas about how to merge DevOps with the real world of ITSM, where ITIL v3 is one of the most widely used ITSM frameworks, even though much of it was created ten years ago.
Since July 2013 it has been owned by AXELOS.

In the 2015 numbers, you can see the different ITSM frameworks in use, with ITIL as the clear leader.

A lot of companies use frameworks from which most of their processes are defined.
These frameworks demand traceability for everything that changes, fails, and so on.

You can solve some of the problems where DevOps meets ITIL by, for instance, categorizing your CRs (change requests) into different types, such as small, medium and high complexity.
There are a lot of plugins you can add to your ITSM tool (my example uses ServiceNow) to automate the approval process, so that the CAB does not need to do further analysis of the CR to verify its impact.
And do you really need to create CRs for the test environments?

No matter what you do, you should try to automate as much of the CI/CD pipeline as possible. Tests, test documentation, creation of CRs, etc. should not be done manually.

My example (see picture above):

  1. A developer triggers a release from Jenkins, Ansible or another tool
  2. The automated framework creates a change in the ITSM tool, with documentation of the tests
  3. Depending on the release type:
    1. Complex CR – the CAB has to approve the CR manually
    2. Small or medium complexity CR – automated approval of the CR
  4. The automated framework performs the deployment in production
  5. Dynatrace Managed, or your preferred APM tool, monitors the application for abnormalities after the release
    1. Can your APM tool automate a rollback when serious errors occur?
    2. Can your APM tool create incidents and problems in ServiceNow or your preferred ITSM tool?
    3. Can it give feedback to the DevOps team in Jira or other interfaces about the result/tasks?
  6. Can your APM tool provision CIs to your ITSM tool?
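The automated CR creation in step 2 could, for instance, go through the ServiceNow Table API. Here is a minimal sketch of what building that call might look like; the instance name, credentials, the `u_complexity` custom field and the standard/normal mapping are all assumptions for illustration, not the actual integration.

```python
import base64
import json
import urllib.request


def build_change_request(instance, user, password, summary, complexity, report_url):
    """Build an HTTP request that creates a change record via the
    ServiceNow Table API (POST /api/now/table/change_request).
    Field values and the 'u_complexity' field are illustrative assumptions."""
    payload = {
        "short_description": summary,
        # small/medium complexity could map to a pre-approved standard change,
        # anything else to a normal change that the CAB reviews manually
        "type": "standard" if complexity in ("small", "medium") else "normal",
        "u_complexity": complexity,  # hypothetical custom field
        "description": f"Automated release. Test report: {report_url}",
    }
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"https://{instance}.service-now.com/api/now/table/change_request",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"},
        method="POST",
    )
```

The pipeline would then send this request with `urllib.request.urlopen` (or any HTTP client) right after the test stage has produced its report.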

In the end… there is a lot of work to do to make the real world more agile. You have to change existing processes, mindsets, tools and more. You can almost never start with DevOps and an Agile way of working before you have solved all the other issues that show up as big red stop signs along the way.


Creating a Soap Generator to emulate SoapUI functionality + extra features.



This started due to a problem the customer had in their test environment.
During big test scenarios, they needed a large number of testers to execute different tasks. Several times, they encountered problems with the applications where they did not know whether the application worked before they started testing, or whether the data input was wrong (functional errors), and so on. This led to a huge number of hours being spent without being able to test, while they waited for AD/AM to find the problem. This was a costly and time-consuming situation that they wanted to avoid. They wanted synthetic tests to be executed against all applications on the middleware platform.

Dynatrace on the middleware platform

We had already installed the monitoring and analysis system Dynatrace in the environment to get full insight into the operations inside the JVMs.
Due to license costs, we could not install the Dynatrace agents on all JVMs. The test environment is built up of a load balancer in front, web servers, WebSphere ODRs, multiple WebSphere application servers with 26 clusters running 157 WebSphere enterprise applications, databases, MQ and support/legacy systems. These applications are divided into a Connection Layer, Application Layer, Façade Layer, Service Layer and System Layer.


Due to the small amount of traffic on the test system during non-test periods, they could not see in Dynatrace whether it was working or not, since there were almost no transactions running.

I wanted it to run every 5 minutes, forking all the defined soap-generator probes and logging the errors.
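A minimal sketch of that 5-minute sweep, assuming each probe is a callable that raises an exception when the webservice call fails (the probe names and failure behaviour are illustrative, not the actual implementation):

```python
import concurrent.futures
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("soap-generator")


def run_probes(probes, timeout=60):
    """Run all defined probe callables in parallel (a cron job would invoke
    this every 5 minutes) and return the names of the probes that failed.
    One slow or broken service must not block the rest of the sweep."""
    failed = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(fn): name for name, fn in probes.items()}
        for fut in concurrent.futures.as_completed(futures, timeout=timeout):
            name = futures[fut]
            try:
                fut.result()
            except Exception as exc:  # log the error, keep sweeping
                log.error("probe %s failed: %s", name, exc)
                failed.append(name)
    return failed
```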

The functionality of the soap-generator


The Business Transactions

AD/AM had to document the Business Transactions for each application in Dynatrace.
They had to document the classes and methods in use for each application.
We ended up with approx. 30 business transactions (BTs) containing approx. 150 classes and 300+ methods.



The BTs use Count as the aggregate in the BT filter.


Which webservice and which application?

Now I had to find out which webservices were running in front of the applications (classes) used in the defined Business Transactions. I tried to find the webservices that triggered the most applications and legacy systems, to avoid having to create webservice requests for each application.

I started by drilling down to the PurePaths in Dynatrace for each BT. This showed me the webservices that triggered these classes. Now I could download the WSDLs from the webservices and import them into SoapUI. Since I wanted to avoid making changes in the application repositories, I avoided methods that executed Update, Create and the like. Preferably, I wanted to use DeepPing methods where those were available. I went into the production systems and performed tcpdumps on the JVM web container's HTTP transport port to get valid data to enter into the SOAP requests. It would have taken too long to get AM to provide me with all this data, and I needed to test the requests as well (to find the most efficient request to use). Most of the services were using basic authentication, so I added support for basic authentication to the soap generator.
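Building one of these probe requests boils down to wrapping the captured payload in a SOAP envelope, adding a basic-auth header when needed, and tagging the request with a distinctive User-Agent (which later lets Dynatrace separate the synthetic traffic from real users). A sketch, where the endpoint, the DeepPing body and the user-agent string are placeholder assumptions:

```python
import base64
import urllib.request

SOAP_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>{body}</soapenv:Body>
</soapenv:Envelope>"""


def build_soap_request(endpoint, body, user=None, password=None,
                       user_agent="soap-generator/1.0"):
    """Build a SOAP POST with optional basic auth and a custom
    User-Agent. Endpoint URL and body are whatever was captured for
    the service (here just illustrative values)."""
    headers = {"Content-Type": "text/xml; charset=utf-8",
               "User-Agent": user_agent}
    if user is not None:
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        headers["Authorization"] = f"Basic {token}"
    envelope = SOAP_TEMPLATE.format(body=body)
    return urllib.request.Request(endpoint, data=envelope.encode(),
                                  headers=headers, method="POST")
```

Sending the request and searching the response for the expected "needle" string is then a matter of `urllib.request.urlopen(req)` and a substring check.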

The Soap-Generator

The first thing I created was the page for defining the clusters, connecting each cluster to its servers and the port for the HttpQueueInboundDefault traffic in the web container of the WebSphere application servers.

This page is protected with a login prompt.


Then I created the Probe Admin page (also with a login prompt), where you can define a webservice request per cluster.
And yes, I did not spend too many resources on the design, since I had a lot of other tasks to do as well.


After creating the SOAP request, you can test it by pressing the green arrow. If it fails and you need to edit it, you can press the pencil symbol, or the red X to delete it.
The request executes against all of the cluster members defined on the “Admin Servers” page.


I also wanted these probes to be available to the AO team in India, so they can test the webservices during errors or after changes to the applications.


This view contains the probe name, service (URL), cluster (endpoint), basic auth user and the possibility to execute the probe.
It gives the user a result based on checking whether the webservice was available and searching for the needle in the haystack. If it fails, the background of the response turns red and shows the error from the webservice.



We also wanted to see the synthetic requests created by my soap generator, to differentiate between functional and technical errors when a BT turns red. I added my own User-Agent string to the headers of the SOAP requests and created a Business Transaction that filters on that User-Agent.



The Final Dashboard in Dynatrace

The overview of the applications (showing both synthetic and user-made requests).


Tagging the requests with a specific User-Agent gives us the possibility to connect the synthetic transactions to applications and see which synthetic tests fail.


You can now drill down to the PurePaths that fail or take a long time, and see what actually fails or where the transaction spends most of its time.


Dynatrace is a really powerful tool which gives you insight into the transactions, classes, methods, databases and more. It tags the IP packets, so you can follow transactions through several systems that have Dynatrace agents installed. You can use it in all of the IT Quality Tools Quadrant.


So if I could roll the dice, Dynatrace would get a 5 out of 6…





Tibco B2B – not satisfied with the original search GUI…


Working with Tibco B2B and Business Connect, I was not satisfied with the search GUI for the logs, as you can see in an earlier post. In previous versions (3.x) it took 5–10 minutes for each search and then another 5–10 minutes to see the details. This was a bug in the BC log viewer, and I created my own GUI back then.

A year ago, I created a new search GUI for Business Connect.
This was for BC 5.x. I did not like the GUI, and it also has some faults.

Original GUI

On the first page, you can't see whether the transaction failed or not. You have to open the details.

Looking at the details, it's not clear where the time is spent, and there is also an error in the sorting by time (see the date/time stamp at the end of the transaction).
Sorting by date does not work if you try it.


My Gui

So I created yet another search GUI for Tibco, so the offshore resources in India could get a better view of the errors, bottlenecks and the overall status of the transactions. And it's faster 😉


  1. Here you can choose the date and time for the start/end of your search. The default is 30 minutes back in time. You get a list where completed transactions without errors are green, pending transactions are yellow and failed transactions are red.
  2. In the details of a transaction, you can see each step of the transaction.
  3. The XML icon shows you the XML sent in that step of the transaction.

Details of a transaction.

Here you can see each step the transaction went through (in the correct order 🙂 ).
In the top view, you have a tabular view of the transaction.
The Splunk icon sends you to Splunk, searching the correct timeframe for the B2B transaction across different systems.
The bottom view shows the transaction in a TCP/IP frame-like view, with the time it takes between each step, so you can easily pinpoint where the transaction spends its time.
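The per-step timing in that bottom view can be computed from consecutive step timestamps. A sketch, assuming a simple `YYYY-MM-DD HH:MM:SS` timestamp format in the log (the actual B2B log format may differ):

```python
from datetime import datetime


def step_durations(steps):
    """Given (step_name, timestamp) pairs in transaction order, return
    (from_step, to_step, seconds) tuples so the slowest hop stands out."""
    parsed = [(name, datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))
              for name, ts in steps]
    return [(a, b, (tb - ta).total_seconds())
            for (a, ta), (b, tb) in zip(parsed, parsed[1:])]
```

Sorting these tuples by the seconds column immediately shows, for example, that the hop to the trading partner dominates the transaction time.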

Transaction that completed

It ends with a green mark.
In this example you can see that the trading partner uses the majority of the time in the transaction.


Transaction that fails

It ends with error signs where it fails.



I also added a statistics GUI, where you can see the number of transactions, who is sending them, and a pie chart of the transactions, so you can see the share of completed transactions and who has the failed ones.


Getting control over a complex IBM MQ environment


A customer has complex system integrations, several of which use MQ to handle messages between applications/systems. It was hard for AO to troubleshoot errors involving MQ. I decided to create a web solution where you do not need deep knowledge of MQ (logging on to queue managers, running commands, listing queue information to find out where the traffic goes, etc.). The solution gives users an easy overview of configurations, and of statistics back in time, for different queues on different queue managers.

The solution fetches the qmgr configuration once a day via cron jobs running C code. It also fetches qmgr statistics every five minutes on all qmgrs.
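The collectors here are C code, but the shape of the job is easy to sketch: pipe an MQSC command into `runmqsc` (the standard IBM MQ administration CLI) and parse the `ATTRIBUTE(value)` pairs it prints. The queue names and the exact output layout below are illustrative, and the live call is untested here, so treat this as a sketch of the idea rather than the actual collector:

```python
import re
import subprocess

# runmqsc prints attributes as KEY(VALUE) pairs, one or more per line
ATTR = re.compile(r"(\w+)\(([^)]*)\)")


def parse_runmqsc(output):
    """Parse runmqsc DISPLAY output into one dict per object.
    A new dict starts at each QUEUE(...) attribute."""
    queues, current = [], None
    for key, value in ATTR.findall(output):
        if key == "QUEUE":
            current = {}
            queues.append(current)
        if current is not None:
            current[key] = value
    return queues


def fetch_depths(qmgr):
    """What the 5-minute cron job might do: ask one queue manager for
    the current depth of all local queues."""
    out = subprocess.run(["runmqsc", qmgr],
                         input="DISPLAY QLOCAL(*) CURDEPTH\n",
                         capture_output=True, text=True).stdout
    return {q["QUEUE"]: int(q.get("CURDEPTH", 0)) for q in parse_runmqsc(out)}
```

Writing these snapshots to a database every five minutes is what makes the "statistics back in time" graphs possible.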

(All images on the qmgr pages are created in real time.)

The Solution



Qmgrs link


Pressing the statistic button on Qmgrs page:


This gives you statistics for the last 24 hours (a live feed, updated every 5 minutes). It shows PUT/GET/ONQUEUE for the queue manager you choose. You can also create a visual dashboard for monitoring, and search within the chosen queue manager.

Pressing the blue infobutton on Qmgrs page:


Pressing the link in statistics under eg. Alias Queues:


Search Engine

(the name is only for internal use and just for fun 😉)


You can search for alias queues, remote queues, transmission queues (XMIT), channels, qmgrs, etc.

Image visualization of search


Transition project

I'm currently working on the largest transition project in Norway for a customer, building an application which analyzes network traffic and creates HLD and LLD documents based on the traffic patterns, since HP could not give us the dependency-mapping functionality we needed.

There are several hundred different systems and several thousand servers affected by the project.

It will also have a lot of other functionality, analyzing different documents and databases and finding relations, to create the most accurate dependency map between different systems.
Here is one drawing created by the program based on the network traffic for a specific system (the blue circle in the middle).
Green: a system «talking» to the investigated system.
Orange: a server with multiple systems registered on it. This can be MQ servers, database servers and so on.
Red: unknown servers (e.g. clients from the internet).
Blue: unknown internal servers (e.g. clients from the customer's network).

System to system HLD
Another drawing shows the specific ports and servers between the source and destination systems.
The arrows and lines have different colors because some systems have a lot of different communication, which makes it much easier to follow the individual lines. There are also a lot of auto-generated tables and text describing the systems and their communication.

One system to another

Stock trading and Mathematics

I've just started to create a little demo for amateur traders on the stock market.

(For free of course).

It is based on you going to the site and finding the ticker you want to analyze.

Then you choose the tab «avansert graf og historikk» ("advanced chart and history", marked in red).

At the top left of the next page, there is a drop-down box called «Tidsperiode» ("time period").

Here you can choose how long a period you want to analyze the stock over.

After you have chosen the period, press the button «Last ned» ("download").

Choose to open the Excel sheet and press the little square at the top left, marked with red in the next image.

This selects all the values.

Hold [Ctrl] and press C (at the same time).

This copies the content.

Then you go to the webpage, which is not available at present (I'm not sure if I'm allowed to let people use numbers from the Oslo stock exchange).


You then open the page and click the mouse in the big box.

Then you hold [Ctrl] and press V (at the same time).

This loads the data into the box (as in the image below):

Then you press «Analyser» ("analyze").

This creates two images:


These two images have been created from the stock data.

The first image shows you the development and the amplitude/standard deviation of the stock's movement.

The second one calculates projections of the further development, based on the work of Fibonacci.
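The demo's exact math isn't shown here, but one common way Fibonacci's work is applied to price series is through retracement levels between a swing high and a swing low. A sketch of that technique, as an assumption about what the second image might be based on:

```python
def fib_retracements(high, low):
    """Classic Fibonacci retracement levels between a swing high and a
    swing low. The ratios 23.6 %, 38.2 % and 61.8 % come from ratios of
    consecutive Fibonacci numbers; 0 %, 50 % and 100 % are conventional
    additions."""
    ratios = (0.0, 0.236, 0.382, 0.5, 0.618, 1.0)
    span = high - low
    return {r: round(high - r * span, 4) for r in ratios}
```

Traders read the resulting prices as candidate support/resistance levels for the further development of the stock.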

I will explain it in detail later on…

In the meantime, you can read a little bit about it here:

Image Gallery

I've just created my own image gallery.

There is an admin-site for this gallery, where you can manage the thumbnails and images.

I wanted the gallery to be simple and easy, without too much eye candy.

I'm not using a database, so it should be possible to implement it on sites where you don't have a database available.

You can see a demo here: [Gallery]

I’ll explain the functionality and will provide screenshots later on.

Soap Probe Monitor

I've started creating a SOAP probe monitor, where you will be able to create probes and get statistics for your webservices.

The probe monitor will support basic authentication if your webservice is implemented with basic auth. I've created the admin page, where you are able to create the probes.

You can follow the progress of the pages here.

I'm planning to use PHP with php_curl and a MySQL database.

So far I've successfully created SOAP tests with basic authentication and managed to make them work on a test site.

Digital painting

This is a bit off topic from what I do on a daily basis…

I've been impressed by the artists on the internet creating lifelike pictures using Photoshop and, probably, a drawing tablet.

I used to draw when I was younger, and wanted to test whether I could paint a picture on my computer.

I had to use GIMP, since I'm running Linux (the Ubuntu distro) on my laptop.

There were a couple of steps to connect the Wacom board to my Linux machine, but if you are running Ubuntu 8.10,

you should just run this command in your console:

sudo apt-get install xserver-xorg-input-wacom wacom-tools

I watched some speedpainting videos on YouTube to understand the steps in digital painting.
I could then see the techniques they used to create the pictures.
Since I don't have too much patience, I decided to start by painting some eyes.
In the beginning, I spent a lot of time finding the right tools and techniques...
I've only painted two pictures so far... and I was very satisfied with the last one.

I'll probably make a tutorial of the steps creating a digital painting and techniques you can use to create one.
But it won't be for a little while, since I'm very busy these days, working 60% in a project.
(I was supposed to get 60% off my daily tasks... but ended up working 160% instead :) )

How does the java application work? (UseMon case)

I'm working with a Java Enterprise solution, running 100+ Java applications communicating with each other via RMI, MQ/JMS and SOAP.

There was a need to get more information about how the applications «worked inside», which methods called other applications, and so on.

Paul Rene Jørgensen and Steinar Cook created an application called UseMon, a monitoring system for trend, response-time and dependency analysis of plain Java applications or big multi-clustered Java Enterprise applications running in production.

Usemon logs the monitoring data to a MySQL database in our environment.

I then created a PHP solution which fetches data from the database (based on the Usemon v1.0 Database Schema.pdf document).

So far I've created a page which analyzes the communication between the applications and generates a dot-language graph, which is then exported to a PNG using GraphViz.
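My page does this in PHP, but the core idea translates directly: turn caller/callee rows from the database into dot language, with each application rendered as a box (a dot subgraph cluster) containing its methods. A sketch, where the row shape and names are assumptions about the Usemon schema rather than its actual column names:

```python
def calls_to_dot(calls):
    """Turn (caller_app, caller_method, callee_app, callee_method) rows
    into dot language that GraphViz can render with `dot -Tpng`.
    Each application becomes a cluster subgraph (the «application box»)."""
    apps = {}
    for capp, cm, dapp, dm in calls:
        apps.setdefault(capp, set()).add(cm)
        apps.setdefault(dapp, set()).add(dm)
    lines = ["digraph usemon {"]
    for i, (app, methods) in enumerate(sorted(apps.items())):
        lines.append(f'  subgraph cluster_{i} {{ label="{app}";')
        for m in sorted(methods):
            lines.append(f'    "{app}.{m}";')
        lines.append("  }")
    for capp, cm, dapp, dm in calls:  # cross-application call edges
        lines.append(f'  "{capp}.{cm}" -> "{dapp}.{dm}";')
    lines.append("}")
    return "\n".join(lines)
```

Piping the returned string through GraphViz's `dot -Tpng` produces the kind of call-graph image shown below.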

Here is a result for calls running a couple of seconds.

Here you can see the applications with their methods inside the «application box», and how they call each other and methods in other applications.

I have a lot to do at work now, so it may be a little while before I can continue on this application.