Continuous improvement with retrospectives

Every now and then you need to reflect and see where you can improve. A retrospective can be a tool to find the most important things you and/or your team can improve on. This post will show you a possible retrospective format and its results.

When our team started using Agile/Scrum we took the retrospective format straight from the Art of Agile book. We used it for months with great results. After a while we wanted to include some metrics to see if we were still on track, so we introduced the happiness factor: we gave each iteration a school grade and thought up some positive things that had happened in the last period. Lower grades can be used as a signal to create structural organizational improvements. Also, according to The Happiness Advantage, starting out with a positive mindset opens up the brain and should lead to much better brainstorming results.

Our current retrospective format is something like this:

  • Discuss the results of the improvements of the previous iteration
  • Everyone writes down at least two positive notes and gives the current iteration a grade. Everyone reads both out loud.
  • Brainstorm session
    • Everyone writes things that could improve his/her grade on a post-it (one topic per post-it)
    • Read them out loud
    • Short discussion
    • Repeat until most ideas are on the table
  • Cluster the post-its into groups (preferably using mute-mapping)
  • Give each cluster a name
  • Everyone votes for three clusters, giving them three, two and one point respectively
  • From the cluster with the most votes we create around two improvements that we can accomplish in the next iteration

In our case the improvements can be anything, as long as they help the team be more effective or more motivated to do their jobs. Sometimes small annoyances, like noise, would disappear without any action points, simply because they were more top of mind after the session, and they would not return for a while.

Major results are a better working environment, improved coding guidelines, process improvements, better workstations, comfortably sized iterations and less overhead, but also training and better coffee. Overall our happiness is very stable. We handle the major as well as the minor issues, which gives the team the feeling that they are in control.

We tackled small local issues, but also bigger organizational impediments. By taking small steps we can see whether the improvements have a real effect, instead of reorganizing everything in one go. We have made some great improvements over the last few years using these techniques.

Written by Niels van Reijmersdal

September 8th, 2013 at 13:28

Posted under Uncategorized


Creating a more maintainable Selenium 2 Grid setup

Setting up a basic Selenium Grid is pretty well documented and there are a lot of examples on the internet. But after you set up your Selenium Grid and run tests against it daily, you might run into some issues, just like I have. The Grid setup is relatively stable, but I would run into one of the following problems every other month:

  • Selenium Node Java processes run out of memory
  • Browsers sometimes crash and are not closed correctly
  • Selenium HUB Java process stops responding totally
  • Node operating system runs out of memory

This meant restarting the nodes and the hub whenever all the tests were failing. I was not very happy with restarting everything manually. I also wanted a good way to update the Selenium Grid version and settings from a single central location, since updating a lot of nodes by hand is tiresome.

Our current setup looks like this:

Here we have a number of systems with different roles:

  • Continuous Integration Server (for example Jenkins): Starts and runs the tests
  • Grid HUB: Communicates the test steps to the nodes
  • Grid nodes: Perform the tests against the real browsers. All nodes have an SSH server installed; I use a Cygwin setup for the Windows nodes.
  • SMB: Central file share that contains the configs, shell scripts and the Selenium software (jars and additional third-party drivers)

We put the configs, the shell scripts (for starting the hub and the nodes) and the Selenium software on the central file share. Mount the central file share on all the nodes and the hub, and set up the operating system to run the start-up shell scripts just after the system is booted (you might need to configure auto login first). This means that a clean reboot of every machine leads to a fresh grid situation. The nodes automatically wait and connect to the grid hub.
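
To give an impression, here is a minimal sketch of what such start-up scripts could look like. The jar version, file paths, config file names and hub address below are placeholders rather than our exact values.

  #!/bin/sh
  # start_hub.sh - starts the Selenium hub from the shared location
  # (jar version, config file name and log path are placeholders)
  # The script itself blocks; the boot setup or restart job runs it in the background.
  cd /mnt/shared/selenium
  java -jar selenium-server-standalone-2.31.0.jar \
       -role hub \
       -hubConfig hub_config.json \
       > /tmp/selenium_hub.log 2>&1

  #!/bin/sh
  # start_node.sh - starts a Selenium node; it registers itself with the hub
  # and keeps retrying until the hub is available
  cd /mnt/shared/selenium
  java -jar selenium-server-standalone-2.31.0.jar \
       -role node \
       -hub http://yourhubip:4444/grid/register \
       -nodeConfig node_config.json \
       > /tmp/selenium_node.log 2>&1

Because the scripts, configs and jars live on the share, changing the jar version or a config file there is picked up by every machine on the next restart.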

The next step is to create a C.I. job which resets the Selenium Grid at certain intervals. One problem is that Selenium Grid currently does not offer a graceful shutdown, which means that when you shut down any element of the grid, the currently running tests will fail. To tackle this we need to make sure no tests are running. For Jenkins we use the Exclusive Execution Plugin to put Jenkins in maintenance mode: it waits for all other jobs to finish and then runs the job marked as exclusive. After the exclusive job is finished it returns Jenkins to normal mode. Our Selenium Grid restart job executes the following steps (a sketch of the job script follows the list):

  1. Shut down the Grid HUB to prevent any new tests from starting, by hitting the shutdown URL with wget: http://yourhubip:4444/lifecycle-manager?action=shutdown
  2. Restart the Grid nodes. We SSH into each node and send the reboot command (for Windows it's shutdown /r )
  3. Sleep for 10 minutes to let the machines finish the reboot (I still need to find a better way to check whether the nodes are back)
  4. Start the Grid HUB in the background (over SSH we send this command: echo "sh -c 'cd /mnt/shared/selenium; nohup sh start_hub.sh &'" | at now +1 min )
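
As an illustration, the shell steps behind this job could look something like the sketch below. The node host names are placeholders, and the sleep time matches the ten minutes mentioned above.

  #!/bin/sh
  # grid_restart.sh - sketch of the nightly Selenium Grid restart job
  # (host names are placeholders for the real hub and node machines)

  # 1. Shut down the hub so no new tests can be started; don't fail the job
  #    if the hub closes the connection while going down
  wget -q -O /dev/null "http://yourhubip:4444/lifecycle-manager?action=shutdown" || true

  # 2. Reboot every node over SSH (the Windows nodes run sshd through Cygwin;
  #    Linux nodes would use their own reboot command instead)
  for NODE in node1 node2 node3; do
      ssh "$NODE" "shutdown /r"
  done

  # 3. Give the machines time to finish rebooting
  sleep 600

  # 4. Start the hub again in the background from the shared scripts
  ssh yourhubip "echo \"sh -c 'cd /mnt/shared/selenium; nohup sh start_hub.sh &'\" | at now +1 min"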

Now we have scheduled the job to run every night, when all the developers are sleeping. Every day we have a fresh grid setup to work against. The joy!

If we want to upgrade the Selenium version, we just update the jars in the central location and run the Jenkins grid restart job.

Not sure if this is the most optimal setup, but I hope this post gives an idea of how you could create a pretty stable Selenium Grid setup.

Written by Niels van Reijmersdal

March 10th, 2013 at 16:04

Hooking up an old bakelite phone

My wife wanted to be able to use her old bakelite phone, which used to belong to her grandfather and has been in the family for decades. Unfortunately our DSL VOIP connection does not support these old phone types: you need a pulse-to-tone converter. After searching the internet, it seems no company in Europe sells one of these converters. Seriously?

Eventually I bought the Dailgizmo pulse-to-tone converter from Australia. The price was 50 dollars including shipping and handling. Personally I thought the price was a bit high, but I couldn't find a cheaper one, and pleasing the wife has no price tag. Or does it? ;-)

It took about two to three weeks for the mail order to arrive at my home. The Dutch customs department added an extra 15 dollars in taxes and handling costs. The bastards! Next time I should remember to have it sent as a gift; I think you don't get charged extra then.

This weekend I decided to hook the thing up. The converter seemed pretty straightforward to use. After unscrewing the phone cable and the converter, I connected the wires and screwed everything back together again. Since I had never had a chance to try either the phone or the converter, I prayed it would work when I plugged it in. By some small miracle it worked straight out of the box. Awesome!

I recorded a short video as proof that it works. Back to the fifties, pretty cool.

Written by Niels van Reijmersdal

July 29th, 2012 at 17:23

Posted under English,Personal
