Big Teams & Agility - Take 2

Filed Under Agile, Development

In Big Teams & Agility, I talked about a macro process for agile development on large teams (those up to, and possibly exceeding, 100 developers). The article was posted on AgileZone at JavaLobby (where I now help out as zone leader, though I did not at the time the article was posted), and there I was accused of having my head in the cloud. While I could construe that as a compliment given the buzzwords du jour, I probably shouldn't make that mistake. So I want to take a moment to respond, because some good points were made and clarification is necessary.

First, you can apply agile practices on big teams, and it does work. I was pretty clear that I’ve used the structure on teams up to 100 developers, and have done so for a good share of the work I’ve done since 2001. It’s worked marvelously. However, I never said it was easy. In many ways, it’s the most difficult approach to developing software I have ever taken, but it’s also the most successful. One question posed by a commenter follows:

Do you really expect big company to go through integration/testing/whatever every week?

Absolutely! The more often you integrate, the earlier you'll discover problems, before they've had an opportunity to fester within the system for prolonged periods of time. I recognize this goes against the grain of the conventional wisdom we've been taught for decades. As I mentioned in the original post:

The economies of scale lead us to believe we need longer iterations
because there is so much more to manage. But that’s flawed because it
delays risk mitigation and discovery.

At the end of the post, I stated that there were many micro process details omitted, such as how to keep the build running quickly. To address slow build times on very big systems, you may have to implement staged builds. A staged build is basically a pipeline of builds that perform different build activities.

For instance, a stage 1 build might perform a subset of the overall build steps to provide rapid feedback to the team. A stage 1 build is performed hourly, or anytime new code is checked into the source code repository. A stage 2 build is a more complete build process: it might integrate all system components, or execute a complete suite of tests. The actual tasks are going to vary with the context, but the idea remains the same. It's okay to have multiple build processes for a single software system. It's up to the development team to identify the components of each build, and they may change throughout the life of the project. But the key element is the rapid feedback the team receives.
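
To make the staged build idea concrete, here is a minimal sketch of a two-stage pipeline, written as a small Java program only to keep the example self-contained. It assumes a Maven project and uses mvn clean test as the fast stage 1 and mvn verify as the more complete stage 2; the class name, commands, and stage boundaries are illustrative assumptions, and most teams would configure these stages in their continuous integration server rather than script them by hand.

    import java.io.IOException;

    // A minimal sketch of a staged build: a fast stage for rapid feedback,
    // followed by a slower, more complete stage. The Maven goals used for
    // each stage are assumptions for illustration.
    public class StagedBuild {

        // Run a single command and report whether it exited successfully.
        private static boolean run(String... command) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(command)
                    .inheritIO()   // stream the build output to the console
                    .start();
            return p.waitFor() == 0;
        }

        public static void main(String[] args) throws Exception {
            // Stage 1: fast feedback on every check-in (compile and unit tests only).
            if (!run("mvn", "clean", "test")) {
                System.err.println("Stage 1 failed: fix the build before moving on.");
                System.exit(1);
            }

            // Stage 2: the more complete build (integrate components, run the full test suite).
            if (!run("mvn", "verify")) {
                System.err.println("Stage 2 failed: integration problems detected.");
                System.exit(2);
            }

            System.out.println("Both stages passed.");
        }
    }

In practice, stage 1 would be triggered on every check-in and stage 2 on a schedule, with both reporting results back to the team as quickly as possible.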

Another point of contention with the article centered on the misunderstood suggestion to release frequently to the customer, and the perceived lack of QA, acceptance testing, and so on. In fact, I stated the exact opposite, and it's the continuous integration strategy that allows us to close the loop and perform these types of testing frequently. I stated the following pretty clearly:

We should also frequently execute a variety of tests. Not just unit and
acceptance tests, but usability tests, performance tests, load tests,
and more.

The key element here is that we increase project transparency because we have the ability to get our product in front of the customer on a frequent basis. The customer experiences the growth of the application with the development team. They see its evolution, and can provide valuable feedback along the way. There are fewer surprises at the end of the project.

However, I have never said that each build should be released to the customer for use as a production software system. Never! I did say the following:

Once the build executes successfully, the application can be deployed to an environment where it’s accessible by the customers.

The environment is likely a test environment that the customer has access to and can use to experiment with the system and provide feedback, or a QA environment where acceptance testing can be performed. It's also a place that can be used for system demonstrations. I recognize the pain in delivering large enterprise software systems, and I also realize the impossibility of releasing each build to production. But we should be striving for that level of quality each time we write a line of code.
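
As a rough illustration of what deploying a green build to a customer-accessible environment can look like, the following sketch simply copies the build's artifact into a test environment directory. The artifact name (app-1.0.war) and the target path are hypothetical; in practice this step usually lives in the CI server's deployment job and targets a test or QA server, never production.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    // A minimal sketch of deploying a successful build to a test/QA
    // environment. Paths and artifact names are assumptions for illustration.
    public class DeployToTestEnvironment {

        public static void main(String[] args) throws Exception {
            Path artifact = Paths.get("target/app-1.0.war");        // produced by the build (hypothetical name)
            Path testEnv = Paths.get("/opt/test-env/deployments");  // customer-accessible test server (hypothetical path)

            // Deploy only what a green build actually produced.
            if (!Files.exists(artifact)) {
                System.err.println("No build artifact found. Was the build green?");
                System.exit(1);
            }

            Files.createDirectories(testEnv);
            Files.copy(artifact, testEnv.resolve(artifact.getFileName()),
                    StandardCopyOption.REPLACE_EXISTING);
            System.out.println("Deployed " + artifact + " to " + testEnv);
        }
    }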

Regardless, the key takeaway is that because we always have a functional system, we are able to perform various lifecycle activities at any time, and many times, throughout the development effort. This increases project transparency: the customer, management, and developers share a consistent understanding of the current state of the system. We avoid those nasty surprises late in the development effort that plague many projects.

This is not overly zealous. It is not unrealistic. It is not some abstract theory born of academia that has never been proven in the real world on a large enterprise development effort. In fact, while it adopts various practices from popular agile methods, it is not Scrum, nor is it XP. But it is agile, and it does work. It captures the essence of agile development - rapid feedback through software that works, while maintaining the ability to respond quickly to change. This is, quite simply, one of the best approaches to software development I've ever used. But it's not easy, nor will it ever be easy. Software development is hard work, and it will always be hard work. The comments are welcome, the questions are valid, and the discussion is important.

Comments

One Response to “Big Teams & Agility - Take 2”

  1. Toby on March 12th, 2009 1:28 pm

    Hardware is cheap. You have super-cheap vservers and clouds like Amazon's EC2. You can set up lots of servers, checkers, unit testers, anything. Once you've invested the time to automate the testing of your application (and that area has its own change management), you simply press a button, head off for your weekend, and on Monday morning you have pages upon pages of automated test results. Producing those results manually, in terms of the required tester workforce and/or risk analysis, would cost a hundred times more.

    Computers are cheap. Hardware space is cheap.

    The only reason I can see for not wanting to do that is that you think it's not good to give any level of management constant "leverage" in terms of code quality, test coverage, and the quality of your timeline estimates. Maybe some development departments are so deep in the fight with their controllers, vice presidents, and the world that creating that mystical cloud of "creation magic" is needed as a toolset for the day-to-day war called "software development".

    Toby
