
Java With a Bit of OSGi - The Book

Filed Under Agile, Architecture & Design, Development, Java, OSGi | 2 Comments 

I’m dancing. By god I’m dancing on the walls. I’m dancing on the ceiling. I’m ecstatic. I’m overjoyed. I’m really, really pleased.
- An excerpt from the Foreword by Uncle Bob (a.k.a. Robert C. Martin)


My book, Java Application Architecture: Modularity Patterns With Examples Using OSGi is now available. Uncle Bob and Peter Kriens each contributed Forewords to the book. The book itself is part of the Robert C. Martin series. The book is intended for all software developers interested in designing better software using modularity. Though the examples use Java, the techniques can be applied to other languages and platforms, such as .NET, with relative ease.

Even if you’re not using OSGi (or perhaps not even familiar with OSGi), I’m confident you’ll find the book valuable. The book (and patterns) has been designed to allow you to realize the benefits of modularity whether you’re using a module framework or not. As Uncle Bob says in the Foreword, “This is how you build a Java application, people.” Peter sums it up nicely too, in saying “This book…will give you a view into the magic of modularity.”

You can order it online at Amazon (print edition & Kindle edition), iBooks, InformIT, or a number of other online retailers. For more details on the book, please see the book’s website. Over the next couple of weeks, I plan to post a sample chapter or two that will give you a feel for the book’s contents.

Here’s what a few people have to say:

  • “The fundamentals never go out of style, and in this book Kirk returns us to the fundamentals of architecting economically interesting software-intensive systems of quality. You’ll find this work to be well-written, timely, and full of pragmatic ideas.” – Grady Booch, IBM Fellow
  • “Along with GOF ‘Design Patterns’, ‘Java Application Architecture’ is a must-own for every enterprise developer and architect, and on the required reading list for all Paremus engineers.” – Richard Nicholson, Paremus CEO & President of the OSGi Alliance
  • “In writing this book, Kirk has done the software community a great service: he’s captured much of the received wisdom about modularity in a form which can be understood by newcomers, taught in Computer Science courses, and referred to by experienced programmers. I hope this book finds the wide audience it deserves.” – Glyn Normington, Eclipse Virgo Project Lead
  • “Our industry needs to start thinking in terms of modules – it needs this book!” – Chris Chedgey, Founder and CEO of Structure 101
  • “In this book Kirk Knoernschild provides us with the design patterns we need to make modular software development work in the real world. While it’s true that modularity can help us manage complexity and create more maintainable software, there’s no free lunch. If you want to achieve the benefits modularity has to offer, buy this book.” – Patrick Paulin, Consultant and Trainer at Modular Mind
  • “Kirk has expertly documented the best practices for using OSGi and Eclipse runtime technology. A book any senior Java developer needs to read to better understand how to create great software.” – Mike Milinkovich, Executive Director, Eclipse Foundation
I’d like to thank all of you who helped me along this journey. I hope you enjoy the book.

The Exciting Enterprise

Filed Under Agile, Development | 1 Comment 

Developing enterprise software is hard work. Bureaucracy. Large teams. Legacy systems. The list of challenges is immense. But enterprise development can also offer tremendous opportunity and reward. It can even be fun! I hope that at some point, you have the opportunity to experience this feeling.

The Exciting Enterprise

I worked here once. In the exciting enterprise. Working here was different. It was actually exciting. What made it different? Well, it was the process. Exciting process? What’s that? It’s the process that makes software development fun again, productive, meaningful, and satisfying.

So what kind of process is this, you ask? Iterative? Agile? RUP? XP? Scrum? Kanban? Hell no! We never talked process lingo. We didn’t care how agile people thought we were. We didn’t want to get caught up in the bureaucracy and political mayhem surrounding software process improvement. There was work to be done. That’s what we talked about. And then we did what we needed to do to get it done.

Not Always Easy

Working in the exciting enterprise wasn’t always easy though. It required stamina, determination, and discipline. There were lots of people who didn’t appreciate how we went about our business. We didn’t submit weekly status reports. We didn’t have the Gantt chart. Actually, we didn’t have a lot of things. But we had what we needed. To us, it all seemed like common sense. Of course, we had a very experienced team, and we knew what worked and what didn’t.

Plan driven? Predictive? Estimates? Models? Oh yeah…we had it all. We had an 18-month project road map. It was quite a plan. Took a small group of us about 1.5 days to develop. All on a single spreadsheet. Showed all the major systems that we’d retire right along with the new functionality that was going to come online. Probably could have gotten another three months of project time if we wanted to do more planning. But we had code to write.

Oh That Build

Code? But what about the requirements? Yep. Gathered them as we went. And when developers had questions, they asked them. Right to the customer’s face too. Blasphemy! The business analyst working with the developers. Those BAs were awesome too. They worked real hard to clear any confusion. Made sure developers always had what they needed. And we got ’em as quickly as we could handle them. A steady stream of requirements flowing in and right back out as an executable piece of software.

Oh sure, it wasn’t always easy. We had lots of important checkpoints along the way to make sure we were on the right track. Weekly checkpoints. Daily checkpoints. Hourly checkpoints. Developers and customers sitting in a room together to see what we’d gotten done the past week. Continuous deployment to an environment where folks could actually use the software. Hourly builds that made sure we never strayed too far from a working system.

I still think about that build. Actually, it was more than a build. It was a piece of frickin’ art. That build was the coolest piece of process I’ve ever seen. It was staged. It was fast. It did pretty much anything we asked it to do. It never got in our way. It just went on its merry way, building our system. Hourly. Every hour. Automatically. It was the glue that held the team together as we grew in size from a fledgling crew of six developers to upwards of 100 at times.

We protected that build. When somebody caused it to fail, they’d feel the wrath. Eventually, they all grew to love what it could do. Honestly, what choice did we have if we wanted to ensure we could have weekly checkpoints throughout the process? The only way to pull it off was to emphasize software that works. It always had to work. ALWAYS! Guard the source. The prime directive - software that works!

The Source

Now don’t get me wrong. We had documentation. We had lots of documentation. Some of it was pretty nice looking stuff too. But we weren’t afraid to let it go. Let it serve its purpose and then move on to what was really important…the source code. What else really matters, anyway? All the pretty documents, models and plans don’t amount to a hill of beans if the source code isn’t provably correct. That was our focus. It was the team’s focus. And we guarded that source code very closely.

Improving the Right Process

I look back now. An 18 month project. A team that clicked. Not just a development team. A team. Including the customers. Honestly, we didn’t just build a piece of software, though. We improved the process. We automated it. We optimized it. No, I’m not talking about the software process. I mean the business process! That’s what made it so cool…

Agile Animations - Big Teams

Filed Under Agile, Development | Leave a Comment 

Agile: The New Era

Filed Under Agile, Development | Leave a Comment 

It’s housecleaning time again, and like last time, I stumbled across an article I wrote back in 2006 that I don’t believe ever reached publication (at least, I don’t think it did…how am I expected to remember what I did in 2006?). For the most part, I’ve left it in its original state, except that I removed the Agile Manifesto and 12 supporting principles. They are easy enough to find on the Agile Manifesto website, and the article is long enough without this duplication. The wordle at right shows the most common words used in this document (click to enlarge). Here, in its otherwise unadulterated glory, is Agile: A New Era of Software Development.

Agile: A New Era of Software Development

Embrace Change

Writing code is easy, but developing software is hard. While syntax errors are common, their severity pales in comparison to the logic flaws inherent in many software systems. The difficulty in software development is not writing code or applying a certain technology stack. Instead, the challenge lies in the specification and design of the software itself.  Therein lies the essential complexity of software development, an idea introduced by Frederick Brooks in his 1987 article titled, “No Silver Bullet” [Brooks]. The most difficult aspect of software development is deciding what, exactly, needs to be built.

There is certainly evidence backing this claim. The original Chaos Report shows the top three impediments to a successful development effort are lack of user input, incomplete requirements and specifications, and changing requirements and specifications [CHAOS]. No other activity, if done incorrectly, stands to compromise the system more than incorrect requirement specifications.

It might not be so difficult were software a concrete entity, existing in a world where we could easily visualize its structure and behavior, allowing us to more reliably assess and share the impact of change. But software is a highly conceptual, invisible construct. It is considered infinitely malleable by those not intimately familiar with the conceptual complexity of its structure and behavior. The contractor building your home would look at you with incredulous disbelief if you suggested that the house he has 90% complete no longer met your needs, and you asked that he move walls. Or imagine how ridiculous it would sound to suggest that a new third floor be inserted into a 100-story skyscraper. Physicists labor on with the firm belief that there exists an underlying set of unifying principles to serve as guidance. Or at least, there are laws of physics that we hold to be true. There are no such rules or principles that guide software development. We are left with the imagination and dreams of our clients, and they demand and deserve rapid response to change.

We have made valiant attempts at conformity. Ceremonial processes attempting to define standardized activities that guide the development process have failed, however. We cannot define detailed up-front requirements specifications and expect them to survive the development lifecycle intact. We cannot establish an initial design of the conceptual construct and expect the structure to go unscathed throughout the process of construction. Software development is an error-prone human activity involving experts with varying backgrounds and skills who must come together and attempt to communicate uniformly, working as a team toward a common goal. While tools and process do help, we must also accept that change is expected. We cannot treat change as evil. Instead, the tools and process used must allow us to accommodate change, treating it as an inherent part of software development. Changing requirements is a rule of our game. The software we develop must be malleable and adaptive to change, and the process we use must embrace change.

We often draw comparisons between software development and various manufacturing processes. As Larman points out, however, manufacturing is a predictive process [Larman]. Herein lies one of the greatest differences between software development and the manufacturing processes to which we often draw comparisons. Manufacturing is a repeatable activity, with high rates of near-identical creation where a consistent product is produced in assembly line fashion. Little change is expected, making it possible to reliably estimate and predict the outcome. Software development is much more like new product development, where evolutionary specifications and adaptive planning are necessary to deal with the many unknowns that lie ahead.

Agile Principles

In early 2001, a small group of industry experts converged in Utah to discuss alternatives to heavy, document driven software development methods. Emerging from this meeting was the Agile Manifesto, a symbolic proclamation endorsing the virtues of a lighter, more flexible, people-oriented approach to software development, giving birth to the agile software development movement. (Since this is already a long article, I’ve snipped the manifesto and principles, which were included in the original version. If you’re interested, you can view the manifesto and its 12 principles on the Agile Manifesto website.)

<Snipped the Manifesto for Agile Software Development and 12 Principles>

The ideas behind these 12 principles are simple, and contain no hidden messages. Of course, there are different techniques embodied within various agile processes that support these principles. The one certainty is that agile teams definitely work differently from their less agile peers. They recognize there is one end goal - to create a working, functional software product. With that in mind, they work very closely with project stakeholders throughout the development lifecycle, knowing it is the stakeholders who possess the knowledge the system must embody. Agile teams work very hard to deliver working software iteratively and incrementally, and they adopt techniques representative of that ideal.

Agile project managers tend to favor intense communication and collaboration over heavy documentation. Empowering team members to make decisions enables responsiveness to change. Facilitating and negotiating requirements scope provides important feedback, helping plan future iterations, where each iteration produces a deliverable that can be shared with clients and stakeholders. Instead of forcing the team to follow a predictive project plan, agile project managers are more opportunistic. They prioritize features based on stakeholder feedback, and make adjustments as the iterations progress. Concurrent and parallel activities are favored over a phased approach. Agile project managers tend to guide the team instead of manage the team, and strongly discourage unnecessary overhead.

Agile developers work with a similar set of goals, knowing functional software must be delivered early and often. They work judiciously to grow a code base built upon a solid foundation, where each day represents a step forward. They integrate frequently, and do not tolerate failed builds. A rich suite of tests provide the courage necessary to respond to change when the need arises. They avoid the notion of code ownership, empowering other developers to make improvements to any component of the software product.

A common misconception is that agile processes discourage all documentation. This is untrue. Agile processes discourage unnecessary documentation, favoring collaboration as the preferred technique. Instead of using documentation to drive communication, agile processes favor face-to-face communication. Documents are encouraged by agile processes, so long as the need is immediate and significant.

Transitioning to Agile

Agile software development is based upon the fundamental premise that we must drive and respond to change quickly. The Agile Manifesto and 12 supporting principles serve this premise well. Advocates of agility claim speedier delivery of software, software with more business value, increased team productivity, higher quality systems, and a more enjoyable development experience. I believe each of these to hold true. Agile teams not only welcome change, they are able to respond to change at all levels of development. A project manager might discuss a changing requirement with a business client, empower a business analyst to schedule a meeting with the client to discuss further details, while a developer assesses the impact of change knowing she has the courage to accommodate the request because of the rich suite of unit tests in place.

Saying you’ll be more responsive to change and creating an environment that embraces change are separate beasts, however. Practicing agility is hard work, especially if your team is accustomed to more traditional approaches. As with many things new and unfamiliar, some resistance will no doubt arise from those who aren’t fully convinced. Agile projects differ greatly from their less agile counterparts, and skeptics will have many opportunities to express their discontent. As someone experimenting with agility, you may even have doubts. But don’t be discouraged, and give your agile transition the time it deserves.

One of the most significant changes you may experience is a feeling that you’ve been thrust into a chaotic nightmare. I doubt it’s unusual to feel this way. You’ve lost the security of detailed requirements specification and user sign-off. You are writing code without the comfort of knowing exactly what your stakeholders want. The detailed plans that have served as your security blanket on past projects no longer exist. And the celebrations accompanying completion of your various phase milestones are gone. Of course, these were all false comforts anyway. Stakeholders always changed their minds. Your detailed requirements and plans were outdated as quickly as they were completed.

Instead, you’re now working in shorter iterations with vague requirements. Initial releases early in the lifecycle may be completely thrown away. Your first few weeks may seem wasted, with little or no valuable artifacts produced. Naysayers will immediately come forward and cite the lack of progress. Previously, those first few weeks or months were spent producing very detailed requirement specifications and beautiful design models. But don’t give up yet. In that previous world, you were only delaying risk and postponing integration, avoiding the most difficult aspect of software development until the end of the lifecycle. Now you’re attacking risk early, prioritizing features, and working hard to develop a functional piece of software as early as possible. Progress may not be at breakneck speeds, but you are learning a tremendous amount about the requirements of the system, and your velocity is sustainable. Additionally, you are performing a number of other important lifecycle activities.

Depending on the level of ceremony and bureaucracy within your organization, you will experience varying degrees of success when adopting agile techniques. As with any new technology adoption, it’s best to phase the transition. Some agile techniques are easier to adopt than others, and some serve as valuable catalysts to adopting additional techniques in the future. Don’t attempt to completely redefine how you work. It’s relatively easy to phase the agile transition, and you’ll want to adopt those principles that offer you the greatest initial reward.

For instance, if you’re struggling to produce quality software at a consistent rate, implementing a continuous integration strategy will help you frequently verify the quality of your work. In addition to the comfort of knowing you have a product always in a functional state, the ability to share the product with clients using functional demos and prototypes will tighten the feedback loop and offer valuable insight to the client’s perception of the software. In a number of cases, I’ve found this to be valuable in identifying subtle requirements that can be difficult to identify in other requirements elicitation venues.

Empirical Evidence

In recent years, there has been a significant amount of research comparing agile development methods to their waterfall counterparts. In Agile and Iterative Development: A Manager’s Guide, Craig Larman illustrates the advantage of agile development through detailed analysis of multiple studies [Larman]. A compilation of his results is illustrated below.

A study by Alan MacCormack at Harvard Business School explored whether evolutionary development techniques yielded better results than the waterfall model. The study included applications ranging from application software to embedded systems, with a median of nine developers and a 14-month development cycle.

A key conclusion of the study, in which 75% of participants used agile techniques compared to 25% using waterfall, was that releasing software earlier, rather than later, contributed to a lower defect rate and higher productivity. There was little evidence that a detailed design specification resulted in a lower defect rate; peer reviews, however, did help reduce the rate of defects. The study found that iterative and agile practices have a significant impact on defect and productivity factors, as indicated by the following points.

  • Releasing a system with 20% of the functionality complete is associated with a decrease in the defect rate of 10 defects per month per million lines of code as compared to waiting to release a product until 40% of the functionality is complete, and an increase in productivity of eight more lines of source code per person-day.
  • Continuous Integration, the idea of integrating and testing code as it is released to your source code repository, resulted in a decrease in the defect rate of 13 defects per month per million lines of code, and an increase in productivity of 17 lines of source code per person-day.

The study also found four practices that were consistently used by the most successful development teams. The first two are deeply embedded in the ideals of agile software development.

  • Releasing early and often to project stakeholders, using an iterative lifecycle.
  • Continuous integration, with daily builds including regression testing.
  • Teams with broad experience delivering multiple projects.
  • Careful attention to modular and loosely coupled, componentized architectures.

In a separate study [Shine], 88% of organizations cited improved productivity when using agile methods, and 84% cited improved quality. 49% stated that the cost of development was less when using agile methods. Additionally, 83% claimed increased business satisfaction and 26% claimed significantly better satisfaction. Another study by Boehm and Papaccio [Boehm] discovered that a typical project experiences a 25% change in requirements, while yet another [Johnson] showed that 45% of features were never used.

There have also been many research efforts devoted exclusively to the analysis of waterfall methods. Below is a summary of these findings, taken from a variety of studies.

  • Scope management related to detailed up-front requirements was a significant contributing factor of failure [Thomas].
  • The U.S. Department of Defense (DoD), when following a waterfall lifecycle, experienced a 75% failure rate [Jarzombek]. This resulted in the DoD adopting a more iterative and agile approach.
  • In a study of 400 waterfall projects, only 10% of the code was deployed, and only 20% of the code deployed was used. The main factors included changing and misunderstood requirements [Cohen].

As these studies illustrate, there is substantial evidence that agile and iterative techniques offer significant advantages over the waterfall model of development. In fact, for larger projects, the statistics supporting agility were even more pronounced.

Conclusion

There are a variety of agile processes available to choose from, and each abides by the spirit of the manifesto and its 12 supporting principles. The agile movement and its supporters recognize that software development is a human (though not always humane) activity. Instead of forcing people to conform to a process, agile methods let the process conform to the people. Good people, working toward a common goal, can achieve great things with little ceremonial process, assuming you give them an environment that empowers them. Solid empirical evidence backs this claim. And if the quality of people is in question, it’s doubtful that any process will produce success.

References

  • [Alliance]. The Agile Alliance. Manifesto for Agile Software Development. 2001. http://www.agilemanifesto.org
  • [Boehm]. Boehm, B., and Papaccio, P. Understanding and Controlling Software Costs. IEEE Transactions on Software Engineering. October 1988.
  • [Brooks]. Brooks, Frederick. No Silver Bullet: Essence and Accidents of Software Engineering. 1987.
  • [CHAOS]. The Standish Group International, Inc. The CHAOS Report. 1995.
  • [Cohen]. Cohen, D., Larson, G., and Ware, B. Improving Software Investments through Requirements Validation. IEEE 26th Software Engineering Workshop. 2001.
  • [Jarzombek]. Jarzombek, J. The 5th Annual JAWS S3 Proceedings. 1999.
  • [Johnson]. Johnson, J. Keynote speech, XP 2002, Sardinia, Italy. 2002.
  • [Larman]. Larman, Craig. Agile and Iterative Development: A Manager’s Guide. Addison-Wesley, 2004.
  • [MacCormack]. MacCormack, A. Product-Development Practices That Work. MIT Sloan Management Review. 2001.
  • [Shine]. Corporate Report. Agile Methodologies Survey Results. Shine Technologies Pty Ltd. Victoria, Australia. 2003.
  • [Thomas]. Thomas, M. IT Projects Sink or Swim. British Computer Society Review. 2001.

Small Things Matter

Filed Under Agile, Development | 5 Comments 

Story of the Concorde

On July 25th, 2000, flight 4590 crashed. It was the first, and only, crash of the famed Concorde. Eventually, it would lead to the retirement of the amazing aircraft. Investigators spent countless hours poring over the wreckage, and placed blame on a piece of runway debris that slashed the tire. A piece of that tire struck one of the fuel tanks, causing it to rupture, and the plane caught fire. Case closed, right? Not so fast.

Surely a small piece of runway debris shouldn’t bring down a commercial airliner! As it turns out, there is quite a bit of contention among experts surrounding other factors that may have contributed to the crash. Some argue that it was a complex chain of events, all coming together, that brought down the aircraft. The plane was missing a spacer between the two wheels, had too much fuel in the tank, attempted to take off in unstable conditions, and was overweight. The flight was also delayed, causing angst among the flight crew. Finally, a required daily runway inspection was not performed.

Perhaps, if the runway inspection had been performed, the piece of debris would have been spotted. Or had the aircraft not been overfueled, the piece of tire may not have caused an increase in fuel tank pressure that some say caused the tank to burst. Had the spacer been installed, it’s possible the tire would never have burst. A series of small events, each contributing in its own way to the fatal crash.

No Matter How Small

The story reminds me of the importance of attention to detail in software development. Of how the aggregate of all of the small, seemingly insignificant decisions we make on a continuous basis can have long-term consequences on the future of the software system. Possibly even your organization. Every time you design a class, define a variable, write a test, create a package, build a module, modify a method, or make a design decision, you are affecting the future of your system in some unsuspecting way.

What may seem insignificant today can have a detrimental effect tomorrow. Really, there are no small, insignificant decisions in software development. I’m reminded of how important it is to make conscious decisions that are given careful thought, no matter how small. It also reminded me of a poem on build automation and continuous integration that I read a while back on the Test Early blog.

FOR WANT OF A BUILD

For want of a build, a test case was not executed
For want of a test case, a defect was not detected
For want of a defect report, a bad release was promoted
For want of a good release, a strategic customer was lost
For want of a customer, a development team was reduced
For want of developers, a product stagnated
For want of a product, a company was lost
And all for the want of a build…

3 Pillars of Architecture

Filed Under Agile, Architecture & Design | 3 Comments 

I’ve spent some time thinking a bit more deeply about a few of my recent posts on software architecture, and have come to the following revelation.

Architects architect architecture!

Don’t let the triviality of this statement undermine its depth. While each of the three words is a variation of the same root, each has a different contextual meaning. Let’s tease the statement apart to see what I mean.

  • Architects - Humans create software architecture, and for architecture to be effective, we must also be able to understand the architecture. In Eliminate Architecture, I cited a definition of architecture that introduces the social dimension. Architects signify the social pillar.
  • Architect - The way we arrive at architecture is through some process or series of steps. We might create diagrams or software architecture documents. We might write a little code (proofs, spikes, prototypes) to determine the viability of architecture. There are many different activities we perform as we create the architecture. Architect signifies the process pillar.
  • Architecture - In Modularity and Architecture, I offered up a few industry definitions of architecture. Common keywords that span definitions include components, composition, interfaces, subsystems, and structure. Architecture signifies the technology pillar.

To ensure balance, we must give attention to each of the three pillars. Additionally, each pillar is related to the other. For instance, ignoring the social pillar impacts the other two in unexpected ways.

The Social Pillar

Turtles and Architecture generally focused on the social pillar of software architecture, but also talked a bit about how the technology pillar can increase understanding, visibility, and transparency. The general sentiment is that architects must focus on more than just concepts and developers must focus on more than just code. There is a middle ground that demands attention, as well.

I used a visual similar to what’s at right (click to enlarge) to illustrate the middle ground. It also illustrates how aspects of the technology pillar can help increase understanding and transparency of architecture. Increased understanding of the architecture hopefully leads to improved structural quality (technology pillar) and transparency of the process (process pillar).

The Technology Pillar

Architecture All the Way Down primarily focused on the technology pillar. The visual at right (click to enlarge) illustrates this view. Again, we see the huge gap that exists if we fail to emphasize the importance of modularity. The rightmost portion illustrates how modularity fills the gap - architecture all the way down. Of course, other gaps are also created if we ignore any of the other aspects, such as class design. Note that as we move from services to modules to packages and classes, we increase the granularity along the way. Our classes should not be as fine-grained as our modules, nor our modules as fine-grained as our services.

Additionally, each entity solves a different type of problem (or provides a different type of advantage), as illustrated by the bars at the bottom. All are units of composition, but only services and modules are units of deployment. Services are reused through distributed computing, whereas module reuse is constrained by process boundaries. The technology pillar affects the other pillars. An inflexible, rigid architecture makes it difficult for people to understand and communicate (social pillar) and inhibits how quickly we’re able to respond to change (process pillar).

The Process Pillar

The process pillar is one that I’ve not spent much time discussing. Certainly, it’s important though, and includes various aspects like deferring commitment to significant architectural decisions, evolutionary and emergent architecture, and reversibility. The visual at right illustrates the process pillar (click to enlarge). It’s not as descriptive as the other diagrams, I admit.  Anyone have something better that illustrates the process pillar?

I did talk a little about these ideas in Agile Architecture, Lean Principles. But certainly more needs to be fleshed out surrounding the process pillar. This tends to be where most spend their time when discussing agile architecture, but the other pillars are certainly important. The process pillar affects the other pillars. A bad process accompanied by bad practices results in an inflexible architecture (technology pillar) that no one is able to understand (social pillar).

That Rotting Design

Filed Under Agile, Architecture & Design, Java, OSGi, Platforms | 6 Comments 

Note: This is a re-post, with slight modifications, from an entry in October 2007. And now, two years later, we’re just about there! For the abridged version of this post, focus on the text in bold. You can see the original version here.

Design Rot

We’ve all experienced that sinking feeling when maintaining a piece of crappy software. Has my change broken the system in some unintended way? What is the ramification of my change on other parts of the system? If you’re lucky, and the system has a robust suite of unit tests, they can offer some support in proving your work. In practice, however, few systems have thorough automated test coverage. Mostly we’re alone, left to verify our changes as best as possible. We might privately criticize the original developers for creating such garbage. It certainly lends a plausible excuse in explaining why the maintenance effort is so costly or time-consuming. Or it might serve as the basis upon which we recommend a re-write. But mostly, we should wonder how it happened.

For sure, most software doesn’t start out this way. Most software starts out clean, with a clear design strategy. But as the system grows over time, strange things begin to happen. Business rules change. Deadline pressures mount. Test coverage slips. Refactoring is a forgotten luxury. And the inherent flaws present in every initial design begin to surface. Reality has proven that few enterprise development teams have the time or resources to fix a broken design. More often, we are left to work within the constraints of the original design. As change continues, our compromises exacerbate the problem. The consequence of rotting design is seen throughout the enterprise on a daily basis. Most apparent is the effect on software maintenance. But rotting design leads to buggy software and performance degradation, as well. Over time, at least a portion of every enterprise software system experiences the problem of rotting design. A quote from Brooks sums it up well:

All repairs tend to destroy the structure, to increase the entropy and disorder of the system. Less and less effort is spent on fixing the original design flaws; more and more is spent on fixing flaws introduced by earlier fixes. As time passes, the system becomes less and less well-ordered. Sooner or later the fixing ceases to gain any ground. Each forward step is matched by a backward one. Although in principle usable forever, the system has worn out as a base for progress.

The most obvious question is, “How do we prevent rotting design?” Unfortunately, rotting design is not preventable, only reducible. Over the past ten years, the design patterns movement has provided insight into the qualities of good design. Dissecting design patterns reveals many important design principles that contribute to more resilient software design. Favor object composition over class inheritance, and program to an interface, not an implementation, are two examples. Of the 23 patterns in the GOF book, all adhere to these fundamental statements. Alone, however, design patterns that emphasize class structure are not enough to help reduce rotting design.
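As a reminder of what those two principles buy us, here’s a minimal sketch (the names are mine, purely for illustration): the client depends only on an interface and is handed its collaborator, so concrete implementations can change without rippling into the client.

```java
// Minimal sketch of abstract coupling (all names invented for illustration).
// The client depends only on an interface, so concrete implementations can
// change without rippling into the client.
interface MessageFormatter {
    String format(String message);
}

class UpperCaseFormatter implements MessageFormatter {
    @Override
    public String format(String message) {
        return message.toUpperCase();
    }
}

// Notifier favors composition over inheritance: it is handed its formatter
// rather than extending one, and is coupled only to the abstraction.
class Notifier {
    private final MessageFormatter formatter;

    Notifier(MessageFormatter formatter) {
        this.formatter = formatter;
    }

    void send(String message) {
        System.out.println(formatter.format(message));
    }
}

class Demo {
    public static void main(String[] args) {
        // Swapping in a different MessageFormatter requires no change to Notifier.
        new Notifier(new UpperCaseFormatter()).send("resist the rot");
    }
}
```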

Reducing Rot

Most patterns emphasize class design, and present techniques that can be used in specific contexts to minimize dependencies between classes. Teasing apart the underlying goal of most patterns shows us that each aim to manage the dependencies between classes through abstract coupling. Conceptually, classes with the fewest dependencies are highly reusable, extensible, and testable. The greatest influence in reducing design rot is minimizing unnecessary dependencies. Yet enterprise development involves creating many more entities beyond only classes. Teams must define the package structure in which those classes live, and the module structure in which they are deployed. Increasing the survivability of your design involves managing dependencies between all software entities - classes, packages, and modules.

But if minimal dependencies were the only traits of great design, developers would lean towards creating very heavy, self-contained software entities with a rich API. While these entities might have minimal dependencies, extreme attempts to minimize dependencies result in excessive redundancy across entities, with each providing its own built-in implementation of common behavior. Ironically, avoiding redundant implementations, thereby maximizing reuse, requires that we delegate to external entities, increasing dependencies. Attempts to maximize reuse result in excessive dependencies, and attempts to minimize dependencies result in excessive redundancy. Neither is ideal, and a gentle balance must be sought when defining the behavior, or granularity, of all software entities - classes, packages, and modules. For more on the use/reuse paradox, see Reuse: Is the Dream Dead?

Software design is in a constant quandary. Any single element key to crafting great designs, if taken to its individual extreme, results in directly the opposite - a brittle design. The essential complexity surrounding design is different for every software development effort. The ideal design for a software system is always the product of its current set of behavioral specifications. As behavior changes, so too must the granularity of the software entities and the dependencies between them. The most successful designs are not characterized by their initial brilliance, but instead through their ability to withstand the test, and evolve over the course, of time. As the complexity of software design is an essential complexity surrounding software development, our hopes lie with technologies and principles that help increase the ability of your design to survive. Such is the reason why agile architecture is so important.

A Promising Future

I’m hopeful that all software developers have experienced the pleasure of a design that, through the course of time, has withstood the test of time. Unfortunately, many enterprise development teams have too few of these experiences. Likewise, few enterprise development teams devote adequate effort to package and module design. It’s unreasonable to believe that even the most flexible class structure can survive should the higher level software entities containing those classes not exhibit similarly flexible qualities. The problems are rampant. Increased dependencies between packages and modules inhibit reusability, hinder maintenance, prevent extensibility, restrict testability, and limit a developer’s ability to understand the ramification of change.

Services offer some promise to remedy our failures with object-oriented development. Yet, while services may offer tangible business value, within each awaits a rotting design. There exists a world between class design and web services that deserves more exploration, and as an industry, we are beginning to notice. OSGi is a proven module system for the Java platform, while Jigsaw aims to modularize the JDK and JSR-294 seeks to improve modularity on the Java platform. While some friction might exist between the constituencies involved, it’s only because they too recognize that something has been missing, and are passionate about fixing the problem. Of course, it doesn’t stop there. A plethora of application servers and tools are also including support for modularity using OSGi, which has grown into the de facto standard module system on the Java platform.

All aim to help manage the complexity, from design through deployment, of enterprise software development. With each, new practices, heuristics, and patterns will emerge that increase the ability of a design to grow and adapt.

Architecture All the Way Down

Filed Under Agile, Architecture & Design, Development, OSGi, SOA | 8 Comments 

In Turtles and Architecture, I talked about how important it is that we “architect all the way down”. It helps increase transparency and understanding between developers and architects by emphasizing a lot of the middle ground that no one ever seems to focus on. It’s just another reason why modularity is so important. I used the diagram at right to illustrate the point (click to enlarge). Look at the huge gap that exists if we don’t focus on the stuff highlighted by the gray bar in the middle.

One reason I like this visual is that it illustrates the social aspect of software architecture. Yet, there are other significant advantages to architecture all the way down that we haven’t explored yet. Another is structural flexibility.

Structural Flexibility - Different Entities, Different Purpose

Another benefit of module design in filling that middle ground is that modules can offer different benefits than classes and services. The diagram at right (click to enlarge) illustrates some of the capabilities of different types of entities.

For example, classes are pretty easily reused within an application, but because classes aren’t a unit of deployment, it’s difficult to use them across applications. Of course, inter-application reuse is a sweet spot of services, since their method of invocation is through some distributed protocol (SOAP, HTTP, even RMI/IIOP). Yet because services are invoked remotely, we typically like to make them coarser-grained because of the performance implications of distributed requests. So this begs the question - If a service is too coarse-grained to reuse (i.e., it does more than we want), how do I reuse some desirable behavior across applications? Well, without modules, our only other choice is the class. Since a class can’t be reused across applications, we do one of two things. Expose it as a service or copy the class (i.e., the code). Given the circumstances, neither of these may be ideal. Another option is desirable.

Modules represent that other option. They are a finer-grained unit than services, and are a unit of deployment. Since each of these entity types is a unit of composition, we have tremendous flexibility in how we assemble applications. Possibly more important, though, is our increased ability to accommodate the architectural shifts that naturally occur as requirements evolve. Let’s look at a more concrete example.

A Bit More Concretely

Let’s say I have a business function called Pay Bill for which I develop a web service that can be invoked by many different consumers. That service happens to be relatively coarse-grained, and performs all the steps involved in paying the bill. These happen to include the following:

  • audit bill - apply a discount to the bill based on payee
  • check for duplicate - ensure the bill hasn’t already been paid
  • remit payment - cut the check
  • reconcile payment - reconcile with accounts payable financials
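To make the shape of this service concrete, here’s a minimal sketch under assumed names (Bill, BillAuditor, PayBillService, and the rest are my inventions for illustration, not code from the actual system). The coarse-grained payBill operation simply sequences the four finer-grained steps:

```java
// Illustrative sketch only: names and signatures are assumptions, not the
// post's actual code. Each step interface could live in its own module.
class Bill { /* bill state omitted */ }

interface BillAuditor     { void audit(Bill bill); }          // apply payee discount
interface DuplicateCheck  { boolean isDuplicate(Bill bill); } // already been paid?
interface PaymentRemitter { void remit(Bill bill); }          // cut the check
interface Reconciler      { void reconcile(Bill bill); }      // reconcile with A/P

class PayBillService {
    private final BillAuditor auditor;
    private final DuplicateCheck duplicates;
    private final PaymentRemitter remitter;
    private final Reconciler reconciler;

    PayBillService(BillAuditor auditor, DuplicateCheck duplicates,
                   PaymentRemitter remitter, Reconciler reconciler) {
        this.auditor = auditor;
        this.duplicates = duplicates;
        this.remitter = remitter;
        this.reconciler = reconciler;
    }

    // The coarse-grained operation exposed to consumers.
    void payBill(Bill bill) {
        auditor.audit(bill);
        if (duplicates.isDuplicate(bill)) {
            return; // the bill has already been paid
        }
        remitter.remit(bill);
        reconciler.reconcile(bill);
    }
}
```

If each step sits behind its own module boundary, any one of them can later be lifted out or exposed independently without disturbing the rest of the service.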

This seems reasonable. We have a nice little service that we can reuse any time we want to pay a bill. Unfortunately, the real world often gets in the way of our idealistic solutions. In fact, there are two problems that will eventually surface, and modularity benefits both scenarios. Let’s start by looking at the first scenario.

What should I do when a different scenario arises that demands I follow a slightly modified Pay Bill function?

As part of the remit step, I have a new payee that demands electronic payment. This is pretty easy actually. I simply modify the service to support electronic payments and then configure the service for that specific payee. So how does modularity help here?

If the service is composed of modules, it’s going to be much easier for me to understand the structure of the service, assess what it’s going to cost to change the service, and then introduce a new module (or modify the existing module) to provide the new capability. Without modules, I’m simply wading through the code trying to figure out all of these things. Now, the second scenario.

What should I do when I want to reuse just one step of the Pay Bill function?

Let’s say another new requirement emerges. Whereas traditionally bills were entered by data entry personnel, we now have to support electronic delivery of bills. We also know that bills delivered electronically are often duplicates. It’s just one of those business things, you know? If we don’t pay the bill on the day it’s received, the billing party sends us the bill again, asking for payment. So we need to check for duplicates before we save the bill to the database and prepare it for processing. What do we do?

We could take the same approach as before and modify the Pay Bill service so that the duplicate check could be invoked separately from the higher level pay bill function. But that’s a bastardized design solution. Why? We are exposing behavior of the Pay Bill service that shouldn’t be exposed. The API is coarse-grained in some areas and fine-grained in others.

Maybe exposing the finer-grained capabilities of the Pay Bill function isn’t a severe compromise, but it is a compromise nonetheless. And we are forced to compromise because of architectural inflexibility. We don’t have architecture all the way down, and are therefore left with limited choice as to how we support the new requirement. We can either modify the service to reuse what we know is already encapsulated within the service, or copy the code to create something new. But those are the two options we have, and neither may be ideal.

As we continue to hack the system in this manner, with a multitude of other similar changes, the design will rot. Eventually, our Pay Bill service is transformed into a utility that performs all general-purpose bill functions. Its API is a mix of coarse-grained and fine-grained operations. It’s become the monolith that we’re trying to avoid. While the Pay Bill service is still pretty reusable (it does everything now), it isn’t that usable. That tension between reuse and use surfaces again.

Our decision to modify the Pay Bill service to expose the duplicate check was driven by one thing - ease. It was our easiest option. Really, it was our only option. But it isn’t the best option.

If we architect all the way down, we have another option. A Pay Bill service composed of modules that audit the bill, check for duplicates, remit payment, and reconcile the payment means we can choose the desirable solution over the easiest short term choice. We can avoid the long term compromises that degrade architectural integrity.

Shown at right (click to enlarge), we see the service composed of modules. If we have a check for duplicates module, we can simply reuse that module in the new service or application that’s going to process electronic bills. Or we might expose the capabilities of the check for duplicates module as a separate service. We have multiple reuse entry points, as well as different levels of granularity through which our software entities can be managed, deployed, built, and much more.
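Continuing the hypothetical sketch from earlier, reusing the duplicate-check module in the new electronic-bill flow might look like this (again, all names are illustrative, and DuplicateCheck and Bill come from the earlier sketch):

```java
// The new consumer depends only on the duplicate-check module's contract,
// not on the rest of the Pay Bill service. Names are illustrative.
class ElectronicBillProcessor {
    private final DuplicateCheck duplicates; // reused module, same contract

    ElectronicBillProcessor(DuplicateCheck duplicates) {
        this.duplicates = duplicates;
    }

    void receive(Bill bill) {
        if (duplicates.isDuplicate(bill)) {
            return; // drop electronically delivered duplicates before saving
        }
        save(bill); // persist and queue the bill for processing
    }

    private void save(Bill bill) {
        // persistence omitted in this sketch
    }
}
```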

To Conclude

My point here isn’t to debate the original design nor the decisions made to reuse the check for duplicates functionality in another service or application. Certainly, debating architectural and design decisions is a favorite pastime of developers. There are a variety of different ways that we can support new requirements as they emerge. Some are better than others and all are contextual.

The gist of my point is that architecture all the way down gives us options, and these options help maintain architectural integrity. They increase our options when making decisions and allow our system to accommodate unforeseen architectural shifts. I can modify an existing application or service to give me what I need. Or I can reuse an existing module that’s already available for me. Or I can compose a new service from an existing set of modules. Or I can break apart existing modules to create new modules that result in new services. And I can do a lot of this refactoring without significant impact on the existing code. This was an important point illustrated in the series of posts on Applied Modularity.

In other words, architecture all the way down helps increase architectural agility, and modularity is a key ingredient.

The Secret Sauce

Filed Under Agile, Development | Leave a Comment 

All too often, software process improvement initiatives fail. In a recent post discussing SEMAT, Ralph Johnson provided some words of wisdom that serve as a wonderful guide to any team about to embark on that much vaunted software process improvement initiative.

The state of the practice in software development is pretty dismal. Some groups do a great job, but most do not.  As I tell the students in my software engineering course, if you manage requirements, make sure the developers talk to each other, release working code regularly, have some sort of a systematic testing process, use build and version control tools, and periodically stop and see how you are doing and how you can improve, you will be better than 90% of the groups out there. Of course, I could be exaggerating.  Maybe it is only better than 75%.

I suppose that pretty much sums it up! Amazing how difficult we tend to make things though, heh?

Applied Modularity - Retrospectives

Filed Under Agile, Architecture & Design, Java, OSGi | 1 Comment 

We’ve completed our four part series on Applied Modularity, and I wanted to put a final wrap on the posts by highlighting a few things that may not be obvious. First, a brief review on the series.

  • In Part 1, we introduced the sample application and applied the PhysicalLayers pattern to separate our logical layers out into physical layers.
  • In Part 2, we applied the AbstractModules (second refactoring) and AcyclicRelationships (third refactoring) patterns.
  • In Part 3, we applied the SeparateAbstractions (fourth refactoring) pattern.
  • In Part 4, we applied the CollocateException (fifth refactoring), IndependentDeployment (sixth refactoring), and ImplementationFactory (seventh refactoring) patterns.

Through this series of refactorings, we made considerable progress in modularizing the application using a few of the modularity patterns. The diagram at right (click to enlarge) illustrates the progress we made. We started with everything bundled into a single WAR file and wound up with a highly modularized system that satisfied the evolutionary requirements. Aside from the many advantages we spoke about in each post, I want to take a moment to explore a few other thoughts.

A Note On Module Testing

If you’ve explored (and built) the system by getting the code from the Google code repository, you’ll notice that there are a corresponding set of test modules for each module that we’ve created. These can be found in the bin directory (shown at right). Like we do with unit testing, I’ve tried to create a test component for each module in the system. Unfortunately, there’s a flaw in the billtest.jar module.

Similar to unit testing, where we create mocks and stubs to avoid undesirable dependencies, a test module shouldn’t pull in other modules that contain implementation classes. Instead, we should create mocks or stubs to avoid this situation. In other words, a test module should only be dependent on the same set of modules as the module it’s testing. Unfortunately, the billtest.jar module breaks this rule by leveraging the AuditFacade implementations. That means the billtest.jar module is also dependent on the audit1.jar and audit2.jar modules, but the bill.jar module is not. So billtest.jar is really a module integration test, not a module unit test. It could easily be fixed by creating a mock AuditFacade implementation that lived in the billtest.jar module.
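A minimal sketch of such a mock, assuming AuditFacade exposes a single audit operation (the real interface in the series may differ):

```java
// Hypothetical shape of the AuditFacade abstraction from auditspec.jar;
// the real signature in the series may differ.
interface AuditFacade {
    void audit(Bill bill);
}

class Bill { /* bill state omitted */ }

// A mock that lives in billtest.jar, so the test module depends only on
// auditspec.jar -- the same dependency bill.jar itself has.
class MockAuditFacade implements AuditFacade {
    private boolean audited = false;

    @Override
    public void audit(Bill bill) {
        audited = true; // record the interaction instead of delegating to audit1/audit2
    }

    boolean wasAudited() {
        return audited;
    }
}
```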

This begs another question….

How do we keep track of module relationships so that we recognize when something bad like this happens?

Even for small systems, without a module system like OSGi, it can be incredibly challenging.

A Note On Managing Modules

Modularizing a system on a platform that doesn’t support modularity is challenging. Hell, modularizing a system on a platform that does support modularity is challenging! One of the greatest challenges is in managing module dependencies. Tracking the dependencies between modules is really quite difficult.

This is where module systems like OSGi really help by enforcing the declared module dependencies. In plain ole Java today, there is no notion of module so there is nothing to help enforce modularity. And the first unwanted dependency that creeps into our system compromises architectural integrity. This is where JarAnalyzer can be helpful. By incorporating JarAnalyzer into my build script, I’m able to more easily manage the dependencies between modules.

JarAnalyzer has two output formats. The first is a GraphViz compliant dot file that can be easily converted to an image showing module relationships. The image at right (click to enlarge), which includes the test modules, clearly illustrates the problem with the billtest.jar module discussed above.

As can be seen, the bill.jar module has only a single outgoing dependency on the auditspec.jar module. So the module that tests the bill.jar module should not be dependent on any other modules, either. However, if you look at the billtest.jar module, you’ll see that it depends upon the audit1.jar and audit2.jar modules. So instead of using a mock or stub to test the bill.jar module, I got lazy and used the various AuditFacade implementations. Look at a few of the other modules, and you’ll discover that none include additional dependencies beyond the dependencies already present within the modules they test.

The second output format for JarAnalyzer is an html file that provides some key design quality metrics, as well as listing the dependencies among modules. Essentially, it’s a textual view of the same information provided by the visual diagram. I’ve included the Summary header of the JarAnalyzer report for the system below (click to enlarge). You can also browse the complete JarAnalyzer HTML report for the final version of the system. There is also a version that omits the test modules.

Look at the auditspec.jar module. Note that it has 8 incoming dependencies (afferent coupling) and 0 outgoing dependencies (efferent coupling). Its abstractness is 0.67 and instability is 0.00. This is a pretty good sign. Why? Its instability is very low, implying it’s highly resistant to change. It possesses this resistance to change because of the large number of incoming dependencies. Any change to this module may have serious implications (i.e., the ripple effect of change). But because it’s quite abstract, it’s less likely to change than a module with a lot of implementation classes. The Distance for the module is 0.33 (ideal is 0.00), so we’re not far from where we ideally want to be.
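The arithmetic behind those numbers follows directly from the standard formulas; here’s a quick sketch using the values reported for auditspec.jar:

```java
// The Martin metrics referenced above, computed for auditspec.jar.
// Formulas: I = Ce / (Ca + Ce), A = abstract types / total types,
// D = |A + I - 1| (distance from the main sequence).
public class MartinMetrics {
    public static void main(String[] args) {
        int ca = 8;      // afferent coupling: incoming dependencies
        int ce = 0;      // efferent coupling: outgoing dependencies
        double a = 0.67; // abstractness, as reported by JarAnalyzer

        double instability = (double) ce / (ca + ce);    // 0.00: very stable
        double distance = Math.abs(a + instability - 1); // 0.33, ideal is 0.00

        System.out.printf("I = %.2f, D = %.2f%n", instability, distance);
    }
}
```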

In case you’re wondering about all these metrics I’m rambling about, you might want to take a look at the Martin Metrics. In general, without a utility like JarAnalyzer (or a module framework like OSGi), it would have been incredibly difficult to manage the modules composing this system.

A Note on Module Reuse

The reuse/release equivalence principle states that the unit of reuse is the unit of release. Modules are a unit of release, and therefore are a unit of reuse. Naturally, the devil is in the details, and we’re going to discuss these details here.

In Reuse: Is the Dream Dead, I spoke of the tension between reuse and use. That tension is evidently at play here. Earlier versions of the system had coarser-grained modules that were easier to use but more difficult to reuse. As we progressed, we broke these coarser-grained modules out into finer-grained modules, increasing their reusability but decreasing their ease of use. A perfect example of this is the bill.jar module. In the final version, it was quite reusable, since it was only dependent on the auditspec.jar module. However, this came at the price of usability.

To elaborate a bit more: In Part 4, the sixth refactoring, we decoupled the bill.jar and financial.jar modules so the two could be deployed independently (i.e., increase reuse). But the runtime structure still has some dependencies. In order to reuse bill.jar, we need a BillPayer type. While an alternative BillPayer implementation could be created, the existing implementation is the BillPayAdapter in the mediator.jar module, which also has a relationship to the financial.jar module. This means that using the bill.jar module without the mediator.jar and financial.jar modules would require a new consuming module to implement the BillPayer interface.

So what do we do if we want to break this runtime coupling? We should move the pay method on the Bill up to the BillPayAdapter class, and get rid of the BillPayer interface. Now the Bill class has no dependency on the BillPayer interface, but it also can’t make payments. Every action has an equal and opposite reaction, heh?
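Here’s a rough before-and-after sketch of that refactoring, with assumed shapes for Bill and BillPayer (the actual names and signatures in the series’ code may differ):

```java
// Hypothetical before/after sketch; the actual signatures may differ.

// BEFORE: Bill drives payment through the BillPayer abstraction, so reusing
// bill.jar at runtime requires someone to supply a BillPayer implementation.
interface BillPayer {
    void generatePayment(BillBefore bill);
}

class BillBefore {
    private final BillPayer payer;

    BillBefore(BillPayer payer) {
        this.payer = payer;
    }

    void pay() {
        payer.generatePayment(this); // runtime coupling to a BillPayer
    }
}

// AFTER: the pay behavior moves up into the adapter and BillPayer disappears.
// Bill carries no payment dependency at all, but it can no longer pay itself;
// consumers must now invoke the adapter in mediator.jar.
class Bill {
    // bill state and behavior only
}

class BillPayAdapter {
    void pay(Bill bill) {
        // delegate to the financial module to remit the payment
    }
}
```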

A Note on The Build

The build was a key element in helping enforce modularity (note: JarAnalyzer helped me manage module relationships; the build enforced module relationships). Even a framework such as OSGi is only going to manage module relationships at runtime. I talk a bit more about this concept in The Two Faces of Modularity & OSGi, and it’s why we need really good tools that help us design more modular software. It’s our responsibility to craft the modules, and the build is one way to help put in place a system of checks and balances that help enforce modularity before discovering at runtime that one module has a relationship to another. In Part 2, as part of the third refactoring, we refactored our build script to a levelized build. Here’s the before and after build script.

This means that as we build each module, we include only the required modules in the build classpath. This is more easily explained by examining the build script for the final version, where you can clearly see what I’m talking about. Look at line 40. When we build the auditspec.jar module, we include nothing else in the build classpath because the auditspec.jar module doesn’t require anything. Now look at line 60, where we build the audit1.jar module. The auditspec.jar module built in the previous step is included in the classpath. This pattern recurs throughout the remainder of the script. Introducing a module dependency that violates the dependency structure enforced by the build results in a failed build.

A Note on Object Orientation

The way we managed, massaged, and modified module relationships was through OO techniques. By introducing interfaces and abstraction and allocating them to their respective modules, we were able to significantly change the module structure of the system. While we used OO to do this, OO is not a prerequisite. We could just as easily have used other techniques, such as aspects (AOP).

The Final Wrap

If you’ve made it this far through the tutorial, you’ve done well. Overall, this was a pretty lengthy and involved tutorial. In fact, I only touched briefly on all that I really had to say. Yeah, you’ve seen the abridged version here! I think I could pretty easily fill a book with the rest. Hmmm…

I use this same system and examples in quite a few of my talks on architecture and modularity. If you have questions or suggestions, feel free to drop me a line via the comments or send me an e-mail (hint: look on the About Page). Or you can track me down at a conference, as I’m always happy to discuss topics related to modularity, architecture, and agility.

I have one final deliverable. As promised way back in Part 1, I intend to show an OSGi-ified version of the system. That will follow shortly.
