
Language Type Systems

Filed Under Development, Java, Ruby | 3 Comments 

All programming languages have a type system. Typically, we classify these type systems as either static or dynamic. A shift that's taking place is to include type inference engines within a programming language, allowing developers to realize the safety benefits of static typing along with the flexibility and expressiveness benefits of dynamic typing. I talked briefly about these ideas in The New Era of Programming Languages.

For many, though, the whole type system issue comes down to compilation. Statically typed languages require type information because the compiler needs to verify that each type is not used incorrectly throughout the program. Dynamically typed languages typically don't perform this verification at compile time, so type checking is deferred to run-time.

So in general, we typically say that statically typed languages are safer because the compiler catches certain types of errors at compile-time, but dynamically typed languages are more flexible and expressive because we don’t need a bunch of language constructs to get us past the compilation step. A good example of this is an interface in Java. A class needs to implement an interface simply to get past the compiler. At runtime, that interface provides no value. Dynamic languages typically rely on duck typing instead of inheritance. Other interesting aspects of a language’s type system include covariance and contravariance, which are related to how types are ordered within a class hierarchy, and impact how the language deals with return types and method parameters.

But there is another dimension to a language's type system that often goes unnoticed. A language is either strongly or weakly typed. For years, we've recognized Java as a statically typed language, while Ruby's type system is dynamic. Because Java is statically typed, it's natural to assume that it also has a strong type system, and since Ruby is dynamically typed, it's easy to assume it has a weak type system. Not quite so true, however. Time for an example.

Let’s take the same simple program, written in both Java and Ruby. The code in Figure 1 is Java code that attempts to add a String and an int.

public class TypeSystemTest {
	public static void main(String[] args) {
		System.out.println("4" + 2);
	}
}

Figure 1

Interestingly, Java performs an implicit type conversion, the program runs successfully, and the resulting output is seen in Figure 2.

> 42

Figure 2

An identical Ruby program can be seen in Figure 3.

puts "4" + 2

Figure 3

Ruby, however, does not perform an implicit type conversion, and results in a TypeError. This output can be seen in Figure 4.

> TypeSystemTest.rb:1:in `+': can't convert Fixnum into String (TypeError)
	from TypeSystemTest.rb:1

Figure 4

In some cases, Ruby is actually more strongly typed than Java, but the dynamic type system of Ruby delays discovery of the problem until runtime. Java, on the other hand, is a statically typed language that uses implicit type conversion in special situations. This implicit type conversion results in a weak type system, meaning the program can suffer from undesirable side effects if the implicit conversion is not the desired conversion, yet no runtime error results.
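To make that risk concrete, here's a minimal Java sketch (the class and method names are my own, purely illustrative) showing how the left-to-right evaluation of implicit String conversion silently changes a result with no error of any kind:

```java
public class ImplicitConversion {

    static String leftToRight() {
        // "4" + 2 + 2 evaluates left to right: once a String appears,
        // every subsequent operand is converted to String -> "422"
        return "4" + 2 + 2;
    }

    static String additionFirst() {
        // 2 + 2 + "4": integer addition happens before the String
        // conversion -> "44"
        return 2 + 2 + "4";
    }

    public static void main(String[] args) {
        System.out.println(leftToRight());   // prints 422
        System.out.println(additionFirst()); // prints 44
    }
}
```

Neither expression fails to compile and neither fails at runtime, yet the two results differ; the program is wrong only if the programmer expected the other conversion.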

The point here is that while Java is statically typed and Ruby is dynamically typed, we cannot categorically say that Java is safer and Ruby is less safe. The runtime type system has the final say in making that decision.

The Project Date - Kill It!

Filed Under Agile, Development | 9 Comments 

Tom DeMarco recently published an article in IEEE Software suggesting he may have been wrong in stating that metrics are a requirement for successful software development.

He questions his timeless statement, “You can’t control what you can’t measure”, realizing that control is not the most important aspect of a software project.

But there was a small section on the second page of the article that was especially thought provoking. DeMarco makes the following hypothetical statement.

“I have a finish date in mind, and I’m not even going to share it with you. When I come in one day and tell you the project will end in one week, you have to be ready to package up and deliver what you’ve got as the final product.”

Imagine for a moment how this would change the software development landscape. So much of software development is dominated by that single date - the date the project must be finished. It dictates most of what we do, and when we do it. It’s typically one of the first things we know about the project - when we need to be finished. The date drives the project. Every software development team strives to finish by that preconceived date, often making significant compromises along the way. Sadly, these dates are often pulled out of thin air (the CEO’s birthday, perhaps?), yet are written in stone as the date we have to be done.

The date makes agility counter-intuitive and encourages development practices that we know don’t work. Removing the date makes agility intuitive. Seems kinda far-fetched, heh? I’m not so sure. Having thought about this for a while, I’m not convinced it’s too far away from the notion of internal and external releases.

Imagine the profound effect it would have on how we went about developing software if there were no date. Imagine if that date was unknown to us. It could be tomorrow or it could be a year from tomorrow. Imagine how we’d be forced to change how we work. We’d be forced to emphasize software quality throughout the development process. We’d be forced to ensure we always have a functional product. It might go something like this.

We kickstart the project by hosting a meeting with our customers. In this meeting, we establish some high level goals for what the system must do. Basically, a sketch. We’ll call this our initial project plan. The development team is already busy setting up their development infrastructure - the version control repository, the IDE, test harnesses, automated build script, some code inspection tools, the datasources, and the important frameworks we’ll use to develop the system. We don’t have time for waste, so we only do what’s going to provide value.

At the same time, we put together some wireframes and mock-ups, and a week after that first meeting, we share these mock-ups with our customers. They may like them or they may hate them. For those they like, we start implementation. We know we must move quickly, but we must be careful to ensure we have a functional product at all times. If the build fails, we have to fix it. We can’t let quality degrade beneath acceptable levels. We have to be ready at all times to package the system, and make it available to our customer. We have to establish our cadence - our sustainable pace.

It’s obvious that we have to show value early and often. Our customer can demand the system anytime they wish. They can also cancel the project at any time. The only way to ensure the future of the project is to demonstrate value. Intense and frequent collaboration is in our best interest. It’s in the best interest of the project, too!

To afford ourselves the slightest possibility to deliver the system in one week after the customer’s request, we have to test early. Quality must be baked in from day one. We have to ensure we can package and deploy the system quickly. We must have the ability to perform the entire lifecycle on a moment’s notice. We can’t wait to run our load tests, capacity tests, or failover tests. It’s in our best interest to do this as often as we possibly can. To ensure we have the ability to test as often as we’d like, we’ll want to automate as much of this as possible.

In general, we only have time to perform practices that directly contribute to the delivery of software. Estimates? We don’t have time to estimate, nor is there a reason to. The business gives us their highest priority scenarios, and we implement them to the best of our ability. If they like the direction we’re headed, we keep funding. If not, the project’s over. And when the customer feels they have a system that’s going to provide value, they get it when they want it. Requirements signoff? We don’t have time to signoff on requirements and then resist change. In this world, it’s in our best interest to serve the customer, embrace change, and prove that we can respond to their needs quickly.

Doesn’t that sound productive, intuitive…and fun? There are so many fascinating and amazing ways to build upon this scenario. Imagine it for a moment. How do you see it playing out? Good? Bad? Ugly?

When Bad is Good

Filed Under Agile, Development | 1 Comment 

A thought on a dreary Friday. There are two types of software projects - those that go well and those that do not. Typically, we think of those projects that go well as successful, and those that go poorly as failed. But it’s not quite that simple. What matters most is the point at which things go poorly and the point at which things are going well.

I’ve been on projects where we have lots of parties to celebrate our success. We’ve signed off on requirements. Throw a party. We’ve just passed our architectural review. Another party. We’ve just completed coding and testing commences next week. After that, we’re going to demo the system to our customer. Party again. At this point, we might be six, nine, or twelve months into the project.

I’ve been on other projects where we spend a lot of time developing infrastructure code early, and continuously troubleshoot complex problems. We host frequent customer demonstrations, and receive a lot of feedback - not all of it entirely positive. We fight through tough integration challenges, feel like we’re constantly revamping the user interface, and the build fails a few times a week. Testing occurs frequently and is always turning up bugs. There aren’t any parties, and it’s a lot of work. People grow frustrated at times, and friction between team members needs to be carefully monitored. This might go on for six, nine, or twelve months.

I’ve found projects that experience significant challenges early often go one of two ways. They either get cancelled early or wind up delivering some pretty good software. These are good things. Projects that get cancelled early save an organization from making a significant investment in a project that’s going to fail. Win fast. Fail fast. And move on!

Ideally, of course, the project wouldn’t get cancelled and the team would go on to deliver a great piece of software. Even in this situation, the team has fought through significant issues early in the lifecycle, and eventually they discover their rhythm. Development starts to click and progresses at a relatively sustainable pace with few late surprises.

On the other hand, projects that seem to be running smoothly early typically experience significant issues late in the software lifecycle because they’ve delayed attacking high-risk items. This is the worst-case scenario. We’ve now made a huge investment and have crappy software.

All in all, this is one of the most significant benefits of agile practices. They encourage us to write code early, ensure the system is always functional, share the system with users, and a whole lot more. In general, this mitigates a lot of serious risk that goes unnoticed on projects that delay these activities until late in the lifecycle. And it allows us to continuously evaluate the project and determine if it’s wise to proceed or allocate resources differently. In the end, our customers aren’t going to remember how well the project was going six months ago. What they are going to remember is if we were able to deliver a great piece of software.

Agile Transitions - BANG!

Filed Under Agile, Development | 2 Comments 

A while back, I posted a blog summary of David Anderson’s thoughts on enterprise Agile transition initiatives. In general, big process improvement efforts really don’t work all that well. I’ve experienced this in many cases, as well. I touched on it briefly in a blog post where I stated that

Large process improvement efforts typically fail, often resulting in methodology wars that place process improvement efforts above software delivery.

If you’ve ever been part of a large process improvement effort, you’ve probably felt the pain. In Grass Roots Agile, I take a rather developer-centric view in exploring ways to increase agility by injecting agile practices into the development effort.

Rather than adopting an Agile software development process, such as Scrum, XP, or Crystal, in its complete form, injecting individual practices can help ease the pain. Such an approach makes an agile transition easier, less risky, and ultimately more beneficial.

I also posted a follow-up explaining how these practices help agile scale to larger teams. The concept of injecting agile practices to help ease the pain is supported by Big Blue’s agile transition, where Sue McKinney says:

We pushed tackling low-hanging fruit to get the benefit and to attack the major pain points.

I’ve found it tends to work better to incrementally improve how software is delivered by relieving the most significant pain points one practice at a time. But I think Esther Derby sums it up very succinctly in saying:

I’m a bit puzzled by big bang transitions to agile methods.  Since you can’t know how everything will play out, it only makes sense to make incremental change and inspect and adapt as you go.

Gosh! That makes a lot of sense, heh?

Reuse: Is the Dream Dead?

Filed Under Agile, Architecture & Design, Development, OSGi | 23 Comments 

Reuse is software development’s unattainable panacea. The ability to compose systems from reusable elements has long been our Achilles’ heel. We want reuse so badly, yet our failures are spectacular. Almost all major technology trends of the past 20 years have touted reuse as the saving grace. Vendors have sold billions of dollars in software through the broken promise of increased reusability.

What happened? Reuse was supposed to save software development. In the early 90’s, object-orientation promised to save us. It didn’t, so we moved on. In the late 90’s, component-based development promised to save us. It didn’t, and we moved on. Shortly after the turn of the millennium, SOA promised to save us. It didn’t, and we’re moving on. Why is reuse so damn hard?

The problem stems from the following rather simple statement, which is depicted in the diagram:

Maximizing reuse complicates use. (1)

In general, the more reusable we choose to make a software component, the more difficult that same software component is to use. In the extreme, an infinitely reusable component is infinitely difficult to use. Dealing with the tension between reuse and use is a complex issue, and often, we fail. Largely, the problem has to do with dependencies.

NOTE: I use the term component pretty loosely here. In general, the software development community has done a poor job defining the term “component”. Let’s just assume that when I say “component”, I’m talking about a chunk of software. No more, no less.

The Reuse Disclaimer

I recognize that we’ve done a fair job in achieving reuse at certain levels, and we’re much farther along the reuse curve than we were 20 years ago. Today, we have a plethora of frameworks to choose from that aid development. Web frameworks, ORM frameworks, and security frameworks to name just a few. But most of these frameworks are horizontal, not vertical. They address problems related to infrastructure and plumbing code, not business problems. And I want to focus explicitly on vertical reuse, because that’s the unattainable panacea we’ve been struggling with for so long. That’s the broken promise. Why have we struggled to create reusable business components?


Granularity

Granularity is the extent to which a system is broken down into parts. Coarse-grained components tend to be richer in behavior than fine-grained components. Because coarse-grained components do more, they tend to be bigger than fine-grained components. To maximize reuse, we try composing coarse-grained components from fine-grained components. Of course, this results in a lot of dependencies between components, making them more difficult to use. In general, we can say the following:

Coarse-grained components are easier to use, but fine-grained components are more reusable.

Time for an example. Let’s say we’re creating a component that processes health insurance claims. Let’s keep the business process relatively simple here to maintain our sanity. There are four steps in the process. First, the system is fed the claim information. Second, the system checks to make sure it’s not a duplicate submission. Third, the system reprices the claim based on HMO and PPO agreements. Fourth, the system remits payment. A coarse-grained component would perform all four of these steps.

In doing this, we’ve made it easy to use since we only need to invoke one operation to complete the whole process. But it’s also more difficult to reuse only a portion of this process, such as the remit payment code. The logical solution is to create four fine-grained components (one for each step in the process) and one coarse-grained component composed of the four others that pulls everything together. The fine-grained components make things more reusable, but are also more difficult to use since we have to do more to pull them all together to perform a unit of work.
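The claims example can be sketched in Java. This is a hypothetical toy, assuming a four-step pipeline; the class names and the string-tagging "business logic" are purely illustrative, not a real claims system:

```java
// One fine-grained component per step in the claims process.
interface ClaimStep {
    String apply(String claim);
}

class SubmitClaim implements ClaimStep {
    public String apply(String claim) { return claim + ":submitted"; }
}

class DuplicateCheck implements ClaimStep {
    public String apply(String claim) { return claim + ":checked"; }
}

class RepriceClaim implements ClaimStep {
    public String apply(String claim) { return claim + ":repriced"; }
}

class RemitPayment implements ClaimStep {
    public String apply(String claim) { return claim + ":remitted"; }
}

// The coarse-grained component composes the four fine-grained ones.
// One call runs the entire process - easy to use - but the caller who
// wants only one step must now understand and wire the pieces directly.
class ClaimProcessor {
    private final ClaimStep[] steps = {
        new SubmitClaim(), new DuplicateCheck(),
        new RepriceClaim(), new RemitPayment()
    };

    public String process(String claim) {
        for (ClaimStep step : steps) {
            claim = step.apply(claim);
        }
        return claim;
    }
}
```

Using `new ClaimProcessor().process(claim)` is a single invocation, while reusing only the payment step means depending on `RemitPayment` directly and doing the surrounding wiring yourself; that is precisely the use-versus-reuse trade.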


Weight

Weight is the extent to which a component depends on its environment. A heavyweight component depends on its operating environment, while a lightweight component avoids these dependencies. When creating a component that runs in multiple environments, we’re forced to move the environment-specific dependencies (i.e. context dependencies) from code to configuration. This makes the component more reusable, but it’s also a bit more difficult to use, since the component must be configured for each environment.

Designing and configuring a lightweight component is more difficult than simply dropping in a component programmed to operate in that environment. In general, we can say the following:

Lightweight components are more reusable, but heavyweight components are easier to use.

Let’s elaborate using the example above, where the solution was to create one coarse-grained component composed of four fine-grained components. If each of these components only needs to run within a single application in a single operating environment, we can encapsulate all of this environmental code into each component, making each heavyweight. But if we want to reuse these components across applications and operating environments, then we have to move this code outside of the component and ensure it can be configured for each environment in which we want it to operate.
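The weight trade-off can also be sketched in Java. Again a hypothetical toy, assuming a repository that needs an environment-specific connection URL; the class names and URL strings are illustrative only:

```java
// Heavyweight: the environment dependency is baked into the component.
// Trivial to drop into this one environment, unusable anywhere else.
class HeavyweightRepository {
    private final String url = "jdbc:prod-db://claims";

    public String describe() { return "connected to " + url; }
}

// Lightweight: the context dependency has moved out to configuration.
// Reusable across environments, but every deployment must now supply
// a URL before the component can be used at all.
class LightweightRepository {
    private final String url;

    public LightweightRepository(String url) { this.url = url; }

    public String describe() { return "connected to " + url; }
}
```

The heavyweight version is used with a bare `new HeavyweightRepository()`, while the lightweight version forces each environment to provide its own configuration, e.g. `new LightweightRepository("jdbc:test-db://claims")` - more reusable, slightly harder to use.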

Reuse or Use

The challenge we run into when attempting to create a highly reusable component is to manage the tension between reusability and usability. In our example above, breaking out the coarse-grained component into fine-grained components makes it more difficult to use each of the resulting fine-grained components. Likewise, creating a lightweight component makes using the component more difficult, since the component must be configured each time it is used.

Fine-grained components have more component dependencies and lightweight components have more context dependencies. Each makes a component more reusable, but also more difficult to use. The key is to strike a balance, and that is a topic for another day not too far away.

(1) This statement is a derivation of Clemens Szyperski’s statement in Component Software: Beyond Object-Oriented Programming - “Maximizing reuse minimizes use.”