Source Code is a Corporate Asset

Filed Under Agile, Development | 5 Comments 

The goal of software development is to - SURPRISE! - deliver software. Fancy architecture diagrams, detailed requirements documentation, and comprehensive project plans mean little if we can’t deliver the software that realizes them.

The source code is the detailed specification for a software system, and it is the only specification guaranteed to offer an accurate representation of the system. It is also the only artifact we must create in order to deliver a software system. Assuming an organization views the software system as an asset, then so too is the source code. Source code is a corporate asset that must be carefully guarded!

Guarding the Source

High quality source code is important. A malleable codebase is easier to change, eases maintenance, and speeds delivery. An organization may have a mature change management policy in place, but an application that resists change due to mangled source or architectural inflexibility cannot be modified and delivered quickly. A fully tested codebase is also important. Tests allow us to verify correctness, identify defects, and lend us the courage to make changes by providing a safety net that identifies the ripple effect of change. Finally, a functional codebase is important. A codebase free of compile and test errors can be delivered frequently, allowing us to garner important customer feedback that helps ensure we never stray far from our intended destination.

In other words, if we develop a codebase that is malleable, fully tested, and functionally complete, our team possesses the ability to continuously deliver software in the face of change. By emphasizing creation, growth, verification, and deployment of our code from initial project inception to final solution delivery, we are developing an inherently agile software development environment. At the center of this process is the source code, and we would do well to guard this source code with artifacts that help move it along this path. Ideally, these artifacts are executable.

Executable Artifacts & Fidelity

The traditional software lifecycle consists of high level activities, each producing an artifact fed into subsequent phases. These artifacts are interpreted by humans and manually translated into some other software artifact. Eventually, we get to the source code. This approach lacks fidelity, and valuable information is lost during each translation. It’s a bit like Chinese Whispers, and is fraught with cumulative error and inaccuracies.

Executable artifacts guarantee fidelity. If an artifact becomes dated and falls out of sync with the source code, the artifact breaks. Executable artifacts offer a real-time, up-to-date glimpse into the application development effort, guard the source code in a variety of ways, and help increase project transparency. Here are some other advantages of executable artifacts:

  • They are an accurate reflection of the code at the point they were run. For most teams, ensuring documentation stays synchronized with the codebase is a daunting task. Because executable artifacts are run against the source code, we can always be certain that they reflect the current state of the source.
  • They can be run at any time, with little overhead. After initial setup of the executable artifact, they can be easily run against the source code, usually by issuing a simple command.
  • They encourage early and frequent delivery. Since we have a functional codebase, we can garner important feedback throughout the development lifecycle.
  • They can be automated. Automating an executable artifact means little manual intervention is required. Additionally, automation reduces the likelihood of the errors and omissions that are common with manual processes.
  • They maintain a reliable history. An executable artifact, run at frequent and regular intervals, produces output that can be analyzed over time to help the team identify trends in development.

Let’s consider a simple development scenario.

The Development Scenario

As a project commences, the development team meets with the customer to elicit high level requirements. In lieu of producing a detailed software requirements specification and obtaining signoff from the customer before moving on to development, the development team adopts a strategy to deliver running, tested software on a frequent basis. A business analyst develops a suite of acceptance tests using a tool such as FitNesse or Selenium. Simultaneously, the developers create an automated and repeatable build process scheduled to run hourly. Incorporating execution of the acceptance tests into the build process ensures that the application always satisfies the current set of acceptance tests, which serve as a set of executable requirements for the application development effort.
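
To make this concrete, here is a minimal sketch of what one such executable acceptance test might look like, written against the Selenium WebDriver Java API with JUnit. The application URL, element IDs, and expected page title are hypothetical; the point is that the requirement ("searching the catalog returns results") is expressed as something the build can run.

    import org.junit.Assert;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.htmlunit.HtmlUnitDriver;

    // Hypothetical executable requirement: searching the product catalog
    // returns a results page. The hourly build runs this with the rest of
    // the acceptance suite, so a regression breaks the build immediately.
    public class ProductSearchAcceptanceTest {

        @Test
        public void searchingTheCatalogShowsResults() {
            WebDriver driver = new HtmlUnitDriver(); // headless browser suits an automated build
            try {
                driver.get("http://localhost:8080/catalog");            // hypothetical application URL
                driver.findElement(By.id("query")).sendKeys("widget");
                driver.findElement(By.id("search")).click();
                Assert.assertTrue(driver.getTitle().contains("Search Results"));
            } finally {
                driver.quit();
            }
        }
    }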

The priority of the development team is to ensure the application always remains in a functional state by passing all acceptance tests. Because the application is always functional, the development team can deliver the software into the hands of the customer on a frequent basis. To ensure they maintain a high degree of customer feedback, the development team also hosts regular application demonstrations for their customers. As customers participate in these demonstrations or interact with the application, the feedback they offer can be managed via an issues backlog that serves as a valuable input artifact to subsequent iteration planning sessions. As change trickles in and new features are identified, additional acceptance tests are added to the suite.

To ensure the application architecture maintains a high degree of resiliency and adaptability, the team places constant focus on growing their unit test suite and makes certain all unit tests are executed as part of the build process. By ensuring unit tests verify classes and methods in isolation from outside forces, application components are naturally decoupled. To further enhance application architecture, the development team generates a number of software artifacts as part of the build process. Class and component diagrams in conjunction with dependency metrics offer valuable feedback that leads the development team toward targeted refactoring efforts to maintain a high degree of architectural and design resiliency.
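
As an illustration of testing in isolation, here is a minimal sketch using JUnit and a Mockito mock. The InvoiceService and TaxCalculator names are hypothetical; because the collaborator is mocked, the test exercises only one class, which naturally pushes the design toward small, decoupled components.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    // Hypothetical unit test: InvoiceService is verified in isolation by
    // replacing its TaxCalculator collaborator with a mock.
    public class InvoiceServiceTest {

        interface TaxCalculator {
            double taxFor(double amount);
        }

        static class InvoiceService {
            private final TaxCalculator calculator;

            InvoiceService(TaxCalculator calculator) {
                this.calculator = calculator;
            }

            double total(double amount) {
                return amount + calculator.taxFor(amount);
            }
        }

        @Test
        public void totalIncludesTax() {
            TaxCalculator calculator = mock(TaxCalculator.class);
            when(calculator.taxFor(100.0)).thenReturn(7.0);

            assertEquals(107.0, new InvoiceService(calculator).total(100.0), 0.001);
        }
    }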

Eventually, the build process updates a project dashboard that includes important project metrics, including the number of tested features supported in the current software build. All project participants, including the software development team, customer team, and management team, have access to the project dashboard via a URL, ensuring the team always knows the current state of the application. The consistency and repeatability of this micro process allows the team to identify important trends in development, such as development velocity in terms of the number of tested features delivered. They are able to develop burndown charts, backed by historical evidence, that help the team provide more reliable estimates.

In this development scenario, executable artifacts serve as a powerful catalyst for other important practices and enable another key aspect of frequent delivery: because the artifacts are executable, they can be automated. But at the center is software development’s most important artifact - the source code. Once a team begins guarding the source with executable artifacts, a virtuous, self-reinforcing, and remarkably powerful process emerges.

As you consider embarking on a software process improvement initiative, begin planning your agile transition, or simply kick-off your next project, reflect for a moment on the ultimate goal. The purpose of any software development effort is software delivery, not software process improvement, documenting requirements, or creating architecture documents. The only artifact required to deliver software is the source code, and all artifacts should feed this requirement. It’s imperative that we treat source code as a corporate asset, and guard it as closely as possible. While other artifacts are inevitably necessary, executable artifacts guarantee fidelity with the source code, and can prove its quality, resiliency and functionality.

Mac Migration

Filed Under General, Technology | Leave a Comment 

Yeah…check out that battery life. That’s not 7 minutes and 26 seconds, that’s 7 hours and 26 minutes. Last weekend, I migrated to a new MacBook Pro.

Starting the Migration

Before migrating, I had some questions about the migration options. I knew about the OS X Migration Assistant, but I also create daily backups to a Super Duper drive. A reliable source told me I should be able to simply restore my new MBP from the backup of my old MBP. He’d done it and was up and running in about an hour. But I was still a tad skeptical, since the old MBP was built atop older hardware and, I assumed, had to have different drivers. Specifically, it was a 2.0 GHz Core Duo with 2 GB RAM, without the multi-touch trackpad found on the new machine. For reference, the new MBP is a 2.53 GHz Core 2 Duo with 4 GB of RAM. Both were of the 15″ variety. I just wasn’t confident about compatibility between two machines with such different birth dates.

I started by setting up the new MBP, but chose not to run Migration Assistant. Instead, I figured I’d try booting my new MBP from the Super Duper drive. If that worked, I’d use the Super Duper drive to set up my new MBP. If not, I’d run the Migration Assistant. I’d booted my old MBP this way by simply holding down the option key upon startup and then selecting the drive I wanted to boot from. Upon trying this with my new MBP, the only bootable disk presented to me was the MBP hard drive. This concerned me, and I couldn’t bring myself to try restoring from the Super Duper drive. If I couldn’t get it to boot from the Super Duper drive, would restoring cause similar issues when booting? I was concerned, so I opted (i.e. chickened out) to use the Migration Assistant. Here’s that story.

Migration Assistant - Pass 1

As I mentioned, I chose to set up my new MBP while foregoing the option to run Migration Assistant at the same time as setup. This was a mistake. Once I set up the new user account on my new MBP, use of the Migration Assistant was problematic. It worked, just not the way I wanted it to. If I wanted all of my preferences, applications, settings, and data, I needed to copy the user account from my old MBP. Unfortunately, I couldn’t copy that on top of the new user account I had just created on the new MBP. At least, I didn’t know how. Instead, the Migration Assistant created a new user that represented my old MBP account. That’s not what I wanted. I wasn’t really sure how to merge two user accounts, and since this was a new machine, I didn’t want a hack right out of the box. So I decided it was time to start from square one.

Installing OS X

I pulled out the OS X install disk and did an Erase and Install of OS X on my new MBP. This took about an hour and put me back to the factory default settings, so that when I turned on the machine, it’d walk me through the setup again. Bingo. Worked like a charm. Now, when booting my new MBP, I selected the option to transfer files while performing the first-time setup. Because it asks to do this before setting up any user accounts on the new machine, I figured it would just clone my old MBP onto my new MBP. And it did.

Migration Assistant - Pass 2

During setup, the Migration Assistant prompted me to connect the two machines via FireWire. Because I was using an external drive connected via FireWire on my old machine (that was the Super Duper drive), I had a FireWire cable handy. I connected the two machines, opted for the default settings in Migration Assistant, and it started the copy. The copy took roughly two hours, and when it was done, I had a complete clone of my old MBP on my new MBP. It worked like a charm.

Everything appears the same on my new MBP as it did on the old MBP, right down to the wallpaper on my desktop. All the applications (at least those I’ve tried to this point) were copied over, and all are accessible. I didn’t have to re-enter any license keys. Didn’t have to reinstall any Firefox plug-ins. Nothing! iTunes even worked, and my library was copied over perfectly. All I had to do upon starting iTunes was authorize my new MBP. While it’s possible restoring from the Super Duper drive would have also worked, the Migration Assistant worked very well for me. But I could have saved a bit of headache had I opted to run the Migration Assistant the first time around.

Overall the migration took about 5 hours. But had I run the Migration Assistant the first time through, I estimate it would have taken about two hours. As interesting developments unfold, I’ll update the Miscellaneous Notes section below. Right now, I’m working on my new MBP.

Miscellaneous Notes

  • After running Migration Assistant, I ran a software update. I’m glad I did. I found that there were a few applications that weren’t copied over (or at least updated when copied). iTunes was one of them. I can’t imagine the mess I might have had if I didn’t upgrade to iTunes 8.2 before connecting my iPhone, especially since iTunes 8.2 is required for iPhone 3.0 software which I had just upgraded using my old MBP.
  • I’m still getting acclimated to the multi-touch trackpad. I had grown very accustomed to resting my thumb on the clicking device on my old MBP. Now when I do that on the new multi-touch trackpad, it thinks I’m trying to give it a gesture of some sort.
  • When installing OS X, I was forewarned that I might need to reinstall some of the applications (e.g., iLife). This never materialized, since the migration moved those applications over from my old MBP.
  • The new MBP uses a Mini DisplayPort for an external monitor. The old MBP used a DVI port. So I had to purchase a new adapter because none was included “in the box.” I bought the wrong one. I thought (silly me) that I could use a Mini DisplayPort to DVI adapter and then chain it to the DVI-to-VGA adapter from my old MBP. Not so. The pin configuration for the DVI connector was slightly different.
  • I oftentimes work with my MBP on my lap. It’s amazing how much cooler the new MBP is than the old one. The old MBP got very hot. The newer one…not at all. I’m guessing that’s because the new MBP has the battery encased within its solid aluminum body. But I’m not sure…I just know it runs much cooler to the touch.
  • Even after installing OS X, running through the Migration Assistant, wiping out my Super Duper drive completely, and creating a new backup of my new MBP, I still cannot boot from that drive. I do not yet know why. I’ll have to troubleshoot.
  • I love the new magnetized mechanism that holds the lid closed. It’s aesthetically pleasing. However, I don’t like the fingerprints that accumulate at the top of the display.
  • I’ve discovered that my scheduled Super Duper backups haven’t run for the past week. Strange. I tried initiating the scheduled copy manually and received an error stating it can’t find Macintosh HD. Wow. I deleted that scheduled copy and created a new one, and the copy appears to be executing successfully.
  • As I mentioned, I wasn’t able to boot my new MBP from my Super Duper drive. So I was curious. I tried booting my old MBP from the Super Duper backup I created from my new MBP. The old MBP booted up fine, and I was able to see all the application upgrades and file changes I had made on my new MBP. That’s cool!
  • I find it absolutely fascinating that I’m able to boot my old MBP using the Super Duper backup of my new MBP. The old MBP must have different drivers and hardware. I don’t know the details surrounding OS X architecture, but it must have a cool abstraction layer somewhere that deals with this.

The Two Faces of Modularity & OSGi

Filed Under Agile, Architecture & Design, Development, Java, OSGi | 13 Comments 

There are two aspects to OSGi - the OSGi runtime model and the OSGi development model. Today, emphasis is on the runtime model. But eventually, the importance of the development model will take center stage.

The Runtime Model

The runtime model includes support for the dynamic deployment of bundles, different versions of bundles, dependency resolution, enforcement of module boundaries, and the overall dynamism of the runtime environment. The runtime model is relatively mature, and many vendors are leveraging OSGi to take advantage of the runtime capabilities resulting from increased modularity. To an extent, this allows the enterprise to realize the advantages of OSGi without knowing much about it. Faster application startup times and platform adaptability are two advantages organizations will realize as vendors bake OSGi into their products.
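
For a feel of the runtime model, here is a minimal sketch of a bundle activator. The framework drives the lifecycle by calling start() and stop(), and a running bundle can install and start other bundles on the fly; the bundle location used below is hypothetical.

    import org.osgi.framework.Bundle;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Sketch of the OSGi runtime model: the framework invokes start()/stop()
    // as this bundle's lifecycle changes, and other bundles can be installed,
    // resolved, and started dynamically while the application keeps running.
    public class ReportingActivator implements BundleActivator {

        public void start(BundleContext context) throws Exception {
            // Hypothetical bundle location; in practice a provisioning agent or
            // management console would typically drive dynamic deployment.
            Bundle reporting = context.installBundle("file:/opt/app/bundles/reporting-1.0.0.jar");
            reporting.start();
        }

        public void stop(BundleContext context) throws Exception {
            // Any services this bundle registered are unregistered by the framework on stop.
        }
    }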

While there is some dispute over whether vendors will expose the virtues of OSGi to the enterprise, eventually they will (SpringSource dm Server currently does). They will because the enterprise stands to gain considerably from a more modular architecture. There is benefit in a world void of classpath hell and monolithic applications. There is benefit in modularity, and eventually the enterprise will ask for OSGi. In the end, the enterprise will get what they’re asking for, and when they do, the development model will become very relevant.

The Development Model

The development model has two facets - the programming model and the design paradigm - surrounding how organizations will use OSGi to build more modular applications. The programming model challenge is already being addressed with technologies such as Spring DM, iPOJO, and OSGi Declarative Services, where developers are able to tap into the capabilities of OSGi without worrying about the OSGi programming model. These frameworks help encapsulate dependencies on the OSGi API so your code doesn’t have to talk directly to the API. The separation of concerns achieved through these frameworks ensures Java classes remain POJOs that aren’t dependent on the OSGi framework. This makes programming and testing much easier.
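
As a small sketch of that separation of concerns (with hypothetical names): the service below would be published into the service registry by a Declarative Services component description or Spring DM configuration rather than through the OSGi API, so neither type imports anything from org.osgi.* and both remain plain, testable Java objects.

    // GreetingService.java - published into the OSGi service registry by a
    // Declarative Services component description or Spring DM configuration
    // (not shown). Nothing here imports org.osgi.*, so the classes stay POJOs.
    public interface GreetingService {
        String greet(String name);
    }

    class DefaultGreetingService implements GreetingService {
        public String greet(String name) {
            return "Hello, " + name + "!";
        }
    }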

But the design paradigm must also be addressed. How does an organization create a more modular architecture? What is the right granularity for a module? How heavily dependent should modules be on each other? How do we minimize module dependencies? How do we break apart larger modules into a smaller set of more cohesive modules? When do we do this? These, among others, are the important architectural and design questions that surround OSGi in the enterprise. Let’s take two different examples of technologies that prove the pending relevance of the OSGi development model - object-oriented (OO) programming and EJB.

In the early 1990’s, OO was touted as the savior. Development teams would be able to build systems by composing reusable objects. This promised significantly reduced time-to-market and higher quality software (we’ve never heard that before, or since, have we?). It was never realized. There were a few reasons for this. Objects are too granular to serve as the foundation of reuse. Development teams also had difficulty grasping and applying OO concepts correctly. Deep inheritance hierarchies laden with base classes rich in functionality contributed to poorly designed systems. In general, OO was an early failure.

The runtime capabilities of OO languages provided features such as polymorphism and dynamic binding, and developers were able to easily understand many aspects of the programming model. Using dot notation to invoke methods and defining private member variables were trivial concepts. But it took a long time for us to understand how to design good programs using OO. What we accept today as simple truths surrounding OO design (“favor object composition over object inheritance” and “program to an interface, not an implementation”) were unknown, or at least a mystery to us, 15 years ago.

Enterprise Java Beans (EJB), and especially entity beans, were presented as part of Java EE as a way to componentize business applications. The runtime capabilities of EJB were very alluring - transactions, persistence, security, transparency, etc. - and baked directly into the platform. Unfortunately, there were two glaring problems. The development model was complex and it was not well understood.

It was a number of years ago, but I recall vividly my first experience with EJB. I arrived on the team mid-way through their development effort. At that point, they had more than 100 entity beans, the localhost environment took more than four hours to start up, and problems were rampant. I stuck around for three weeks before I got the hell out. The project wound up getting cancelled. Developers were able to easily understand the EJB programming model (especially with all the code generation wizards), but lacked the design wisdom to use EJB effectively.

These lessons serve as examples of the difficult road that lies ahead for OSGi, and specifically, modularity on the Java platform. If the development model isn’t understood, with principles and patterns that guide how developers leverage the technology, the benefits of the runtime model will not be realized.

I’m an advocate for OSGi, and firmly believe that modularity is a key component of agile architecture. There is a need for modularity on the Java platform, especially in developing large enterprise software systems. But if we do not begin to understand how to design more modular applications today, we’ll face significant challenges when platform support for modularity arrives.

What We Should Do Right Now

It’s important that development teams start modularizing their applications, even if they aren’t deploying to an OSGi runtime. How can we do this? We know a lot about the OSGi runtime model. It’s likely the platform we’re using is leveraging the runtime model internally, even if it’s not exposed to us. We know the unit of modularity is the JAR file, and we can start modularizing our applications today by emphasizing the JAR file as the unit of modularity.

I provide some insight into how we should go about doing this in my post On SOLID Principles & Modularity. Of course, these are simply top level concepts. There are numerous principles and patterns that provide additional guidance, and I hope to share many of these with you over the coming months. Out of this, I’m hopeful that tools will continue to evolve that aid the development of modular software - for example, tools that provide important refactoring capabilities to help improve modularity.

Of course, the intent of modularizing our applications today is not so that we are able to take advantage of OSGi upon its arrival in the enterprise. That’s simply a positive side effect. The real value is the modular architecture that results.

The New Era of Programming Languages

Filed Under .Net, Development, Java, Platforms | 5 Comments 

Burton Group has just published the overview I authored titled The New Era of Programming Languages. This is a timely document. The TIOBE Programming Index shows an increasing mix of different language types that are popular today. Also included is a graph that illustrates the shrinking gap between the use of dynamically and statically typed languages.

At JavaOne a couple of weeks ago, I attended numerous sessions that discussed alternative languages on the JVM. In a session hosted by Brian Goetz, he talked about the renaissance JVM and the more than 200 languages it currently hosts. There were also sessions on Clojure, Ruby, and Scala, along with a script bowl where Jython (represented by Frank Wierzbicki), Groovy (represented by Guillaume Laforge), Clojure (represented by Rich Hickey), and Scala (represented by Dick Wall) squared off against one another.

While the clear crowd favorite at the script bowl was Groovy, it was apparent that each language stood out in separate ways - Jython as a form of “executable pseudocode”, Groovy with its compile-time metaprogramming capabilities, Clojure as a dialect of Lisp with its powerful multi-threading and concurrency capabilities, and Scala with its Java-like syntax and type inference capabilities. And all had great integration with the JVM, along with the ability to leverage existing Java libraries.

Here’s an excerpt from the Burton Group report that walks the timeline of language evolution as depicted in the diagram above. It’s a walk through history, and offers a perspective on the future of languages.

The Birth of Language

The first modern programming language is hard to identify, but historians trace the profession of programming back to Ada Lovelace, a mathematician often credited with creating the world’s first computer program in the mid-1800’s. In these early days of computing, languages provided no abstraction from the computer hardware on which the programs would run, and hardware restrictions often defined the language. These earliest computing machines had fixed programs, and changing how these machines behaved was a labor intensive process that involved redesigning the hardware in conjunction with the program it would run.

In the 1940’s, general-purpose computers began to emerge that were able to store and load programs in memory. Creating programs for these computers required developers to use first-generation machine code languages. Eventually, second-generation languages, such as Assembly, emerged and provided a symbolic representation for the numeric machine code. Regardless, each language was low-level and cryptic. Developing even trivial programs was error-prone, required a great deal of intellectual ability, and generally took a long time.

The Compiler Era

The 1950’s gave rise to the first modern programming languages—Fortran, LISP, and COBOL—that represent the ancestors of the languages in widespread use today. Each was a higher-level, third-generation language that abstracted away the underlying complexities of the hardware environment and allowed developers to create programs using a more consumable and understandable language syntax. Though languages were greatly simplified, adoption of these languages was slow because developers wanted assurance that the performance of programs created using these languages was comparable to that of assembly language. As compilers were optimized, adoption of higher-level programming languages progressed.

Even at this early stage of language evolution, language capabilities diverged with an emphasis on solving specific types of programming problems. COBOL and Fortran dominated business and scientific computing, respectively. LISP prospered in academia. These early languages, and their emphasis on specialization for solving certain types of programming problems, were an indication of the language evolution to come.

The Paradigm Era

Language evolution experienced significant innovation in the 1960’s and 1970’s, and many of the major paradigms in use today formed in this period. Software systems were increasing in size, and development was incredibly complex. Maintaining those systems presented additional challenges. Language designers and developers were seeking ways to make complex programming tasks easier by raising the level of abstraction. Discussion of the effect of language on software design began to surface, and the use of the GOTO statement and the advantages of structured programming were serious topics of debate.

A multitude of languages emerged that supported these new paradigms. Object-oriented programming was born when the Simula programming language was created, and eventually Smalltalk would surface as the first pure dynamically typed, object-oriented language. C and Pascal were created in this era, as was Structured Query Language (SQL). The functional language ML was also invented. A major shift was under way to make programs easier to develop and maintain through new language enhancements and programming paradigms.

The 1980’s were a period of far less innovation while emphasis turned toward paradigm and language maturation. Languages such as C++, invented in 1979 and originally called C with Classes, emerged that brought the advantage of object-oriented programming to a language that was strong in systems programming. Modular languages, such as Modula and Ada, began to emerge that helped in developing large-scale systems. Leveraging the capabilities of advanced computer architecture led to compiler advancements that were a precursor to managed code environments.

The Productivity Era

Without question, the early 1990’s were driven by a motivation to increase developer productivity. Managed runtime environments emerged that removed the burden of memory allocation and deallocation from the developer. Advanced programming environments, such as PowerBuilder, provided an increase in productivity by combining the language and integrated development environment (IDE) into a consolidated product suite. Attempts were made to push languages down the stack, and attention shifted to advanced tooling and frameworks that increased developer productivity.

In the mid-1990’s, the World Wide Web (WWW) popularized the use of the Internet as a massively scalable hypermedia system for the easy exchange of information. Soon thereafter, a nascent technology was integrated into the omnipresent Navigator browser. Java technology, and applets specifically, had been officially unveiled to the world as a language whose programs could execute atop a managed environment in a plethora of operating environments.

Although the hype surrounding applets fizzled out shortly thereafter, the write once, run anywhere (WORA) promise of Java moved to the server side as dynamic web applications began to increase in popularity. The portability of Java applications was realized through a Java Virtual Machine (JVM) that managed application execution. The language was also far simpler than C and C++, because the JVM provided complex memory management capabilities. What was once a complex task - creating programs that ran on disparate platforms - now required little effort. The simplicity of the language, combined with the portability of applications, was a force that would affect computing for the next decade.

The ecosystem surrounding Java technology flourished, and tools and frameworks emerged that made Java programming tasks much easier. Shortly after the turn of the millennium, .NET arrived along with its Common Language Runtime (CLR), and platform support for programs written in multiple languages was a reality. Initially, many scoffed at the need for a platform supporting multiple languages. Instead, they adopted general-purpose languages and accompanying tools, frameworks, and managed runtime environments, believing they offered the greatest productivity advantages. Even though a multitude of languages existed, organizations rejected many of them in favor of Java and C#. However, platform support for multiple languages proved to eliminate a significant barrier to language adoption.

Eventually, developers discovered that frameworks and tools were increasing in complexity and becoming difficult to work with. In many ways, these frameworks and tools were hindering productivity, and the advantages of using a single general-purpose language were called into question. Developers were growing frustrated. Though frameworks and tools might exist to aid a task, learning the framework or tool was as daunting as learning a new language altogether. As the complexity of frameworks and tools continued to increase, developers sought alternative languages that were easier and more productive.

The overview goes on to talk about the present day and the postmodern era. The Java and .NET platforms dominate enterprise development, and each supports a variety of languages. Platform support has reduced the barrier to entry. Subsequently, a shift is taking place as developers look for alternative languages (instead of frameworks, for example) that increase productivity and make programming tasks easier.

The postmodern era recognizes that multiple languages are a fact of life. Language evolution will continue, and new languages will emerge that blend aspects from multiple paradigms. The strong distinction made between static and dynamic languages will disappear, and metaprogramming will become more mainstream. Even compilers will change, as developers gain more control, and are able to take advantage of compilers just as they do APIs today.

Without question, we live in interesting times, especially if you’re a language geek.

Agile Architecture Requires Modularity

Filed Under Agile, Architecture & Design, OSGi | 11 Comments 

A few weeks ago, I presented my view of agile architecture, and followed that up with a post on DZone that presented two aspects of agile architecture: a process aspect, which is temporal, and a structural aspect. Here, I elaborate on both.

Temporal Aspect

The temporal aspect of architecture pertains to when architectural decisions are made. It involves making sound and timely decisions throughout the software development lifecycle. In some cases, it means admitting that an architectural mistake was made and that the team needs to modify the architecture.  On the other hand, traditional architecture involves spending significant time early in the project defining the architecture. Once the vision has been established, teams tend to resist architectural change throughout the lifecycle. Traditional architecture is done up front, whereas agile architecture is done throughout.

Of course, with agile architecture, time is still spent early in the development effort establishing architectural vision. But effort is also devoted to architecture throughout the lifecycle, such as performing architecture proofs and spikes that verify architectural decisions. The temporal aspect of agile architecture is not chaotic architecture. Nor is it accidental architecture, where the architecture simply evolves without vision. Instead, the emphasis is on proving the architectural decisions that are made as soon as possible, while also embracing architectural change throughout the development lifecycle.

Agile architecture does not significantly impact what an architect does, though I admit there are various architectural practices that are heavyweight and wasteful. Alas, this is a topic for another day. In general, while agile architecture doesn’t significantly impact what an architect does, it does impact when the architect does it.

Structural Aspect

The structural aspect of agile architecture pertains to the flexibility of the architecture itself. Simply put, how easy is it to make an architectural change to the system? Agile architecture emphasizes the structural resiliency of the system so that change is possible. This involves using the right patterns, establishing the right extension points within the system, abstracting away areas of intense complexity, and more.

For large systems, we cannot manage architecture at the code level. We need higher level units, and modularity is a critical ingredient of agile architecture. Understanding and managing dependencies between those units is critical to accommodating change. The proof is simple. Is it easier to understand the impact of change when examining a system of 10,000 classes or a system of 10 modules? Obviously, the latter. If we are able to assess the impact of change, we inherently have a better understanding of whether we are able to accommodate that change.

Consider this example. Early in the life of a system, we’re certain to be using Vendor X, and our code is tightly coupled to Vendor X. The lack of modularity makes it difficult to identify which areas of the system talk to the Vendor X product. A shift in business priorities now causes a shift in architecture. Vendor X is out and Vendor Y is in. How easy is it to change the system without knowing which areas of the system depend on Vendor X functionality? Not easy. But now imagine if all Vendor X dependent code is isolated to a single module. We know where Vendor X integration code resides. Change is now more palatable. Modularity is inherent for agile architecture, and it doesn’t matter if you’re developing an application, a service, or something else. Modularity matters!
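
Here is a minimal sketch of that isolation, using hypothetical names: the rest of the system compiles only against the DocumentStore interface in an API module, while a single vendor-specific module contains the only code that touches the Vendor X product. Swapping Vendor X for Vendor Y then means replacing one module rather than hunting through the codebase.

    // In the "storage-api" module - the only type the rest of the system sees.
    public interface DocumentStore {
        void save(String id, byte[] content);
        byte[] load(String id);
    }

    // In the "storage-vendorx" module - the only place Vendor X appears.
    class VendorXDocumentStore implements DocumentStore {
        public void save(String id, byte[] content) {
            // delegate to the (hypothetical) Vendor X client API
        }
        public byte[] load(String id) {
            // delegate to the (hypothetical) Vendor X client API
            return new byte[0];
        }
    }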

The Inextricable Link

Without question, architectural shifts will occur throughout the course of the project. Sometimes, we anticipate correctly. Sometimes not. Agile architecture is defined by our willingness and ability to embrace architectural change. Yet to embrace architectural change demands that we have an architecture that can accommodate change. Simply because a process is employed that embraces architectural change throughout the software lifecycle does not mean that it’s possible to make that change. Here, the inextricable link between the temporal and structural aspect is established.

Whereas the temporal aspect of agile architecture demands that the architect be flexible, the structural aspect of agile architecture demands that the architecture be flexible so that the team is able to quickly make the change. Agile architecture demands both, and the absence of one precludes the presence of the other.

Modularity is a key ingredient of agile architecture. Modularity ensures that development teams understand architectural dependencies that are going to inhibit or accommodate change. Simply embracing an attitude of change is not enough. We must also be able to accommodate change efficiently. Modularity allows the development team to envision and manage change more easily, meaning they’ll be able to make the change they want to make when they want to make it.

Java Modularity - OSGi and Project Jigsaw

Filed Under Architecture & Design, OSGi, Platforms | 5 Comments 

I’ve had some time to reflect on the sessions pertaining to Java modularity that I attended at JavaOne. This topic can be pretty confusing, so let’s take some time to clarify a few things.

The JSRs

Initially, there were three primary JSRs surrounding Java modularity - JSR 277, JSR 294, and JSR 291. Descriptions follow:

  • JSR 277 - Also known as the Java Module System, JSR 277 proposed an alternative module system that would have provided capabilities similar to, though fewer than, what OSGi currently provides.
  • JSR 291 - Proposed by IBM around 2006, JSR 291 defines a dynamic component model for Java SE. This JSR brought OSGi into the JCP. In general, this did for Java SE what JSR 232 did for Java ME.
  • JSR 294 - Currently in progress, this JSR will define the language and virtual machine changes that will be made to support modularity on the Java platform.

Today, JSR 277 is inactive, JSR 291 is final, and JSR 294 is in progress. That makes it all seem rather simple. But things get messy pretty quickly. JSR 294 does not define a module system, it simply specifies changes to the language and VM that module systems can leverage. The intent is that module systems be built atop JSR 294. And today, there are two separate module systems for the Java platform - Project Jigsaw and OSGi.

The Module Systems

Many have heard of OSGi. It’s been around for over 10 years, originated in the mobile and embedded systems space, was popularized by the Eclipse Foundation upon its adoption for the Eclipse plug-in system, and is currently being leveraged by every major application platform vendor, including Sun’s GlassFish product. Jigsaw, however, is neither as mature nor as well-known.

Jigsaw is an OpenJDK project. The intent of Jigsaw is twofold - to modularize the JDK and to allow developers to modularize their applications. While the Reference Implementation (RI) link on the JSR 294 homepage navigates to the Project Jigsaw homepage, Jigsaw is not technically the RI for JSR 294. Instead, OpenJDK 7 is the RI for JSR 294, and as it happens, Project Jigsaw is an OpenJDK 7 project. Certainly modularizing the JDK is a good thing. But it does raise the question - why not build atop the proven, de facto standard OSGi dynamic module system? The answer to this question is not entirely clear. What is clear is that Jigsaw revives JSR 277 outside the context of the JCP governing body.

In a JavaOne session led by Jigsaw proponent Mark Reinhold, he stated that Jigsaw addresses “requirements that aren’t met by existing module systems.” While I have to conclude that Mark was indirectly referencing OSGi, he wasn’t clear on what these missing requirements are. Additionally, there is no guarantee that Jigsaw and OSGi will be compatible module systems. While both may eventually leverage the language and VM features of JSR 294, it’s doubtful that modules developed using Jigsaw will be interoperable with OSGi bundles. Certainly, the Java community doesn’t need this fragmentation. Of course, the debate surrounding Jigsaw and OSGi may be a moot point with the impending acquisition of Sun by Oracle, who has invested significantly in OSGi with their Fusion middleware product line.

In general, OSGi clearly has a significant advantage over Jigsaw today. It’s mature and widely adopted across the industry, though Jigsaw has the inside track given its inclusion as part of the OpenJDK 7 project and its sponsorship by Sun insiders. But it’s puzzling why Sun is averse to building atop the proven OSGi platform, and there are many questions surrounding modularity on the Java platform that deserve an answer. Notably, what requirements is Jigsaw addressing that OSGi does not? And why is Sun averse to leveraging OSGi as the standard module system for the Java platform? Sadly, I suspect the answer to these questions has little to do with technology, and more to do with politics.

For more information on the debate surrounding Jigsaw and OSGi see the following references: