My Stuff

2010 Conferences

OSGi DevCon @ JAX London

February 23 - Keynote titled OSGi in the Enterprise: Agility, Modularity, and Architecture’s Paradox

EclipseCon

March 22 - 25 - Tutorial on Modular Architecture

Über Conf

June 14 - 17 - Sessions titled Turtles and Architecture and Patterns of Modular Architecture

Catalyst

July 26 - 30 - Two sessions on rich mobile applications and one on agile development. Half day tutorial on software process improvement.

Tweets @ Twitter

re: #apple event "We sold more iPads than any PC manufacturer sold of their entire PC line." 2012-09-12

re: #Apple Event ""Our notebooks now rank #1 in the US in Market share in the last three months." 2012-09-12

Right on. I just won a Best Buy drawing worth $1000. Either that or I won a shiny new virus by clicking the link. Hmm...what to do. 2012-08-29

The #osgi alliance response (http://t.co/KrN8XNWg) to the @mreinhold #jigsaw announcement (http://t.co/9YvcDdqC). 2012-08-29

Good Q&A with @mreinhold on project #jigsaw. http://t.co/9YvcDdqC. Modularity will change the way we design and run apps! 2012-08-28

LinkedIn Profile

The opinions expressed on this site are my own, and not necessarily those of my employer.

Ecosystems, Modularity, & OSGi

Filed Under Industry, OSGi, Platforms | 3 Comments 

Recently, I questioned whether OSGi and modularity would succeed in penetrating the enterprise. But what I really meant to question is whether OSGi will have the disruptive impact of which it’s capable. I asserted that if OSGi does succeed, it won’t be based on the technical merits of OSGi. It’ll be because of something else. Something trendy. Something fashionable.

Now, I could be wrong. OSGi adoption is certainly increasing, albeit slowly. And as case studies begin to emerge that tout the cost reduction, improved responsiveness, and time-to-market advantages of OSGi, adoption will likely continue to rise.

But adoption is one thing, disruption another, and I still have this nagging sensation that serves as cause for pause. What if something trendier, more fashionable surfaces, and OSGi is pushed into the backwaters? Will it really have the impact it’s capable of? You know…an “iPhone-esque” impact that raises the bar and redefines an industry. Not just evolutionary, but truly disruptive.

OSGi As Enabler

For OSGi to cross the chasm, it must enable something big that a business wants to buy. Maybe cost reduction, improved responsiveness and time-to-market benefits will be enough. But that’s easily perceived by many as a rather evolutionary impact. Not really disruptive.

Quite possibly OSGi will flourish in the data center, as organizations seek more adaptable platforms that lend them these benefits. But this doesn’t necessarily guarantee that development teams will leverage OSGi to build systems with a modular architecture. Because leveraging a platform built atop OSGi is separate from building modular software systems, even though OSGi enables both.

Without something trendy that promises real benefits, OSGi will get brushed under the carpet, as has happened to many technologically superior solutions. Perhaps it’s already happening, with all of the hype surrounding the cloud. Or maybe it’ll be the cloud that helps OSGi cross the chasm. OSGi doesn’t obviously enable the cloud, but OSGi can enable the dynamic and adaptive runtime benefits of the cloud.

But again, this doesn’t mean that teams will begin leveraging OSGi to design modular software. It only means that the platform itself is adaptable. Of course, there is benefit in that. But if that’s the route that is taken, teams will still continue to develop monolithic applications that lack architectural resiliency, and the full benefit of OSGi (and modularity) will not be realized.

The Disruption

OSGi has the potential to have a much broader impact, affecting everyone from the developer to those working in the data center. So what might this trend be that will propel OSGi to stardom?

OSGi enables ecosystems!

Now, you’re thinking I’ve gone off the deep end, perhaps? But give me a chance…let me explain.

A Bit of [Recent] Platform History

To start, I want to take a brief walk down memory lane. Not too far back, but far enough that we can see how important the ecosystem is to today’s most successful platforms. These platforms span a range of markets, from mobile to social media, but each is successful in large part due to a thriving ecosystem.

Apple

In 2007, Apple released their first generation iPhone. Without question, the device revolutionized the mobile phone industry. While the device offers a great user experience that has certainly played a role in its surging popularity, people flock to the iPhone today because of the wealth of applications available. Yeah, “there’s an app for that.”

Apple, recognizing the power of this ecosystem, now delivers the iPad. With fewer preinstalled applications than the iPhone, Apple is counting on the ecosystem to drive adoption. The more consumers who flock to the device, the more developers who flock to the platform to deliver applications. As more applications become available, consumers will buy more iPads. The ecosystem fuels itself. Apple has simply provided the environment for the ecosystem to thrive.

Eclipse

In 2003, the Eclipse team was thinking of ways to make Eclipse more dynamic. Their decision to use OSGi to create a rich client platform that supports a plug-in architecture was the first step toward the ecosystem we know today. One of the reasons developers use Eclipse is that there is an abundance of plug-ins available that allow them to do their jobs more effectively. Other developers create Eclipse plug-ins because Eclipse is a popular IDE used by many developers. Again, the ecosystem fuels itself. The Eclipse team provided the environment that allows today’s Eclipse ecosystem to thrive.

If you’re interested, you can read more about the history of Eclipse and OSGi.

Hudson

Arguably, Hudson is today’s most popular continuous integration server. But it hasn’t always been. Before Hudson, there was CruiseControl. And while CruiseControl did help development teams get started on their path toward continuous integration, it was also unwieldy to use in many ways. With Hudson’s plug-in architecture, developers have the ability to extend the tool in ways the original creator couldn’t imagine or couldn’t find the time to do himself. Kohsuke created Hudson and gave the development community a new platform for continuous integration. With its plug-in architecture, he also provided an environment that allows the Hudson ecosystem to thrive.

Social Media

Facebook. MySpace. Twitter. Each is an example of a social media tool with a strong developer community that creates extensions to the platform, which users can leverage to enhance the experience. Facebook Developers. MySpace Developer Platform. Twitter API. Each allows the ecosystem to thrive.

And Others

These are just a few examples. It’s easy to find other platforms with similar ecosystems as well. The ease with which WordPress themes and plug-ins can be developed and used to enhance a WordPress website is another example. In fact, many content management systems provide similar capabilities. A large reason the Firefox web browser has emerged as a preferred browser for so many users is the ease with which add-ons can be installed that extend the capabilities of the browser. The Atlassian Plugin Framework is another example that uses OSGi, and platforms such as Force.com and SharePoint have built (or are trying to build) similar ecosystems.

The Power of Ecosystems

Aside from Eclipse (and Atlassian), none of these platforms leverages OSGi. Yet each is wildly successful for two reasons:

  • An environment was created that allowed an ecosystem to form and flourish. This environment includes a platform and a marketplace.
  • A group of customers and developers converged on the marketplace and fueled growth of the platform. The result is a self-sustaining ecosystem.

If you look at many of the more popular platforms that have emerged over the past decade, they tend to possess a similar characteristic - a community of individuals dedicated to providing great solutions leveraging the foundation of the platform. OSGi and modularity enable ecosystems on the Java platform.

Ecosystems and the Two Faces

I’ve already talked about the two faces of OSGi - the runtime model and the development model. I’ve also explained how one could possibly see widespread adoption while the other has little impact. A strong ecosystem surrounding OSGi and modularity must leverage both. Developers would create reusable modules, implying they are designing modular software. For development teams to leverage these modules, they must be using a platform that supports the runtime model.

CBD Has Already Had Its Day, You Say?

Now some of you might be arguing that this sounds a lot like the component based development (CBD) fad of the 1990’s. True…to an extent. Certainly these ideas are not new. But there are also some striking differences between what OSGi enables and the CBD fad, which has come and largely gone, its promise never fully realized.

Foremost, the CBD fad was focused almost exclusively on visual components, such as ActiveX. While some attempted to create components for the Java platform, the movement largely failed to go mainstream. Instead, Java EE grew in popularity and for a number of years, garnered everyone’s attention. Why did this happen?

IMHO, the answer is fairly simple. Even though numerous marketplaces emerged that allowed consumers and producers to come together to buy and sell components, there was never a suitable component execution environment. That is, an environment that would support dynamic deployment, multiple versions, dependency management, and, in general, complete control over all components currently executing within the environment. ActiveX components did have an execution environment (though it did not support all of these capabilities), but Java did not. Today, in OSGi, Java has the requisite execution environment!
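To make that claim concrete, here is a minimal sketch using the standard OSGi framework API, assuming a BundleContext is already in hand (the bundle location, names, and manifest headers are purely illustrative):

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

public class ModuleLifecycleSketch {

    // Given a BundleContext (e.g. from a bundle activator or management agent),
    // install and start a new module at runtime, then inspect everything running.
    public void deployAndInspect(BundleContext context) throws BundleException {
        Bundle payments = context.installBundle("file:modules/payments-1.2.0.jar");
        payments.start();

        // Complete control over executing components: enumerate every module
        // and its lifecycle state.
        for (Bundle b : context.getBundles()) {
            System.out.println(b.getSymbolicName() + " is in state " + b.getState());
        }

        // Versioning and dependency management live in each bundle's manifest,
        // for example (illustrative headers only):
        //   Bundle-SymbolicName: com.example.payments
        //   Bundle-Version: 1.2.0
        //   Import-Package: com.example.billing;version="[1.0,2.0)"
    }
}

Nothing above requires restarting the JVM, and multiple versions of the same module can be resolved side by side, which is exactly the sort of execution environment the CBD era never had.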

The Ecosystem

It’s easy to find holes in this idea, to explain why it cannot work. Yet, it’s happening elsewhere, so why not on the server…in the enterprise? Certainly there are a variety of different ways such an ecosystem could manifest itself. Possibly multiple ecosystems will emerge, like what we see in the mobile market today.

But for a moment, imagine a world where you have the ability to easily assemble a platform from pre-built infrastructure modules that exactly meet the demands of your application. You might purchase these modules, you might choose to use open source modules, or you might build them yourself. Those you don’t build, you obtain from a module marketplace, possibly deploying them to your [cloud] environment.

And when you choose to use a module, it’s dynamically deployed to your environment. The modules it depends upon? You’re given the option to purchase and deploy them, as well. You develop your software modules using the sound principles and patterns of modular design to ensure loose coupling and high cohesion. As you roll out your business solution modules, you simultaneously deploy the additional infrastructure modules that are needed.

In this marketplace, modules are sourced by multiple vendors. Some large. Some small. Neither the stack, nor your applications, are monolithic beasts. Instead, they are a composition of collaborating software modules. Your infrastructure isn’t necessarily tied to a specific vendor solution. The option always exists for organizations to purchase modules from different providers, easily swapping one provider module out with another.

The ecosystem flourishes. Developers flock to sell their latest creation. Organizations seek to add amazing capabilities to their rightsized environment at a fraction of the cost compared to what they are accustomed to today. The business benefits are real. The technical advantages are real. And the resulting ecosystem is sustainable.

A successful ecosystem demands both the runtime model and development model. And today, OSGi is the only standard technology that will allow this type of successful ecosystem to form on the Java platform. But will it happen? We may have a ways to go, but it sure would be cool! And it would be a shame if we lost this opportunity.

Thoughts?

Note: If you’re interested in ecosystems more generally, you might want to see the great TED talk by Dan Barber, “How I fell in love with a fish.” It’s very entertaining, informative, and worth 20 minutes of your time. His discussion on sustainability is truly fascinating!

Image Source: http://en.wikipedia.org/wiki/File:Blue_Linckia_Starfish.JPG

Back to the Future

Filed Under Architecture & Design, Java, OSGi, Platforms | 2 Comments 

Thanks

This is my last post of the year, and I want to take a moment to thank everyone who spends their precious time reading my long-winded entries. In 2009, I saw a fourfold increase in blog traffic over 2008, and December 2009 saw a fivefold increase in traffic over January 2009. Out of a total of 74 posts this year, more than half were related to modularity. I’d like to think the increase in traffic is related to the topics I’ve been writing about. Modularity is something that folks are interested in. But we still have a ways to go.

Future of Modularity

Interestingly, I was speaking to a conference organizer yesterday, and I sensed his mild surprise when I began to talk about modularity. He noted that modularity is certainly not a new concept. I agreed. It’s an idea that has been around since at least the early 1970’s, when David Parnas published his seminal paper on decomposing systems into modules. But things are different now. Until recently, modularity was not a first-class concept on the Java platform. If we wanted to develop software with a modular architecture, we were required to do so without much framework or platform support. In fact, we hadn’t even identified a standard definition of a module on the platform.

That’s changing. As the application platform vendors continue to bake OSGi into their products, it’s clear that a module is a JAR file and there’s a framework to back it up. In 2010, modularity on the Java platform will gain considerable visibility in our industry. The stackless stack is coming to fruition, and it’s a game changer…a disruptor…that reaches from the developer to the data center. It’s not something we should ignore. Really, it’s not something we can ignore anymore. So, as we close out one decade and usher in another, modularity will celebrate at least its 40th birthday. Maybe we’re finally starting to get it.

All Posts Page

A final note before closing out 2009. For those interested, I’ve created a custom WordPress page that shows a summary of all posts published on this blog. Enjoy!

That Rotting Design

Filed Under Agile, Architecture & Design, Java, OSGi, Platforms | 6 Comments 

Note: This is a re-post, with slight modifications, from an entry in October 2007. And now, two years later, we’re just about there! For the abridged version of this post, focus on the text in bold. You can see the original version here.

Design Rot

We’ve all experienced that sinking feeling when maintaining a piece of crappy software. Has my change broken the system in some unintended way? What is the ramification of my change on other parts of the system? If you’re lucky, and the system has a robust suite of unit tests, they can offer some support in proving your work. In practice, however, few systems have thorough automated test coverage. Mostly we’re alone, left to verify our changes as best as possible. We might privately criticize the original developers for creating such garbage. It certainly lends a plausible excuse for why the maintenance effort is so costly or time-consuming. Or it might serve as the basis upon which we recommend a rewrite. But mostly, we should wonder how it happened.

For sure, most software doesn’t start out this way. Most software starts out clean, with a clear design strategy. But as the system grows over time, strange things begin to happen. Business rules change. Deadline pressures mount. Test coverage slips. Refactoring is a forgotten luxury. And the inherent flaws present in every initial design begin to surface. Reality has proven that few enterprise development teams have the time or resources to fix a broken design. More often, we are left to work within the constraints of the original design. As change continues, our compromises exacerbate the problem. The consequence of rotting design is seen throughout the enterprise on a daily basis. Most apparent is the effect on software maintenance. But rotting design leads to buggy software and performance degradation, as well. Over time, at least a portion of every enterprise software system experiences the problem of rotting design. A quote from Brooks sums it up well:

All repairs tend to destroy the structure, to increase the entropy and disorder of the system. Less and less effort is spent on fixing the original design flaws; more and more is spent on fixing flaws introduced by earlier fixes. As time passes, the system becomes less and less well-ordered. Sooner or later the fixing ceases to gain any ground. Each forward step is matched by a backward one. Although in principle usable forever, the system has worn out as a base for progress.

The most obvious question is, “How do we prevent rotting design?” Unfortunately, rotting design is not preventable, only reducible. Over the past ten years, the design patterns movement has provided insight into the qualities of good design. Dissecting design patterns reveals many important design principles that contribute to more resilient software design. Favor object composition over class inheritance, and program to an interface, not an implementation, are two examples. Of the 23 patterns in the GOF book, all adhere to these fundamental statements. Alone, however, design patterns that emphasize class structure are not enough to help reduce rotting design.
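To illustrate those two principles with a small, hypothetical Java sketch (the class names are made up for illustration): the generator below depends only on an abstraction, and its collaborator is composed in rather than inherited or instantiated directly.

// Program to an interface, not an implementation: callers depend only on
// the abstraction, so storage strategies can change without touching them.
interface ReportStore {
    void save(String name, String content);
}

class FileReportStore implements ReportStore {
    public void save(String name, String content) {
        // write to disk (omitted)
    }
}

// Favor object composition over class inheritance: the collaborator is
// supplied from the outside rather than hard-wired via a base class or new.
class ReportGenerator {
    private final ReportStore store;

    ReportGenerator(ReportStore store) {
        this.store = store;
    }

    void generate(String name) {
        store.save(name, "report: " + name); // abstract coupling only
    }
}

Swapping FileReportStore for an in-memory or database-backed implementation requires no change to ReportGenerator, which is precisely the kind of resilience these principles are after.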

Reducing Rot

Most patterns emphasize class design, and present techniques that can be used in specific contexts to minimize dependencies between classes. Teasing apart the underlying goal of most patterns shows us that each aims to manage the dependencies between classes through abstract coupling. Conceptually, classes with the fewest dependencies are highly reusable, extensible, and testable. The greatest influence in reducing design rot is minimizing unnecessary dependencies. Yet enterprise development involves creating many more entities beyond classes alone. Teams must define the package structure in which those classes live, and the module structure in which they are deployed. Increasing the survivability of your design involves managing dependencies between all software entities - classes, packages, and modules.

But if minimal dependencies were the only trait of great design, developers would lean toward creating very heavy, self-contained software entities with a rich API. While these entities might have minimal dependencies, extreme attempts to minimize dependencies result in excessive redundancy across entities, with each providing its own built-in implementation of common behavior. Ironically, avoiding redundant implementations, thereby maximizing reuse, requires that we delegate to external entities, increasing dependencies. Attempts to maximize reuse result in excessive dependencies, and attempts to minimize dependencies result in excessive redundancy. Neither is ideal, and a gentle balance must be sought when defining the behavior, or granularity, of all software entities - classes, packages, and modules. For more on the use/reuse paradox, see Reuse: Is the Dream Dead?

Software design is in a constant quandary. Any single element key to crafting great designs, if taken to its individual extreme, results in directly the opposite - a brittle design. The essential complexity surrounding design is different for every software development effort. The ideal design for a software system is always the product of its current set of behavioral specifications. As behavior changes, so too must the granularity of the software entities and the dependencies between them. The most successful designs are not characterized by their initial brilliance, but instead by their ability to withstand the test, and evolve over the course, of time. As the complexity of software design is an essential complexity surrounding software development, our hopes lie with technologies and principles that help increase the ability of your design to survive. Such is the reason why agile architecture is so important.

A Promising Future

I’m hopeful that all software developers have experienced the pleasure of a design that has withstood the test of time. Unfortunately, many enterprise development teams have too few of these experiences. Likewise, few enterprise development teams devote adequate effort to package and module design. It’s unreasonable to believe that even the most flexible class structure can survive should the higher level software entities containing those classes not exhibit similarly flexible qualities. The problems are rampant. Increased dependencies between packages and modules inhibit reusability, hinder maintenance, prevent extensibility, restrict testability, and limit a developer’s ability to understand the ramification of change.

Services offer some promise to remedy our failures with object-oriented development. Yet, while services may offer tangible business value, within each awaits a rotting design. There exists a world between class design and web services that deserves more exploration, and as an industry, we are beginning to notice. OSGi is a proven module system for the Java platform, while Jigsaw aims to modularize the JDK. JSR-294 aims to improve modularity on the Java platform. While some friction might exist between the constituencies involved, it’s only because they too recognize that something has been missing, and are passionate about fixing the problem. Of course, it doesn’t stop there. A plethora of application servers and tools are also including support for modularity using OSGi, which has grown into the de facto standard module system on the Java platform.

All aim to help manage the complexity, from design through deployment, of enterprise software development. With each, new practices, heuristics, and patterns will emerge that increase the ability of a design to grow and adapt.

The New Era of Programming Languages

Filed Under .Net, Development, Java, Platforms | 5 Comments 

Burton Group has just published the overview I authored titled The New Era of Programming Languages. This is a timely document. The TIOBE Programming Index shows an increasing mix of different language types that are popular today. Also included is a graph that illustrates the shrinking gap between the use of dynamically and statically typed languages.

At JavaOne a couple of weeks ago, I attended numerous sessions that discussed alternative languages on the JVM. In a session hosted by Brian Goetz, he talked about the renaissance JVM and the more than 200 languages it currently hosts. There were also sessions on Clojure, Ruby, and Scala, along with a script bowl where Jython (represented by Frank Wierzbicki), Groovy (represented by Guillaume Laforge), Clojure (represented by Rich Hickey), and Scala (represented by Dick Wall) squared off against one another.

While the clear crowd favorite at the script bowl was Groovy, it was apparent that each language stood out in different ways - Jython as a form of “executable pseudocode”, Groovy with its compile-time metaprogramming capabilities, Clojure as a dialect of Lisp with its powerful multi-threading and concurrency capabilities, and Scala with its Java-like syntax and type inference capabilities. And all had great integration with the JVM along with the ability to leverage existing Java libraries.

Here’s an excerpt from the Burton Group report that walks the timeline of language evolution as depicted in the diagram above. It’s a walk down history lane, and offers a perspective on the future of languages.

The Birth of Language

The first modern programming language is hard to identify, but historians trace the profession of programming back to Ada Lovelace, a mathematician often credited with creating the world’s first computer program in the mid-1800’s. In these early days of computing, languages provided no abstraction from the computer hardware on which the programs would run, and hardware restrictions often defined the language. These earliest computing machines had fixed programs, and changing how these machines behaved was a labor intensive process that involved redesigning the hardware in conjunction with the program it would run.

In the 1940’s, general-purpose computers began to emerge that were able to store and load programs in memory. Creating programs for these computers required developers to use first-generation machine code languages. Eventually, second-generation languages, such as Assembly, emerged and provided a symbolic representation for the numeric machine code. Regardless, each language was low-level and cryptic. Developing even trivial programs was error-prone, required a great deal of intellectual ability, and generally took a long time.

The Compiler Era

The 1950’s gave rise to the first modern programming languages—Fortran, LISP, and COBOL—that represent the ancestors of the languages in widespread use today. Each was a higher-level, third-generation language that abstracted away the underlying complexities of the hardware environment and allowed developers to create programs using a more consumable and understandable language syntax. Though the languages were greatly simplified, adoption was slow because developers wanted assurance that the performance of programs created using these languages was comparable to that of assembly language. As compilers were optimized, adoption of higher-level programming languages progressed.

Even at this early stage of language evolution, language capabilities diverged with an emphasis on solving specific types of programming problems. COBOL and Fortran dominated business and scientific computing, respectively. LISP prospered in academia. These early languages, and their emphasis on specialization for solving certain types of programming problems, were an indication of the language evolution to come.

The Paradigm Era

Language evolution experienced significant innovation in the 1960’s and 1970’s, and many of the major paradigms in use today formed in this period. Software systems were increasing in size, and development was incredibly complex. Maintaining those systems presented additional challenges. Language designers and developers were seeking ways to make complex programming tasks easier by raising the level of abstraction. Discussion of the language effect on software design began to surface, and the use of the GOTO statement and the advantages of structured programming were serious topics of debate.

A multitude of languages emerged that supported these new paradigms. Object-oriented programming was born when the Simula programming language was created, and eventually Smalltalk would surface as the first pure dynamically typed, object-oriented language. C and Pascal were created in this era, as was Structured Query Language (SQL). The first functional language, ML, was also invented. A major shift was under way to make programs easier to develop and maintain through new language enhancements and programming paradigms.

The 1980’s were a period of far less innovation while emphasis turned toward paradigm and language maturation. Languages such as C++, invented in 1979 and originally called C with Classes, emerged that brought the advantage of object-oriented programming to a language that was strong in systems programming. Modular languages, such as Modula and Ada, began to emerge that helped in developing large-scale systems. Leveraging the capabilities of advanced computer architecture led to compiler advancements that were a precursor to managed code environments.

The Productivity Era

Without question, the early 1990’s were driven by a motivation to increase developer productivity. Managed runtime environments emerged that removed the burden of memory allocation and deallocation from the developer. Advanced programming environments, such as PowerBuilder, provided an increase in productivity by combining the language and integrated development environment (IDE) into a consolidated product suite. Attempts were made to push languages down the stack, and attention shifted to advanced tooling and frameworks that increased developer productivity.

In the mid-1990’s, the World Wide Web (WWW) popularized the use of the Internet as a massively scalable hypermedia system for the easy exchange of information. Soon thereafter, a nascent technology was integrated into the omnipresent Navigator browser. Java technology, and applets specifically, had been officially unveiled to the world as a language whose programs could execute atop a managed environment in a plethora of operating environments.

Although the hype surrounding applets fizzled out shortly thereafter, the write once, run anywhere (WORA) promise of Java moved to the server side as dynamic web applications began to increase in popularity. The portability of Java applications was realized through a Java Virtual Machine (JVM) that managed application execution. The language was also far simpler than C and C++, because the JVM provided memory management on the developer’s behalf. Creating programs that ran on disparate platforms, once a complex task, now required little effort. The simplicity of the language, combined with the portability of applications, was a force that would affect computing for the next decade.

The ecosystem surrounding Java technology flourished, and tools and frameworks emerged that made Java programming tasks much easier. Early after the turn of the millennium, .NET arrived along with its Common Language Runtime (CLR), and platform support for programs written in multiple languages was a reality. Initially, many scoffed at the need for a platform supporting multiple languages. Instead, they adopted general-purpose languages and accompanying tools, frameworks, and managed runtime environments, believing they offered the greatest productivity advantages. Even though a multitude of languages existed, organizations rejected many of them in favor of Java and C#. However, platform support for multiple languages proved to eliminate a significant barrier to language adoption.

Eventually, developers discovered that frameworks and tools were increasing in complexity and becoming difficult to work with. In many ways, these frameworks and tools were hindering productivity, and the advantages of using a single general-purpose language were called into question. Developers were growing frustrated. Though a framework or tool might exist to aid a task, learning it was often as daunting as learning a new language altogether. As the complexity of frameworks and tools continued to increase, developers sought alternative languages that were easier and more productive.

The overview goes on to talk about the present day and the postmodern era. The Java and .NET platforms dominate enterprise development, and each supports a variety of languages. Platform support has reduced the barrier to entry. As a result, a shift is taking place as developers look for alternative languages (instead of frameworks, for example) that increase productivity and make programming tasks easier.

The postmodern era recognizes that multiple languages are a fact of life. Language evolution will continue, and new languages will emerge that blend aspects from multiple paradigms. The strong distinction made between static and dynamic languages will disappear, and metaprogramming will become more mainstream. Even compilers will change, as developers gain more control, and are able to take advantage of compilers just as they do APIs today.

Without question, we live in interesting times, especially if you’re a language geek.

Java Modularity - OSGi and Project Jigsaw

Filed Under Architecture & Design, OSGi, Platforms | 5 Comments 

I’ve had some time to reflect on the sessions pertaining to Java modularity that I attended at JavaOne. This topic can be pretty confusing, so let’s take some time to clarify a few things.

The JSRs

Initially, there were three primary JSRs surrounding Java modularity - JSR 277, JSR 294, and JSR 291. Descriptions follow:

  • JSR 277 - Also known as the Java Module System, JSR 277 proposed an alternative module system that would have provided similar, though fewer, capabilities than OSGi provides today.
  • JSR 291 - Proposed by IBM around 2006, JSR 291 defines a dynamic component model for Java SE. This JSR brought OSGi into the JCP. In general, this did for Java SE what JSR 232 did for Java ME.
  • JSR 294 - Currently in progress, this JSR will define the language and virtual machine changes that will be made to support modularity on the Java platform.

Today, JSR 277 is inactive, JSR 291 is final, and JSR 294 is in progress. That makes it all seem rather simple. But things get messy pretty quickly. JSR 294 does not define a module system, it simply specifies changes to the language and VM that module systems can leverage. The intent is that module systems be built atop JSR 294. And today, there are two separate module systems for the Java platform - Project Jigsaw and OSGi.

The Module Systems

Many have heard of OSGi. It’s been around for over 10 years, originated in the mobile and embedded systems space, was popularized by the Eclipse Foundation upon its adoption for the Eclipse plug-in system, and is currently being leveraged by every major application platform vendor, including Sun in its GlassFish product. Jigsaw, however, is not as mature nor as well-known.

Jigsaw is an Open JDK project. The intent of Jigsaw is twofold - to modularize the JDK and allow developers to modularize their applications. While the Reference Implementation (RI) link on the JSR 294 homepage navigates to the Project Jigsaw homepage, Jigsaw is not technically the RI for JSR 294. Instead, Open JDK 7 is the RI for JSR 294, and as it happens, Project Jigsaw is an Open JDK 7 project. Certainly modularizing the JDK is a good thing. But it does beg the question - why not build atop the proven and de facto standard OSGi dynamic module system? The answer to this question is not entirely clear. What is clear is that Jigsaw revives JSR 277 outside the context of the JCP governing body.

In a JavaOne session led by Jigsaw proponent Mark Reinhold, he stated that Jigsaw addresses “requirements that aren’t met by existing module systems.” While I have to conclude that Mark was indirectly referencing OSGi, he wasn’t clear on what these missing requirements are. Additionally, there is no guarantee that Jigsaw and OSGi will be compatible module systems. While both may eventually leverage the language and VM features of JSR 294, it’s doubtful that modules developed using Jigsaw will be interoperable with OSGi bundles. Certainly, the Java community doesn’t need this fragmentation. Of course, the debate surrounding Jigsaw and OSGi may be a moot point with the impending acquisition of Sun by Oracle, who has invested significantly in OSGi with their Fusion middleware product line.

In general, OSGi clearly has a significant advantage over Jigsaw today. It’s mature and widely adopted across the industry, though Jigsaw has the inside track given its inclusion as part of the Open JDK 7 project and its sponsorship by Sun insiders. But it’s confusing why Sun is averse to building atop the proven OSGi platform, and there are many questions surrounding modularity on the Java platform that deserve an answer. Notably, what requirements is Jigsaw addressing that OSGi does not? And why is Sun averse to leveraging OSGi as the standard module system for the Java platform? Sadly, I suspect the answer to these questions has little to do with technology, and more to do with politics.

For more information on the debate surrounding Jigsaw and OSGi see the following references:

Programming Language Classification

Filed Under Development, Platforms, Technology | 10 Comments 

Below is a table that shows some popular and emerging programming languages classified according to the following:

  • Type system - Dynamic or Static type system
  • Problem space - A General Purpose language versus a Domain Specific Language
  • Runtime environment - A Managed environment (i.e., garbage collection, etc.) or an Unmanaged environment.
  • Paradigm - Object-Oriented, Procedural, Functional, Imperative, or Declarative

Is this a relevant classification scheme? Are languages classified correctly? Are certain qualified languages missing from the list? How would you modify this list?

OSGi Post Summary

Filed Under Agile, Architecture & Design, Java, OSGi, Platforms | 4 Comments 

Over the past several months, I’ve written quite a few blog entries about OSGi. Here’s a recap of those entries. If you’re not familiar with OSGi, the posts under How OSGi might be of interest. If you’re familiar with OSGi, but aren’t clear how it can help, the posts under Why OSGi will be of interest. And if you want to get a sense for where OSGi stands in the market today, the posts under OSGi Market may pique your fancy.

Why OSGi

Here are some posts that describe the motivating factors behind OSGi.

  • Rotting Design - Discusses how a system’s design tends to rot as change occurs, and what we can do about it. Introduces OSGi at the end of the post.
  • Java Components & OSGi - My first “official” OSGi post where I talk about JAR files as components on the Java platform.
  • Enterprise OSGi - The advantages OSGi will bring to enterprise development. We certainly aren’t there yet, but this is the direction things are moving.
  • JAR Design over Class Design - An indication of how few development teams spend time designing JAR files. Very unfortunate, but true.

How OSGi

Here are some posts describing how to use OSGi. They are simple tutorials.

  • Simple OSGi Service - The simplest OSGi service, just to show the basics of OSGi. A good “getting started” guide if you’re not familiar with OSGi (a minimal sketch of such a service appears after this list).
  • OSGi & Modularity - Uses the OSGi service registry and separates interface from implementation to create a module that can be replaced with another without restarting the system.
  • OSGi & Spring - Shows how Spring Dynamic Modules can be embedded in an OSGi runtime (Felix) and OSGi services registered with the service registry. Spring takes care of registering the services, so that application code has no dependencies on the OSGi API.
  • OSGi & Embedded Jetty - A simple tutorial showing how Jetty can be embedded in OSGi. The example is fully functional and can be checked out of my Google Code SVN repository. I used Felix for the example, but also tested it under Equinox.
  • Embedding OSGi in Tomcat - A tutorial that shows what’s necessary to embed OSGi in an application server. This is the inverse of OSGi and Embedded Jetty.
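For a quick taste before clicking through, here is a minimal sketch in the spirit of the Simple OSGi Service post; the Greeter interface and messages are hypothetical, not the actual code from that tutorial.

import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// Hypothetical service interface used only for illustration.
interface Greeter {
    String greet(String name);
}

public class Activator implements BundleActivator {

    public void start(BundleContext context) {
        // Publish an implementation in the OSGi service registry. Consumers
        // couple only to the Greeter interface, never to the implementation.
        context.registerService(Greeter.class.getName(), new Greeter() {
            public String greet(String name) {
                return "Hello, " + name;
            }
        }, new Hashtable<String, String>());

        // Look the service up again, as a separate consumer bundle would.
        ServiceReference ref = context.getServiceReference(Greeter.class.getName());
        Greeter greeter = (Greeter) context.getService(ref);
        System.out.println(greeter.greet("OSGi"));
    }

    public void stop(BundleContext context) {
        // Services registered by this bundle are unregistered automatically
        // when it stops.
    }
}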

OSGi Market

Here are some posts on the OSGi market.

Late last year, I also published an in-depth research overview on OSGi. A Burton Group client subscription is required to access the report. The Wordle image embedded in this post shows the word cloud for that paper.

Embedding OSGi in Tomcat

Filed Under Architecture & Design, Development, Java, OSGi, Platforms | 14 Comments 

My last post embedded OSGi in an application server using Felix, Jetty, and PAX WEB. Here, I’m going to embed Equinox in Tomcat. I originally set out to embed Felix in Tomcat, but the dearth of tools and frameworks available for embedding Felix made using Equinox much easier.

I use the word “much” pretty loosely, though, because there were still some hoops I had to jump through to get JSP compilation working. Of course, PAX WEB provided that capability when embedding Jetty in Felix. Unfortunately, no framework stepped forward that offered these capabilities in this embedding scenario. At least, I wasn’t able to find them.

I want to keep this post strictly educational (like a tutorial), but I’m going to post a follow-up shortly that discusses these two scenarios. To give you something to chew on in the meantime, both scenarios presented a different set of challenges, and we have a long ways to go before many development teams will be able to leverage OSGi for building web applications. Like I said, I’ll explain why in a different post, but for now let’s move on with this example.

Getting Started

In the spirit of using the simplest tools possible, all you’ll need for this example is Tomcat, Ant, and Subversion. If you went through the previous example, you’ve already got Ant and Subversion installed. If you already have Tomcat, you should be set. But if not, download Tomcat 6, which is the version I used for this embedding exercise.

Checkout HelloWorldEmbedWebJSP

Next, you’ll need to checkout the project from my Google code repository. You can put the project anywhere when checking out, but you’ll have to make sure you know the path relative to each directory so that you can install the necessary bundles. I have the project sitting in a directory right alongside Tomcat. To checkout, open up a Terminal window or DOS prompt and do the following:

svn checkout http://kcode.googlecode.com/svn/trunk/osgi/HelloWorldEmbedWebJSP

Of course, if you have a Subversion client installed like TortoiseSVN, you can checkout from the directory browser.  I don’t use these tools, though. Next step!

Build the WebApp

This step has nothing whatsoever to do with OSGi. But as a basis for comparison, I felt it would be interesting to package the sample as a web application. Since this is the exact same example I used when embedding Jetty in Felix, it does offer a nice basis for comparing how we can deploy this functionality in different ways. To build the web application, simply navigate to the web directory in the console and type the following:

ant

The build script spits out a web.war file in the bin directory. Simply take this web.war and copy it to the Tomcat webapps directory. If you haven’t started Tomcat yet, you should do so now. Just startup another DOS prompt or console window, navigate to the Tomcat bin directory, and execute the startup.bat or startup.sh. Once this is done, give it a second to explode the web application, and navigate to http://localhost:8080/web/hi. Like I said, if you went through the exercise on my previous post of embedding Jetty in Felix, you’ll see the same thing here deployed as a web application instead of a bundle. Onto OSGi.

The Bridge

The Bridge.war web application is the main reason I chose to use Equinox instead of Felix. It provides two very important functions that are critical when embedding OSGi in an application container. In fact, this is one of the items I’ll be speaking about in my next post - how difficult it is to use OSGi with the current generation of application servers. It’s virtually prohibitive. Anyway, here are the two necessary functions provided by the Bridge.

  • Embeds and launches Equinox
  • Tunnels servlet requests from Tomcat to Equinox

Deploying the Bridge is easy because it’s a webapp. Just take the Bridge.war from the web/lib directory of the HelloWorldEmbedWebJSP project and drop it into the Tomcat webapps directory. Give Tomcat a second to explode the web application, and test it by accessing http://localhost:8080/bridge/sp_test. You should see a page that says the Servlet Delegate is registered. You’ve now got OSGi embedded within Tomcat, with a bridge that takes requests received by Tomcat and passes them on to Equinox.

Linux and Mac OS/X users take note. As you’ll recall in the previous examples, we install OSGi bundles from the OSGi console after starting OSGi. Well, since we embedded OSGi in Tomcat, where’s the console? The console is the same console window you used to start Tomcat. But because Tomcat redirects all output to a file, you may not see it. To enable the console, you’ll need to make some changes to the catalina.sh file in the Tomcat bin directory. Windows users shouldn’t have this problem, though I’m sure Windows users are dealing with many other types of problems. Anyway, once you’ve opened catalina.sh, comment out the following line so it looks like this. If you’ve got the same catalina.sh as I do that’s bundled with Tomcat 6, it’s line number 298.

#>> "$CATALINA_BASE"/logs/catalina.out 2>&1 &

Now, restart Tomcat and voila…in the console you used to start Tomcat, you should see the OSGi console. Type ss to see the list of installed bundles. Onward!

Configuring the Environment

Before we deploy the bundle, we need to configure the environment. If we weren’t using JSP, we’d be done. But because JSPs require compilation, and the JSPs will run within Equinox, they can’t use the JSP compiler included with Tomcat. We need to include our own JSP compiler, and we’ll use Jasper. I’ve included all the bundles needed in the same directory where you found Bridge.war. To install the bundles, jump over to the OSGi console, and do the following:

osgi> install file:path to bundle/javax.servlet.jsp_2.0.0.v200806031607.jar
osgi> install file:path to bundle/org.apache.commons.el_1.0.0.v200806031608.jar
osgi> install file:path to bundle/org.apache.commons.logging_1.0.4.v20080605-1930.jar
osgi> install file:path to bundle/org.apache.jasper_5.5.17.v200806031609.jar
osgi> install file:path to bundle/org.eclipse.equinox.jsp.jasper_1.0.100.v20080427-0830.jar
osgi> install file:path to bundle/org.eclipse.equinox.jsp.jasper.registry_1.0.0.v20080427-0830.jar

Now, type ss to view the state of the bundles. If they aren’t active, you’ll need to start each one of them. Again, in the OSGi console, just type:

osgi> start bundle id

Depending on the dependencies, multiple bundles may start. Just do an ss in between each start to see which ones haven’t been started yet. Wash, rinse, repeat!

Build & Deploy the Bundle

Unlike my previous example where I gave you a pre-configured environment that didn’t require you to build anything, we have to do the build and deploy of the bundle ourselves. Why did I do this? Only because Tomcat has a larger footprint than Felix and Jetty, and I didn’t want to put Tomcat in my Google code repository. Building is pretty easy though. We’ve already built and deployed the web application. Building the bundle is done the same way, except we’ll use a different Ant build script. For this step, you can either shut down Tomcat, or open up another DOS or terminal window. Navigate to the HelloWorldEmbedWebJSP/web directory, and type the following:

ant -f buildjar.xml

The build script replaces the web.war in the bin directory with a valid OSGi bundle named web.jar. To install the bundle, make sure Tomcat is started, and type the following in the OSGi console:

install file:relative path to web.jar/web.jar

The bundle gets installed. Start it up using the OSGi start command. We can now access the OSGi enabled web application by navigating to http://localhost:8080/bridge/hi. Again, it’s the exact same functionality as the other web applications we’ve deployed, except using a different deployment topology.

To shut down, just type close in the OSGi console. We’re done. If you had any problems, let me know via comments here or by contacting me. Next up…some general notes on OSGi and hopefully messing around with Distributed OSGi.

Additional Notes

A few additional notes about this exercise. Obviously it appeared a bit more difficult than embedding Jetty in Felix. But were it not for the PAX WEB framework, that exercise would have proven equally difficult. Also, the Bridge.war performs two very important functions for us. Felix has supporting documentation that shows how to embed Felix, but I found no framework that did it for me. The Bridge.war was available, so I used it. But were a bridge available for Felix, it would be easy to deploy that bridge and install web.jar with Felix embedded within Tomcat.

OSGi and Embedded Jetty

Filed Under Architecture & Design, Development, Java, OSGi, Platforms | 7 Comments 

There are two approaches to working with OSGi when developing web applications.

  • Embedding OSGi into the application server
  • Embedding the application server into an OSGi runtime

In this post, I’m going to embed Jetty, the application server, into Felix, the OSGi runtime. As with my previous posts (Simple OSGi Service, OSGi & Modularity, and OSGi & Spring), I’m trying to use the simplest tools for the job to maximize the experience. That means Ant, Felix, Subversion, a simple text editor, and a terminal window. The code can be found in my Google code repository. Look for the project called HelloWorldWebJSP.

As the project name suggests, not only am I going to embed Jetty in Felix, I’m also going to make sure I can serve JSP pages, which requires a bit of extra trickery. If you need additional details, leave a comment or contact me. Let’s go!

Get Felix

I used Felix 1.4.1 for this exercise. I also tested it with 1.4.0, so either of those two versions should be fine. If you don’t have Felix, get it.

Checkout HelloWorldWebJSP

The next thing to do is check out the project from the Google Code repository. If you want to use the scripts included with the project to start up Felix, you should check out the project into a directory right alongside Felix. To check out the project, you’ll need Subversion. Then, navigate to the directory containing Felix and do the following:

svn checkout http://kcode.googlecode.com/svn/trunk/osgi/HelloWorldWebJSP

For convenience, I checked in the Felix bundle cache, so you don’t need to install all the necessary bundles. The environment is already configured. However, if you’d like to set up the environment yourself, you should be able to delete the felix-cache directory and, after starting Felix, go through the steps of installing the necessary bundles. I’d wait to do this until after you’ve had a chance to step through the rest of the example. But it’s up to you. If something gets messed up, you can always start over with a clean checkout. The bundles you’ll need to install can be found in the /web/lib directory. They are:

jetty-6.1.7.jar
jetty-util-6.1.7.jar
pax-web-service-0.5.1.jar
osgi.cmpn.jar
pax-web-jsp-0.5.1.jar

Start Felix

The next step is to start Felix. Open a terminal window and navigate to the HelloWorldWebJSP directory. You can use the pre-written scripts (assuming your directory structure is correct), or you can invoke it from the command line as follows, replacing path to felix with the location of Felix:

java -Dfelix.config.properties=file:./config.properties -jar path to felix/felix-1.4.1/bin/felix.jar

Note that you should use the config.properties included with the project, and not the default Felix config.properties. This is because we need to specify an additional package import to make things work (javax.security.cert). Also, if the HelloWorldWebJSP and Felix directories aren’t side-by-side, you’ll have to change the file paths to the Felix bundles in config.properties.

Try it out

At this point, Jetty should be running inside Felix, and you can navigate to the homepage in your browser. To do this, simply point your browser to http://localhost:8380/. Click hello. Click goodbye. To see the JSP, click the link that serves a JSP.

Explore the Configuration

There isn’t a lot of magic here, but it did take some time to find all the bundles necessary to make it happen, especially to get JSP pages working. All in all, there are a total of 10 bundles, and that includes the four Felix bundles, and the bundle we developed as part of the web application (web.jar in ./web/bin directory). To see a list of all the bundles installed, type the following in the terminal window.

ps

Of particular interest, you’ll see two PAX Web bundles. The OPS4J Pax Web - Service (0.5.1) is the bundle that provides Jetty integration (and actually starts Jetty), while OPS4J Pax Web - Jsp Support (0.5.1) provides JSP compilation via Jasper. Note that most of the examples on the PAX Web website use Maven. Since I didn’t want to use Maven, I installed these bundles manually through the Felix console.

Explore the Code

The key element of the web application is the Activator. It registers the servlet, HTML pages, and JSP page so that Jetty can serve them up. This is where PAX Web makes the job much easier, because it encapsulates the integration with Jasper.
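I won’t reproduce the project’s Activator verbatim, but the registration pattern looks roughly like the hypothetical sketch below, which uses the standard OSGi HttpService; the aliases and class names are illustrative, and a production activator would track the service rather than assume it’s already available.

import javax.servlet.http.HttpServlet;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.http.HttpService;

public class Activator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        // Assumes the HttpService (provided here by Pax Web over Jetty) is
        // already registered; a real activator would use a ServiceTracker.
        ServiceReference ref = context.getServiceReference(HttpService.class.getName());
        HttpService http = (HttpService) context.getService(ref);

        // Map a servlet and the bundle's static resources under aliases
        // that Jetty will serve.
        http.registerServlet("/hi", new HelloServlet(), null, null);
        http.registerResources("/", "/html", null);
    }

    public void stop(BundleContext context) {
        // Registrations are cleaned up when the bundle stops.
    }
}

// Hypothetical servlet, standing in for the project's real one.
class HelloServlet extends HttpServlet {
    // doGet(...) omitted; it would write the greeting to the response.
}

The JSP page gets wired up in a similar fashion, and that is the part PAX Web simplifies by wrapping Jasper.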

Further Experimentation

There are a few interesting things you can do to experiment a bit further.

First, try setting up the environment yourself. Doing this isn’t that difficult since all of the bundles are included with the project when you checked it out. To do this, close down the existing Felix instance by typing the following in the terminal window.

shutdown

Next, delete the felix-cache directory. This gives you a clean Felix instance with none of the bundles installed. Start up Felix and then, one by one, install the appropriate bundles (listed above) from within Felix using the install command, as follows:

install file:./path to bundle

Note that if HelloWorldWebJSP and Felix aren’t sitting side-by-side in the file system, you’ll need to change config.properties to point to the location of the Felix bundles. They are found in the bundle directory under felix. Also, don’t forget to install web.jar from the bin directory (you may have to run the build script), since that’s the JAR that contains our application classes. Then, start the bundles using the following command at the Felix command line in the terminal, where the bundle id is the id of the bundle obtained using the ps command.

start bundle id

Second, make a change to the JSP, HTML, or servlet. There is an Ant build script in the web directory that allows you to recompile. Depending on what you changed or added, you may have to modify the build script to make sure any new files are included in the JAR file created. Once compiled, redeploy the web.jar to Felix from within the console. You shouldn’t need to restart Felix when doing this. Just do the following, where bundle id is the id of the Hello World Web bundle (i.e., the web application):

stop bundle id
update bundle id
refresh bundle id
start bundle id

Third, try deploying another servlet after registering it in the Activator. Or create and deploy another JSP. Or add some images and additional resources. You can modify the build script to include them in the web.jar that’s already been deployed to Felix, or you can create your own JAR file and deploy it alongside web.jar. You can deploy these changes without interrupting what’s already running within Felix, which provides a great feel for the flexibility of OSGi and the adaptable environments it makes possible. Whatever you do, this base install should give you a foundation for experimenting with web applications using OSGi.

Soon I hope, I’ll be experimenting with the alternative configuration - embedding OSGi into the application server.

Update: While the example above uses Felix, I have also tested this example with Equinox. You can follow the same steps and the result is the same. Except the default port for the Equinox solution is 8080, and if you want that changed, you’ll have to do it yourself. Also, when using Equinox, replace the ps command with ss to see the bundle listing.

The Paradox of Choice

Filed Under Development, General, Platforms | 1 Comment 

A couple of weeks back, I watched the Barry Schwartz TED Talk on The Paradox of Choice, and found it fascinating. He started by citing the official dogma of western civilization:

If we are interested in maximizing the welfare of our citizens, the way to do that is to maximize individual freedom.

He goes on to state that the way to maximize freedom is to maximize choice. The more choice we have, the more freedom we have, the more welfare we have.

That resonates with me. But how little I understand the human psyche was quickly revealed.

Mr. Schwartz continues by illustrating how all this wonderful choice we’ve been given has a paradoxical effect. We find it incredibly difficult to make a decision because the choice has a paralyzing effect as we strenuously evaluate our options. Once we are lucky enough to make the decision, we wind up less satisfied with our final choice. He calls this the opportunity cost of choice. Whenever we make a choice to do one thing, we are also making a choice not to do another. And if in the end we are not completely satisfied, we imagine another choice would have brought us greater happiness or pleasure. The opportunity cost of choice detracts from our satisfaction as we cannot resist the urge to think about the alternative options.

He drives that point home further by explaining that with so much choice, our expectations are artificially inflated. From all that choice, we expect perfection. On the contrary, when presented with only a single choice, our expectations are much lower. Now here’s the punchline! When given only a single choice, dissatisfaction is not our fault because we had no choice. But when given multiple options, if not completely satisfied, we tend to blame ourselves. Choice, believed to lead to happiness, in fact leads in the other direction. That’s paradoxical, no?

He concludes, perhaps jokingly, that the secret sauce to happiness is low expectations and that everything was better back when everything was worse. Or perhaps he wasn’t joking! Closer to the end of the talk, he told a simple story about his trip to the department store for a pair of jeans. The shopkeeper asked him if he wanted slim fit, easy fit, or relaxed fit? Button fly or zipper? Stone washed or acid washed? Boot cut or tapered leg? His reply?

“I want the kind that used to be the only kind.”

Now that resonates with me. I’d encourage you to listen to the talk. I’m sure I’ve missed a few points. It’s just under 20 minutes and is time well spent.

Apologies. This entry has little to do with technology. Or does it?

BTW, the image was not created by me. I snagged it from the talk.
