Why IoT is changing Enterprise Architecture

[tl;dr The discipline of enterprise architecture as an aid to business strategy execution has failed for most organizations, but is finding a new lease of life in the Internet of Things.]

The strategic benefits of having an enterprise-architecture based approach to organizational change – at least in terms of business models and shared capabilities needed to support those models – have been the subject of much discussion in recent years.

However, enterprise architecture as a practice (as espoused by The Open Group and others) has never managed to break beyond its role as an IT-focused endeavor.

In the meantime, less technology-minded folks are beginning to discuss business strategy using terms like ‘modularity’, which is a first step towards bridging the gap between the business folks and the technology folks. And technology-minded folks are looking at disruptive business strategy through the lens of the ‘Internet of Things’.

Business Model Capability Decomposition

Just like manufacturing-based industries decomposed their supply-chains over the past 30+ years (driving an increasingly modular view of manufacturing), knowledge-based industries are going through a similar transformation.

Fundamentally, knowledge-based industries rest on the transfer and management of human knowledge or understanding. When you pay for something, for example, there is an understanding on both sides that the payment has happened. Technology allows such information to be captured and managed at scale.

But the ‘units’ of knowledge have only slowly been standardized, and knowledge-based organizations are incentivized to ensure they are the only ones able to act on the information they have gathered – often with disastrous social and economic consequences (e.g., the financial crisis of 2008).

Hence, regulators are stepping in to ensure that at least some of this ‘knowledge’ is available in a form that allows governments to ensure such situations do not arise again.

In the FinTech world, every service provided by big banks is being attacked by nimble competitors able to take advantage of new, more meaningful technology-enabled means of engaging with customers, and who are willing to make at least some of this information more accessible so that they can participate in a more flexible, dynamic ecosystem.

These upstart FinTech firms often face a stark choice in order to succeed. Assuming they have cleared the first hurdle of actually having a product people want, at some point they must decide whether they are competing directly with the big banks, or providing a key part of the electronic financial knowledge ecosystem that big banks must (eventually) be part of.

In the end, what matters is their approach to data: how they capture it (i.e., ‘UX’), what they do with it, how they manage it, and how it is leveraged for competitive and commercial advantage (without falling foul of privacy laws etc). Much of the rest is noise from businesses trying to get attention in an increasingly crowded space.

Historically, many ‘enterprise architecture’ or strategy departments have failed to have impact because firms do not treat data (or information, or knowledge) as an asset, but rather as something to be freely and easily created and shunted around, leaving a trail of complexity and lost opportunity cost wherever it goes. This attitude must change before ‘enterprise architecture’ as a concept will have a role in boardroom discussions, and before firms change how they position IT in their business strategy. (Regulators are certainly driving this for certain sectors like finance and health.)

Internet of Things

Why does the Internet of Things (IoT) matter, and where does it fit into all this?

At one level, IoT presents a large opportunity for firms which see the potential implied by the technologies underpinning IoT; the technology can provide a significant level of convenience and safety to many aspects of a modern, digitally enabled life.

But fundamentally, IoT is about having a large number of autonomous actors collaborating in some way to deliver a particular service, which is of value to some set of interested stakeholders.

This sounds a lot like what a ‘company’ is. So IoT is, in effect, a company where the actors are technology actors rather than human actors. They need some level of orchestration. They need a common language for communication. And they need laws/protocols that govern what is permitted and what is not.

If enterprise architecture is all about establishing the functional, data and protocol boundaries between discrete capabilities within an organization, then EA for IoT is the same thing but for technical components, such as sensors or driverless cars, etc.

So IoT seems a much more natural fit for EA thinking than traditional organizations, especially as, unlike departments in traditional ‘human’ companies, technical components like standards: they like fixed protocols, fixed functional boundaries and well-defined data sets. And while the ‘things’ themselves may not be organic, their behavior in such an environment could exhibit ‘organic’ characteristics.

So, IoT and the benefits of an enterprise architecture-oriented approach to business strategy do seem like a match made in heaven.

The Converged Enterprise

For information-based industries in particular, there appears to be an inevitable convergence: as IoT and the standards, protocols and governance underpinning it mature, so too will the ‘modular’ aspects of existing firms’ operating models, and the eco-system of technology-enabled platforms will mature along with it. Firms will be challenged to deliver value by picking the most capable components in the eco-system around which to deliver unique service propositions – and the most successful of those solutions will themselves become the basis for future eco-systems (a Darwinian view of software evolution, if you will).

The converged enterprise will consist of a combination of human and technical capabilities collaborating in well-defined ways. Some capabilities will be highly human, others highly technical, some will be in-house, some will be part of a wider platform eco-system.

In such an organization, enterprise architects will find a natural home. In the meantime, enterprise architects must choose their starting point, behavioral or structural: focusing first on decomposed business capabilities and finishing with IoT (behavioral->structural), or focusing first on IoT and finishing with business capabilities (structural->behavioral).

Technical Footnote

I am somewhat intrigued at how the OSGi Alliance has over the years shifted its focus from basic Java applications, to discrete embedded systems, to enterprise systems and now to IoT. OSGi (disappointingly, IMO) has had a patchy record changing how firms build enterprise software – much of this is down to a culture of undisciplined dependency management in the software industry which is very, very hard to break.

IoT raises the bar on dependency management: you simply cannot comprehensively test software updates to potentially hundreds of thousands or millions of components running that software. The ability to reliably change modules without forcing a test of all dependent instantiated components is a necessity. As enterprises get more complex and digitally interdependent, standards such as OSGi will become more critical to the plumbing of enabling technologies. But as evidenced above, for folks who have tried and failed to change how enterprises treat their technology-enabled business strategy, it’s a case of FIETIOT – Failed in Enterprise, Trying IoT. And this, indeed, seems a far more rational use of an enterprise architect’s time, as things currently stand.
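OSGi expresses this discipline through semantic version ranges on package imports: a consumer declares the range of provider versions it can tolerate, and any provider inside that range can be substituted without re-testing the consumer. A minimal sketch of that range check, using invented class and method names rather than any real OSGi API:

```java
import java.util.Arrays;

// Hypothetical sketch of OSGi-style semantic version ranges: a consumer
// imports a package with a range like "[1.0.0,2.0.0)" and any provider
// whose version falls inside the range may be swapped in without forcing
// a re-test of the consumer. Names here are illustrative only.
public class VersionRange {

    // Parse "major.minor.micro" into numeric segments for comparison.
    static int[] parse(String v) {
        return Arrays.stream(v.split("\\.")).mapToInt(Integer::parseInt).toArray();
    }

    static int compare(String a, String b) {
        int[] x = parse(a), y = parse(b);
        for (int i = 0; i < Math.min(x.length, y.length); i++) {
            if (x[i] != y[i]) return Integer.compare(x[i], y[i]);
        }
        return Integer.compare(x.length, y.length);
    }

    // True if version lies in "[low,high)" - the typical OSGi import range.
    static boolean satisfies(String version, String low, String high) {
        return compare(version, low) >= 0 && compare(version, high) < 0;
    }

    public static void main(String[] args) {
        // A consumer importing [1.0.0,2.0.0) accepts any 1.x provider...
        System.out.println(satisfies("1.3.0", "1.0.0", "2.0.0")); // true
        // ...but rejects a 2.x provider, which may break the contract.
        System.out.println(satisfies("2.1.0", "1.0.0", "2.0.0")); // false
    }
}
```

In real OSGi metadata this discipline shows up as, e.g., `Import-Package: com.example.api;version="[1.0,2.0)"`.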

Know What You Got: Just what, exactly, should be inventoried?

[tl;dr Application inventories linked to team structure, coupled with increased use of meta-data in the software-development process that is linked to architectural standards such as functional, data and business ontologies, are key to achieving long term agility and complexity management.]

Some of the biggest challenges with managing a complex technology platform are knowing when to safely retire or remove redundant components, and when there is a viable reusable component (instantiated or not) that can be used in a solution. (In this sense, a ‘component’ can be a script, a library, or a complete application – or a multitude of variations in-between.)

There are multiple ways of looking at the inventory of deployed components. In particular, components can be viewed from the technology perspective, or from the business or usage perspective.

The technology perspective is usually referred to as configuration management, as defined by ITIL. There are many tools (known as ‘CMDB’s) which, using ‘fingerprints’ of known software components, can automatically inventorise which components are deployed where, and their relationships to each other. This is a relatively well-known problem domain – although years of poor deployment practice means that random components can be found running on infrastructure months or even years after the people who created them have moved on. Although the ITIL approach is eminently sensible in principle, in practice it is always playing catch-up because deployment practices are only improving slowly in legacy environments.

Current cloud-aware deployment practices encapsulated in dev-ops are the antithesis of this approach: i.e., all aspects of deployment are encapsulated in script and treated as source code in and of itself. The full benefits of the ITIL approach to configuration management will therefore only be realised when the transition to this new approach to deployment is fully completed (alongside the virtualisation of data centres).

The business or usage perspective is much harder: typically this is approximated by establishing an application inventory, and linking various operational accountabilities to that inventory.

But such inventories suffer from key failings, specifically:

  • The definition of an application is very subjective and generally determined by IT process needs rather than what makes sense from a business perspective.
  • Application inventories are usually closely tied to the allocation of physical hardware and/or software (such as operating systems or databases).
  • Applications tend to evolve and many components could be governed by the same ‘application’ – some of which may be supporting multiple distinct business functions or products.
  • Application inventories tend to be associated with running instances rather than (additionally) instantiable instances.
  • Application inventories may capture interface dependencies, but often do not capture component dependencies – especially when applications may consist of components that are also considered to be a part of other applications.
  • Application and component versioning are often not linked to release and deployment processes.

The consequence is that the application inventory is very difficult to align with technology investment and change, so it is not obvious from a business perspective which components could or should be involved in any given solution, and whether there are potential conflicts which could lead to excessive investment in redundant components, and/or consequent under-investment in potentially reusable components.

Related to this, businesses typically wish to control the changes to ‘their’ application: the thought of the same application being shared with other businesses is usually something only agreed as a last resort and under extreme pressure from central management: the default approach is for each business to be fully in control of the change management process for their applications.

So IT rationally interprets this bias as a license to create a new, duplicate application rather than put in place a structured way to share reusable components, such that technical dependencies can be managed without visibly compromising business-line change independence. This is rational because safe, scalable reuse of components is still a very immature capability that most organisations do not yet have.

So what makes sense to inventory at the application level? Can it be automated and made discoverable?

In practice, processes which rely on manual maintenance of inventory information that is isolated from the application development process are not likely to succeed – principally because the cost of ensuring data quality will make it prohibitive.

Packaging & build standards (such as Maven, Ant, Gradle, etc) and modularity standards (such as OSGi for Java, Gems for Ruby, ECMAScript modules for JS, etc) describe software components, their purpose, dependencies and relationships. In particular, disciplined use of software modules may allow applications to self-declare their composition at run-time.
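For example, an OSGi bundle self-declares its identity, version and dependencies in its jar manifest (META-INF/MANIFEST.MF), which a framework reads at install time. The following sketch parses typical headers using only the JDK; the bundle and package names are invented for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.jar.Manifest;

// A minimal sketch of how a module self-declares its composition through
// jar metadata, as OSGi bundles do. The bundle name, version and package
// names below are hypothetical examples, not taken from a real bundle.
public class ManifestSketch {
    static final String MF =
        "Manifest-Version: 1.0\r\n" +
        "Bundle-SymbolicName: fancyfoods.persistence\r\n" +
        "Bundle-Version: 1.0.0\r\n" +
        "Import-Package: javax.persistence;version=\"[1.0,2.0)\"\r\n" +
        "Export-Package: fancyfoods.persistence\r\n\r\n";

    // Read a single header from the (in-memory) manifest.
    static String header(String name) {
        try {
            Manifest m = new Manifest(
                new ByteArrayInputStream(MF.getBytes(StandardCharsets.UTF_8)));
            return m.getMainAttributes().getValue(name);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // The component's composition is discoverable from its own metadata:
        System.out.println(header("Bundle-SymbolicName")
            + " imports " + header("Import-Package"));
    }
}
```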

Non-reusable components, such as business-specific (or context-specific) applications, can in principle be packaged and deployed similarly to reusable modules, also with searchable meta-data.

Databases are a special case: these can generally be viewed as reusable, instantiated components – i.e., they may be a component of a number of business applications. The contents of databases are likely best described through open standards such as RDF, etc. However, other components (such as API components serving a defined business purpose) could also be described using these standards, linked to discoverable API inventories.
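As a sketch of what such a description might look like, here is a hypothetical RDF (Turtle) fragment; the namespace and terms are illustrative only, not an established ontology:

```turtle
@prefix ex: <http://example.com/ontology#> .

# A shared database component described as data: what it holds and which
# applications depend on it. All names here are invented for illustration.
ex:CustomerDB  a  ex:Database ;
    ex:holds   ex:CustomerRecord ;
    ex:usedBy  ex:PaymentsApp , ex:OnboardingApp .
```

Descriptions in this form are machine-readable, so questions like "which applications depend on this database?" become queries rather than archaeology.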

So, what precisely needs to be manually inventoried? If all technical components are inventoried by the software development process, the only components that remain to be inventoried must be outside the development process.

This article proposes that what exists outside the development process is team structure. Teams are usually formed and broken up in alignment with business needs and priorities. Some teams are in place for many years, some may only last a few months. Regardless, every application component must be associated with at least one team, and teams must be responsible for maintaining/updating the meta-data (in version control) for every component used by that team. Where teams share multiple components, a ‘principal’ team owner must be appointed for each component to ensure consistency of component meta-data, to handle pull-requests etc for shared components, and to oversee releases of those components. Teams also have relevance for operational support processes (e.g., L3 escalation, etc).
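As an illustration, the component meta-data maintained by a team in version control might look something like the following; the field names and team names are hypothetical, not a defined standard:

```yaml
# Hypothetical component meta-data, kept in version control alongside the code
# and maintained by the principal team owner.
component: fancyfoods.persistence
version: 1.2.0
principal-team: team-data          # owns releases and pull-requests
consuming-teams: [team-web, team-offers]
business-function: product-offers
escalation: team-data-l3           # operational support (L3)
```

Because the file lives with the code, it changes through the same review and release process as the component itself, which is what keeps it from rotting the way standalone inventories do.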

The frequency of component updates will be a reflection of development activity: projects which propose to update infrequently changing components can expect to have higher risk than projects whose components frequently change, as infrequently changing components may indicate a lack of current knowledge/expertise in the component.

The ability to discover and reason about existing software (and infrastructure) components is key to managing complexity and maintaining agility. But relying on armies of people to capture data and maintain quality is impractical. Traditional roadmaps (while useful as a communication tool) can deviate from reality in practice, so keeping them current (except for communication of intent) may not be a productive use of resources.

In summary, application inventories linked to team structure, and increased use of meta-data in the software-development process linked to broader architectural standards (such as functional, data and business ontologies), are key to achieving agility and long term complexity management.

Strategic Theme #2/5: Enterprise Modularity

[tl;dr Enterprise modularity is the state-of-the-art technologies and techniques which enable agile, cost effective enterprise-scale integration and reuse of capabilities and services. Standards for these are slowly gaining traction in the enterprise.]

Enterprise Modularity is, in essence, Service Oriented Architecture but ‘all the way down’; it has evolved a lot over the last 15-20 years since SOA was first promoted as a concept.

Specifically, SOA historically has been focused on ‘services’ – or, more explicitly, end-points, the contracts they expose, and service discovery. It has been less concerned with the architectures of the systems behind the end-points, which seems logically correct but has practical implications that have limited the widespread adoption of SOA solutions.

SOA is a great idea that has often suffered from epic failures when implemented as part of major transformation initiatives (although less ambitious implementations may have met with more success, but limited to specific domains).

The challenge has been the disconnect between what developers build, what users/customers need, and what the enterprise desires – not just for the duration of a programme, but over time.

Enterprise modularity needs several factors to be in place before it can succeed at scale. Namely:

  • An enterprise architecture (business context) linked to strategy
  • A focus on data governance
  • Agile cross-disciplinary/cross-team planning and delivery
  • Integration technologies and tools consistent with development practices

Modularity standards and abstractions like OSGi, Resource Oriented Computing and RESTful computing enable the development of loosely coupled, cohesive modules potentially deployable as stand-alone services and which can be orchestrated in innovative ways using many technologies – especially when linked with canonical enterprise data standards.

The upshot of all this is that traditional ESBs are transforming..technologies like Red Hat Fuse ESB and open-source solutions like Apache ServiceMix provide powerful building blocks that allow ESBs to become first-class applications rather than simply integration hubs. (MuleSoft has not, to my knowledge, adopted an open modularity standard such as OSGi, so I believe it is not appropriate to use as the basis for a general application architecture.)

This approach allows for the eventual construction of ‘domain applications’ which, more than being integration hubs, actually host the applications (or, more accurately, application components) in ways that make dependencies explicit.

Vendors such as CA Layer7, Apigee and Axway are prioritising API management over more traditional, heavy-weight ESB functions – essentially, anyone can build an API internal to their application architecture, but once you want to expose it beyond the application itself, an API management tool will be required. These can be implemented top-down once the bottom-up API architecture has been developed and validated. Enterprise or integration architecture teams should be driving the adoption and implementation of API management tools, as these (done well) can be non-intrusive with respect to how application development is done, provided application developers take a micro-services led approach to application architecture and design. The concepts of ‘dumb pipes’ and ‘smart endpoints’ are key here, to avoid putting inappropriate complexity in the API management layer or in the (traditional) ESB.

Microservices is a complex topic, as this post describes. But modules (along the lines defined by OSGi) are a great stepping stone, as they embrace microservice thinking without necessarily introducing distributed-systems complexity. Innovative technologies like Paremus Service Fabric, building on OSGi standards, help make the transition from monolithic modular architectures to (distributed, agile, microservice-based) architectures as painless as can reasonably be expected.

Other solutions, such as Cloud Foundry, offer a means of taking much of the complexity out of building green-field microservices solutions by standardising many of the platform components – however, Cloud Foundry, unlike OSGi, is not a formal open standard, and so carries risks. (It may become a de-facto standard, however, if enough PaaS providers use it as the basis for their offerings.)

The decisions as to which ‘PaaS’ solutions to build/adopt enterprise-wide will be a key consideration for enterprise CTOs in the coming years. It is vital that such decisions are made such that the chosen PaaS solution(s) will directly support enterprise modularity (integration) goals linked to development standards and practices.

For 2015, it will be interesting to see how nascent enterprise standards such as described above evolve in tandem with IaaS and PaaS innovations. Any strategic choices made in these domains must consider the cost/impact of changing those choices: defining architectures that are too heavily dependent on a specific PaaS or IaaS solution limits optionality and is contrary to the principles of (enterprise) modularity.

This suggests that enterprise modularity standards which are agnostic to specific platform implementation choices must be put in place, so that different parts of the enterprise can then choose implementations appropriate for their needs. Hence open standards and abstractions must dominate the conversation rather than specific products or solutions.

The high cost of productivity in Java OSGi

How hard should it be to build a simple web page that says ‘Hello World’?

If you are using Java and OSGi, it seems that this is a non-trivial exercise – at least if you want to use tools to make your life easier..but, when you are through the pain, the end result seems to be quite good, although the key decider will be whether there is some longer-term benefit to be had from going through such a relatively complex process.

My goal was to build an OSGi-enabled ‘Hello World’ web application that allowed me to conduct the whole dev lifecycle within my Eclipse environment. It should have been simple, but it turned out that persuading Eclipse to put the right (non-Java) resource files in the right place in JAR files, and getting the right bundles included to use them, was a challenge. Also, when things didn’t work as expected, trying to resolve apparently relevant error messages was a huge waste of time.

In particular, the only tools I am using at this point in time are:

  • Eclipse with Bndtools plugin
  • Git
  • An empty bundle repository, apart from the one provided by default with bndtools

So, the specific challenges were:

Web Application Bundles (WABs) + Servlets

  • To use the embedded jetty bundle (i.e., the Java HTTP server) to serve servlets, you need to include a web.xml in the jar file, in the WEB-INF folder. It turns out that to do this, you need to put WEB-INF/web.xml in the same src directory as your servlet java file, and manually add ‘-wab: src/<servlet src dir>’ to the bnd.bnd source. This tells bndtools to generate a JAR structure compatible with web applications.
  • The key pointer to this was in the bndtools docs.
  • Once that was done, eclipse generated the correct jar structure.
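Putting the steps above together, the relevant part of the bnd.bnd file looks something like this (the bundle name and source directory reflect my project layout, shown here illustratively):

```properties
# bnd.bnd - the '-wab:' instruction tells bndtools to generate a
# Web Application Bundle layout, so WEB-INF/web.xml lands in the right
# place in the jar. Names below are from my example project.
Bundle-Version: 1.0.0
Private-Package: fancyfoods.web
-wab: src/fancyfoods/web
```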

Including the right bundle dependencies

  • There are a number of resources describing how to build a servlet with OSGi. Having started with the example in the ‘Enterprise OSGi in Action’ book (which precedes bndtools, and saves time by using an Apache Aries build, but which includes a lot of unnecessary bundles), I decided to go back to basics and start from the beginning.
  • Basically, just use the ‘Apache Felix 4 with Web Console and Gogo’ option (provided within bndtools, and included as part of the default bndtools bundle repository), and add Apache Felix Whiteboard. It took a bit of effort to get this working: in the end I had to download the latest Felix jars for all its components, and add them to my local bndtools repo.
  • The standard bndtools Felix components did not seem to work, and I think some of this may be due to old versions of the Felix servlet and jetty implementation in the bndtools repo (javax.servlet v2.5 vs 3.0, jetty 2.2 vs 2.3) conflicting with some newer bundles which support the Felix whiteboard functionality. So to use the whiteboard functionality, ensure you have the latest Felix bundles in your local bndtools repo. (Download from Felix website and add them via Eclipse Repositories view to the ‘Local’ repository.)

Error messages which (again) were not really indicative of the problem.

  • When I first built the application, the jetty server said ‘not found’ for the servlet URL I was building, and this message appeared in the log:
    • NO JSP Support for /fancyfoods.web, did not find org.apache.jasper.servlet.JspServlet
  • This was followed by a message suggesting that the server *had* processed the web.xml file.
  • After exploring what this error was (through judicious googling) it turns out Felix does not include JSP support by default, but this is not relevant to implementing the servlet example. In fact, this error was still displayed even after I got the application working correctly via the methods described above.

In conclusion: the learning curve has been high, and there’s a lot of feeling around in the dark, simply because of the sheer volume of material out there one needs to know to become expert in this. But once environment/configuration issues have been sorted out, in principle, productivity should improve a lot.

However, I am not done yet: I suspect there will be many more challenges ahead in getting blueprint and JPA working in my Eclipse environment.


The vexing question of the value of (micro)services

In an enterprise context, the benefit and value of developing distributed systems vs monolithic systems is, I believe, key to whether large organisations can thrive/survive in the disruptive world of nimble startups nipping at their heels.

The observed architecture of most mature enterprises is a rat’s nest of integrated systems and platforms, resulting in a high total cost of ownership, a reduced ability to respond to (business) change, and a significant inability to leverage the single most valuable asset of most modern firms: their data.

In this context, what exactly is a ‘rat’s nest’? In essence, it’s where dependencies between systems are largely unknown and/or ungoverned, the impact of changes from one system to another is unpredictable, and the cost of rigorously testing such impacts outweighs the economic benefit of the business need requiring the change in the first place.

Are microservices and distributed architectures the answer to addressing this?

Martin Fowler recently published a very insightful view of the challenges of building applications using a microservices-based architecture (essentially one where more components are out-of-process than in-process).


This led to a discussion on LinkedIn, prompted by that post, questioning the value/benefit of service-based architectures:

Fears for Tiers – Do You Need a Service Layer?

The upshot is that there is no agreed solution pattern addressing the challenge of managing the complexity of the inherently distributed nature of enterprise systems in a cost effective, scalable way. And therein lie the opportunities for enterprise architects.

It seems evident that the existing point-to-point, use-case based approach to developing enterprise systems cannot continue. But what is the alternative? 

These blog posts will delve into this topic a lot more over the coming weeks, with hopefully some useful findings that enterprises can apply with some success.. 


BndTools – a tool for building enterprise OSGi applications

Enterprise OSGi is complex, but the potential value is enormous, especially when the OSGi standards can be extended to cloud-based architectures. More on this later.

In the meantime, building OSGi applications is a challenge, as it is all about managing *and isolating* dependencies. Enter bndtools, an Eclipse plug-in designed to make developing and testing OSGi components (bundles) easy.

The tutorial for bndtools, which I recently completed, can be found on the bndtools website.


It is quite short, and gets you up and going very quickly with an OSGi application.

Now that I’ve got the basics of bndtools sorted, I’m going to go back to see if I can rebuild my “Enterprise OSGi in Action” project using it, so I can stop using my rather poor shell scripts..that should make it much easier/quicker to get through the remaining sections of the book, in terms of the code/build/test cycle.

bndtools makes use of a remote repository which includes packages needed to build OSGi applications..I need to understand more about this, so I can figure out how and where the various packages and bundles needed to build and run OSGi applications should come from, and how to manage changes to these over time. The ‘Enterprise OSGi in Action’ project leans heavily on the Apache Aries Blog Assembly set of components, which is great to get things going, but in practice many of the components in the Apache Aries distribution are not needed for the examples in the book.

Since I don’t like waste (it leads to unnecessary complexity down the road..), I want to see if I can build the examples with only the specific components needed.

OSGi Blueprint travails

So, OSGi is supposed to make it easier to isolate problems and therefore quicker to resolve. Alas, that is not the case yet – although this may yet change as I start to apply all the tools that are rapidly maturing in this space.

The problem is that my application (the ‘fancyfoods’ website, from the ‘Enterprise OSGi in Action’ tutorial) is failing to start up correctly, to wit:

  1. The servlet that displays the special offers that I know are available from the ‘fancyfoods’ store is saying there are none available.
  2. A number of exception errors in the console window I use to launch the OSGi application.

To date, the book has been accurate in terms of the key source code and configurations it requires. So in all probability, the error was due to my very novice development/test environment configuration.

So, nothing to do but pick through the exceptions and try to decipher them.

It starts with this beauty, which basically says the first operation which attempts to access the persistent store is failing (it does a count to see if any rows exist in the database):

[Blueprint Extender: 2] ERROR org.apache.aries.blueprint.container.BlueprintContainerImpl – Unable to start blueprint container for bundle fancyfoods.persistence
org.osgi.service.blueprint.container.ComponentDefinitionException: Unable to intialize bean populator
Caused by: java.lang.VerifyError: Expecting a stackmap frame at branch target 41
Exception Details:
    fancyfoods/persistence/FoodImpl.<clinit>()V @32: ifnull
    Expected stackmap frame at this location.
[Blueprint Extender: 1] WARN Transaction – Error ending association for XAResource org.apache.derby.jdbc.EmbedXAResource@e5a209e; transaction will roll back. XA error code: 100

As some background, Derby is an SQL database written in Java, accessed here via JPA (the Java Persistence API). It is used as the reference persistence implementation for the ‘fancyfoods’ application, as a simple substitute for MySQL or Oracle, etc.

After spending *way* too much time trying to figure out this error and verifying the various configuration files etc were correct (and in the process understanding a whole lot more about OSGi Blueprint), it turns out that the error was quite mundane, and indeed related to my dev/test environment, which uses the Mac OS X version of Java 1.7.

The turning point was a post on stackoverflow.com describing this same VerifyError.

Adding the ‘-XX:-UseSplitVerifier’ flag to my OSGi run script fixed the problem.
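For reference, the resulting launch command looks something like this (the Felix launcher jar name depends on your environment; shown here illustratively):

```shell
#!/bin/sh
# OSGi run script - the -XX:-UseSplitVerifier flag works around the
# stackmap VerifyError seen on Java 7; the jar name is environment-specific.
java -XX:-UseSplitVerifier -jar felix.jar
```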

Along the way, I investigated and eliminated all kinds of debug messages that were suggestive of problems but seemed to be relatively normal, or were just symptoms of the original error – for example, mysterious unbinding of services seemed to precede the exception being thrown. I suspect now that this was due to the multi-threaded nature of OSGi: once the exception was thrown, various activities kicked in that displayed messages before the thread with the exception displayed its own.

Trying to solve this highlighted a few things:

  1. OSGi is powerful, but when services don’t work as expected, it is devilishly difficult to trace through a potentially complex web of dependencies to figure out where things have gone wrong. Blueprint in particular is a challenge: being declarative, tracing cause and effect can be non-trivial.
  2. Getting your dev/build environment right is so key: spend time on this – it is worth it, and it shouldn’t change much once the basic architecture is in place.
  3. The ‘services’ mindshift in Java coding is quite significant: I haven’t got there yet, and I think this is at least partially because Java is basically an imperative language, while services are more declarative in nature, so the two concepts have to coexist. More sophisticated tools are needed to support this, especially in an enterprise setting.
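To make that declarative mindshift concrete: the service wiring that imperative Java would do with explicit registerService() calls is, in Blueprint, declared entirely in XML, along these lines (a sketch – the bean class is the FoodImpl from the error above, but the interface name here is illustrative):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <!-- the Blueprint container instantiates the bean and publishes it as
       an OSGi service; no imperative registration code runs in the bundle -->
  <bean id="food" class="fancyfoods.persistence.FoodImpl"/>
  <service ref="food" interface="fancyfoods.food.Food"/>
</blueprint>
```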

Next up for OSGi: finish chapter 3 (implementing distributed and multipart transactions). Hopefully there won’t be any showstoppers to slow things down in the meantime..

Enterprise OSGi in Action – a tutorial in enterprise OSGi

Having developed an active interest in Enterprise OSGi, I decided to leap in and work my way through the ‘Enterprise OSGi in Action’ book by Tim Ward and Holly Cummins, published by Manning Publications. (manning.com, btw, is a great resource for technical hands-on books delivered electronically..many of them will feature in this blog.)

More details about *why* I am interested in Enterprise OSGi are the subject for another blog post…enough – on to my progress so far.

I am already some weeks into this project, and have been using it to refresh my Java development skills, which, I must admit, are somewhat rusty after some years in the technology project management space. This blog will touch upon some topics which may be obvious to everyday developers, but for folks who cut their teeth on C/C++ and Java in the ’90s and didn’t keep up programming into the 2000s, some topics can be quite a revelation.

So, here’s what I’ve got so far:

  • I’ve completed up to Chapter 3 of the book (covering a simple web-based application with persistence)
  • I’ve created a repository on GitHub
  • I’ve loaded the project into Eclipse for compilation error checking etc., but not for building
  • I’ve written some simple shell scripts to do a *very* crude compile/build/deploy process – this is what folks were doing prior to ‘make’, never mind Nexus/Maven/etc!
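For the curious, the crude build process amounts to something like the following sketch (directory names and the bundle jar name are illustrative, not the actual project layout):

```shell
#!/bin/sh
# Crude compile/build/deploy: no dependency tracking, no incremental
# builds – everything is recompiled and redeployed each time.
set -e
SRC=src; OUT=build/classes; DEPLOY=load   # 'load' = dir the OSGi runtime watches

mkdir -p "$SRC" "$OUT" "$DEPLOY"

# 1. compile: every source file under $SRC into $OUT (no-op if none yet)
find "$SRC" -name '*.java' | while read -r f; do javac -d "$OUT" "$f"; done

# 2. build: jar the classes with the bundle's MANIFEST.MF, if present
if [ -f META-INF/MANIFEST.MF ]; then
    jar cfm fancyfoods.persistence.jar META-INF/MANIFEST.MF -C "$OUT" .
fi

# 3. deploy: copy the bundle into the watched directory
[ -f fancyfoods.persistence.jar ] && cp fancyfoods.persistence.jar "$DEPLOY"/
echo "build complete"
```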

What to do next:

  • Properly understand what I did in the first 3 chapters..! 🙂
  • Find out why I keep getting Derby/Blueprint errors when my application starts up – this appears to be a timing thing, but seems to be very sensitive and not particularly robust, as I have had it working previously..
  • Improve the toolchain process – use Eclipse, bndtools, Jenkins and Maven/Nexus to automate the compile/build/deploy process.
  • Complete the next few chapters – delving deeper into the ‘enterprise’ aspects of OSGi
  • Build something ‘real’ with the technology.

Note that the ‘Enterprise OSGi in Action’ book is now a couple of years old..however, its concepts are still correct and apply to the latest specs of OSGi. I believe that with the correct application of the ‘bndtools’ toolset, building this example project would be a whole lot easier, hence a goal of this project is to do just that. Without bndtools, some deep technical understanding of what is going on is needed, which for many devs will be too much: they just want to get productive already…
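To give a flavour of why bndtools lowers the barrier: the OSGi metadata for a bundle like fancyfoods.persistence can be reduced to a few lines of a bnd.bnd file, from which bnd generates the manifest (Import-Package is computed automatically by analysing the bytecode; the exported package name below is illustrative):

```properties
# bnd.bnd – bnd generates the bundle MANIFEST.MF from these instructions
Bundle-SymbolicName: fancyfoods.persistence
Bundle-Version: 1.0.0
Private-Package: fancyfoods.persistence
Export-Package: fancyfoods.food
```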

If anybody else is exploring or experimenting with OSGi, feel free to comment. This is a learning exercise, so there are no stupid questions. (After having searched a few times in stackoverflow.com for problems which I attributed to my lack of recent hands-on experience, I’ve found that most of my stupid questions have already been asked by others and invariably answered.)

For those interested, I am doing most of my work on a MacBook Pro running Mac OS X Mavericks. I’ve installed Homebrew for some basic tools, but otherwise I’m developing in a standard Mac environment.