Wednesday, June 20, 2018

Case Management and the Microprocess Architecture

In this article I introduce the concept of the Microprocess Architecture, which addresses some important limitations of traditional, monolithic Case Management applications.

This article has been updated on 2018-05-21 to correct a link to a previous blog article, and on 2018-09-15 to include a link to the Process Group Instance Pattern.

You might also be interested in the following supporting pattern: the Process Group Instance Pattern.

In the Oracle Integration Cloud a Case Management application consists of at least one Dynamic Process, which in turn consists of Case Activities. A Case Activity is implemented by a (structured) Process or a Human Task. The unit of deployment is an Application, which consists of one or more Dynamic Processes plus the implementations of the activities (Processes, Human Tasks), and may also include Forms (plus a few more components).



The same application can have multiple revisions (versions) deployed at the same time, each having its own Revision Id. There can only be one default revision. It is important to realize that once an instance of a case is started, it stays running in the same revision. In contrast to the (on-premise) Oracle BPM Suite, there is (currently) no way to move, or migrate as it is called, the instance from one revision to another. That holds for the Dynamic Process as well as for the implementations of its activities.
The consequence of this is that new features and bug fixes cannot be applied to any of an application's running instances. Moreover, for some types of Case Management applications instances can run for a very long time; think of legal cases, for example. As a result, release management of Case Management applications can quickly become a difficult job, if not a nightmare, especially when new versions need to be deployed regularly.

An effective way to address this is by applying a Microprocess Architecture. As the name suggests, it is no coincidence that this sounds similar to a Microservice Architecture: it addresses some of the same challenges and shares some of its characteristics. As I argued some time ago in my blog article "Are MicroServices the Death of BPM and Case Management?", it is not the same, though.

With a Microprocess Architecture you implement a case activity as a Delegate, which is just a "hook" that calls a Process in a different application to which the actual implementation has been delegated. The Case Management application as well as all implementing Process applications each have their own life cycle.
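To make this a bit more tangible: in essence all a Process Delegate does is start the implementing Process in the other application, for example via the process REST API. Below is a minimal sketch in Python of what that call could look like; mind that the endpoint and payload shape are modeled after the OIC process REST API but should be verified against the API reference of your version, and that all names used are made up.

```python
import requests

# Hypothetical OIC host and credentials; basic auth is used here for brevity,
# OAuth would be the more likely choice in a real environment.
OIC_BASE_URL = "https://myoic.example.com"
AUTH = ("username", "password")

def delegate_activity(process_def_id: str, service_name: str, params: dict) -> str:
    """Start the implementing Process in another application and return its
    instance id, so the Delegate can later correlate the result."""
    response = requests.post(
        f"{OIC_BASE_URL}/ic/api/process/v1/processes",
        json={
            "processDefId": process_def_id,  # e.g. "default~ImplApp!1.0~ImplProcess"
            "serviceName": service_name,
            "operation": "start",
            "params": params,
        },
        auth=AUTH,
    )
    response.raise_for_status()
    return response.json()["processId"]
```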


The advantages are:
  1. Support for parallel development of the case (with the Oracle Integration Cloud only one person can work on an application at the same time).
  2. Reduced impact of new features and bug fixes, as these can be kept local to a single application.
  3. Running instances can benefit from new features and bug fixes of the implementation of a Case Activity (as long as the instance has not yet reached that activity).
  4. Fewer revisions are required for the Case Management application.
  5. Reuse of the implementing Processes is supported (which in theory is also possible with implementing Processes that reside in the Case Management application, but that's just theory 😉).
  6. An implementing application can be deployed to a different container.
  7. An implementing application can be (re)implemented using a different technology.
As you may be aware, practically all of these advantages also apply to a Microservice Architecture.

The consequences of a Microprocess Architecture are:
  • Every Case Activity implementation requires an extra component to be realized (a Process Delegate plus an implementing Process).
  • It increases the number of applications to be deployed, which to some extent is mitigated by advantage 4.

Because of these two consequences (which also apply to a Microservices Architecture) one should make a conscious decision before applying a Microprocess Architecture to a Case Management application. For example, it might be less suitable for an application with short-running instances, or when there is no requirement to let new features and bug fixes apply to running instances.

There are a few prerequisites to a Microprocess Architecture which I will discuss in some blog(s) to come.
 

Thursday, May 31, 2018

Dynamic Process, Conditions and Scope


In Oracle Integration Cloud's Dynamic Processes, activation and termination conditions can be based on case events. These events are scoped to the container of the component they concern, which implies some restrictions. Below I explain how this works, and how to work around these restrictions.

A Dynamic Process or Case (as I will call it in this article) in the Oracle Integration Cloud consists of four component types: the Case itself, Stages (phases), Activities, and Milestones. An Activity or Milestone is either in a particular Stage (in the picture below Activities A to H are), or global (Activities X and Y). Cases, Processes, Stages, Activities and Milestones cannot be nested (but a Case can initiate a sub-Case via an Activity, which I will discuss another time).



Except for the Case itself, all other components can explicitly be activated/enabled or terminated/completed based on conditions. For example, in the dynamic process above, Milestone 1 is activated once Activity A is completed, and Stage 2 is activated once Stage 1 is completed.

A Stage implicitly completes when all work in that stage (i.e. all its Activities) is done, and a Case implicitly completes when all work in the case is done. Currently the status of a Case cannot be explicitly set using conditions, but I would expect this to become possible in some next version. In the meantime there is a REST API that can be used to close or complete a case.
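To give an impression, a minimal sketch of such a call is below. Be aware that the dp-instances resource and the action value are assumptions on my side; check the REST API documentation of your OIC version for the exact contract.

```python
import requests

def close_case(base_url: str, case_instance_id: str, auth) -> None:
    """Explicitly close a running case instance. The endpoint and action
    value are assumptions; verify against your version's API reference."""
    response = requests.put(
        f"{base_url}/ic/api/process/v1/dp-instances/{case_instance_id}",
        json={"action": "CLOSE"},  # "COMPLETE" would end it as successfully finished
        auth=auth,
    )
    response.raise_for_status()

# Usage sketch:
# close_case("https://myoic.example.com", "1234", ("username", "password"))
```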

There are two types of conditions for explicit activation/termination:
  • (case) Events, for example the completion of an activity
  • (case) Data Driven, for example a "status" field getting the value "started"
Events and Data Driven conditions can also be used in combination, for example Activity B is only activated when Activity A is completed (event) AND some "status" data field has the value "approve" (data driven).

The scope of an Event is its container, meaning:
  • A Stage can only be activated or terminated by a condition based on an Event concerning another Stage or a Global Activity.
  • An Activity can only be activated or terminated by a condition based on an Event concerning another Activity or a Milestone in the same Stage.
  • A Milestone can only be completed by a condition based on an Event concerning an Activity or another Milestone in the same Stage.
  • A Global Activity can only be enabled or terminated by a condition based on an Event concerning a Stage, a Global Milestone, or another Global Activity.
  • A Global Milestone can only be enabled or terminated by a condition based on an Event concerning a Stage, a Global Activity, or another Global Milestone.
I expect that in practice most conditions will be based on Events (so far that has been the case for me), and that in most situations their scope will impose no limitation. However, there are situations where you will need a "work-around".

Let's assume that in the example Stage 2 is only to be activated when Milestone 1 is completed, and that otherwise Stage 2 is to be skipped and the case should go directly to Stage 3. Because of the way events are limited by their scope, you cannot create a condition for Stage 2 to be skipped based on the completion of Milestone 1 (which is in Stage 1 and therefore not visible outside of it).

The work-around is to use a Data Driven condition instead. You can, for example, have a "metaData.status" field that you set to something like "skip phase 2" and use that in the condition.

In general it probably is always a good idea to give your case some complex data element, for example called "metaData", consisting of fields like "dateStarted" and "status", which you fill out via the activities and which, when needed, can be used in conditions everywhere.
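The business type itself you define in the OIC composer, but conceptually such an element could look as follows (only the field names above come from the example; the rest is illustrative):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class MetaData:
    """Illustrative shape of a case 'metaData' business type."""
    dateStarted: Optional[date] = None
    status: str = ""  # e.g. "started", "skip phase 2"

# A Data Driven condition could then simply test something like:
#   metaData.status == "skip phase 2"
```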

Friday, April 06, 2018

Oracle Integration Cloud: New! The Data Mapper Activity

In a previous blog I discussed a work-around for not having a Script activity in Oracle Integration Cloud's Process Builder. In this blog I discuss another work-around, which is actually not a work-around but the real thing: the Data Mapper!

As you can read in a previous blog about the matter, not having the equivalent of the Script activity of the on-premise BPM Suite was an omission that we often had to find a work-around for. The one I used was the Business Rule activity. However, some weeks ago the Business Rule activity got deprecated (you could clearly see that).



With the latest release of OIC (which may not yet be publicly available when you read this) the Business Rule activity has vanished. At the same time the Data Mapper activity has been added.



The Data Mapper activity has no properties other than that you can put it in draft mode.


The implementation is as simple as you might expect: there is only an Output tab, on which you can map data from Data Objects, Predefined Variables and Business Parameters on the one hand, to Data Objects and Predefined Variables on the other.



Next to simple mappings, you can also create and use (reusable) transformations to map Data Objects (or attributes) whose types don't match.
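Conceptually such a transformation is nothing more than a function from one business type to another. A made-up example: mapping a Customer with a single fullName attribute to a Person with separate name attributes:

```python
def customer_to_person(customer: dict) -> dict:
    """Made-up transformation between two non-matching types."""
    first, _, last = customer["fullName"].partition(" ")
    return {"firstName": first, "lastName": last}

# customer_to_person({"fullName": "Jane Doe"})
# -> {"firstName": "Jane", "lastName": "Doe"}
```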


I hope I never have to write something like this again, but if you used my work-around I got you into trouble in case you want to export and import an application, because importing an application with a Business Rule activity in it is not supported! Sorry :-D

Tuesday, March 27, 2018

Oracle Integration Cloud: Customer Managed & Patching

Currently the Oracle Integration Cloud (OIC) only comes as "customer managed". Among other things this means that you as a customer have access to the management consoles. It also means that you determine when to apply patches, as Oracle does not do that for you. The following describes how easy that is.

Oracle Cloud solutions can come in two flavors: Oracle Managed and Customer Managed. The first means that maintenance, including patching, is done by Oracle. You don't have to ask for it or initiate it, as it all happens "automatically", typically during non-business hours (like Friday evening). It also means that you don't have any control over it, but that probably is exactly what you want. However, OIC currently only comes as Customer Managed. This means that you have access to the WebLogic Server Console and the Fusion Middleware Console (although not with all the features that you would have with, for example, the on-premise version of the BPM Suite). I expect these consoles not to be available in the Oracle Managed flavor that is to come soon.



Another difference will be the way it is provisioned. With the Customer Managed flavor you have to provision a Storage Cloud yourself, and - depending on the type of template you use - also the Database Cloud.

With Oracle Managed I expect this to happen in one go, but that is yet to be seen. With Customer Managed you also have to think about how to configure the Stack that you want to use. A Stack is based on a Stack Template, which specifies the number of nodes, OCPUs, memory, and the database version and edition of a node (among a few other things). A Stack is a provisioned instance of a template. After provisioning you cannot change the instance or switch to another template; you can, however, provision more instances based on the same template. Another thing to point out is that with the Customer Managed flavor you need to indicate if and how you want it to be backed up.

Apart from the complexity, but also the flexibility, that comes with determining your Stack Template, after provisioning there is little difference with the Oracle Managed flavor. You can use it the same way, and if you have configured it to automatically make backups you don't have to think about that either. You do have to keep a keen eye on patches that may have become available, though.

If a patch is available, that will be shown on the Service Console:


You can start patching by clicking the link, which brings you to the Patch tab. In my case this gives a warning that I have no backup configured. It is a trial-only instance, so I did not bother to set that up; for a Production instance you obviously should have. I don't know if I can still change that for my instance, but I don't think so. On the right-hand side there is a menu with two options: Precheck and Patch.


With the Precheck option you can let OIC verify whether your instance is ready for the patch to be applied. In my case it is.


With the Patch option from the menu you initiate the actual patching. In my case the patch could be applied in a rolling fashion, meaning with the instance up and running. As a matter of fact, the patch cannot be applied when one or more instances are shut down.


There also was a patch available for the DB instance, which required a restart. I could only apply that after shutting down the OIC instance, but that is indicated clearly.


Just for the fun of it I ran the precheck of the patch again after applying it. It failed, which I expected because the patch was already applied. The results were not very clear, though.


Friday, March 23, 2018

Oracle Integration Cloud Tips & Tricks: Work-around for no Script Activity

Neither Oracle Process Cloud Service (PCS) nor the Process Builder in Oracle Integration Cloud (OIC) has a Script activity like there is in the (on-premise) BPM Suite. In the BPM Suite you can use a Script activity for data mappings as well as for Groovy scripting. That OIC does not support Groovy is by design, as the idea is to keep things as simple as possible. However, missing the data mapping feature of the Script activity can actually make things more complex. Fortunately there is a data mapping activity on the road-map of some next version of OIC. Until then you can make use of the work-around below.
There can be several reasons why you may want to have an activity just for mapping data, among them:
  • Readability of the process model, making it clear which data is set where in the process.
  • Data mapping is conditional, making it too complex or impossible to do it in the Input or Output mapping of (for example) a Service activity.
  • A conditional mapping before a Gateway.
  • Iterative development, requiring (temporary) "hard-coding".
The work-around is to use a Rule activity with an input and an output parameter of the type of the data object you want to map the data to.

As such, the Rule activity is deprecated, as it is superseded by the Decision activity, but as long as it is there (and a Mapping activity is not) we can make good use of it.

Below is an example. It concerns a Process that is used in a Dynamic Process application to set up some case meta data. The case meta data is stored and checked for duplicates. The Store Meta Data activity is in draft mode because I'm developing the process iteratively. One of the elements of the meta data is a startDate, which I want to set to the creationDate predefined variable.






I cannot do the mapping to the startDate in the Start event, because the creationDate predefined variable is not available there. But even if it were, for reasons of clarity I would like to have the mapping clearly visible in the process model.




I therefore created a Rule activity which uses an input and an output argument, both of the MetaData business type.


 




I can do all mappings on the Input and Output Data Association tabs, so I do not actually have to implement a rule. The result is that the input is mapped 1:1 to the output. For more complex use cases you can actually implement rules as well.
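Expressed in plain Python, just to clarify what effectively happens (the actual work is of course done in the visual model, not in code):

```python
def rule_activity(meta_data_in: dict) -> dict:
    """The 'rule' itself: no rules implemented, so input maps 1:1 to output."""
    return dict(meta_data_in)

# The surrounding data associations, sketched as pseudo-assignments:
#   input:  in.metaData           <- processDataObject.metaData
#           in.metaData.startDate <- predefinedVariables.creationDate
#   output: processDataObject.metaData <- out.metaData
```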





The run-time result is as shown in the next picture.


Monday, February 05, 2018

What Makes MicroServices Different from SOA?

In this article I discuss what is different between MicroServices and a traditional Service Oriented Architecture, as such an architecture may look when you know, for example, Oracle SOA. I also discuss some misconceptions I have heard or read concerning MicroServices. It is written by and for a person that knows SOA and wonders what to do with MicroServices. If MicroServices is what you do already, I probably have little news for you.
I wrote this article many months ago, but somehow forgot to publish it.

What's Different Compared to Traditional SOA?

In his article on InfoWorld, Matt McLarty states that this question should not matter. The real question is "what can we learn from the SOA movement", and I concur with his 5 important lessons. Nevertheless, even after reading his article, people like me will keep wondering what the practical implications may be for the way we use our technology now, and how we should change that.

All in all, most of the MicroServices principles are fundamental to what I would consider to be a "good" Service Oriented Architecture. Of course, there is no such thing as the SOA, although in my opinion many best practices, and lessons learned the hard way, have led to identifying some generic characteristics of the more successful ones, which below I refer to as classical SOA.

The way I see it (from my classical SOA perspective):

Stateful vs Stateless

MicroServices are stateless by principle. In SOA it is a best practice to avoid stateful services, but that is not a principle. You should try to avoid stateful BPEL, but when creating a composite service that involves one or more asynchronous services, that leaves you little choice. As I explained in my previous blog about MicroServices and BPM and Case Management, the latter two are stateful by definition, so there you don't have a choice either.

However, in case of asynchronous (request/response) communication, next time you may consider using events instead, where the response is not handled by an asynchronous callback but by publishing an event (by means of the EDN or JMS). Generally this complicates the implementation, but who said that MicroServices did not come with a price?
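A minimal sketch of the difference is below; process() and publish() are hypothetical stand-ins for your business logic and your event channel (EDN, JMS, or any other broker):

```python
import requests

def process(request: dict) -> dict:
    """Hypothetical business logic stand-in."""
    return {"requestId": request["id"], "result": "done"}

def handle_with_callback(request: dict, callback_url: str) -> None:
    # Asynchronous request/response: the caller must keep state to
    # correlate this callback with its original request.
    result = process(request)
    requests.post(callback_url, json=result)

def handle_with_event(request: dict, publish) -> None:
    # Event-based alternative: fire-and-forget publication; the original
    # caller (or anyone else) subscribes to the topic it cares about.
    result = process(request)
    publish(topic="request.processed", payload=result)
```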

Reuse

SOA is about reuse. In a classical SOA there often are a number of small, reusable "technical" services that are reused to compose bigger "business services". Examples include a service to handle asynchronous interaction in a generic way, and a service that retrieves a list of values from a database. We made them to speed up the development process, because creating the next application takes less time when reusing the services we created for the previous one.

Everybody is happy, until a new requirement makes us change the generic service, with potential impact on all existing applications using it. If you are lucky some regression test suite is available to verify that the existing functionality keeps working, but even then you may find that people don't feel comfortable unless all the other applications have been retested as well. You then may come to a point where you start wondering if all that reuse was such a great idea.

Much more than classical SOA, MicroServices are about minimal-function services built around business capabilities (not necessarily 'fine-grained'), where reuse is even discouraged if it introduces dependencies that may jeopardize business agility. There obviously is reuse with MicroServices (a reusable printing service provides a sensible business capability), but you should for example avoid shared custom Java libraries that are deployed independently. In a classical SOA you can also avoid this, by making sure that you package a specific version of the library with the service, so that it will never be impacted by any change unless you want it to be.

In general, compared to classical SOA, applying MicroServices principles will make you think differently about the responsibility and granularity of services. Again, this may come with a price, as some functionality may have to be duplicated to support business agility.

Data Services vs Data Replication

In a classical SOA we may not think for a second before deciding we need a (reusable) data service to get customer data. When reading about MicroServices you will find that the (by now classical) example of a bad practice is having some sort of CustomerDataService that may fail, and with that cause the failure of an OrderService to complete successfully.

It is for this reason that the Design for Failure principle implies that a MicroService should have its own data store where possible, and may have its own copy of shared business data like customer data. In this way the successful completion of the OrderService never depends on some CustomerDataService being available. Data is synchronized when necessary and feasible.
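Below is a minimal sketch of that idea (all names made up): the order service reads customer data from its own store, which is kept up to date by consuming change events:

```python
from typing import Dict

# The order service's own (simplified) data store with replicated customer data.
customer_store: Dict[str, dict] = {}

def on_customer_changed(event: dict) -> None:
    """Event handler that synchronizes the local copy of customer data."""
    customer_store[event["customerId"]] = event["customer"]

def complete_order(order: dict) -> dict:
    """Completing an order only needs a local read; there is no remote
    CustomerDataService call that could fail."""
    customer = customer_store[order["customerId"]]
    return {**order, "shippingAddress": customer["address"], "status": "COMPLETED"}
```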

You may already have realized that this is a specialization of the reuse issue addressed in the previous section. You will also realize that this is one of the more, if not the most, complex challenges to address, and the choice to replicate data is not an easy decision to make.

SOAP vs REST

The interface of MicroServices should be simple, which almost de facto seems to imply REST (over HTTP) and JSON. With classical SOA this typically is SOAP and XML, although you are by no means limited to that. For a while already we have been seeing more and more SOA services with REST interfaces.

Multiple vs Single Containers

With classical SOA many services will be deployed on the same SOA container, all sharing the same infrastructure (data sources, messaging, Operations tooling, etc.) that the container provides, reuse of that infrastructure being the reason to do so.

However, as a result, one single service behaving badly can impact all other services on the same container. I have seen cases where a single failing service brought down the complete container. One of the reasons to deploy every version of a MicroService in its own container is to prevent this type of issue. In this way it can be scaled, improved, and fixed without affecting any other MicroService.

Choreography

As I explained in my previous posting about MicroServices, there can be quite a few challenges to overcome when business functionality has to be supported by a set of MicroServices working together. Quite a few of those could be avoided, or addressed much more easily, if all services were deployed on the same container (which in a classical SOA is more or less the default), in particular challenges related to monitoring and Operations.

If there is any area in which MicroServices could quickly start adding value to a classical SOA, it is by orchestrating MicroServices (instead of classical SOA services) in case of Business Process Management or Case Management. Compared to classical SOA, what you get "for free" is that the cluttering of the orchestration with technical aspects is kept to a minimum (if it exists at all), as you will be orchestrating business functions with (mostly) business-oriented interfaces.

Technology Choices

With classical SOA the technology is limited to what the SOA container supports. For example, in case of Oracle you primarily implement your services using BPEL, Mediator or BPMN, simply because that is the easiest to do. Of course there can be good arguments for restricting the technologies used (even in a MicroServices environment you might want to have guidelines on that), but in practice you may find that this does not always result in the best designed, constructed, and operated service. If all you have is a hammer...

In contrast, MicroServices are polyglot regarding technology: for each individual MicroService you use the technology that is best suited, considering the functionality you have to provide and the skills present in the team. Different types of MicroServices may have a completely different way of implementation, using a completely different set of technologies. However, except for the interface, the technology used is completely transparent to the consumer.

Message Transformation

Another MicroServices principle is smart endpoints / dumb pipes, meaning that no transformation or enrichment happens in some Enterprise Service Bus. If an ESB is used, then its use is limited to routing and perhaps a layer for enforcing security. In a classical SOA architecture, transformation and some types of enrichment are typically done in the Service Bus.

Some Misconceptions About MicroServices

Finally I would like to address some of the misconceptions I hear and read about MicroServices:
  • DevOps implies MicroServices. It's more the other way around. DevOps is about culture and shared responsibility for the operation of one application. That can also be applied to many other architectures.
  • SOA is not MicroServices. Many see MicroServices as a sub-domain of SOA. As James Lewis and Martin Fowler state, some consider MicroServices to be SOA done right.
  • There is no use for an Enterprise Service Bus in a MicroService architecture. Well, you may still need the routing and security features it can offer (see also the section Message Transformation above). Perhaps not the traditional Enterprise Service Bus as we know it, but rather something you could call a "Business Event Bus".