BPM

Metastorm BPM : It’s not an application development tool


After 2 years of designing a large operational system using Metastorm v7.6, I wanted to reflect on why it’s a bad idea to use Metastorm BPM to build big workflow based systems.

The problem with the finished system is not that it doesn’t satisfy the requirements or doesn’t perform well enough (all things considered), it’s that it is a maintenance nightmare.  I wrote an article this time last year, whilst travelling back home to Holland after being snowed in, about why maintainability is the most important design factor (over and above scalability and extensibility).  Coupled with a ‘big bang’ design approach (rather than an agile one) and constant requirement changes, it’s a surprise the system runs in its operational state.

I don’t wish to run the product down, because for small to medium workflow driven applications it does the job. But its clear lack of object orientation is the biggest single product flaw, and when building a big system with Metastorm this cripples maintainability.  A solid design architecture is obviously of major importance, yet basic application architecture fundamentals, such as breaking a system design down into cohesive functional ‘components’ that each represent an area of concern for the application, can be difficult to implement.  This is down to the fact that process data is stored in the database per process, and passing data between processes using flags can become messy, especially when certain characters are passed using those flags (characters that Metastorm classes as delimiters).  Sub-processes are then an option, but these have their own inherent flaws.

Forms, which again are application components, are process specific, so re-use suffers here too and forms have to be replicated, further undermining maintainability.

Having data repeated in processes and having no code dependency features is bad enough, but because you have to remember where you have used process variables, and keep in mind when and where their values may change, the tool puts all the responsibility on the developer.  Once the system gets very large, the event code panels (the ‘on start’, ‘on complete’ etc.) get very complicated, and tracking variables and when they may change becomes a struggle in itself.  Changing a variable value in a large process risks quietly breaking other parts of the process because you’ve forgotten that the variable is used in a conditional expression later on.

This then begs the question: should you use the Metastorm event/‘do this’ panels for ANY business logic?  I’d say no.  Keep only UI or process specific logic there, and push ALL of your business logic into server-side scripts and a suite of external .NET assemblies.  You can then at least implement a fully swappable business logic layer.
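As a rough illustration of what that swappable layer could look like (the names here are hypothetical, not from any Metastorm API): put the logic behind an interface in an external C# assembly, drop the compiled DLL into the engine’s dotnetbin folder, and have the process event code do nothing more than call it.

// Hypothetical external business logic assembly for the engine's dotnetbin folder
namespace MyCompany.BusinessLogic
{
    // The process only ever talks to this interface, so the implementation
    // behind it can be swapped without touching the process design
    public interface IApprovalPolicy
    {
        bool RequiresSecondApproval(decimal amount);
    }

    public class DefaultApprovalPolicy : IApprovalPolicy
    {
        public bool RequiresSecondApproval(decimal amount)
        {
            // Example rule only - real thresholds belong in configuration
            return amount > 500m;
        }
    }
}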

So along comes v9.  This product is a great move towards stronger application architectures.  OOP design and the ability to debug alone save a whole lot of system maintenance time.  So although this version takes us closer to being able to create solid, maintainable operational applications, it was released too early.  It is slow (halving development productivity against version 7), it had many broken features, and grids, one of the most used visual components, especially for data driven applications (which is most business apps), were just terrible.  They continue to perform almost independently from the rest of the system, and patch 9.1.1 is still addressing grid shortfalls.  Obvious shortfalls which should have been picked up by a thorough QC team at (OpenText) Metastorm.

The new OOP approach means that designers and developers no longer have to use the line-by-line interpreted syntax of v7 and can design re-usable components.  So there is a greater case for using Metastorm BPM as an application development tool for fair-sized applications, but whilst development productivity is still poor and the designer is still very buggy, it’s not quite there yet.

Free ‘BPM For Dummies’ book


When I started in the field of BPM, I started hands on, creating workflows based on customer requirements.  I hadn’t read any books on BPM, as it just seemed like one of those fields that was really just common sense; surely no concepts and best practices could exist for plain ‘workflow’, which I defined as just moving ‘stuff’ through a sequence of activities?!

Equipped with MS Visio and common sense, I used my own home cooked notations that people came to recognise within the company I worked for at the time.  In a way I sort of faked it until I made it (isn’t that what we do in IT though?).  Now this was all well and good, and as with most fields you learn from mistakes, but I started noticing that some of the process decisions I’d made during design didn’t turn out to be as efficient as hoped, because I’d gone for the big bang solution as opposed to the agile one.  My ability to communicate my ideas wasn’t based on any common concepts, so communication became difficult in some instances, and when I showed my home cooked Visio masterpieces, some people just didn’t quite get it.

Like most, I’ve learnt a lot over the years, working with different clients and within different companies, using their own standards taught by their BAs and also using industry standard approaches and modelling notations.  One thing I do know is that although BPM may from the outset appear like an easy thing to get into, as it’s mostly common sense coupled with a good ability to draw shapes, it’s not, and whether you have a technical background or not, BPM requires you to read up on some fundamental concepts.

My point here? If you’re starting in BPM, read white papers, go to BPM focused community sites and forums, and start to understand the most common business processes (or level zero processes, as they’re referred to by some).  A nice starter is the free BPM for Dummies book.  I personally like the ‘Dummies’ series of books for getting into most new subjects at a basic level.  A free copy of the BPM for Dummies book is available via http://www.BPM.com if you sign up.  This link may take you directly to the pdf copy of the book itself; otherwise sign up on the site to get access to it.

BPM and ECM


Based on some recent speculation and forum discussions, and off the back of yesterday’s confirmation that OpenText have acquired Metastorm, I wanted to talk about whether it is inevitable that BPM and ECM will eventually become one technology offering.  There are lots of opinions on this topic: some think that both will merge, others that whilst there is currently overlap, they will continue as separate and in some cases competing technologies.

So, ECM stands for Enterprise Content Management.  Microsoft’s Sharepoint is an ECM tool in that it allows you to organize your enterprise content / digital assets centrally, which is useful for collaboration.  ECM tools not only allow you to organize your digital content but also generally provide basic process automation using this content; in the case of Sharepoint, that means using Microsoft Office and Workflow Foundation.  The latter two, the workflow and the UI for interacting with the process, are where we start to move into the BPM realm.  But not really.  This tends to be the main argument that ECM and BPM will merge: that both technologies offer a workflow solution.  The problem with this, however, is that BPM as a field is misunderstood in a lot of cases.  BPM does not just mean process automation using a workflow engine; there is so much more to the field of BPM.

ECM is about content and how it is organized and made available to an organization; some process automation is thrown in to ensure that this content can be moved around the organization, but it is limited. BPM is the continual improvement of how a business is run (via its many processes), applied not only to automate processes but to raise visibility of how the business is run, via process activity monitoring, business intelligence, dashboards etc.  BPM is not all about content; yes, BPM generally creates content and may consume content, but BPM tools have had document management support for years, so this isn’t new.

Systems integration, or EAI (enterprise application integration), is another area of technology that has a far closer relationship to BPM. By using programming frameworks like Java or .NET, or enterprise service bus components that implement message orientated integration, you can integrate almost anything, out of the box, with BPM servers.  Even so, enterprise integration remains its own independent technology, despite vendors offering BPM products that cover the two.

I do think it is inevitable that some vendors will attempt to further develop their products with ECM features so as to offer an ‘all in one’ ESB, BPM and ECM server, but for the most part I believe these will still be sold as separate products with simple APIs, meaning they remain competing technologies (as the products tend to drive which technologies are grouped). In the case of Microsoft, with Biztalk, Sharepoint and Office, I believe they have the right strategy, and I do think OpenText will keep its BPM and ECM products separate but closely paired (it makes sense from a sales / licensing perspective).

There are of course benefits to storing and organizing documentation and process models in an ECM, as is done with some Business Process Analysis tools, but this is the case with any project documentation, and as such ECM, with its ‘Enterprise’ clue, should be seen as an enterprise wide repository, not something specific to the process management realm.

In summary, whilst the two technologies overlap and I do see content management as important to BPM (think open XML document formats and web services), I believe the two will not become one; rather, content management will become one of the many areas of the BPM space (along with rules management, process automation, activity monitoring, systems integration etc.) – in what form is still to be seen.

Metastorm takeover announced


The Baltimore Business Journal reports that Metastorm will be acquired by Open Text for $182M. The deal is expected to close by the end of March.

It sounds like the combined product offerings from both companies will create a stronger product line for new and existing customers going forward, and of course there’s always the advantage of a larger combined customer base.  Here’s a link to the original article.

eWork.Engine.ScriptObject


In v7.x, .NET is available to your Metastorm BPM processes via server side scripting.  You can use basic scripting on the server side, but not with the .NET framework.  By using JScript.NET as your server side scripting language, you have the .NET framework class library and your own custom types available to you, but let’s not forget the all important ework script object.  This object allows you to access your process data from the server side.

When you create a server side script via Designer, it is like creating a new class file in Visual Studio. The script has full access to the .NET Framework Class Library, and you may notice at the top of your script, as with your class files, that the basic import System declaration has already been made. Before I talk about the second imported namespace, eWork.Engine.ScriptObject, I need to touch on references first.

In Visual Studio, if you are using assemblies that have been created by you or someone else, or if you are using non core .NET framework assemblies, you have to create a reference in your project.  This reference is essentially a link to the assembly so that you can access the types in that assembly. You would right-click the references folder in your project and then go find your assembly or COM component (VS will wrap the latter in a .NET callable wrapper for you).  Once you select the assembly, it is copied locally to your project (if ‘Copy Local’ is set to true) so that your compiled project can use it after application deployment.

The same rule applies to Metastorm BPM server side scripts.  Instead of adding a reference to an assembly, you place the assembly that you need to reference in the Engine/dotnetbin folder of the Metastorm BPM program folder.  The engine can see anything placed in here, so all you need to do is make sure you import the required namespace.  This is where eWork.Engine.ScriptObject comes in.  If you take a look in the dotnetbin just after installation of Metastorm BPM, you’ll see a collection of assemblies, and eWork.Engine.ScriptObject.dll is one of them.

The namespace eWork.Engine.ScriptObject contains 13 types which you can use in your server side scripting. You may notice that when you first create a server side script, 2 methods are created for you, SyncSample and AsyncSample.  The first parameter of each of these is an object called ework, which is of type SyncProcessData or AsyncProcessData respectively.  Both of these types are found in eWork.Engine.ScriptObject.dll.
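To make that concrete, a newly created server side script looks roughly like this (a sketch only; the exact signatures Designer generates may differ slightly):

import System;
import eWork.Engine.ScriptObject;

// Roughly what Designer generates for a new server side script.
// The ework parameter is your window onto the process data.
function SyncSample( ework : SyncProcessData ) : void
{
   // Synchronous work happens here, e.g. reading or updating folder data
}

function AsyncSample( ework : AsyncProcessData ) : void
{
   // Asynchronous work happens here
}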

So, we have an object available to us in our methods called ‘ework’. Now the first question is, what methods or properties does this handy ework object contain? The server side script editor in v7 does not employ intellisense, so typing ‘ework.’ is not going to help you much.  One idea is to use Visual Studio’s object browser.  If you create a new VS project, right click your references folder, make your way over to Program Files/Metastorm BPM/Engine/dotnetbin and select the eWork.Engine.ScriptObject.dll file, it will be added to your project references list.  Right click this new reference and select ‘View in Object Browser’.  You’ll now see the namespace and the 13 types I mentioned earlier in the object browser, along with their type members.  Take a moment to explore the SyncProcessData type to have a look at the members that will be available to you via the ‘ework’ object in the Designer’s script editor.

There’s a lot more to the ework object than you thought, right? Most of the functions available via the Integration Wizard are there, and because this is a .NET object, all of the standard object members are inherited (ToString() etc.).
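If you prefer code to the object browser, a little reflection over the same assembly achieves much the same thing.  A minimal C# sketch (assuming the default install path):

using System;
using System.Reflection;

class EworkExplorer
{
    static void Main()
    {
        // Load the script object assembly from the engine's dotnetbin folder
        Assembly asm = Assembly.LoadFrom(
            @"C:\Program Files\Metastorm BPM\Engine\dotnetbin\eWork.Engine.ScriptObject.dll");

        // Dump each public type and its members - SyncProcessData is the
        // interesting one, as it backs the 'ework' object in server side scripts
        foreach (Type t in asm.GetExportedTypes())
        {
            Console.WriteLine(t.FullName);
            foreach (MemberInfo m in t.GetMembers())
                Console.WriteLine("  " + m.MemberType + " " + m.Name);
        }
    }
}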

ps – When validating your server side scripts, Designer might throw up errors to the effect that it can’t find a reference to your own .NET assemblies, if you are using any. This is purely because the Designer looks to its own Dotnetbin folder for references when validating your code, so any assemblies you wish the engine to work with should be placed in your local Designer’s Dotnetbin folder also (note, it’s a capital ‘D’ for the Designer folder and a small ‘d’ for the engine folder).

Biztalk 2010 : Configuration Guide


Biztalk, Microsoft’s integration and process server (sometimes referred to as an Enterprise Service Bus), is now in its 2010 edition and seems to be going from strength to strength 10 years after it hit the market.  I’ve worked with the 2004 and 2006 R2 versions of Biztalk in the past for EDI projects and it’s a great product with a lot to offer.  It has custom adapters for connecting LOB systems and is heavily integrated with the .NET Framework, especially WCF.  The great thing about this new version of Biztalk is that the installation is easier than ever. Installation has become simpler with each release since Biztalk 2006 – I remember Biztalk 2004 being a nightmare to install when it came to its prerequisite installations (SQL Server SPs etc.).

For Biztalk 2010, all the prerequisite components required are downloaded for you by the installer (if you don’t already have the .cab files available on disk), and once the installation has finished (it takes about 15 minutes), the basic server configuration option ensures you are up and designing Biztalk applications in a very short time by auto configuring single sign-on and setting up all the databases and service accounts for you, using the single account specified on the first page of the configuration screen.

In terms of the audience for this post, if you are a newbie to Biztalk and wish to follow along with the configuration steps, hopefully this is simple enough for you. I do reference certain Biztalk features without explaining them in any detail, so the more experienced developers, having set up previous installs of the server, may feel more at home.  In terms of the environment for this install, I am using a Windows 7 machine and will be setting up a single server, as no pre-existing Biztalk server environment exists to join on my network.  I already have .NET 4, Visual Studio 2010 and SQL Server 2008 R2 installed.  In terms of the software component prerequisites for the install, here is the list:

Prerequisites:

– Microsoft SQL XML 4.0 with Service Pack 1

– Microsoft Office Web Components

– Microsoft ADO MD.Net 9.0

– Microsoft ADO MD.Net 10.0

– Setup runtime files

– Enterprise Single Sign-On Server

– Enterprise Single Sign-On Administration

– Microsoft Primary Interoperability Assemblies 2005

Features Being Installed:

– BizTalk EDI/AS2 Runtime

– Windows Communication Foundation Adapter

– Portal Components

– Business Activity Monitoring

– Windows Communication Foundation Administration Tools

– Developer Tools and SDK

– Documentation

– Server Runtime

– Administration Tools And Monitoring

– Additional Software

– Business Rules Components

– BAM Alert Provider for SQL Notification Services

– BAM Client

– BAM-Eventing

Installing Biztalk 2010

Now I’m assuming you are either installing a paid-for version of Biztalk or are trying out the trial offered on the Microsoft Biztalk site, and so have the legit zipped installer package.  Once you have the installer package extracted to your disk, run the setup.exe file from the BT Server folder.  The install should be pretty self explanatory: select the features you wish to try out, select install, and grab a coffee whilst the installer does its thing.

Don’t worry about the installer’s lack of interaction with you. Biztalk installs the required server components first and allows you to configure the server databases and register the server components later, which is what we’ll walk through now using the Biztalk Server Configuration tool.  This can be found in your program folder in the start menu.

The initial configuration screen will ask what mode of configuration you wish to proceed with.  In this case we are choosing ‘custom’, as the basic mode performs all of the configuration for you, and that’s not much use when we’re trying to understand the configuration process.  Supply the screen with your SQL Server instance name and also provide a windows account for use by the Biztalk service.  If you use an account that is part of the local or domain administrators group, you will be warned of the security risk related to doing so, but you can continue.  It isn’t advisable to use an administration account for a production Biztalk installation.  Select the ‘Configure’ button to view the feature configuration screen, where any features not yet configured are marked with red circles.


Enterprise Single Sign-On

The first of the server features to configure is single sign-on, as most of the other features require this for their configuration.  Select Enterprise SSO from the tree, enable single sign-on using the check box provided and create a new SSO system.  Doing this will create a new single sign-on database (SSODB) on the SQL Server specified. A SQL Server login for the SSO Administrators group is also created.  As well as the database, the single sign-on service will be set up. I am using the same service account specified earlier.

Before leaving SSO, ensure you set an SSO backup password and a location for the backup.

Biztalk Server Group

Next we join or create a Biztalk group.  Your Biztalk server needs to be part of a group, enabling you to manage your Biztalk server infrastructure at the group level.  In this instance I’m installing to my development machine only, so I will be creating a new Biztalk group which I will manage from this machine.  Enable the Biztalk Server Group and then create a new group. If you’re familiar with Biztalk you’ll recognise the configuration for the Management and Tracking databases and of course the all important MessageBox database.  Again, specify the database names (I always stick to the defaults) and select your Biztalk role to windows group mappings for Biztalk Admins, Operators and B2B Operators.

Biztalk Runtime

Next up, we have the critical Biztalk runtime. This is the engine of Biztalk and deals with processing and routing of messages into the MessageBox and out of the system. You may also be paying for your Biztalk server on a runtime basis (license per runtime).  Ensure the ‘Register the Biztalk Server runtime components’ checkbox is ticked.  Biztalk applications are hosted within a process host, so whilst registering the server runtime, leave the ‘BiztalkServerApplication’ and ‘BizTalkServerIsolatedHost’ checkboxes ticked.  Both hosts need a windows account to run under; I’m using the same account used for SSO.

Biztalk Rules Engine

If you’re going to create rule policies / vocabularies etc., you need to activate the Biztalk Rules Engine.  The rules engine allows you to abstract the business rules from your orchestrations (processes) to enable easy management of rules away from the process itself.  Rules are such things as ‘if PO cost is greater than £500.00 and cost centre = ‘101ADM’ then do this’.  Ensure ‘Enable Business Rules Engine on this computer’ is checked and specify the business rules database store (again I would recommend leaving the default ‘BizTalkRuleEngineDb’).

Business Activity Monitoring [Portal]

BAM is the business activity monitoring side of Biztalk. It allows you to monitor certain business KPIs against your process instances so that business stakeholders have real time updates on how their business is doing.  Monitoring is of course one of the key elements in process management and is one of the core drivers for process improvement.  If you plan on using BAM, enable it and confirm the database names.  As this is my development server, I won’t be adding BAM Alerts at this time, even though I do have SQL Server Notification Services installed.  Last up, for BAM and the configuration in general, we will be enabling the BAM portal.   This is the window into the activity data produced by the BAM services and is hosted on the local IIS server.  This section deals with some of your web server settings; for now I will be adding the site to my Default Web Site on IIS (version 7). You can of course set up your own site / application pool for this.

EDI/AS2

In this instance I have chosen NOT to install the BizTalk EDI/AS2 runtime, as I’m not planning on creating any X12 or EDIFACT documents anytime soon, and unless you are familiar with these areas or are actively using Biztalk to deal with EDI files it’s best not to install this feature.  The AS2 part of this relates to EDI over HTTP (AS1 – SMTP, AS3 – FTP).

Applying The Configuration

So… I’m all done with my server’s configuration and am ready to apply my configuration preferences.  Select ‘Apply Configuration’ from the top left of the server configuration window, and Biztalk will present you with a summary of your configuration choices.

Once you proceed, the configuration will take a minute or two, and once completed you will be presented with a list of green ticks (hopefully) against your server features. Any problems will show up with the familiar red circle with a cross.  Typical issues involve database rights and authentication using the windows account credentials provided.

Once the server has been configured, we can check all is well by opening up the Biztalk Server Administration application.  This is the application for managing your deployed Biztalk applications and all of their artifacts.  Expand the Biztalk group and, if all is well, you should see the default Biztalk applications and artifacts listed (notice we connect to the Biztalk Management Database for this info).

Another useful check is to boot up Visual Studio 2010 and check that the Biztalk project template is available to you.

Congratulations, you are ready to begin creating XML Schemas, WSDL, Orchestrations, Custom Pipelines and more.

Metastorm BPM : Dynamically determine the Metastorm database name


At the moment, both of my Metastorm BPM clients have multiple Metastorm servers running on different physical servers that operate as independent Development, QA (or test) and Live systems.  Now, this isn’t so rare; most companies that implement Metastorm BPM environments have this type of setup.  You will generally also have three Metastorm databases running for these three servers, and if you’re like both my current clients, all three of those Metastorm databases run on the same Sql Server instance and as such are named differently.   If you have, for example, MetastormDev, MetastormQA and MetastormLive databases set up, then accessing the data in these from one single procedure file isn’t a big hassle, as the procedure just references a DSN (data source name) that is set up during installation.  This DSN is called Metastorm by default, so provided the procedure references that name in its grids, sql executions etc., all is well.

When you are creating server side scripts you can still reference this DSN name by using the built in ework.SelectSQL or ework.ExecSQL functions via the script designer. You would create a SQL statement as you normally would in Metastorm Designer and then specify the DSN properties in the second parameter, like this:

var sql : String = "SELECT myColumn FROM myTable";

ework.SelectSQL(sql, "DSN=Metastorm;UID=;PWD=") // Uses windows authentication, hence no Sql Server password

The above illustrates a simple select of a single column, but consider the scenario of selecting multiple columns from a database and applying a JScript.NET for loop to each record, performing logic such as raising a flag or checking a secondary data source.  SelectSQL is no good for this unless you want to start breaking the data down into sub strings and using arrays.

It is far easier to use ADO.NET in your server side script to load the SQL data into a dataset and loop that dataset, referring to the columns by name in your for loop statements.  I always much prefer the use of an in memory dataset for column/row manipulation.  The issue that arises using this method, however, is that we need to specify a database connection string to pass to the SqlDataAdapter class constructor when filling the data set.  But what is the database name?  Our DSN is of little use to us now.

The answer, or at least the method I normally use, is to read the Metastorm registry keys for the database value on the server the procedure is running on.  Specifically we want to grab the local machine’s sub key value at this location: HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\Metastorm\Database.  A minimal JScript.NET sketch of such a method, using the standard Microsoft.Win32 registry classes, might look like this:
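import System;
import Microsoft.Win32;

// Sketch: reads the Database value from the local Metastorm DSN registry key
function GetMetastormDatabaseName() : String
{
   var key : RegistryKey = Registry.LocalMachine.OpenSubKey("SOFTWARE\\ODBC\\ODBC.INI\\Metastorm");
   if (key == null)
      return "";
   var dbName : String = String(key.GetValue("Database", ""));
   key.Close();
   return dbName;
}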

For those who would want to implement this method in C# (i.e. for supporting utility assemblies that you can place in the Metastorm engine’s dotnetbin server folder), an equivalent sketch:
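using Microsoft.Win32;

public static class MetastormEnvironment
{
   // Sketch: returns the database name configured for the local Metastorm DSN,
   // or an empty string if the key or value is missing
   public static string GetMetastormDatabaseName()
   {
      using (RegistryKey key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\ODBC\ODBC.INI\Metastorm"))
      {
         if (key == null)
            return string.Empty;
         return key.GetValue("Database", string.Empty).ToString();
      }
   }
}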

You can call these methods from anywhere in your server side code and have a string returned naming the database being used by the current Metastorm server.
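Tying this together with the dataset approach described above, a hypothetical C# fragment (the table and column names are illustrative, and a local Sql Server using windows authentication is assumed) might look like this:

using System.Data;
using System.Data.SqlClient;

public static class FolderDataHelper
{
   public static void ProcessRows()
   {
      // Build the connection string from the registry-derived database name
      string db = MetastormEnvironment.GetMetastormDatabaseName();
      string connStr = "Server=localhost;Database=" + db + ";Integrated Security=SSPI;";

      using (SqlDataAdapter adapter = new SqlDataAdapter(
         "SELECT myColumn, myOtherColumn FROM myTable", connStr))
      {
         DataSet ds = new DataSet();
         adapter.Fill(ds, "Results");

         foreach (DataRow row in ds.Tables["Results"].Rows)
         {
            // Columns are referenced by name rather than by substring position
            string value = row["myColumn"].ToString();
            // ...raise a flag or check a secondary data source here
         }
      }
   }
}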

Implementing a BPEL Process on GlassFish ESB


Note : Source XSD files for this project can be found here. Create 2 XSD files from the attached word document content.

I’ve been a .NET developer for a good few years now.  I’m familiar with OOP concepts and the .NET architecture, and recently I’ve had the language / framework itch.  I’ve always wondered why Java remains such a popular language for rich client, web and embedded systems and decided, finally, to jump into Java.

My initial thought was how similar in syntax C# is to Java.  There are of course small differences, and Java appears to have only about 70% of the language features that C# / .NET offers, but for .NET developers, Java syntax and language features could really be picked up over a weekend.  The architecture of the JVM should be an initial read also.

Once I felt comfortable with the differences in the language, the differences between J2SE, J2EE and J2ME, and how the component / container architecture of J2EE works (JSPs, Servlets, EJBs etc.), I was very keen to jump into the area I know and enjoy developing the most: integrating systems via messaging.  For this task I’m using OpenESB, a business integration platform similar to BizTalk (deploying OpenESB applications to the GlassFish server instead of the Biztalk server).

The key word here is OPEN.  The platform is entirely based on open standards such as XML, SOAP, WS-*, Java and Java’s SOA / business integration approach called JBI (Java Business Integration).  The title of this post says GlassFish ESB: this is an edition released as part of the GlassFish Application Server that includes the core Open ESB engine and allows integrated, easy design within the NetBeans IDE.

I should stop for a moment and clarify what ESB is all about.  The acronym stands for Enterprise Service Bus.  The enterprise service bus concept is a middleware architecture approach that provides a central messaging (including message brokering and routing), process orchestration / execution and message transformation (mediation) platform, allowing software and systems of varying types to communicate across the enterprise using open standards, most importantly XML and SOAP.   Biztalk, Websphere, Business Integration Server (by Seeburger), MuleESB, JBossESB etc. are all types of ESB.

Deploying a basic BPEL process

The GlassFish server and Open ESB core components are installed with the NetBeans IDE if you download the installer from the OpenESB website.  At the time of writing, Open ESB runs on GlassFish 2.2, but I’ve not managed to install it on v3 of GlassFish.  To illustrate the ease of use and how quickly you can deploy a BPEL process to GlassFish, we will create a small BPEL module that receives a message from a SOAP endpoint, follows a simple BPEL process and returns a response message to the endpoint.  This is the standard first example from the OpenESB site and forms a simple Loan Application process.

Create a new ESB Project

First, launch the NetBeans GlassFish ESB IDE (integrated development environment) and create a new project. Select the BPEL Module project type and select ‘Next’.  Specify the project name as bpLoanApplication and leave ‘Set As Main Project’ ticked. Once the project has loaded, take a look at the Process Files directory. We will be adding the message schemas, the WSDL and the BPEL process file in here.

So, we need to define the request and response message structures using XSD files and then build a WSDL file from these message XSDs, to allow a service client consuming this process to understand the structure of the request message to send.  XSD files represent the XML schema of a message and include important information such as element names and element data types.
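To give a feel for the shape of these files, here is a hypothetical request schema (the element names are illustrative only, not taken from the tutorial files):

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/loan"
           xmlns="http://example.com/loan"
           elementFormDefault="qualified">
   <xs:element name="LoanRequest">
      <xs:complexType>
         <xs:sequence>
            <xs:element name="ApplicantName" type="xs:string"/>
            <xs:element name="LoanAmount" type="xs:double"/>
         </xs:sequence>
      </xs:complexType>
   </xs:element>
</xs:schema>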

Import both XSD files included in the source files (Tutorial Files > jbiConcepts > Service Registry > LoanRequestMsgs) into the project.  These represent the request and response messages.

Create WSDL File

Next we need to create the WSDL file that the service consumer will read using the standard url?WSDL link.  WSDL is the Web Services Description Language, and its mission is to describe to consuming clients what operations the web service offers, so that the client can determine what it needs to do to interact with the service.  In the projects window, right click the ‘Process Files’ directory, select ‘Add’ and choose ‘WSDL Document’.  The wizard will open; specify the WSDL file name (loanApplication.wsdl) and select ‘Next’ to move onto the main configuration screen, where you will define the input and output message parts.  You can select nodes from the already imported request and response XSD files, and NetBeans will auto detect the data types of the message elements.  Once you have specified the inbound and outbound message structures and specified a port name, you can select ‘Finish’ and the WSDL file is added to the Process Files folder.

Create and configure the BPEL Process

Now that we have our messages and WSDL contract, we need to create the BPEL process and associate the inbound and outbound messages with it.  Again add a new file to the ‘Process Files’ directory and choose ‘BPEL Process’.  You will see the default process diagram appear in the design area.  Using the palette on the left, drag Receive, Assign and Send shapes onto the process diagram, in that order.  The Receive and Send actions should be self explanatory, but the Assign action will map data values from the request message to the response message.

Next, drag a ‘Partner Link’ onto the left hand side and call it ‘plApplicant’.  This partner link will represent the logical message entry and exit point of the process, and will be associated with a SOAP endpoint later on.  Let’s now configure our partner link and message send/receive actions.  Double click the partner link and a properties window will open, allowing you to associate the WSDL file created earlier.  Using the WSDL, the partner link type can be specified.  Once the partner link has been configured, we need to configure the Receive message action: double click the Receive action, and in the properties window you can now associate the recently created partner link name, the operation name (the request operation specified in the WSDL) and the creation of a local variable that will represent the message within the process (ApplyForLoan). Leave ‘Create Instance’ ticked.

We also need to configure the Response action here.  Double click it and in the opened dialog again specify the single partner link and partner link operation, as well as specifying a variable for the outbound message (ApplyForLoanOut).  We do this as we will be mapping data from the inbound message variable to the outbound.

Assign values to the response message

Now that we have created our process structure, a logical partner link which will link the process to an endpoint of some kind, and our request and response message handlers, we can map the inbound message data elements to the outbound message.  This is done using the ‘Assign’ action placed in the centre of the process.  Double click it to open up the assignment mapper interface.  The mapper allows you to map values (message elements) from the request variable we created earlier, which contains the inbound message, to the outbound message variable we specified, which contains the outbound message schema.  In short, you create a relationship between source and target nodes (message elements) and, if necessary, can perform a group of ‘transformation logic’ actions on the pane in the centre of the mapper.  In this example, I am just creating two static string values that will be mapped to the outbound message.  You do however have the ability to run XSL transformations on the data, run some Java code on the data, and, using operators and filtering functions, perform decision based transformation.  Those familiar with the BizTalk Mapper will recognise these as being akin to Functoids.
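Behind the diagram, the designer is generating WS-BPEL.  A stripped down sketch of what this process might look like in the .bpel file (namespaces omitted, and the ‘…’ attribute values are placeholders rather than real values):

<process name="bpLoanApplication" ...>
   <partnerLinks>
      <partnerLink name="plApplicant" partnerLinkType="..." myRole="..."/>
   </partnerLinks>
   <variables>
      <variable name="ApplyForLoan" messageType="..."/>
      <variable name="ApplyForLoanOut" messageType="..."/>
   </variables>
   <sequence>
      <!-- Receive: creates the process instance when a request arrives -->
      <receive partnerLink="plApplicant" operation="..." variable="ApplyForLoan" createInstance="yes"/>
      <!-- Assign: copy / transform request data into the response -->
      <assign>
         <copy>
            <from>'some static value'</from>
            <to variable="ApplyForLoanOut" part="..."/>
         </copy>
      </assign>
      <!-- Reply: return the response message to the caller -->
      <reply partnerLink="plApplicant" operation="..." variable="ApplyForLoanOut"/>
   </sequence>
</process>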

Once you have completed your map, save it and close it.  You are now ready to build the ‘process’ part of the application, also known as the BPEL module.  Right click the ‘bpLoanApplication’ project and select ‘Build’. The build.xml output window will show you the results of the build.  You may see a warning that certain message elements are not used; this is fine, as it’s only an example and we are not using all parts of the message.

Creating a host application for our BPEL module

Now that we have created our BPEL module, we need to host it in a composite application.  We do this by creating a new composite application project that will host our initial bpLoanApplication module.  Select ‘New Project’, choose ‘SOA’, then select ‘Composite Application’ – call it ‘caLoanApplication’.  This will add a new project to the project window; ensure this is now the main project. When the project loads, you will see an empty design surface that is awaiting WSDL ports and one or more JBI modules, which in our case is our BPEL module.  Using the palette, select a SOAP binding from the bindings list and drag it to the WSDL ports area on the left.  Note that you will not be able to configure the endpoint until the caLoanApplication project is built.

Next, we need to drag our BPEL module onto the centre part of the design surface, so that a message sent to the SOAP endpoint will be passed to the BPEL process we have designed.  The great thing here is that you don’t have to go searching the disk for the module’s .jar file; just select the still open bpLoanApplication project and drag it into the centre of the design surface – a nice new bpLoanApplication module will appear.   Now it’s time to build: right click caLoanApplication and select ‘Build’.  The build creates a relationship with the JBI module bpLoanApplication, and you may notice that under the caLoanApplication JBI modules folder there is now a new JBI module called bpLoanApplication.jar.  Now that the project has a relationship with our JBI module, we can configure the SOAP endpoint.  Right click the new SOAP endpoint and select properties, select the … next to the interface name and select the only listed interface you see in the drop down (this is the logical port we created in the BPEL process earlier).  Provide an endpoint name and an endpoint path on your web server; this is the link between the logical WSDL port and a physical endpoint.  For this example I use http://localhost:${HttpDefaultPort}/loanApplication/loanApplicationEP as my endpoint address as I’m running this test locally (and no kids, clicking the link won’t work).

Almost done… Drag the connector from the now configured WSDL SOAP endpoint to the bpLoanApplication JBI module.  We now have a way of receiving the message via SOAP, sending it to the JBI BPEL process, and getting a response back to the calling SOAP endpoint.

Great. Now all that is left to do is deploy it to your GlassFish server.  Right click the caLoanApplication and select ‘Deploy’.

Nice work…

Bonita Open Solution : A quick look


I’ve been playing with a few open source BPMS tools of late, including Process Maker and Intalio Community Edition, however I’ve been most intrigued by Bonita Open Solution, currently available at version 5.1.1 from their website. The product combines three solutions in one: 1) an innovative process design studio, 2) a BPM engine and 3) a fancy end user interface which takes an email approach to managing process instances (cases).

The product is a fairly hefty 200mb download, but installation is effortless and I was up and running with a test process within minutes.

Bonita Studio is a very easy application to grasp, and for the experienced BPM professional, especially those who have used Tibco Business Studio / Intalio, the eclipse style interface will seem very familiar. I found the process design to be very enjoyable and much prefer the BPMN style notation used to Metastorm’s more business friendly ‘stages and actions’ approach.

  • Process Variables

    Process variables are created for use within the process and the forms at each task. Variables can be of types Boolean, Date, Int, Float, Text, List, Attachment or Java Object (any class from the Java language). Variables are stored and accessed using the syntax ${myVariable}.

  • Roles

    Bonita determines its human task roles using groups. Groups are defined within Bonita Studio and the process initiator group is predefined. You assign a group to each task through the ‘Actors’ task property.

  • Connectors

    In order to work with external information systems in Bonita Open Solution you need to create connectors to those systems. The out of the box connectors list is impressive (see the list of groups below), but you can, if you so wish, create your own connectors using the Bonita API.

    The Bonita Open Solution pre-defined connectors include the following groups:
    1. Databases (Sybase, SQL Server, DB2, AS400, MySql, Oracle and a whole load more)
    2. Google Calendar
    3. Java Executor (executes a java class)
    4. LDAP (allows searching of LDAP directory)
    5. Email
    6. Scripting (Shell and Groovy)
    7. Social (send messages to twitter)
    8. SugarCRM (interactions with the Sugar CRM)
    9. Web Services (interact with web services)
    Connectors can be configured (e.g. I configured sending a twitter message from my bpmtalk account to my personal twitter account) and saved for later re-use. During this connector configuration you specify which task ‘event’ the connector should execute on (start, end, cancel or abort).
    One point regarding database connectors: to get your data using the connector wizard you can specify the database server properties (including the connecting port) and write the SQL query. Finally you need to have a variable handy to accept the returned results. Bonita suggests you create a variable of type ‘Java Object’, which allows you to select any of the available classes in Java. I chose the recommended java.util.List class and had my database results assigned to that.
    I saved both my connectors to the process for later use. Connectors can also be saved against each process step.
    One final point on connectors: the list continues to grow as users contribute new connectors via the bonita community site. Additional connectors, including google translation, a csv reader, picasso etc., are available.
  • Processes

    The process designer is very easy to use and the process components are in a simple BPMN style. The available process components are:
    1. Step (activity or state of type human, automatic or sub process)
    2. Start (process start event)
    3. End (process end event)
    4. Gateway (for AND or XOR decision making)
    5. Transition (continuing the flow with the option for flow conditions)
    6. Lane (for process swim lanes)
    7. Pool (for multiple lanes)
    8. Timer
    9. Message (for throwing or catching messages)
    10. Message Flow
    11. Text Annotation
    I can drag process components onto the canvas with ease, and the ability to continue drawing the process from the last placed component (e.g. a step) really helps in quickly building the workflow. Each process component has a good amount of configurable options, such as associating connectors, actors (roles), deadlines, forms, data and advanced options, plus the ability to instantiate a sub process (another deployed bonita process) and map variable values from the parent process to the child.

    Each process is saved as a .bar file and Bonita ships with a few example processes for you to get to grips with. You can also export processes in the form of a zip file. As far as transactional support goes, there doesn’t appear to be any multi step transactional support, only the ability to abort a step if an error is encountered.
  • Forms

    Forms can be created per process or per step of the process. The benefit here is that you can select which process variables you wish to use on the form and the system will add them for you on the form design canvas.
    To create a step form, click on the spanner and screwdriver icon of the step and select the user step type (types can be sub process (with variable mapping) or human tasks). Once the step type is defined you can attach/create a form for that step.
    Form field values are stored in ${field_Name} format, with the field_ prefix being system generated. You specify the copying of form field values into process variables as part of the form submit button, or as each field is changed.
    Field validation is far more advanced than what Metastorm BPM offers, as are the form design options in general, however I notice there doesn’t appear to be a way of scripting field change events at this time. OnLoad and OnSubmit events can be scripted.
    The advantage of Bonita forms is that you can create several of them against one process step, allowing the user to step forward and backward through the forms in a wizard format. Your forms can utilise an HTML template for a standard look and feel, as can the Bonita user experience interface.
    My only gripes with the forms are the lack of any grid control for viewing data and that you are designing to an HTML table, similar to Cordys Process Factory.
  • Scripting

    Groovy is the Java based scripting language that Bonita Open Solution uses, and it provides Bonita with 600+ built in functions. Each function has its return type (void, string, object, list etc.) displayed in the Groovy expression builder, and each function is complemented with a description of the function and its input parameters.  You can also run form submission logic using groovy.
    Groovy scripts can be saved and re-used later.
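    As a flavour of what these expressions look like, here is a trivial hypothetical example using the ${variable} syntax described earlier (requestedAmount is an illustrative process variable name, not one Bonita provides):

    // Hypothetical Groovy expression evaluating a numeric process variable
    ${requestedAmount} > 500 ? "Needs approval" : "Auto approved"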
  • Summary

    All in all I find that Bonita Open Solution is a very good open source product that could handle plenty of small to mid sized business processes. The email-like client interface for processing and managing your cases is very nice, and the addition of simple case stats is a nice touch. The product clearly lacks certain features for more serious BPM development and there is still work to be done around the available form controls. My hope for the product is that its process designer picks up more of the BPMN 2.0 components (intermediate events, more task types (script, service etc.) and data objects, for example), as it is still very limited; however this is definitely a product I’ll be keeping my eye on.

Metastorm BPM : Process Design Considerations


I was asked this week by a colleague how I would go about sitting down and designing a Metastorm process from scratch. They were intrigued to know, before getting into the details of how to use Metastorm, what considerations should be taken into account to ensure the process had the best possible design. A good question, and one I’d not really thought about, having my own ‘ways’ of approaching each different Metastorm project.

Normally you would utilize process observations, documentation (requirements/specifications) and notation models to identify what the process should look like, how it should operate, and who (users) and what (systems) it should interact with. This results in a ‘rough’ architectural outline that includes how many processes to create, what hierarchical interaction each process will have, and whether flags or sub procedures will be used to link the processes. A quick architectural diagram helps at this point and contributes to any technical specifications being drawn up.

Thinking a little more about the question, I drew up a post-it of areas that should be given some thought prior to process design time, and have translated my scribbles into the following key considerations to ponder before dragging any actions into the Metastorm process:

[Note – For the purposes of Metastorm BPM 7.6 and 9.0 cross over, I will refer to v7 ‘maps’ as ‘processes’ (the v9 term) in this article. Also (as a basic reminder) note that a Metastorm folder represents a process instance. That process instance will be moved through the process by user and system interactions, obtaining process instance specific data along the way that can be utilized by the process.]

1) Identify the tasks to be executed in order to complete the process, and prioritize those tasks. Modelling notations (UML/BPMN etc.) can assist in identifying these. Ask yourself what states the process can be in during its life (think about what the folder / process instance will wait for, i.e. awaiting approval, awaiting user input). Stages represent these states. Also ask yourself what activities will be performed by process participants (users) and systems. These are the process actions.

2) Define how the process should start and who should initialize the process. Processes can have different initializing actions, of types flagged and user. Define which shall be used for each process. For multiple start actions, remember to name them differently.

3) Identify process participants by defining roles. Process participants will interact with the process as defined by their role membership. Define what functional activities process participants can take to further the process and create a role that reflects this. Systems also participate in the process, but generally interact at a system stage and so do not require roles.

4) Determine data requirements. Processes can be data driven as well as participant driven, so it’s important to define what data entities the process will work with. Does it need customer lists, pricing information or user security information, and if so, in what form will the process obtain this data? Web services, directory servers and databases, amongst other technologies, can be utilized as part of the process. Identify what is required and when. These data driven entities are called Business Objects, as they represent real world entities generated or consumed by the process.

5) Identify re-use. Parts of a process may operate the same way but appear at different times in the process flow. It is useful to identify these early on, prior to design time, so that sub processes can be determined. If, for example, you have some data collection, outbound email preparation and sending that may occur several times in one process, you will want to break this out into a sub process. Sub processes are the child processes of a parent process. As the parent process enters a sub process stage, the process instance (folder) is dropped into the sub process so that the emailing can take place. Upon send, the sub process can enter a virtual archive stage and return the process instance back to the parent map for continued execution. Sub maps can be nested.

6) Consider transactional roll back. If something errors during an action or a stage, Metastorm will roll back the process instance to the last successfully ‘completed’ stage. If, for example, a system stage’s on start event causes an error, Metastorm will roll back the process instance to the previous successful stage. The same goes for the preceding action (as the process instance will never stall at an action, only ever at a state/stage). A perfect illustration of how roll back can affect your process is the use of a flagged action as the first process action. If you add logic to the flag completion event, you risk rolling back the creation of the process instance altogether: if an error occurs during processing of this on completion event, Metastorm will roll back the action and subsequently the creation of the folder. Place nothing in your flagged action and dedicate a system stage on completion event for this logic.