Fiat Lux – the rise of the real time enterprise

Extreme connectivity is coming to the enterprise

This year at Process Mining Camp 2013, during a workshop I led, the attendees discussed the access, usage, transfer and reuse of knowledge. Part of the discussion was set in the context of IT development and implementation. Some said that not every bit of information flowing among the development team deserved attention. The question was not where the information was stored or how it was transmitted; what determined the importance of information was its relevance to the duties in the context of the project. This meant that the development team was constantly filtering and analysing pieces of code, patterns and working solutions, seeking parallel developments and autonomously retrieving information from linked projects to perform impact analysis (something I also learned in a previous challenge, when the Master Project Officer told me that things like linked work packages and project dependencies simply do not exist as we conceive them).

Real time enterprise

Two months ago I had a chat with a person who will be one of the speakers at the forthcoming edition of the BPM Conference Portugal. He is responsible for everything related to customer support. That means his team is responsible for updating and changing IT to support constantly evolving business requirements, and particularly for monitoring multiple process instances like complaints, connection requests and so on. They rely on a divided brain to make it happen: enterprise systems to enter the addictive loop, and out-of-the-box tools like GitHub to share information among team members and make changes happen. Change in this context (telecom) means you need to deliver new services every month and prevent customer churn in real time, without waiting for next month's outdated business intelligence reports. The challenge here is twofold: monitoring and intervening in operations, and supporting business change.

The externalization of knowledge contributes to knowledge diffusion

One month ago I was talking with a manager of a utilities company who also confirmed the need for operational online addiction. Regulation is tightening, and the company does not want to throw money out of the window because it missed SLAs for answering a customer complaint, for starting to bill energy earlier, or for finding hidden bottlenecks in operations. At this company, some of the real-time inspiration came from Lean discussion forums and from the people responsible for monitoring energy distribution and the status of the infrastructure in real time using SCADA systems. These people are used to controlling energy flow, transmission line disruptions and maintenance operations in continuous mode. They "do not understand" why their colleagues do not embrace a similar attitude, and as such they played an important role in translating the abstract knowledge necessary to embrace the always-on journey into proper codification that could be used by their peers.

The role of Enterprise Architecture

Enterprise Architecture, under a systems thinking approach, can make a difference when designing the transformational step of entering real-time mode:

  • What are the horizontal barriers to be monitored? For each process domain being monitored, it is necessary to identify the stakeholders that touch the process, as they will be one of the main sources of variety. Note that the idea is not to figure out in the process design whether a particular stakeholder does something and point the measurement channels at those spots. It is to identify the stakeholders and absorb the information flowing in the context of process execution (which is a huge difference).
  • What are the vertical barriers to be monitored? Which processes in the value chain are related, so that a truly end-to-end vision must be set up? For example, in a utilities environment, shouldn't a Complaint Handling process be connected with Meter Reading, with Billing and with Churn Management?
  • The algedonic channel – what cybernetics defines as the channel whose objective is to transmit alert signals concerning any event or circumstance that could put the organization in a critical situation: failure to deliver services, a hike in customer churn, a flop in revenue or sales, and so on. This aspect is often neglected by managers, because they rely on the organizational structure to communicate alerts and supervening facts, and sometimes it is too late to intervene.
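To make the algedonic channel idea concrete, here is a minimal sketch in Python of a monitor that bypasses the normal reporting hierarchy and escalates the moment a critical metric crosses its limit. All metric names and thresholds are my own illustrative assumptions, not part of any real system.

```python
# Minimal sketch of an algedonic channel: escalate immediately when a
# critical metric crosses its threshold, instead of waiting for the
# monthly report. Metric names and limits are illustrative assumptions.

THRESHOLDS = {
    "customer_churn_rate": 0.05,   # alert above 5% monthly churn
    "failed_deliveries": 10,       # alert above 10 failed deliveries/day
}

def algedonic_check(metric, value, escalate):
    """Compare a live metric against its threshold and escalate at once."""
    limit = THRESHOLDS.get(metric)
    if limit is not None and value > limit:
        escalate(f"ALERT: {metric} at {value}, threshold {limit}")
        return True
    return False

alerts = []
algedonic_check("customer_churn_rate", 0.08, alerts.append)
algedonic_check("failed_deliveries", 3, alerts.append)
print(alerts)  # only the churn metric crossed its threshold
```

The point of the design is that `escalate` is injected: the alert goes straight to whoever must act, not up the organizational chart.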

We are on the brink of another major change

There has been a lot of discussion since last year about the concept of the "intelligent BPMS". For me, intelligent means the system is able to "think" and "reason" by itself without falling into the cognitive illusion abyss (which leads to the question: what should the system "think" about?). Hence, there is no question that enterprises that want to be customer-centric, anticipate errors and predict churn must equip themselves with technology to monitor the business continuously and in real time (like in the movie The Matrix, when people watched the code trickle down the screens); whether that counts as "intelligent" I leave to the analysts. More important than that, companies need highly skilled people, team workers used to working under agile principles, to make it happen. Without that, it is difficult to make the change. Are you, as a manager, up to the job?


Where is the Order?

This post is based on the PRISM program scandal, on Adam Dean's writing style in a series of process design training sessions I ran some weeks ago, and on the script of Batman's The Dark Knight.

John is sitting at a table in a situation room. The company lost track of an important order, and the customer is asking for a huge compensation because the assembly line has stopped and is waiting for the parts to continue assembling the cars.


– Has he said anything yet?


MARY shakes her head. JOHN leaves the room, walks to the demoted process manager's office and pushes through a door…

The radio is playing music in the background:

“And the faster the world spins

The shorter the lights will glow”


The song is interrupted by a burst of fast keystrokes.


“Most of what you see my dear is purely for show

Because not everything that goes around comes back around you know”


– Evening, John, or should I call you the next board member with janitor competences?


– The order did not arrive at the customer's premises, despite the millions we spent sending it through an express carrier.



– Of course not.


– What have you done with it?




– Me? I was right here. Who did you leave it with? Your people? Assuming, of course, that they are your people and not Maroni's…

(looking at the screen, trying to show he is very busy doing something)

– Does it depress you, John, to know how alone you are?


John cannot stop thinking that he probably should not have left the situation room, where the important people are.



– Does it make you feel responsible for the current predicament of the customer's order?



– Where is it?


– What time is it?



– What difference does that make?



– Depending on the time, it might be in one spot.


– Or several. You know we cannot trust this ultimate business intelligence system with forecasting and big data analytics. You know, the problem was that when the system was set up, the guys from IT only appeared at the end, collected some information and made some UML diagrams. I said it was the wrong approach, but you know, that's no longer under my control.


– If we’re going to play games, I’m going to need a cup of coffee.



– The intimidation routine?



– Not exactly.

John steps out, returns to the situation room and debriefs his colleagues. The new process manager, Mary, leaves the room and goes to talk with the former one. The board promised her a huge compensation in company shares if she could put the process back on track.



– Listen, stop playing games. You designed the process in order to take full control of it.
– How can we track the order?
– Who shipped the order?
– To what address was the order sent?
– We already analyzed the lines of the invoice and found nothing.
– Who is responsible for the configuration of the system?
– Why are the timestamps of the warehouse dispatch different from the express carrier's, when it acknowledged the order and put it inside the truck?
– Why is the maps application not working?



– Never start with so many questions; I get fuzzy. Did you not invest in a big data solution?



– Shut up! You put this company in jeopardy!







– You wanted me. Here I am.




– I wanted to see what you’d do. And you didn’t disappoint…


– You lost five orders. You are responsible for paying compensations of around 35 million € each.

– Then you let Susan take your place.

– Even to a guy like me… that’s cold!

– When you leave this room, Susan is going to take control.



– Where’s the order?



– Those center of process excellence fools want you gone so they can get back to the way things were.

– But I know the truth. There’s no going back. You’ve changed things.

– Forever.



– Then why do you hate me?

The demoted process manager starts LAUGHING. After a moment he’s laughing so hard it sounds like SOBBING.



– Hate you? I don’t hate you.

– What would I do without you? Go back to ripping off the managers that don't care about company performance? No. You… (points) You. Complete. Me.



– You're garbage who destroys processes for no purpose. You just want to be recognized as a process thinker, but no one understands your point of view, and worse than that, the company does not see its application.



– Don't talk like one of them, you're not!

– Even if you'd like to be. To them you're a freak like me… they just need you right now.

He regards Mary with something approaching pity.



– But as soon as they don't, they'll cast you out like a leper.

The demoted process manager looks into Mary's eyes. Searching.



– Their morals, their code… their BPMN diagrams, their systems that never worked, full of integration patches because they never applied real enterprise architecture. It's a bad joke.

– Dropped at the first sign of trouble. They’re only as good as the world allows them to be. You’ll see, I’ll show you… when the chips are down, these civilized people…

– They'll eat each other. They just want the share price to rise and their bonuses to increase; they don't care whether they could pay a little more to the blue-collar workers, empower them, let them make better decisions. They are just interested in buying the best technology money can buy, because they were taught to do that, without reflecting on whether it is really necessary.

– They have process silo thinking. They don’t see processes interconnected.

– They don’t see exceptions happening.

– They see social business as having a corporate Facebook account.

– Didn't they approve a system tailored to analyze and process big data? How is it possible that the company cannot track the order? Didn't you read the reports about a secret program from the US government called PRISM that can track you if you have information that could endanger US security? How can that program work so well while your big data system cannot figure out where the order is?

– See, I’m not a monster… I’m just ahead of the curve.


Expressionless, Mary leaves the demoted process manager's cubicle. With no answers.


The radio keeps playing music:

“Too much luck
Too much conceit
It’s too much too much passion, information
Too much selfish
Too much fake
Too much computer
Too much to take
It’s too much”

Process Mining Camp 2013 – Expedition on Social Mining

I hosted a workshop at the Process Mining Camp 2013 about Social Mining. Here are the results of the discussion with my peers and fellow miners.

Kick-start to the workshop

We have been building our way of working on the increased processing capacity of information systems, which has created the illusion that the world was more stable, predictable and standardized.

However, the pace of change in the economy has been accelerating, fuelled by a nexus of converging forces (social, mobile, cloud and information) that is building upon and transforming user behaviour, creating new business opportunities that let people do extraordinary things and automating repetitive tasks and decision making at large. This implies that our vision of the future has to change.

Any system, any process, must be able to handle the complexity of its elements and be active and adaptive in order to survive. This implies that any attempt to limit the existing variety will cause the system, the process, the organization to lose the ability to adapt. This is the reason why business processes are no longer normalized and standardized, and are getting more difficult to analyze.

For sure there are research methods to tackle this kind of challenge, for example Simplifying Discovered Process Models by Dirk Fahland and Wil M.P. van der Aalst, but the thing is that variation and complexity cannot be predicted; such methods only work in predefined or controlled settings, while organizations live in a world where interdependence, self-organization and emergence demand agility, adaptability and flexibility.

It is a world composed of networks, reflected in the design of collaborative networked organizations.

These networked configurations shape the composition of complex systems, from cells to societies and enterprises (associations of individuals, technology and products). In those complex systems, characteristics of emergence, order and self-organization develop a set of interdependent network actions that are not visible in the individual parts. This is why methods defined to analyze a domain fail if the domain and its parts change, which is what happens most of the time, since we are living in a world of variety.

The facts that are changing everything

There are a handful of facts that are changing the way we work; basically, there are two domains that are putting huge pressure on enterprises.

The technology factor

As communication costs drop and speeds increase, cost will no longer be a consideration in many parts of the world. As the cost of communication drops, the shift will be towards applications. Combined with increased computing capacity and speed, we will be able to engage with, and have access to, information in real time. The cloud will free organisations from fixed and limited availability and processing power. The way we are used to working will dramatically change.

The social factor

On the social side, in leading GDP countries we are facing a displacement of "assembly line" people by aspiring ones; this is because work can be transferred to those who can do the same thing for less than half the cost. This shift occurs in industry sectors from manufacturing to services. But in the near future tiny tasks will be fully automated and, unfortunately, those brave workers will be obliterated unless there are new work opportunities or chances to execute more complex work. People will have to adapt and start pushing their capabilities to new boundaries.

This shift also has profound implications for the type of people companies are sourcing in the labor market. As leading companies expand and operations are outsourced or transferred to low-wage economies, the future worker profile will be aimed at highly skilled persons capable of embracing business dynamics.

The convergence of three important process dimensions

The complexity we are living with implies that we have to look at, and align, dimensions we were not used to looking at before, in order to tackle the factors that are transforming the way processes are executed. The control flow perspective does not provide much insight, because no two instances are alike and because, under social collaboration paradigms, the process is the conversation or the interaction, and there are infinite ways to carry that out. The time perspective is important and will continue to be, but it is definitely not the best way to understand behaviour.

In fact, today we have immense analytical capabilities, but how do we address a fundamental challenge for organizations: how do people socialize? How do they work? Does the configuration make sense? Is it too centralized, always depending on the same person and the same organizational units, or is it open, so that anyone can be invited to join? Is the knowledge applied abstract, i.e. can people apply recurrent solutions to daily problems in a multitude of situations, or do they only apply customized solutions (concrete knowledge)? Is knowledge reused? Does information flow naturally, or are processes so structured and best-practice oriented that they are turning organizations into fragile systems, unable to change, react to unpredictable facts and adapt?

This was the background of the workshop.

The quest

Our society is constructed around flows. This construction also applies inside organizations and among their stakeholders. This is what we are made of.

Flows are the sequences of interaction between physically disjointed positions held by social actors that belong to a particular social network.

Dominant social structures are those arrangements of organizations whose internal logic plays a strategic role in shaping social practices.

Hence, the trick is to be able to align the network structure with the process type being executed, and to evolve the network type according to circumstances. In other words, you need to introduce and maintain an adaptive social approach. But that is not enough. You can have the best social network configuration while knowledge is poorly used, or you may let people run free when they are supposed to reuse solutions over and over again.

Social dimension – social networks configuration

Once the process is transformed into something that is the conversation, we need to understand how people engage; in other words, what the network configuration is. It is somewhat accepted that network patterns can indicate the way people work and share information.

For a reference on social network patterns and social network discovery techniques, you can learn more in this post.

Challenge #1
Centrality is used to measure degree distribution. But all measures are wrong and some are useful.
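As a concrete illustration of the centrality point, here is a minimal sketch that computes normalized degree centrality directly from a who-hands-work-to-whom edge list, with no graph library. The actor names and edges are made up for illustration.

```python
# Sketch: normalized degree centrality from a handover-of-work edge list.
# Actors and edges are illustrative assumptions, not real data.
from collections import Counter

edges = [
    ("ana", "bob"), ("ana", "carl"), ("bob", "carl"),
    ("carl", "dana"), ("ana", "dana"),
]

degree = Counter()
for u, v in edges:          # undirected: each edge counts for both ends
    degree[u] += 1
    degree[v] += 1

n = len(degree)
# Normalized degree centrality: degree / (n - 1), so 1.0 means
# "connected to everyone else".
centrality = {actor: d / (n - 1) for actor, d in degree.items()}
print(centrality)
```

Degree centrality is only one lens ("all measures are wrong and some are useful"): it spots the busiest hubs but says nothing about brokers between groups, for which betweenness or similar measures would be needed.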

The discussion resulted in the following points:

Information (logs) about social interaction, spread across e-mail and social tools like activity streams, messaging and video chat, which could help discover the way socialization occurs, is difficult to obtain because:

  • The effort to obtain this information can be infinite, because it is recorded across multiple platforms and most of the records do not share a common key;
  • Some information is inaccessible if it is recorded inside systems for which the company (the entity interested in understanding what is happening) is not responsible for administration (even if it is administered indirectly);

Privacy concerns. There is a clear division across different parts of the globe in how information is considered private. For example, in most European countries, at large, data like e-mail stored on the employer's devices is still personal, even if it is corporate e-mail. This challenge is amplified if data from a corporate source is stored in personal e-mail or on personal devices.

Building the complete log can be overwhelming if social interaction is spread across multiple systems. Without entering into technical details, it is much more difficult than joining different database tables.
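To show why this is harder than a database join, here is a minimal sketch of stitching records from two systems that share no common key, by matching on actor and timestamp proximity. The field names, the sample records and the 5-minute window are all illustrative assumptions; real logs need far more careful matching rules.

```python
# Sketch: stitching interaction records from two systems with no common
# key, by pairing records whose actor matches and whose timestamps are
# close. All field names, records and the window are assumptions.
from datetime import datetime, timedelta

email_log = [
    {"actor": "ana", "ts": datetime(2013, 6, 1, 9, 0), "subject": "order 42"},
]
chat_log = [
    {"actor": "ana", "ts": datetime(2013, 6, 1, 9, 3), "text": "re: order 42"},
]

WINDOW = timedelta(minutes=5)  # assumed proximity window

def stitch(a_records, b_records):
    """Pair records from two logs when the actor matches and the
    timestamps fall within the proximity window."""
    pairs = []
    for a in a_records:
        for b in b_records:
            if a["actor"] == b["actor"] and abs(a["ts"] - b["ts"]) <= WINDOW:
                pairs.append((a, b))
    return pairs

matched = stitch(email_log, chat_log)
print(len(matched))  # 1 matched pair
```

Unlike a join on a shared key, this matching is fuzzy and probabilistic by nature: widen the window and you get false pairs, narrow it and you lose real ones, which is exactly why building the complete log is overwhelming.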

It would be more valuable if the social dimension could be embedded in the control flow, rather than being analyzed separately. If the process is the flow and the process is social, the visualization should be integrated. I consider this point key for developers.

Knowledge types – What type of knowledge exists and how it’s applied

The healthcare industry has always been characterized by the involvement of multiple professionals in the diagnosis and treatment of patients, where information sharing plays a key role. Health professionals (as well as professionals from other industries) tend to work around problems, addressing the immediate needs of patients rather than resolving ambiguities. As a result, people face "the same issue, every day, indefinitely," which results in inefficiency. In other words, people like to design the same solutions over and over. How can you overcome this challenge, and what can be done so that knowledge use becomes more abstract and knowledge itself can evolve within the organization?

Knowledge consumption should be aligned with the type of process design. For example, a repetitive task is usually automated, turning it into explicit knowledge use, documented and understood by all. There is often a temptation to simplify the existing complexity, automating and standardizing how to proceed to the point of "crystallizing" only a small part of the information that people have to process, making it difficult to cope with changing execution conditions and thus leaving no room for the tacit dimension.

Knowledge, then, is not just a twin flavor (explicit or tacit); it is more than that.

Challenge #2
How to discover and measure knowledge type?

There can be different types across parts of the process and measuring is not automatic.

The discussion resulted in the following points:

People would like to spot the indispensables: the ones that make the difference when a solution is built. That could be measured by how many people in the company "like", use and apply the knowledge that was created.

Many think that the problem with knowledge discovery and usage is related to the tools used to store and share it (portals, wikis and the like). Some examples were provided in an IT context: a patch or a pattern that was sent around the development team was considered handy, because everyone was involved in working on the solution and as such the knowledge got codified, but big knowledge repositories are not considered useful.

The lynchpins, the indispensables, don't like to codify their knowledge, because it makes them… dispensable (I tend to agree, but there are generations that live under the sharing paradigm and make others contribute to the company's success).

An interesting side comment was presented:

Automation of knowledge finding is highly requested. Even with a Swiss army knife of systems to manage knowledge, it is hard to find.

Discovering process types

Processes no longer come in a single flavor. Today it is possible to find very pre-defined types, but also a blend of every available type across multiple process instances.

Today processes are blended. You can handle a claim with a customer in a loose manner and, in the end, pay a compensation using a by-the-book, best-practice, "every day the same thing" procedure.

Challenge #3

How to understand what process type we are looking at?

The structured ones are easy to find, but ad-hoc and adaptive ones pose extra challenges, particularly if parts of them are blended with structured ones.

The discussion resulted in the following point:

More important than having super algorithms to spot patterns and discover process types is, at this point in time, having access to recorded data to actually let people think.

Process Mining Camp 2013

For the first time I attended the Process Mining Camp, where I also hosted an exploratory session on the theme of Social Mining, which I will cover in a specific post.

Process Mining Camp is organized by Fluxicon, Process Sphere's partner, and is dedicated to bringing a mix of something old (the concepts, important for those putting their hands on process mining for the first time), case studies (evidence that process mining can be used for multiple challenges) and something new (future trends that may be just around the corner in the next couple of years).

If there is one event you should attend to actually learn about process mining and how it is making a difference, helping organizations to change and adapt faster, this is the one to save the date for.

This is my take on the event.

Anne Rozinat – Opening Keynote

Anne briefly presented some numbers about the process mining community, which includes end users and researchers. What is evident is that the figures are growing, and the field should be taken more seriously by industry analysts (although Gartner has already put it on the radar).

Then she jumped to an important concept: taking advantage of maps to visualize and interpret data. She introduced the work of Denis Wood, which was brand new to me; I am more familiar with the concepts of Edward Tufte and my countryman Manuel Lima. Denis made a very important contribution to the way data is visualized, making it meaningful so that anyone is able to interpret it. In his early days as a geographer, he designed maps around traffic, power grids, social relations, mail delivery, energy consumption and more. The point was how we can get information to make decisions from data, taking into consideration how difficult the data is to analyze, particularly in an era where processes are much harder to analyze due to factors like socialization, exception handling and real-time adaptation. Those who grimly continue to defend that there is only one way to draw a process map, say using BPMN notation, are still in denial that that notation, or any other, (sometimes) lacks other important dimensions, for example distance or time. My takeaway from Anne's presentation is that there is no single best process model notation, because it depends on the analysis perspective. This matters because it is clear we need to deal with variety. Sometimes we need to create other ways to discover and understand the process: if we are exploring a social network, how can such a process be modeled with BPMN-like models, or how can we identify the dogs that bark and probably bite (a hilarious Denis Wood concept) when the mailman is about to deliver the mail?

Tijn van der Heijden – Rabobank case study

The case study presented was about supplier invoice payment. There was some real value here: the real process map showed that when suppliers send invoices for the first time, the process gets fuzzy and does not work as it was designed. This is an important conclusion, because most people think it is easy to analyze structured processes, especially when they are mostly automated and the work being executed mirrors the automation. Well, it is not.

Lalit Wangikar – Process Mining for operational performance improvement case study

Lalit travelled to the other extreme, around ITIL process types, but in this case the customer stood up and said that their processes are unique. "We have a job shop approach. This is not an assembly line, thus these are difficult to analyze," he said. The approach was to collect a huge amount of data from e-mail and phone logs, which is easier to implement in the USA than in Europe, where legislation forbids the employer from accessing information stored on a computer, even if the computer is the employer's property.

Philipp Horn – Purchasing Process in Volkswagen case study

I liked the point about the importance of procurement at the Volkswagen group: 60% of the money flows are related to supplier operations, with lots of differentiation among car brands, but also with a bunch of shared components, so synergies occur in the process.

One important practice presented was holding a workshop for buy-in and acquaintance with the process mining approach before the project starts (where have I heard this story before…).

Another point related to how to get at the root causes of the problems or bottlenecks in the process. In their case, the approach was based more on serendipity, which proved to work best, because sometimes people disperse and present very different ideas, making it difficult to structure them.

It was also pointed out, as far as I understood, that in some cases they discovered the wrong participant was doing something in the process. This is critical in days when we are overwhelmed with information to process and some of it makes us waste time and be unproductive. Since this kind of challenge exists in big companies, where it is virtually impossible to assess everybody and managers want to make the most of their human resources, the realignment of a workforce can be done using process mining.

Michael Cunningham – Suncorp insurance company case study

Michael talked about a project at Suncorp in Australia. The process that was improved was related to managing incident claims, with 600,000 claims a year, or 1 claim per minute.

I considered that the most important part of the presentation, besides the results, was the project's outcomes regarding people vs. culture (like an ad-hoc maturity model). He pointed out 3 different stages:

  • Understanding – show, don't tell, but it's real.
  • Competent – reality vs. preconceptions:
    • That's not right;
    • I don't trust it.
  • Transformed – process models are just an idealized high-level view.

I find this approach very practical and workable, because to make the case for process mining you need to reach the transformed state. As someone who has experienced this kind of reaction from customers, I assure you that you don't need to buy a maturity model from an analyst firm to figure out whether your practice has reached the right maturity level. I would only add a first stage: Unbeliever – "We don't have data."

Walter Vanherle – Case study on Security services  

Process mining is really beautiful, but from the perspective of managers' buy-in, do not present them with what they already know; show them what they don't know and probably won't like looking at, because that is where the improvement opportunities come from.

Walter pointed out an important technical aspect about data quality: the timestamps. The stamps from the system used to support the service were different from those of the mobile phones used by the security personnel, where data was being recorded, making it difficult to be sure the SLAs were being met, along with everything related to contract management.
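One simple way to cope with this kind of timestamp mismatch is to estimate the clock offset between the two recording systems from event pairs that should coincide, and correct for it before checking SLAs. Here is a minimal Python sketch of that idea; the sample timestamps are invented for illustration, and this was not necessarily the approach used in the case study.

```python
# Sketch: estimating the clock offset between two recording systems from
# event pairs that should represent the same moment (e.g. the same event
# logged by the back-office system and by a guard's phone).
# The sample timestamps are illustrative assumptions.
from datetime import datetime
from statistics import median

pairs = [  # (system A timestamp, system B timestamp) for the same event
    (datetime(2013, 6, 1, 9, 0, 0),  datetime(2013, 6, 1, 9, 2, 5)),
    (datetime(2013, 6, 1, 10, 0, 0), datetime(2013, 6, 1, 10, 1, 58)),
    (datetime(2013, 6, 1, 11, 0, 0), datetime(2013, 6, 1, 11, 2, 2)),
]

# The median of the pairwise differences is robust to a few outliers.
offsets = [(b - a).total_seconds() for a, b in pairs]
skew = median(offsets)
print(skew)  # 122.0 seconds: subtract this from system B before SLA checks
```

A constant offset like this is the easy case; real clocks can also drift over time, which would call for fitting a trend rather than a single median.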

Youri Soons – Case study Auditdienst Rijk

Youri presented the application of process mining from an auditing perspective. The Dutch National Auditing Service monitors the annual reports of all Dutch ministries and provides assurance on the financial statements included in them.

Apart from the case study, he made a point through a live demo of showing how it is possible to be 100% sure that segregation of duties was in place and the controls existed. I am very keen on compliance; as someone who was responsible for audits in the past, I always had the impression that something could be missing, given the sampling principle. Youri clearly showed how it is possible to quickly discover which instances (out of thousands) did not comply with the segregation of duties.
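The core of such a full-population check is simple once you have the event log: instead of sampling, test every case for the same actor performing both restricted activities. Here is a minimal sketch; the activity names, actors and cases are illustrative assumptions, not the tool or data from the demo.

```python
# Sketch: a full-population segregation-of-duties check over an event
# log, flagging every case where one person both created and approved a
# payment. Activity names, actors and cases are illustrative.
event_log = [
    {"case": "c1", "activity": "create_payment",  "actor": "ana"},
    {"case": "c1", "activity": "approve_payment", "actor": "bob"},
    {"case": "c2", "activity": "create_payment",  "actor": "carl"},
    {"case": "c2", "activity": "approve_payment", "actor": "carl"},
]

def sod_violations(log, first="create_payment", second="approve_payment"):
    """Return the case ids where the same actor performed both
    restricted activities."""
    by_case = {}
    for e in log:
        by_case.setdefault(e["case"], {})[e["activity"]] = e["actor"]
    return [case for case, acts in by_case.items()
            if acts.get(first) is not None
            and acts.get(first) == acts.get(second)]

print(sod_violations(event_log))  # ['c2']
```

Because it scans every instance rather than a sample, nothing can be "missing" in the sense the sampling principle allows; the only remaining risk is the completeness and quality of the log itself.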

Wil van der Aalst – Closing Keynote

Wil went through a historical journey around process mining. Since I am addicted to the theme, I will fast-forward through most of the presentation (Wil actually joked in the beginning that giving a presentation on the history of process mining could suggest that his work on the discipline was over, when he still has much more to give).

One interesting point related to the fact that data, models and systems coexist and are difficult to separate (for this purpose, go study the viable system concept), and that is the reason there is so much confusion in BPM when people try to break apart the pieces that make up the BPM philosophy.

Hence, for me, the key points were mostly about the future challenges around the big data effect: every two years everything doubles, meaning that the challenge is turning event data into real value, since the question is how we know that the data we are using covers only a fraction of all possible behavior (think for a minute about data related to social interaction among end users, dispersed in a multitude of systems, some or most of them out of the control of the agent analyzing the process). One of the dangers is to generalize manually.

Again, on the big data effect, and before jumping to the criticism that this is the reason process mining can lose its edge, since it does not have the possibility to sit on the flood of data where the action occurs, I would conclude that it is better to be exposed to part of the reality than to live totally in ignorance and never have a chance to transform your organization.

Social Network Analysis – part two

In part one, I introduced the importance of understanding social networks: as the socialization of interactions becomes a new working habit, classic control-flow analysis no longer provides information about how work is done.

In this post, I will explore important points to look for when performing Social Network Analysis (SNA).

On properties:

Social networks have typically the following properties:

  • Emergence: agents that belong to the network interact in an apparently random way. This feeling is amplified when there are many agents and/or so many interactions that it becomes difficult to extract patterns. Emergence is all about separating the signal from the noise and making those patterns emerge.
  • Adaptation: enterprises and communities exist confined in a particular environment, and when that environment changes, agents react. The environment can be external: interaction with customers, suppliers and government agencies; influences like the publication of a new law or regulation; or competitor movements as they enter new markets or create new products or services. The environment can also be internal, related to the way agents interact, which is ultimately associated with how business processes were designed, how IT solutions were deployed, culture, hierarchy configuration and formal recognition of authority, to give some examples.
  • Variety: Ashby, one of the fathers of cybernetics, defined the Law of Requisite Variety: only variety can absorb variety, which defines the minimum number of states necessary for a controller to control a system of a given number of states. For an organisation to be viable, it must be capable of coping with the variety (complexity) of the environment in which it operates. Managing complexity is the essence of a manager’s activity. Controlling a situation means being able to deal with its complexity, that is, its variety [1].
  • Connectivity: the way agents are connected, and how those connections are aligned with the process type that was designed or is being executed and with the type of knowledge necessary to support operations (more about this alignment here). The existing connections will unveil the emergent patterns that are necessary to identify and understand behaviour from a social point of view (tight or loose coupling between agents or groups of agents).

On network types:
Most of the time, when people refer to social networks they are expressing their beliefs about community networks like Facebook, or subject-expert groups like enterprise wikis. Although those are important network types, they do not express the nature of organization operations, because they do not record communication acts expressed in social activity. Hence I will concentrate only on Coordination Networks.

A Coordination Network is a network formed by agents related to each other by recorded coordination acts.

Coordination acts are, for example, the interchange of emails, tasks as designed in enterprise systems, or activity streams, to give some examples. The above definition is an adaptation of [2], because [2] does not include the importance of the coordination act, which is related to the nature of the work, only the connection itself. The former is the important dimension related to business process management and will guide the remaining content.

A coordination act is meant to be, as defined (adapted) in [3], an act performed by one agent, directed at another agent, that contains an intention (request, promise, question, assertion) and a proposition (something that is or could be the case in the social world). In the intention, the agent proclaims its social attitude with respect to the proposition. In the proposition, the agent proclaims the fact, and the associated time, the intention is about. Recorded by the system, these acts support the definition of coordination networks, whose configuration can ultimately be discovered, and whose patterns emerge, using discovery techniques such as process mining.
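To make the idea concrete, here is a minimal sketch of how a log of recorded coordination acts could be turned into a coordination network, assuming Python and the networkx library. The agents, field names and acts are hypothetical, invented only for illustration; a real log would come from email servers, enterprise systems or activity streams.

```python
import networkx as nx

# Hypothetical log of recorded coordination acts: each entry names the
# performing agent, the addressed agent, the intention and a timestamp.
acts = [
    {"from": "Alice", "to": "Bob",   "intention": "request",   "time": "2013-06-01T09:00"},
    {"from": "Bob",   "to": "Alice", "intention": "promise",   "time": "2013-06-01T09:05"},
    {"from": "Bob",   "to": "Carol", "intention": "request",   "time": "2013-06-01T10:00"},
    {"from": "Carol", "to": "Bob",   "intention": "assertion", "time": "2013-06-01T11:30"},
]

# Build a directed coordination network: agents are nodes, and each
# recorded act adds (or reinforces) an edge from performer to addressee.
G = nx.DiGraph()
for act in acts:
    if G.has_edge(act["from"], act["to"]):
        G[act["from"]][act["to"]]["weight"] += 1
    else:
        G.add_edge(act["from"], act["to"], weight=1)

print(sorted(G.nodes()))            # the agents that appear in the log
print(G["Alice"]["Bob"]["weight"])  # how often Alice addressed Bob
```

Keeping the intention and timestamp on each act (rather than only the link) is what preserves the nature-of-work dimension discussed above.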

Coordination Act

On analysis dimensions:

Social network analysis is not new. Actually, the first studies were done in the 1950s. Its refinement centred on:

  • Degree distribution: studies the number of connections around a node of the network;
  • Clustering: finds groups with a connection density larger than average;
  • Community discovery: measures the alignment of connections with the organization hierarchy.
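These three dimensions can be illustrated on a small, invented collaboration network. This is only a sketch, assuming Python and the networkx library; the node names and the choice of greedy modularity maximisation for community discovery are mine, not prescribed by the references cited here.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy collaboration network: two dense groups joined by a single bridge
G = nx.Graph([
    ("A", "B"), ("B", "C"), ("A", "C"),   # group 1: a closed triangle
    ("D", "E"), ("E", "F"), ("D", "F"),   # group 2: another triangle
    ("C", "D"),                           # the only bridge between groups
])

# Degree distribution: number of connections per node
degrees = dict(G.degree())

# Clustering: how densely a node's neighbours are connected to each other
clustering = nx.clustering(G)

# Community discovery: modularity maximisation splits off the two groups
communities = [sorted(c) for c in greedy_modularity_communities(G)]

print(degrees["C"])     # C is inside a triangle and also holds the bridge
print(clustering["A"])  # A's neighbours (B and C) are directly connected
print(communities)      # the two triangles recovered as communities
```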

There is an immense list of techniques to analyse each of the above dimensions, which reflects the high maturity of each method, but the drawback is that SNA performed on each dimension alone can lead managers in the wrong direction. For example, studying community discovery can be important, because communities are collections of individuals linked by some sort of relation. But carrying out the analysis without taking into consideration the content of the conversation (the coordination act) that drove the creation of the link is absolutely wrong, because the conversation is all about the way we humans work. I tend to disagree with practitioners who hold that the conversation does not matter (probably because they were influenced by Gordon Pask), only the network configuration. The conversation (the process) is the matter of study.

Social networks are self-organizing systems, but there are important patterns, emerging from the nature of the coordination acts, that can be identified. Although there are random factors, and the types of patterns presented in most scientific papers are based on graph theory and tend to be very simple compared with reality (maybe one of the reasons they are not taken seriously), a pattern is, as an abstraction, the only way to understand agent behaviour. Pattern recognition is critical to align process type (from structured to unstructured), knowledge domain (simple to chaotic) and network type (central to loosely coupled). In other words, to infer trends and help humans interact better regarding the role they play in the process ecosystem. Having said that, I would like to invoke Stafford Beer on models: “in general we use models in order to learn something about the thing modelled (unless they are just for fun)” [5].

Centrality is used to measure degree distribution. Centrality [2] describes the position of a process participant, business unit, group (a set of process participants or people) or an enterprise system (do not forget the machines) within the context of a social network. Centrality is also related to discovering the key players in social networks.

Some measures that can be used for Centrality are:

  • Degree centrality: counts how many links a node has to the remaining network nodes (nodes with many links are commonly called network stars). Higher degree centrality means a higher probability of receiving information (but does not mean the node drives the information flow inside the network).
  • Betweenness: measures the degree to which a process participant controls the information flow; such participants act as brokers. The higher the value, the more of the information traffic moving between every pair of nodes in the network passes through that node. The importance of betweenness in social network analysis is that if nodes with high values stop processing coordination acts, they will block the information flow.
  • Closeness: measures how far a node is from the other network nodes. Nodes with good closeness scores are able to reach, or be reached by, most of the other nodes in the network; in other words, they are well positioned to receive information early, when it has the most value. The closeness measure must be supported by the time dimension (see the timestamp attribute in the coordination act example above); without it, it is useless.
  • Eigenvector centrality: used to calculate a node’s influence in the network. A higher score means a node can influence (touch) many other important nodes.
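As an illustration, the four measures can be computed side by side on a small, invented network, again assuming Python with the networkx library. The “Hub” and “Broker” names are hypothetical, chosen only to make the contrast between degree and betweenness visible.

```python
import networkx as nx

# A small "star plus chain" network to contrast the four measures
G = nx.Graph([
    ("Hub", "A"), ("Hub", "B"), ("Hub", "C"),  # Hub links to most nodes
    ("C", "Broker"), ("Broker", "D"),          # Broker bridges Hub's group and D
])

degree = nx.degree_centrality(G)            # share of nodes a node links to
betweenness = nx.betweenness_centrality(G)  # share of shortest paths through a node
closeness = nx.closeness_centrality(G)      # inverse of average distance to all others
eigenvector = nx.eigenvector_centrality(G)  # influence via important neighbours

# Hub has the most links, so the highest degree centrality; Broker has
# fewer links but controls the only path to D, so high betweenness.
print(degree["Hub"], degree["Broker"])
print(betweenness["Broker"], betweenness["A"])
```

Note how the network star (Hub) and the broker (Broker) are different nodes: each measure answers a different question about the same network.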

To put it all together, it is worth considering the following self-explanatory picture [6]:

Diverse centrality measures

The challenge:

There is a lot of noise around what is the best measure to perform SNA; as I learned at the User Modelling, Adaptation and Personalization Conference 2011, it is time to put the mathematical equations aside and practise their application.

At this moment in time, there are plenty of ways to measure network centrality, but somehow they neglect that an algorithm may not be appropriate for the type of business process / information system interaction at play. For example, the eigenvector centrality measure is important in unstructured processes, where the path is defined per instance and it is necessary to create a team and involve others as the process progresses. Since SNA does not analyse the process type, only agent relations, applying it to a procure-to-pay process (a highly structured process type) is useless and can damage the interpretation of results, because in that case every agent, every process participant, receives and processes information in basically the same way, to achieve the same outcome, every day. Maybe this is the reason why it is not yet taken more seriously. These days the process is all about social interaction, and it cannot be analysed naively any more, given the dispersion, complexity and interdependence of relationships. The same approach can also be applied to IT requirements elicitation or IT system operation, where it allows us to understand how communities interact in order to support emerging and unique processes under a techno-social systems approach [7].

Social network analysis of IT


[1] – Design and Diagnosis for Sustainable Organizations – José Pérez Ríos – Springer – ISBN 978-3642223174
[2] – Large Scale Structure and Dynamics of Complex Networks – Guido Caldarelli, Alessandro Vespignani – World Scientific Publishing – ISBN 978-9812706645
[3] – Enterprise Ontology – Jan Dietz – Springer – ISBN 3540291695
[4] – Complex Adaptive Systems Modeling – A Multidisciplinary Roadmap – Muaz A. Niazi
[5] – Brain of the Firm – Stafford Beer – John Wiley & Sons – ISBN 047194839-X
[6] – Discovering Sets of Key Players in Social Networks – Daniel Ortiz-Arroyo – Springer 2010
[7] – José L.R. Sousa, Ricardo J. Machado, J.F.F. Mendes – Modeling Organizational Information Systems Using “Complex Networks” Concepts – IEEE Computer Society 2012 – ISBN 978-0-7695-4777


Social Network Analysis – part one – the importance of God on complexity

In the previous article, A Social Platform Definition, I presented a framework describing the elements of such a platform. In the following articles I will expand each of the layers. This one is dedicated to the Search and Analysis component.

Before we dig into the component’s content, I would like to provide some background on its significance.

An important introduction to Social Network Analysis

Last week, I had a meeting with a college headmaster to figure out whether my expectations and values were aligned with the headmaster’s regarding how students will be prepared for the forthcoming decades, taking into consideration the shift we are facing in work patterns, information overload and technology disruption.

The institution is Catholic-oriented and has strong roots in the Catholic Church. Let me say that I do not consider myself Catholic by the book definition, but I am probably more Catholic than others who go to church every day and have no ethics or values. This means I did not choose to evaluate the institution because it is linked with my religious beliefs, but because it is the best institution according to the evaluation program created by the Portuguese Government some years ago.

During the interaction with the headmaster (a religious person), we talked about two vectors I introduced into the conversation: values, and student preparation for the forthcoming decades (how we prepare people to interpret and act on information, and how they improve their reasoning in the knowledge era). When talking about values, the headmaster introduced an amazing characteristic from the human being’s point of view (sorry for the religious background I am bringing into the discussion, but I consider it worthwhile for the sake of clarification about social network analysis).

God created each human as a single and unique entity. There are no equal human beings (not even identical twins), and God created animals and all the other living organisms differently; they belong to a system (let us call it planet Earth, which belongs to another system called the universe) made of diversity in constant balance and adaptation.

This point of view opens up and reinforces the main characteristic of us humans: we belong to families, communities, organizations, arrangements that are part of a super-system called the universe, whose foundations rely on diversity and complexity, not on standardization. Somehow, we keep pushing into an ordered regime because it is much simpler to understand concepts, interactions and our own existence in a controlled manner rather than in a complex one.

The world is complex and we cannot change that, as much as we would like to

Ashby’s law teaches us that any system must match the complexity of its environment in an active and adaptive way to survive and prosper.

In addition, Ashby pointed out another important conclusion: any attempt to limit part of the variety that constitutes the system (because humans consider it noise) will cause the system to lose the capacity to adapt and lead it to implosion. This is reflected in the way some business processes cannot respond to exceptions, because the misguided adaptation consists of fighting against the process model rather than adapting to changing execution conditions. If we consider a different organization layer, like strategy management, think of when external signs are ignored, which can lead the organization to bankruptcy or financial loss.

In the social era we are being misled about what Social Network Analysis is. One of the reasons is semantics: the meaning of Social, broadly understood as connected people; but a social network is much more than that. In very general terms, a social network can be described as a graph whose nodes (vertices) identify the elements of the system, and whose set of connecting links (edges) represents the presence of a relation or interaction among these elements. With such a high level of generality, it is easy to perceive that a wide array of systems can be approached within the framework of network theory [1].

Social networks can be made of organizational units, business units, roles and functions, individuals, data, technology consumption (what part of an IT solution is used), technology interaction (how IT solutions communicate), business processes, traffic, biological and physical systems (these last two categories lend many of their properties to business analysis), etc.

All these networks are self-organizing systems, but important patterns can be identified anywhere in the self-organization. Despite the randomness, patterns are critical for humans to understand how data can be transformed into information, which ultimately is transformed into the knowledge used to understand the behaviour of such networks (see note below).

Self-organization refers to the fact that a system’s structure or organization appears without explicit control or constraints from outside the system. In other words, the organization is intrinsic to the self-organizing system and results from internal constraints or mechanisms, due to local interactions between its components [2] (which can be mapped on top of a business process). These interactions are often indirect, mediated by the environment. The system’s dynamics also modify its environment, and the modifications of the external environment in turn influence the system, but without disturbing the internal mechanisms leading to organization [2] (think, for example, of social interaction with customers that changes the course of the business process, or events during product research and development that alter the product’s characteristics and features). The system evolves dynamically, either in time or space; it can maintain a stable form or can show transient phenomena. In fact, from these interactions, emergent properties appear, transcending the properties of all the individual sub-units of the system [2] (and these emergent properties are the ones that can be understood using a combined set of discovery techniques like process mining, social network analysis and data mining).

I tend to agree with the argument that looking for patterns in a complex landscape in order to predict it is a waste of time, for the reason that in complex domains any attempt to take a snapshot is a distorted version of reality. Nevertheless, the objective of pattern discovery and understanding is not to predict behaviour but to infer trends, or, in Jason Silva’s words, “to understand is to perceive patterns”.

The objective of Social Network Analysis is not to predict outcomes, but to understand, to construct knowledge around emergence, self-organization and adaptation in scenarios like decision making or distributed systems, which are becoming real enterprise challenges as business complexity and interactions grow exponentially.

A huge amount of data is being recorded today (see the image below), which allows us to discover and analyse complex interactions. The argument that the data does not exist and that it cannot be done only fits categories like airport security information, which typically relies on paper.

The Internet of Things – new infographic – Source: Bosch

In part two, I will explore techniques to analyse social networks.

In Fast Company’s article “IBM’s Watson Is Learning Its Way To Saving Lives” it is said that “Watson is poised to change the way human beings make decisions about medicine, finance, and work” […] “They believed Watson could help doctors make diagnoses and, even more important, select treatments”. I argue that IT can help humans process and present data so that humans make better decisions. Last weekend, a family member spent a day at a hospital undergoing tests for what could have been a heart attack. The diagnosis was automatic: they made a one-minute electrocardiogram (considered insufficient by experts), combined with, among others, a measurement of troponin levels (a diagnostic marker for various heart disorders). The results found a correlation, and the family member was told a cardiologist should see him immediately. When the cardiologist looked at the results, he said that there was no correlation at all: the electrocardiogram results were insufficient, and the troponin level was 1/100 of the danger threshold and unlikely to rise suddenly. In the end the diagnosis was wrong, and the cause of the sickness was the nervous system. Evidence like this should make us think, as Einstein said: “Information is not knowledge, the only source of knowledge is experience”; I would add that information cannot be stored.

[1] Preliminaries and Basic Definitions in Network Theory – Guido Caldarelli and Alessandro Vespignani – Large Scale Structure and Dynamics of Complex Networks: From Information Technology to Finance and Natural Science – World Scientific Publishing Company – ISBN 978-9812706645

[2] Self-Organisation: Paradigms and Applications – Giovanna Di Marzo Serugendo, Noria Foukia, Salima Hassas, Anthony Karageorgos, Soraya Kouadri Mostéfaoui, Omer F. Rana, Mihaela Ulieru, Paul Valckenaers, and Chris Van Aart – Engineering Self-Organising Systems – Springer – ISBN – 3-540-21201-9

BPM – a year in review – 2012


year in review 2012

As usual, here is a retrospective of this year’s activity around BPM. The choice of themes is mine, and the order in which they are presented is random; it does not imply any ranking scheme.

Enterprise Architecture: for me, it was clear that this was the year when this particular domain of expertise got lost. Enterprise architecture suffers from two pains. The first is the conceptual division between the American school and the European school: the Americans divide business architecture from IT architecture, while the Europeans do not. Envisioning an enterprise system without seeing the whole is a mistake. The second is that enterprise architecture has relied for years on static, cumbersome, time-consuming methods to “draw” the architecture, and this is not a problem of the framework used. Unfortunately, EA frameworks are not adapting to the needs of enterprises that need to shift gears quicker. This year I did not get any answers from distinguished enterprise architects on how to fix this. Unless someone invents “agile EA”, it looks like companies will deal with some outdated skeletons in their enterprise content systems.

Social: one of the hottest themes, but the discussion being held is not the most important one. I mean, there is a lot of hype around building a social practice, using the right tools to collaborate, bring-your-own-device (which is actually making a huge breach in enterprise architecture), engaging with customers, etc. But the most important aspects are being forgotten: understanding how people are engaged and how they should relate. In other words, looking at the social dimension of a process. This year, many books about social business were published. I put my hands on two from prominent experts, and from the ease with which the books read, it seems this area is being treated superficially (probably because it is new and there is a need to understand the basics). I measure a domain’s maturity by how difficult it is to understand (the harder it is to jump into the concepts, the more mature it is). Hence, it is critical to understand what social patterns look like (the aesthetics). Does your company have lions or lemurs? Are they working together in a way that is aligned with the knowledge type required to play the process, or are you creating variability when you need standardization, or vice versa? In other words, how do you get people evolving and gaining new competences to handle the exponential complexity we are facing? How do you get people learning with each other?

It is commonplace to say that the world is getting more complex and that organizations are struggling to cope with it in their operations. This is difficult, because complexity is hard to describe. The language of network patterns can bring visibility into how complexity can be handled properly. A good reflection can be found <here>.

Intelligence: this category includes many scattered sub-categories, all of them important: big data, mining and prediction.

Before jumping into each knowledge domain, it is important to reflect on why humans are so fascinated with it. We like to predict events, but we fail miserably at it. We like to be sure we are not wrong in our assumptions. But due to the changing conditions on which we rely to perform our wizardry, it is getting more difficult to do. Sometimes we do not realize that the facts we used to predict have changed by the time the event occurs. That happens because the world is a complex system and there is no linearity in how the variables are related. Thus, we are looking for new prediction models and technology that can help us with one of the major human limitations: being capable of being correct about the future.

Technology can help us, broadly speaking, to make better decisions, but being able to “be sure” is a step that is probably still too far away to be a reality. Why? Because intelligent systems do not have the capacity to learn. In a previous experience with IBM’s Watson supercomputer, when asked what the capital of North Korea is, it replied that the country does not have a diplomatic relationship with the USA. The machine was not able to find another way to search and get an answer (it did not even realize it was wrong). In addition, machines suffer from what is called cognitive illusion (to understand the concept better, read this article about how, out of context, a system – and we humans are systems too – makes the wrong decisions).

Talking about predictions could take us into a long discussion dedicated only to that subject; nevertheless, strictly within the human dimension, it is interesting to distinguish two types of person:

•    More P or more M individuals, in Gordon Pask’s words;
•    More power of Ideas or more power of Politics, in Rick Brandon’s and Marty Seldman’s words; or
•    More Foxes or more Hedgehogs, in Philip Tetlock’s words.

The former are humans who like to question, evaluate options and engage with others; they like to share knowledge, evolve concepts and learn from failure.
The latter are humans who believe in formal authority, governance, centrality and control, and whose ideas cannot be questioned; they are somewhat like dogmas.

Foxes, power-of-Ideas or P individuals like to spend time constructing models to understand reality, which then, unfortunately, changes, so the models become useless or return wrong answers.
Hedgehogs, power-of-Politics or M individuals like to blame errors on bad luck. This type of human predicts using what could be called a “gut feeling” approach.

Coping with this variability in human types alone should make us wonder why it is so difficult to predict.

Talking about big data, the challenge is real and is here, and without question it is important to extract relevant information about organizations and operations, but 99.99% of it is noise. In other words, it is not relevant. The challenge for big data is to process photos, videos and documents in which there are chunks of data that can be important to understand how well you perform, or to execute sentiment analysis, for example.

There is scepticism around process mining. People do not like to believe that it is possible to discover process models and analyse a process from different points of view, because they tend to think there is no data for that. Also because we know that when exceptions occur, process participants jump off the systems and start collaborating using e-mail. Fortunately, process mining is much more powerful and quicker at analysing a process, and it has, out of the box, a huge palette of techniques, even in cases where there is no event log.
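To show how little is needed to get started, here is a minimal, illustrative sketch (plain Python, no process mining tool) of the most basic discovery step: building the directly-follows relation from a toy event log. The log, case ids and activity names are invented for the example.

```python
from collections import Counter

# Hypothetical event log as (case id, activity) rows, already sorted by
# case and by timestamp within each case.
event_log = [
    ("case-1", "register"), ("case-1", "check"), ("case-1", "approve"),
    ("case-2", "register"), ("case-2", "check"), ("case-2", "reject"),
    ("case-3", "register"), ("case-3", "approve"),
]

# Group events per case, preserving their order
cases = {}
for case_id, activity in event_log:
    cases.setdefault(case_id, []).append(activity)

# Directly-follows relation: how often activity b directly follows a
dfg = Counter()
for trace in cases.values():
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

print(dfg[("register", "check")])    # seen in cases 1 and 2
print(dfg[("register", "approve")])  # the shortcut taken by case 3
```

Real discovery algorithms build on exactly this kind of relation to derive process models, which is why any system that records case ids, activities and timestamps is already mineable.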

Self-organization: I will dedicate a future post only to this matter, but for now, I sense a shift in the way companies are embracing change projects. Somehow we were used to implementing governance models, to helping others make the change, to designing and implementing business processes; in other words, to selling professional services (from the provider’s point of view). One of the shifts I talked about at the BPM conference in London this year is the social factor, as I put it: “On the social factor, […] we are facing a displacement of “assembly line” people […] will have to adapt and start pushing their capabilities to new boundaries. This shift has also a profound implication on the type of people companies are sourcing in the labour market. As leading companies expand and operations are outsourced or transferred to low wages economies, the future workers profile will be aimed at highly skilled persons capable of embracing business dynamics”.

People will also question the existing order, like the pianist Glenn Gould, who opposed the way Bach composed: he said that Bach made some errors and some notes should be changed [1].

The social factor is changing the profile of the people who work at companies. It is time to teach these different types of people how to get the most out of the systems and how to design business processes, because they do not need you any more, as a provider, a mentor or a manager, to do their job; they will self-organize to deliver it internally. Not only will the type of technology change dramatically, putting people directly in control of process composition and letting IT embed business logic and system interoperability, but governance models as we know them will tend to disappear or become less formal, defining at most the rules of the game people must comply with.

Self-organization must not be confused with anarchy, but it will be a key human component as organizations are finally seen as organisms, living things that must fight for survival like any other animal. They must adapt or die. Cybernetic management will emerge.

Interested in the 2011 list? Click <here>.

[1] References: Booklet – Bach: The Goldberg Variations.