How Adaptive Case Management can be deceiving

One year ago, the first edition of BPM Conference Portugal was up and running. Today we are one month from the completion of the second edition. I was remembering and revisiting some of the key facts of the conference, and asking whether anything dramatically changed since 2013. Cybernetics and Adaptation were two key themes then.

Last year, during a side discussion, there were arguments against and in support of the motion about Adaptive Case Management and how knowledge workers would make the difference in achieving goals and competitive advantage, working from guidelines rather than a prescriptive way of working. In theory, I agree with the approach if the nature of operations is dynamic rather than highly structured (as pointed out in earlier blog posts). At that time, those arguing against the motion claimed that human nature can disrupt the guidelines (because people tend to think for themselves), and that instead of a company of knowledge workers, you would have a company of heroes.

At the beginning of this year, Patrick Lujan, after reading BPM: A Year in Review 2013, engaged on Twitter with the very same argument: if you have bad knowledge workers (and most of them are really bad), then decentralized, goal-oriented management and smart, advanced technology will not make any difference.

 

Knowledge Workers Twitter stream with Patrick Lujan

 

During today’s reflection, I remembered a part of Beer’s book The Heart of the Enterprise about the loss of human autonomy and how it hurts organizations. Beer wrote brilliantly about interviewing fictional managers who wanted to change their management style (towards an Adaptive Case Management orientation):

 

We hope that we are a modern and progressive management team. We have put ourselves through business schools. We have studied, and tried to understand, behavioral theories of management. We had much discussion of Theory X and Theory Y, and we have used consultants in personality testing, managerial grids, and so on.

As a result, we have abandoned autocratic methods. We have made it clear that we expect our operational elements to work autonomously. We just hope that they can do it … However, we have embarked on a very elaborate management development program for our people, and spent a lot on sophisticated recruitment techniques, so we have some confidence that all will be well.

As far as we board members are concerned, however, and to be perfectly blunt, there is something of an “identity crisis”. What are we ourselves supposed to do? If we were to give rulings about things, that would be autocratic. So we have reduced ourselves to the role of advisors – benevolent, avuncular holders-of-hands.

That would be all right if anyone took the advice. They don’t seem to do that. They ask: is that an instruction? We say: no, of course not. So they promptly do something different.

 

BPM Blogs worth reading 2013

Here is the list of BPM blogs I think are worth reading. This year there is a shift since I started creating the list: there are fewer pure BPM blogs. Nevertheless, here is the list, arranged for the first time in categories.

On BPM:

On Intelligence:

  • Flux Capacitor – The place to look for process mining, from Fluxicon.

On Social Business:

On Complexity:

On Enterprise Architecture:

  • Tetradian – Tom Graves writes about enterprise architecture.

If you are interested in the 2012 list, click here. Till next year!

Fiat Lux – the rise of the real-time enterprise

Extreme connectivity is coming to the enterprise

This year at Process Mining Camp 2013, during a workshop I led, the attendees discussed the access, usage, transfer and reuse of knowledge. Part of the discussion took place in the context of IT development and implementation. Some said that not every bit of information flowing among the development team deserved attention: the question was not where the information was stored or how it was transmitted; what determined the importance of information was its relevance to the duties in the context of the project. This meant the development team was always filtering and analysing pieces of code, patterns and working solutions, seeking parallel developments, and autonomously retrieving information from linked projects to perform impact analysis (something I also learned in a previous challenge, when the Master Project Officer told me that things like linked work packages and project dependencies simply do not exist as we think of them as concepts).

Real-time enterprise

Two months ago I had a chat with a person who will be one of the speakers at the forthcoming edition of BPM Conference Portugal. He is responsible for everything about customer support. That means his team is responsible for updating and changing IT to support constantly evolving business requirements, and particularly for monitoring multiple process instances: complaints, connection requests, whatever. They rely on a divided brain to make it happen: enterprise systems to enter the addictive loop, and out-of-the-box tools like GitHub to share information among team members and make changes happen. Change in this context (telecom) means you need to deliver new services every month and prevent customer churn in real time, without waiting for next month’s outdated business intelligence reports. The challenge here is twofold: monitoring and intervening in operations, and supporting business change.

The externalization of knowledge contributes to knowledge diffusion

One month ago I was talking with a manager of a utilities company who also confirmed the need for operational online addiction. Regulation is tightening, and the company does not want to throw money out of the window because it missed SLAs for answering a customer complaint, for starting to bill energy earlier, or for finding hidden bottlenecks in operations. At this company, some of the real-time inspiration came from Lean discussion forums run by the people responsible for monitoring energy distribution and the status of the infrastructure in real time using SCADA systems. These people are used to controlling energy flow, transmission line disruptions and maintenance operations in continuous mode. They “do not understand” why their colleagues do not embrace a similar attitude, and as such they played an important role in translating the abstract knowledge necessary to embrace the always-on journey into a proper codification that could be used by their peers.

The role of Enterprise Architecture

Enterprise Architecture under a systems thinking approach can make a difference when designing the transformational step of entering real-time mode:

  • What are the horizontal barriers to be monitored? For each process domain being monitored, it is necessary to identify the stakeholders that touch the process; they will be one of the main sources of variety. Note that the idea is not to figure out in process design whether a particular stakeholder does something and then point the measurement channels at those points. It is to identify the stakeholders and absorb the information flowing in the context of process execution (that is a huge difference).
  • What are the vertical barriers to be monitored? Which processes in the value chain are related, requiring a truly end-to-end vision to be set up? For example, in a utilities environment, should a Complaint Handling process not be connected with Meter Reading, with Billing and with Churn Management?
  • The Algedonic Channel – what Cybernetics defines as the channel whose objective is to transmit alert signals concerning any event or circumstance that could put the organization in a critical situation: failure to deliver services; a hike in customer churn; a flop in revenue, sales, etc. This aspect is much neglected by managers, because they rely on the organizational structure to communicate alerts and supervening facts, and sometimes it is too late to intervene. A minimal sketch of such a channel follows this list.
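To make the last point concrete, here is a minimal sketch, in Python, of an algedonic channel. The metric names, thresholds and escalation hook are illustrative assumptions; the essential property is that a reading crossing its threshold is escalated directly, bypassing the reporting lines.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Illustrative thresholds; real values would come from the business.
THRESHOLDS = {
    "customer_churn_rate": 0.05,   # alert above 5% monthly churn
    "failed_deliveries": 10,       # alert above 10 failed deliveries per day
    "revenue_drop_pct": 0.15,      # alert above a 15% revenue drop
}

@dataclass
class Alert:
    metric: str
    value: float
    threshold: float

def algedonic_channel(readings: Dict[str, float],
                      escalate: Callable[[Alert], None]) -> None:
    """Escalate any reading that crosses its threshold, regardless of
    where in the hierarchy the signal originated."""
    for metric, value in readings.items():
        limit = THRESHOLDS.get(metric)
        if limit is not None and value > limit:
            escalate(Alert(metric, value, limit))

# Usage: wire the channel straight to the top, not up the organization chart.
algedonic_channel(
    {"customer_churn_rate": 0.08, "failed_deliveries": 3},
    escalate=lambda a: print(f"ALGEDONIC ALERT: {a.metric}={a.value} > {a.threshold}"),
)
```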

We are on the brink of another major change

There has been a lot of discussion since last year about the concept of the “intelligent BPMS”. For me, intelligent means the system is able to “think” and “reason” by itself without falling into the cognitive illusion abyss (which leads to the question: what should the system “think” about?). Hence, there is no question that enterprises that want to be customer centric, anticipate errors and predict churn must equip themselves with technology to monitor the business continuously and in real time (as in the movie The Matrix, when people were looking at the screens watching the code trickle down); whether that is “intelligent” or not, I leave to the analysts. More important than that, companies need highly skilled people, team workers, used to working under agile principles, to make it happen. Without that, it is difficult to make the change. Are you, as a manager, up to the job?

Does the combined Case Management Model & Notation fit its purpose?

I participated in an OMG meeting two weeks ago in Berlin, where, during a side conversation, I talked with some peers about the new CMMN, designed to model and execute non-prescriptive, standard, “bpm”, whatever-you-want-to-call-them process types.

I skimmed the beta release and did not find anything extraordinary that BPMN could not do to model a Case Management approach; by the look and feel, I would say that CMMN is a subset of BPMN.

People from the CMMN committee told me that the difference is in how the language is executed, since it is based on stage transition.

For me this was a surprise, because stage transition is what structured “BPM” processes are all about: the process moves, or changes stage, when activities reach an end state. ACM, or even Case Management, is not about stages; it is about the availability of data. These are user- and data-driven processes, and as such it is much more the objects that define the path the process takes.

So, is CMMN missing the target, being in fact a BPMN subset with a different name, or is it something different?

CMMN – FTF Beta 1 – Expanded Stage with Expanded Planning Table and Expanded Human Task Planning Table

My argument is that X Management (Case Management, Adaptive Case Management, Purpose Case Management, Production Case Management …) is object oriented, not task oriented. Actually, we can handle a case with no tasks at all; it is possible to combine multiple approaches to do it: activity streams, documents, etc. The difference is that it is data, and data availability, transformed into information, that drives the case. Most of the data comes from sources outside the form of the case (if such a form even exists).

For example, if you are analyzing a complaint, what helps the people involved in the case to steer its direction is getting the contract to understand the conditions that were set up and the penalty clauses, or an opinion from the Legal Department about how the complaint should be handled given the context and the contract that was signed.

The big difference, as I see it, is that X Management is an object-aware approach. The overall process (the case) is structured around the object types involved and the outcomes of their manipulation (goals, tasks, documents, attributes, etc.), and may refer to other object types or be referenced by them. A minimal sketch of this data-driven style follows.
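As a minimal sketch of what “data drives the case” could look like in code (the object and action names are illustrative assumptions, not any vendor’s API), consider a case whose available actions are derived from the data objects attached to it, with no predefined task sequence:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Case:
    """A case steered by data availability, not by a stage or task list."""
    objects: Dict[str, Any] = field(default_factory=dict)

    def attach(self, name: str, payload: Any) -> None:
        self.objects[name] = payload

    def available_actions(self) -> List[str]:
        # Actions open up as data objects arrive; there is no fixed order.
        actions = []
        if "complaint" in self.objects:
            actions.append("request contract")
        if "contract" in self.objects:
            actions.append("check penalty clauses")
        if {"contract", "legal_opinion"} <= self.objects.keys():
            actions.append("decide on complaint")
        return actions

case = Case()
case.attach("complaint", "billing dispute #42")
case.attach("contract", "signed 2012, penalty clause 4.2")
print(case.available_actions())  # ['request contract', 'check penalty clauses']
```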

It is not my intention to start a “holy” war against CMMN; I am just saying that it looks stuck in the middle of the bridge. That does not mean it cannot evolve in the future and cross it.

I am arguing that, as a design principle, basing the language on stage transition, like prescribed / structured processes, does not look right to me, because X Management is all about the processing of data that helps decide the path.

BPM Conference Portugal 2013

The first edition of the conference was held last week, on April 18th, and brought a blend of different viewpoints on the most advanced and innovative themes in BPM; it also took a practical approach rather than a conceptual-only one.

For the first time I was involved as the chair of the event. The main difference between being invited to participate and managing the conference (the sessions, the agenda, the themes) is that when you deliver your talk you strive to do your best and inspire others, while when you are the chair you are responsible for all the speakers, which ultimately constitutes a very different kind of challenge. Again, I would like to thank everyone who made this possible: the event organizers, the speakers, the attendees. I retain the idea that the event matches others around the world (taking into consideration the size of the market) and is much more forward thinking (I fight for that), and I hope future editions will have more sessions around HOW TO DO IT, something attendees asked for in a handful of informal talks I had. They are not looking for workshops, but for sessions that explain how the result was achieved.

If you think you can make a difference in the 2014 edition, send me an e-mail and I will be glad to enroll your presentation proposal.

The themes of the conference:
The topics of BPM Conference Portugal were: Cybernetics, or the ability to deal with diversity; Adaptation, how companies sense, innovate and change the way operations are performed; and Socialization, how managers can change the way people get engaged outside the organization charts and use other approaches to achieve the intended results.

The goal of the event was to provide new perspectives on the challenges companies face, new methods to overcome those challenges, and a chance to see in practice, in real life, how to achieve competitive advantage.

I opened the conference with a very concentrated pitch around the conference themes, summarized below:

The baseline of the conference is the fact that the nature of the company environment has not changed: it continues to evolve, just faster.

  • The pace of change in the economy keeps accelerating, fueled by ubiquitous access to information and by enterprise systems that allow changing the way work is done. Predicting what will happen next is exponentially more difficult. Uncertainty has become an enduring variable, as companies have noticed lately. This implies constant change, or in other words, adaptation.

To perceive is to understand patterns.

  • It is a fact that today companies have immense analytical capabilities, but for managers to address a fundamental challenge for organizations, dealing with all this interaction variety, it is necessary to understand patterns. Understanding patterns is not predicting behavior, but inferring trends, so people can think, act and adapt.
  • Organizations that manage to better align three perspectives (social network, knowledge type and process design) are those that will be ahead in terms of execution capabilities, flexibility and adaptation to change.

The role of human resources development.

  • Without retaining and nurturing highly skilled workers, knowledge cannot be applied effectively.
  • In the current context, organizations need all kinds of knowledge from all organizational units and all business units. Organizations need to use all styles, because they never know in advance which people they will need to solve a problem, given the uncertain times we are facing.
  • People are deeply knowledgeable about the organization’s rules and apply them in the work they do because systems are imbued with the logic and interoperability required for execution. Not only does the type of technology have to be different, which often involves changes in the technology architecture, but people must be enabled directly in the design and execution of business processes.

The conference sessions:

José Tribolet: Adding value to BPM by enforcing the fundamental principles of Enterprise Engineering

Professor Tribolet is a disciple of Dietz’s Enterprise Ontology method, and he and his team are applying it in government agencies. The case presented was around the handling of judicial procedures, where it was possible to identify that failures occur in the acts related to process execution, with an impact in delays, complaints and superseded judicial decisions.

DEMO (Dietz’s method) is somewhat misunderstood in the community because it is difficult to understand (heavily based on computational science and three axioms: social agreements; content of communication; means of communication) and difficult to apply (many conditions must hold, like being able to trace process actions recorded by enterprise systems), but it is effective if you want to evaluate the consistency and completeness of your process models at run time.

Business transactions specify the pattern-based behavior that describes how actors collaborate in order to achieve business results. The method takes as input a process model that is converted to a transactional model. The transactional model is then analyzed and revised so that all transactions comply with the Ψ-theory axioms. Finally, the original BPMN process model is revised to become consistent with the transactional model, and complete in the sense that it expresses all transactional steps.

As a result, it is possible to:

Identify consistency issues:

  • Activity sequencing (control flow) violates the transaction pattern.
  • Data flow violates the transaction pattern.

Identify completeness issues:

  • Behavior of an activity cannot be classified as a coordination or production act.
  • Coordination or production acts cannot be mapped to any activity (i.e. the act is either implicit or missing on the process model).
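As an illustration of how such checks could be mechanized, here is a minimal sketch against DEMO’s basic transaction pattern (request, promise, execute, state, accept). The flat event format is an assumption for illustration; a real check would first classify acts traced from enterprise systems into coordination and production acts.

```python
# Basic transaction pattern of DEMO: the initiator requests and accepts,
# the executor promises, performs the production act, and states the result.
BASIC_PATTERN = ["request", "promise", "execute", "state", "accept"]

def check_transaction(acts: list) -> list:
    """Return the consistency and completeness issues found in one transaction."""
    issues = []
    # Completeness: every step of the pattern must appear in the trace.
    for step in BASIC_PATTERN:
        if step not in acts:
            issues.append(f"missing act '{step}' (implicit or absent in the model)")
    # Consistency: the acts that do appear must respect the pattern order.
    positions = [BASIC_PATTERN.index(a) for a in acts if a in BASIC_PATTERN]
    if positions != sorted(positions):
        issues.append("activity sequencing violates the transaction pattern")
    # Classification: acts outside the pattern cannot be mapped.
    for a in acts:
        if a not in BASIC_PATTERN:
            issues.append(f"act '{a}' cannot be classified as coordination or production")
    return issues

print(check_transaction(["request", "execute", "state", "archive"]))
# reports the missing 'promise' and 'accept', plus the unmappable 'archive' act
```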

Keith Swenson: Planning and Supporting Innovative Work Patterns

Keith split his presentation into two parts: the concept of anti-fragile systems, and adaptive case management.

Most of the talk was around anti-fragility, a concept raised in Nassim Taleb’s book Antifragile: Things That Gain From Disorder. Without being repetitive, you can find most of Keith’s key points in his own words in this post. I would add a different perspective, revisiting Ashby’s law: any system, any process, must be able to handle the complexity of the elements that constitute it in an active and adaptive way in order to survive and thrive. This implies that any attempt to limit the existing variety will cause the system, the process, the organization to lose the ability to adapt, leading to implosion; in Keith’s words, turning fragile. This idea was also presented by Vitor Santos when he argued in his talk around the concept of enterprise elasticity (which I will come back to at the end of this post).

Without changing the objectives of Keith’s presentation, there is a concept that I think (but I might be wrong) managers do not yet understand when dealing with enterprise systems deployment. Most are worried about the functions, the features, the business support, and forget system engineering concepts: I mean how the system was conceived (engineered) to evolve and adapt to changing conditions (probably something to revisit in next edition’s sessions). I do not mean that the system itself will have such a character (by the way, those who say IT can behave like a complex adaptive system are in science fiction mode, because one thing is the system behaving like that; another is the patterns that emerge as humans act on systems), but systems should be engineered with that objective, so that they can evolve taking the enterprise ecosystem into consideration, rather than be substituted.

Regarding adaptive case management, there were some key ideas I would like to stress: the future is more about providing guidelines that show people where to go, but do not prevent deviations when they are necessary, rather than enforcement, where people fight against the process design. Still, the idea that knowledge workers know what to do, because they understand the business model and the business rules and apply their knowledge to build solutions to business problems, was refuted by Tribolet. In his words, sometimes knowledge workers do whatever the hell they want and enter into contradiction with company objectives. Hence the idea that knowledge workers know best how to achieve the goals sometimes does not apply, and the business suffers. It is a matter of human behavior. This is something we should reflect on.

Denis Gagné: Business Process Simulation: How to get value out of it

Those familiar with Denis’s style already know that his sessions are very practically oriented. Denis talked about the reappearance (some argue it never disappeared) and the importance of simulation.

In the past, simulation was seen as an evil tool that did not deliver value, because process models were incomplete and the data used in simulation was inappropriate (mostly because it was not even close to reality). There are some seminal reflections on simulation by Process Mining godfather Wil van der Aalst, where he argues that any attempt to simulate will be an incomplete exercise and will lead managers to make the wrong decisions; but as I envisioned before (something that for sure Gartner and Forrester will bring to the intelligent BPM assessment reports), Process Mining and Simulation are poised to merge. This is because today most of what we do is recorded by enterprise systems, and it is possible to construct real-world models and use real-world data to help build scenarios and make decisions about future directions.

Back to Denis’s presentation: one of the key points was raising awareness of the difference between process improvement, which can be done using a myriad of approaches, and business process management, the management philosophy (not project based; a continuous improvement culture and process-based management).

Regarding simulation, he stressed the capacity simulation aspects of a process model, usually dynamic analysis (using discrete simulation methods). Finally, he talked about BPSim, the standard that allows simulation data to be embedded into process models in an interoperable way, providing for pre-execution and post-execution optimization. As I said in the conference closing session, simulation is sexy again, and it is a way to explore process redesign in an era where all the data you need is available inside your enterprise systems, unlike some years ago, when cumbersome studies drove you in the wrong direction when making decisions. A minimal capacity simulation sketch follows.
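Here is a minimal capacity analysis sketch using discrete-event simulation with the simpy library. The arrival and handling rates and the number of workers are illustrative assumptions; in the mined-model scenario described above, they would be fitted from event logs recorded by the enterprise systems.

```python
import random
import simpy

RANDOM_SEED = 42
ARRIVAL_MEAN = 4.0    # minutes between case arrivals (assumption)
SERVICE_MEAN = 3.0    # minutes of handling time per case (assumption)
NUM_WORKERS = 1
waits = []

def case(env, desk):
    arrived = env.now
    with desk.request() as req:
        yield req                      # queue until a worker is free
        waits.append(env.now - arrived)
        yield env.timeout(random.expovariate(1.0 / SERVICE_MEAN))

def arrivals(env, desk):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(case(env, desk))

random.seed(RANDOM_SEED)
env = simpy.Environment()
desk = simpy.Resource(env, capacity=NUM_WORKERS)
env.process(arrivals(env, desk))
env.run(until=8 * 60)                  # one 8-hour shift

print(f"cases handled: {len(waits)}, mean wait: {sum(waits)/len(waits):.1f} min")
```

Rerunning with NUM_WORKERS = 2 is the capacity question in its simplest form: what happens to waiting time when resources change.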

Ivo Velitchkov – Reasoning with Taskless BPMN

For me this was the most innovative presentation of all, because it challenged the current state of BPMN process modeling. BPMN is difficult to learn (but once learned, believe me, it can produce rich process models), it has an endless symbol palette, and modeling by itself can lead either to highly capillary detail or to high-level approaches that do not tell the complete story; in other words, it can produce incomplete models. Hence Ivo presented a new approach, based on taking the tasks out of the process map (taskless).

He defended the idea that taskless model diagrams, based only on process state transitions, conditional events and process rules, can produce easily understandable process models. In his words, tasks try to restrict what should be done during run time to what is known during design time. I see great potential in his ideas for translating business models into high-level IT requirements, substituting state transition diagrams. A small sketch of the idea follows.
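A small sketch of the idea, with state names and rules that are purely illustrative: the model constrains only the state transitions and their conditions, leaving what is actually done to reach them open at run time.

```python
# Only states, conditional rules and transitions; no tasks. How a case
# gets from one state to another is decided at run time by the workers.
TRANSITIONS = {
    "complaint registered": {
        "contract attached": lambda case: "contract" in case,
    },
    "contract attached": {
        "assessed": lambda case: "legal_opinion" in case,
    },
    "assessed": {
        "closed": lambda case: case.get("decision") is not None,
    },
}

def allowed_next_states(state: str, case: dict) -> list:
    """States the case may move to, given the data it currently holds."""
    rules = TRANSITIONS.get(state, {})
    return [target for target, condition in rules.items() if condition(case)]

case = {"contract": "signed 2012", "legal_opinion": "uphold the complaint"}
print(allowed_next_states("contract attached", case))  # ['assessed']
```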

Tom Graves – Serving the story: how BPM and EA work together in the enterprise

We are at a time when Enterprise Architecture is finally being understood as something valuable that goes beyond creating boxes like collecting trading cards or tokens. Today, when you carry out a process improvement initiative, you realize you cannot simply “automate” something anymore: the pace of process change touches other processes, systems and people, and if you do not have the broader picture, aligning business model, value chain, organization and IT all together, the risk that your transformation project will fail is high.

Tom brought a different perspective on how to do EA right: putting people in conversation with each other, so that each enterprise architecture layer (business model, value chain, organization, IT) brings its perspective to the project.

He stressed that enterprise architecture is not about IT, as in the TOGAF framework, which leaves no place for the people who are part of the enterprise and who, contrary to the silicon servers, know what the enterprise is all about. In his words, include the people-story, otherwise EA will be incomplete.

Michael Poulin – Business Processes in a Service-Oriented Enterprise

Michael walked mostly through a set of principles of the Service-Oriented Enterprise, but I will highlight the concept Michael created: Purpose Case Management. Conceptually, Purpose Case Management blends ACM and BPM (in this context BPM means structured processes, not the management philosophy), and it can drive smooth transitions between unstructured and structured actions across ACM/BPM, independently of the approach.

Robert M. Shapiro – Visual Analytics and Smart Tools

Robert’s talk was also about simulation, focused on using data from executing processes to get an understanding of what is happening, what the problems are, and where you should look to make improvements.

He walked through a very practical perspective that combines the process model with simulation: add data to the model to capture the behavior of the process, analyze the different dimensions of the simulation result (time, cost, resources), and optimize by comparing different improvement scenarios. He also presented a method for calculating the Return on Investment of things like spending money on training the people performing the tasks in the process vs. IT task automation, producing benefit figures that can be used in process deployment and helping managers decide before the rubber hits the road. This was very new to me.

Vitor Santos: Organizational elasticity with BPM

Vitor tried to demystify the approaches to building IT systems. He talked about the engineering approach that tries to align with the enterprise holistically, and pointed out the concept of IT adaptability (elasticity, in his words), built on the concept of viable systems, which prevents the hike of maintenance costs or the need to replace IT from time to time, rather than making a bigger investment upfront intended to fulfill business needs for a longer period.

 

In the next couple of weeks, videos from the sessions will be available. If you are interested, take a peek at the conference website.

Interested in a different view about the conference? Here is Tom’s view.

Social Network Analysis – part two

In part one, I introduced the importance of understanding social networks: as the socialization of interactions becomes a new working habit, the classic control-flow perspective of analysis no longer provides information about how work is done.

In this post, I will explore important points to look for when performing Social Network Analysis (SNA).

On properties:

Social networks typically have the following properties:

  • Emergence: agents that belong to the network interact in an apparently random way. This feeling is amplified when there are many agents and/or too many interactions, which makes it difficult to extract patterns. Emergence analysis is all about separating the signal from the noise and making those patterns emerge.
  • Adaptation: enterprises and communities exist confined in a particular environment which, when it changes, makes agents react. The environment can be external: interaction with customers, suppliers and government agencies; influences like the publication of a new law or regulation; or competitor movements as they enter new markets or create new products or services. The environment can also be internal, related to the way agents interact, which is ultimately associated with how business processes were designed, how IT solutions were deployed, culture, hierarchy configuration and formal recognition of authority, just to provide some examples.
  • Variety: Ashby, one of the fathers of cybernetics, defined the Law of Requisite Variety: “variety absorbs variety”; it defines the minimum number of states necessary for a controller to control a system with a given number of states. For an organisation to be viable, it must be capable of coping with the variety (complexity) of the environment in which it operates. Managing complexity is the essence of a manager’s activity. Controlling a situation means being able to deal with its complexity, that is, its variety [1].
  • Connectivity: the way agents are connected, and how those connections are aligned with the process type that was designed / is being executed and with the type of knowledge necessary to support operations (more about this alignment here). The existing connections will unveil the emergent patterns that must be identified to understand behaviour from a social point of view (high or loose coupling between agents or groups of agents).

On network types:
Most of the time, when people refer to social networks they mean community networks like Facebook, or subject expert groups like enterprise wikis. Although those are important network types, they do not express the nature of an organization’s operations, because they do not record the communication acts expressed in social activity; hence I will concentrate only on Coordination Networks.

A Coordination Network is a network formed by agents related to each other by recorded coordination acts.

Coordination acts are, for example, the interchange of emails, tasks as designed in enterprise systems, or activity streams, just to name a few. The above definition is an adaptation of [2], because the original does not include the importance of the coordination act being related to the nature of the work, rather than to the connection itself. The former is the dimension that matters for business process management, and it will guide the remaining content.

A coordination act is meant here, as defined in (and adapted from) [3], as an act performed by one agent, directed to another agent, that contains an intention (request, promise, question, assertion) and a proposition (something that is or could be the case in the social world). In the intention, the agent proclaims its social attitude with respect to the proposition. In the proposition, the agent proclaims the fact, and the associated time, that the intention is all about. Recorded by the system, coordination acts support the definition of Coordination Networks, whose configuration can ultimately be discovered, and whose patterns can be made to emerge, using discovery techniques such as process mining.

Coordination Act
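To illustrate the definition, here is a minimal sketch that builds a coordination network from recorded coordination acts using the networkx library. The log format is an illustrative assumption; what matters is that the intention, proposition and timestamp stay attached to the connection.

```python
import networkx as nx

# Recorded coordination acts: performer, addressee, intention,
# proposition and timestamp (format is an illustrative assumption).
acts = [
    {"from": "alice", "to": "bob",   "intention": "request",
     "proposition": "handle complaint #42", "ts": "2013-05-02T09:15"},
    {"from": "bob",   "to": "alice", "intention": "promise",
     "proposition": "handle complaint #42", "ts": "2013-05-02T09:40"},
    {"from": "bob",   "to": "carol", "intention": "question",
     "proposition": "contract terms of #42", "ts": "2013-05-02T10:05"},
]

G = nx.MultiDiGraph()  # several acts may link the same pair of agents
for act in acts:
    G.add_edge(act["from"], act["to"], intention=act["intention"],
               proposition=act["proposition"], ts=act["ts"])

# The content of each act stays on the edge, so later analysis can weigh
# the nature of the work, not merely the existence of a connection.
print(G.number_of_nodes(), "agents,", G.number_of_edges(), "coordination acts")
```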

On analysis dimensions:

Social network analysis is not new; actually, the first studies were done around the 1950s. Its refinement revolved around:

  • Degree distribution: studying the number of connections around a node of the network;
  • Clustering: finding groups with a connection density larger than average;
  • Community discovery: measuring the alignment of connections with the organization hierarchy. (A small computation sketch of these dimensions follows this list.)
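A small computation sketch of the three dimensions with networkx, on a standard toy graph (Zachary's karate club); a real input would be a mined coordination network:

```python
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()  # a classic small social network used for demos

# Degree distribution: the number of connections around each node.
degrees = [d for _, d in G.degree()]
print("max degree:", max(degrees), "mean degree:", sum(degrees) / len(degrees))

# Clustering: how dense the connections among each node's neighbours are.
print("average clustering:", nx.average_clustering(G))

# Community discovery: groups denser inside than between.
groups = community.greedy_modularity_communities(G)
print("communities found:", len(groups))
```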

There is an immense list of techniques to analyse each of the above dimensions, which reflects the high maturity level of each method, but the drawback is that SNA analysed along each dimension alone can lead managers in the wrong direction. For example, studying community discovery can be important, because communities are collections of individuals who are linked or related by some sort of relation. But carrying out the analysis without taking into consideration the content of the conversation (the coordination act) that drove the creation of the link is absolutely wrong, because the conversation is all about the way we humans work. I tend to disagree with the point of view of other practitioners that conversation does not matter (probably because they were influenced by Gordon Pask), only the network configuration. The conversation (the process) is the matter of study.

Social networks are self-organizing systems, but there are important patterns emerging from the nature of the coordination acts that can be identified. Although there are random factors, and the types of patterns presented in most scientific papers are based on graph theory and tend to be very simple compared with reality (maybe one of the reasons they are not taken seriously), they are the only way, as an abstraction, to understand agent behaviour. Pattern recognition is critical to align process type (from structured to unstructured), knowledge domain (simple to chaotic) and network type (central to loosely coupled); in other words, to infer trends and help humans interact better in the role they play in the process ecosystem. Having said that, I would like to invoke Stafford Beer on models: “in general we use models in order to learn something about the thing modelled (unless they are just for fun)” [5].

Centrality is used to measure degree distribution. Centrality [2] is described for a process participant, business unit, group (a set of process participants or people) or an enterprise system (do not forget the machines) within the context of a social network. Centrality is also related to discovering the key players in social networks.

Some measures that can be used for centrality are listed below (a short computation sketch follows the list):

  • Degree centrality: calculates how many links a node has relative to the remaining network nodes (highly connected nodes are commonly called network stars). Higher degree centrality means a higher probability of receiving information (but does not mean the node drives information flow inside the network).
  • Betweenness: measures the degree to which a process participant controls information flow; such nodes act as brokers. The higher the value, the more of the information flow moving from each node to every other node in the network passes through that node. The importance of betweenness in social network analysis is that if nodes with high values stop processing coordination acts, they will block information from flowing properly.
  • Closeness: measures how close a node is to, or how isolated it is from, the other network nodes. Nodes with low closeness are able to reach, or be reached by, most of the other nodes in the network; in other words, low closeness means a node is well positioned to receive information early, when it has more value. The closeness measure must be supported by the time dimension (see the timestamp attribute in the coordination act example); without it, it is useless.
  • Eigenvector centrality: used to calculate a node’s influence in the network. Higher scores mean a node can influence (touch) many other important nodes.
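And the computation sketch for the four measures, again with networkx on the same toy graph; on a real coordination network the time dimension would have to be added for closeness, as noted above:

```python
import networkx as nx

G = nx.karate_club_graph()

degree      = nx.degree_centrality(G)       # network stars
betweenness = nx.betweenness_centrality(G)  # brokers of information flow
closeness   = nx.closeness_centrality(G)    # who is well placed in the network
eigenvector = nx.eigenvector_centrality(G)  # influence over important nodes

top = lambda scores: max(scores, key=scores.get)
print("star (degree):            node", top(degree))
print("broker (betweenness):     node", top(betweenness))
print("well placed (closeness):  node", top(closeness))
print("influencer (eigenvector): node", top(eigenvector))
```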

In order to put it all together, it is worth considering the following self-explanatory picture [6]:

Diverse centrality measures

The challenge:

There is a lot of noise around which is the best measure to perform SNA; as I learned at the User Modelling, Adaptation and Personalization Conference 2011, it is time to put the mathematical equations aside and practice their application.

At this moment there are plenty of ways to measure network centrality, but somehow they neglect the fact that those algorithms may not be appropriate for the type of business process / information system interaction at play. For example, the eigenvector centrality measure is important in unstructured processes, where the path is defined per instance and it is necessary to create a team and involve others as the process progresses. Since SNA does not analyze the process type, only agent relations, applying it to a procure-to-pay process (a highly structured process type) is useless and can damage the interpretation of results, because in this case every agent, every process participant, receives and processes information basically the same way to achieve the same outcome every single day. Maybe this is the reason why SNA is not yet taken more seriously: these days the process is all about social interaction, and it cannot be analyzed naively anymore, given the dispersion, complexity and interdependence of relationships. The same reasoning can also be applied to IT requirements elicitation or IT system operation, where it allows understanding community interaction in order to support emerging and unique processes under a techno-social systems approach [7].

Social Network Analysis of IT

References:

[1] Design and Diagnosis for Sustainable Organizations – José Pérez Ríos – Springer – ISBN 9783642223174
[2] Large Scale Structure and Dynamics of Complex Networks – Guido Caldarelli; Alessandro Vespignani – World Scientific Publishing – ISBN 9789812706645
[3] Enterprise Ontology – Jan Dietz – Springer – ISBN 3540291695
[4] Complex Adaptive Systems Modeling: A Multidisciplinary Roadmap – Muaz A. Niazi
[5] Brain of the Firm – Stafford Beer – John Wiley & Sons – ISBN 047194839X
[6] Discovering Sets of Key Players in Social Networks – Daniel Ortiz-Arroyo – Springer, 2010
[7] Modeling Organizational Information Systems Using “Complex Networks” Concepts – José L.R. Sousa; Ricardo J. Machado; J.F.F. Mendes – IEEE Computer Society, 2012 – ISBN 978-0-7695-4777

 

Social Network Analysis – part one – the importance of God in complexity

In the previous article, about A Social Platform Definition, I presented a framework describing the elements of such a platform. In the following articles I will expand on each of the layers; this one is dedicated to the Search and Analysis component.

Before we dig into the component’s content, I would like to provide some background about its significance.

An important introduction to Social Network Analysis

Last week, I had a meeting with a college headmaster to figure out whether there was alignment between my expectations and values and the headmaster’s regarding how students will be prepared for the forthcoming decades, taking into consideration the shift we are facing in work patterns, information overload and technology disruption.

The institution is Catholic oriented and has strong roots in the Catholic Church. Let me say that I do not consider myself Catholic by the book definition, but I am probably more catholic than others who go to church every day and have no ethics and values. This means I did not choose to evaluate the institution because it is linked with my religious beliefs, but because it is the best institution according to the evaluation program created by the Portuguese Government some years ago.

During the interaction with the headmaster (a religious person), we talked about two vectors I introduced into the conversation: values, and student preparation for the forthcoming decades (how we prepare people to interpret and act on information, and how they improve reasoning in the knowledge era). When talking about values, the headmaster introduced an amazing characteristic from the human point of view (sorry for the religious background I am putting into the discussion, but I consider it worthwhile for the sake of clarification about social network analysis).

God created humans as single and unique entities. There are no equal human beings (not even identical twins), and God created animals and all the other living organisms differently; they belong to a system (call it planet Earth, which in turn belongs to another system called the universe) made of diversity in constant balance and adaptation.

This point of view opens up and reinforces the main characteristic of us humans: we belong to families, communities, organizations, arrangements that are part of a super-system called the universe, whose foundations rest on diversity and complexity, not on standardization. Somehow, we keep pushing into an ordered regime because it is much simpler to understand concepts, interactions and our own existence in a controlled manner rather than in a complex one.

The world is complex and we cannot change that, as much as we would like to

Ashby’s law teaches us that any system must match the complexity of its elements in an active and adaptive way to survive and prosper.

In addition, Ashby pointed out another important conclusion: any attempt to limit part of the variety that constitutes the system (because humans consider it noise) will cause the system to lose the capacity to adapt and lead it into implosion. This is reflected in the way some business processes cannot respond to exception handling, because the misguided adaptation consists of fighting against the process model rather than adapting to changing execution conditions. If we consider a different organizational layer, like strategy management, think of when external signs are ignored, which can lead the organization to bankruptcy or financial loss.

In the social era we are being misled about what Social Network Analysis is. One of the reasons is semantics: the meaning of Social is broadly understood as connected people, but a Social Network is much more than that. In very general terms, a Social Network can be described as a graph whose nodes (vertices) identify the elements of the system, and whose set of connecting links (edges) represents the presence of a relation or interaction among these elements. With such a high level of generality, it is easy to perceive that a wide array of systems can be approached within the framework of network theory [1].

Social Networks can be made of Organizational Units, Business Units, Roles and Functions, Individuals, Data, Technology consumption (what part of an IT solution is used), Technology interaction (how IT solutions communicate), Business Processes, Traffic, Biology, Physics (these last two categories lend many of their properties to business analysis), etc. A minimal sketch of such a heterogeneous network follows.
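A minimal sketch, with illustrative node and relation names, of such a heterogeneous network: any of these elements, not only people, can be a node.

```python
import networkx as nx

G = nx.Graph()
G.add_node("Billing",     kind="business process")
G.add_node("CRM",         kind="IT system")
G.add_node("Back Office", kind="organizational unit")
G.add_node("Maria",       kind="individual")

# Relations between heterogeneous elements, not only person-to-person.
G.add_edge("Maria", "Back Office", relation="member of")
G.add_edge("Maria", "CRM",         relation="uses")
G.add_edge("CRM",   "Billing",     relation="supports")

for node, data in G.nodes(data=True):
    print(node, "-", data["kind"])
```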

All networks are self-organizing systems, but there are important patterns that can be identified anywhere in the self-organization. Despite the randomness, patterns are critical for humans to understand how data can be transformed into information, which is ultimately transformed into the knowledge used to understand the behavior of such networks (see the note below).

Self-organization refers to the fact that a system’s structure or organization appears without explicit control or constraints from outside the system. In other words, the organization is intrinsic to the self-organizing system and results from internal constraints or mechanisms, due to local interactions between its components [2] (which can be put on top of a business process). These interactions are often indirect, thanks to the environment. The system dynamics also modifies its environment, and the modifications of the external environment influence the system in turn, but without disturbing the internal mechanisms leading to organization [2] (think, for example, of social interaction with customers that changes the course of a business process, or events during product research and development that alter characteristics and features). The system evolves dynamically either in time or space; it can maintain a stable form or can show transient phenomena. In fact, from these interactions, emergent properties appear, transcending the properties of all the individual sub-units of the system [2] (and these emergent properties are the ones that can be understood using a combined set of discovery techniques like process mining, social network analysis and data mining).

I tend to agree with the argument that looking for patterns in a complex landscape is a waste of time, in the sense that in complex domains any attempt to take a snapshot is a distorted version of reality. Nevertheless, the objective of pattern discovery and understanding is not to predict behavior but to infer trends, or in Jason Silva’s words, “to understand is to perceive patterns” (http://vimeo.com/34182381).

The objective of Social Network Analysis is not to predict outcomes, but to understand, to construct knowledge around emergence, self-organization and adaptation in scenarios like decision making or distributed systems, which are becoming real enterprise challenges as business complexity and interactions grow exponentially.

A huge amount of data is being recorded today (see image below), which allows us to discover and analyse complex interactions. The argument that the data does not exist and that it cannot be done only fits categories like airport security information, which typically relies on paper.

The Internet of Things – new infographics – Source: http://blog.bosch-si.com/the-internet-of-things-new-infographics/#more-6995

In part two, I will explore techniques to analyze social networks.

Note:
In Fast Company’s article “IBM’s Watson Is Learning Its Way To Saving Lives” it is said that “Watson is poised to change the way human beings make decisions about medicine, finance, and work” […] “They believed Watson could help doctors make diagnoses and, even more important, select treatments”. I argue that IT can help humans process and present data so that humans make better decisions. Last weekend, a family member spent a day at a hospital undergoing tests for what could have been a heart attack. The diagnosis was automatic: a one-minute electrocardiogram (considered insufficient by experts), combined among other things with a measurement of troponin levels (a diagnostic marker for various heart disorders). A correlation was found between the results, and the family member was told a cardiologist should see him immediately. When the cardiologist looked at the results, he said there was no correlation at all: the electrocardiogram results were insufficient, and the troponin level was 1/100 of the danger threshold and unlikely to rise suddenly. In the end the diagnosis was wrong, and the cause of the sickness was the nervous system. Evidence like this should make us think, as Einstein said: “Information is not knowledge, the only source of knowledge is experience”; I would add that information cannot be stored.

References:
[1] Preliminaries and Basic Definitions in Network Theory – Guido Caldarelli; Alessandro Vespignani – Large Scale Structure and Dynamics of Complex Networks: From Information Technology to Finance and Natural Science – World Scientific Publishing – ISBN 978-9812706645

[2] Self-Organisation: Paradigms and Applications – Giovanna Di Marzo Serugendo; Noria Foukia; Salima Hassas; Anthony Karageorgos; Soraya Kouadri Mostéfaoui; Omer F. Rana; Mihaela Ulieru; Paul Valckenaers; Chris Van Aart – Engineering Self-Organising Systems – Springer – ISBN 3-540-21201-9