Red Queen or Dystopia? The Sorry State of Cybersecurity

Warning: if you like a manageable inbox, then whatever you do, don’t get a press pass to the RSA Conference. I’ve had such passes to conferences before, even quite large ones, but nothing prepared me for the onslaught of PR pitches for this venerable enterprise security shindig.

To make matters worse, all the pitches sounded alike. Apparently there are only about a dozen or so buzzwords in the security industry, and it’s the role of each vendor – and especially their PR wonks – to put said buzzwords into a unique order, thus differentiating their products from the hundreds of other widgets and gewgaws looking to rise above the noise.

The clamor surrounding enterprise cybersecurity is to be expected, of course, with all the breaches – ahem, “incidents” – over the last year or so. Home Depot. Target. Anthem. The list goes on and on. And with breaches come enterprise dollars, frantically swirling over the proverbial barn door after the horse is long gone, having fallen victim to some central Asian DDoS attack, no doubt.

So in come the big bucks, driving the cybersecurity market into a frenzy. Which would be all fine and good, if anyone had a freaking clue how to get a handle on all those incidents. Which, apparently, no one really does.

Tools, Tools Everywhere

One advantage to receiving over 200 PR pitches is that I had the luxury of being picky. So I filtered for the companies most interesting from my perspective of agile digital transformation, and then only set up calls with CEOs or other senior executives. In the end I met with about 20 vendors during my 2½ days at RSA.

Most of them were tools vendors – a proportion representative of the conference at large. And yes, the tools are getting better, and some of the vendors I interviewed had some admittedly cool gear (see my recent Forbes article for a summary).

And yet, none of the vendors had the nerve to say they would stop every attack – or even uncover every attack, for the ones that focused on after-the-fact analysis. Furthermore, assembling all the tools wouldn’t stop the hackers either. As they say, there’s no such thing as perfect security.

The bottom line: the most complete suite of the best cybersecurity tools on the planet would do little to stop the determined miscreant, since all tools have blind spots, and blind spots are just what the bad guys go after.

In other words, ask a security tool vendor what they do, and they’ll be only too happy to go on and on about their features and problems solved. Then ask them what they don’t do, and they’ll look at you funny and basically say, well, everything else. It’s the everything else that presents an engraved invitation to the bad guys.

At this point you might be tempted to throw up your hands and wonder whether there’s any point in investing in any security tools at all. Forgoing them would save you a lot of money to be sure. Well, sorry to disappoint. You still need to buy the tools. A lot of them. And you need to set them up properly. Which probably means hiring some expensive and possibly nonexistent security experts. But even then you’ll still be vulnerable.

Tools, you see, don’t stop hackers. At best they deter them. Deterrence is the name of the game at RSA. After all, deterrence is why you lock your front door. Everybody knows a determined burglar can still break your door down. But if you lock your door, then the burglar will likely move on to the next house, since those bozos next door left their door unlocked and a pile of newspapers on their lawn.

Deterrence does work in many cases. After all, hackers are inherently lazy. They’re looking for the easy way to the treasure. If they poke around and find some hacks are difficult and others are easy, they’ll go for the easy ones every time.

That is, unless the treasure you’re protecting is exceptionally valuable to them. Burglars still break into highly secure bank vaults on occasion, after all, even though there are plenty of easier targets to be had.

But there’s an even more insidious problem with the deterrence value proposition. If everybody locks their doors, then yes, you still need to lock yours, but no, locking them no longer reduces the odds that a burglar will finger your house over someone else’s.

Hence the true value prop for nearly all the security tools on the market: “we can’t stop the bad guys, but we can convince them to hack someone else. Maybe. Until everybody has the same gear you do. Then you still need to buy our stuff, but it won’t do you any good.” Somehow I’m not getting any PR warm and fuzzies anymore.

OK, So Forget the Tools. Now What?

People, process, and technology, folks – the mantra of every consultant out there, and plenty of them were touting their wares at RSA. If technology won’t solve our cybersecurity crisis, then we’d better figure out the people and process side of the story, or we’re toast.

Only we’ve long since run out of people – that is, people who really know their cybersecurity stuff. After all, a solitary hacker only needs to find a single vulnerability, but each enterprise needs to deal with all the vulnerabilities, and thus needs a veritable army of highly qualified, expensive anti-hackers on staff.

As a result, the ratio of available bad-guy-security-experts to good-guy-security-experts is appallingly skewed, and is only getting worse. And that’s not even taking into account the high-reward, low-risk lifestyle of the professional hacker, sucking the best and brightest of the enterprise security crowd over to the Dark Side.

That leaves process. Only one problem: the process part of cybersecurity is perhaps the most appalling Achilles heel of every enterprise, because that’s where social engineering fits in. All it takes is one low-level sysadmin clicking that malware link in that spoof IRS email to hand the keys to the kingdom to the North Koreans.

How do we fix that click-the-bad-email process? Training? Good luck with that.

All That Remains is to Clean Up the Mess

There are two kinds of enterprises in today’s world: the ones that know they’ve been hacked, and the ones that don’t know they’ve been hacked – but hacked they are. To make matters worse, hackers are getting better and better at hiding their tracks.

There’s a good chance that unbeknownst to you, malefactors have long since infiltrated your network, and may have been siphoning off your valuable data for months now. The hackers might as well be measles viruses at an anti-vaxxer convention.

It’s no wonder, therefore, that so many of the products at RSA are more mops than locks – more for cleaning up the mess (or for finding it in the first place) than for prevention of attacks.

An entire category of tools focuses on detecting the traces the hackers leave behind, in hopes either of stopping them before they get what they want, or at the very least, collecting forensic information to throw them in jail. Eventually. Maybe.

These hacker detection tools, however, face a limitation of their own: the level of noise on the typical enterprise network. After all, the good guys are monkeying with things on an ongoing basis – apps are getting updated, software is getting patched, and networks are getting reconfigured all the time.

The detection tools have to spot the hacker activity above all this noise. All the bad guys have to do to avoid detection, therefore, is to operate below the noise level. What do you want to bet they’re working on that right now?
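
To make the noise problem concrete, here is a minimal sketch in Python – illustrative only, with made-up numbers – of the thresholding dilemma every detection tool faces: alert on activity that rises above a noisy baseline, and accept that anything below the threshold sails through.

    import statistics

    # Hypothetical hourly counts of configuration changes on one network
    # segment. Legitimate ops work (patches, redeploys) keeps the baseline noisy.
    baseline = [42, 57, 39, 61, 48, 55, 44, 60, 51, 47]

    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    threshold = mean + 3 * stdev  # a common, naive alerting rule

    def is_flagged(event_count: int) -> bool:
        """Alert only when activity rises above the noise floor."""
        return event_count > threshold

    print(is_flagged(95))  # True: a clumsy, noisy attack gets caught
    print(is_flagged(58))  # False: an attacker operating below the noise
                           # level is indistinguishable from routine ops work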

The Intellyx Take: Where the Money Is

If you really want to make money in the cybersecurity arena, the second-most lucrative corner of the market is incident response (the first being hacking, of course). All the big consulting firms and SIs have their incident response (IR) teams. If you have a breach, who ya gonna call? Now you know.

IR includes identifying the damage done, shoring up defenses to keep the attack from happening again, and then supporting the investigation into the crime. IR teams work with the appropriate law enforcement agencies to gather evidence usable at trial.

IR also includes remediation – for example, sending letters to all your customers informing them that whoops, sorry, you’ve let their deepest darkest secrets escape to some unknown foreign hacker, but here’s some cheap-ass credit monitoring for your troubles, and oh yes, please don’t sue us.

However, if you think the law enforcement angle is going to stymie – or even deter – that many hackers, well, welcome to the 21st century. Cybercriminals almost never get caught. And it’s getting easier and easier to become a hacker.

Hacking tools are free and plentiful. There are plenty of hacker communities out there that will get the most ignorant n00b up to speed quickly. And there’s nothing on the horizon that promises to turn the tide.

Welcome to your dystopian nightmare.


The Three Dimensions of Digital Diversity

Enterprise IT shops have long struggled with the dual challenges of homogeneity and heterogeneity. Homogeneous environments clearly had appeal: single-vendor shops would gain the benefits of working with one point of contact, and perhaps the various applications and infrastructure components would work together as advertised – or perhaps not.

But no one vendor ever had the best of everything, a dismaying fact that led to best of breed strategies: select the app or tool in each category that best met your needs, even though over time, the end result was inevitably a complex hodgepodge. And if something didn’t work? All you’d get from the vendors would be fingerpointing.

Back and forth the CIO would go, trying to meet the diverse needs of various lines of business while still struggling to get everything to work together. Some would place bets on single vendors, only to live to regret their decision as the inevitable weak spots in their chosen product line came to the fore.

The end result of this dance: a grudging acceptance that heterogeneity was a necessary evil. Necessary to be sure, as the business demanded it – but also unquestionably evil, as it left the IT shop as a rats’ nest of technical debt and architectural regret – the proverbial money pit that has dogged CIOs for years.

The Challenge of Digital Diversity

Today, such enterprises are undergoing digital transformations, realigning their technology efforts with ever-changing customer preferences and behavior. For their part, today’s customers are demanding mobile-first, omnichannel interactions with the companies they do business with – and the IT shop as well as everyone else in the organization has to step up to the plate.

And yet, in this new digital world the very notion of an application itself is undergoing a transformation. Today’s applications have many different components, from mobile apps to web plugins, tags, and services, to components running in the cloud, to back-end legacy applications running on-premise. And everything has to work together at speed.

In other words, today’s digital application is inherently heterogeneous. There’s simply no way to get all the moving parts for a modern enterprise digital app from one vendor.

However, this fact doesn’t mean that we’re stuck in the old days of heterogeneous IT, where we inevitably ended up with a rats’ nest of complexity. We’ve been down that road before, after all, and we don’t want to hazard it again.

Fortunately, we have learned many vital lessons over the years. We now understand the importance of properly governed APIs. We’ve learned the best lessons of SOA and brought them to the cloud. And we’ve largely moved past the days of proprietary, fixed data schemas – although admittedly there’s still more work to do in all these areas.

If we take all these lessons to heart, then the heterogeneity of today’s digital era doesn’t have to be the evil agility-killer of days of yore. Instead, it can actually be a source of strength – which is why I use the phrase digital diversity. Just as the diversity of people is a strength of our communities and companies rather than a weakness, so too the diversity of technologies that go into today’s enterprise digital applications.

The Three Dimensions of Digital Diversity

Digital diversity is here today whether we like it or not. And even though we have many lessons from the last twenty years to help us deal with such diversity, the evils of heterogeneity are right under the surface. We continually face the risk of falling into old patterns, thus ending up with an intractable digital mess on our hands.

Understanding the characteristics of digital diversity, therefore, is especially important to ensuring such diversity is for the good and not evil. To this end, let’s break down the digital app context into three dimensions to provide greater clarity into the challenges – and advantages – of digital diversity.

Dimension #1: Front to back. The front office is where the customer lives. It’s the focus of the marketing department and the user experience folks. The back office is where the DevOps effort and enterprise systems of record belong. In the middle is the cloud, middleware, and everything else necessary to connect the dots between front and back.

I discussed the different contexts between front and back in a recent Cortex newsletter, where I contrasted the digital revolution (front to back) to the DevOps revolution (back to front). In this dimension, the greatest challenge is building a seamless, high performance, end-to-end digital experience.

Dimension #2: Breadth of interaction. This dimension reflects the diversity of customer touchpoints and form factors – smartphone, tablet, laptop, digital television, etc., only now with the addition of a wide range of Internet of Things (IoT) touchpoints. The breadth of interaction dimension is also where omnichannel strategies live, as they seek to combine multiple marketing channels into a unified, customer-driven experience.

From the technology perspective, breadth of interaction includes decisions about iOS/Android/mobile web, as well as Linux vs. Windows or even AWS vs. Azure vs. other cloud choices. You might even consider the NoSQL vs. relational decision to fall under the breadth of interaction dimension, especially if it impacts the customer experience. In other words, breadth of interaction means selecting the right tool for the job.

Dimension #3: Depth of community. The digital story is never a one-company or one-vendor story. There are always multiple participants. Any modern enterprise web page contains numerous third-party plugins and services. Any app with social functionality brings together communities of people using different technologies. And the ecosystem of mobile apps themselves leverages an extensive network of social applications and protocols.

Partner networks of all shapes and sizes fall under the depth of community dimension, from tightly knit supply chains to vendor OEM relationships to today’s cloud-infused managed service provider (MSP) business models. Any open source community drives this dimension forward as well.

The bottom line with the depth of community dimension is varying levels of control. From your relationship with your cloud provider to participation in open source communities, enterprises no longer have the tight-fisted control over their IT environments they did in the old days. But remember, with such control came the evils of heterogeneity. I’ll take digital diversity over the old rats’ nests any time.

The Intellyx Take: The Center of Digital Excellence in Action

In a recent article for Wired I wrote about the Center of Digital Excellence (CODE) as a way for enterprise architects to reinvent themselves for the digital era. The three dimensions of digital diversity are a good place to start. After all, taking a complex problem and breaking it up into simpler elements is the EA’s stock in trade.

You might even think of the three dimensions as a partial replacement for the now obsolete Zachman Framework. Instead of trying to shoehorn our enterprise into arbitrary who/what/when/where/why questions, we now have three dimensions that represent the current challenges that any digital effort faces.

There is more to your architecture than the three dimensions, of course, as my writing on Agile Architecture will attest. But for organizations that are struggling with the diversity of their digital efforts, the three dimensions should provide an organizing principle that will help them move beyond the homogeneous/heterogeneous dichotomy that has burdened enterprise IT for generations. It’s about time.

Image credit: Alex M.

Secondary Digital Effects of Social Media

For those of us old timers who rode the dot.com rollercoaster, an era some people have retroactively labeled Web 1.0, today’s notion of digital offers up some serious déjà vu.

Just as digital does today, the web represented a new way for companies to connect with their customers, as well as a call for a better connection between customer-facing groups in the enterprise and the technology back end.

Of course, there are important differences between digital today and the good old Web 1.0 days – most notably the rapid global ubiquity of mobile technologies plus the penetration of social media into so many aspects of our day-to-day lives.

For digital professionals, social media obviously present numerous channels for interacting with customers. It’s no wonder that Facebook, Twitter, Instagram, and all the others are top of mind, especially for digital marketers.

However, social media have affected all of us in subtle ways beyond the obvious – and the effects are particularly important to digital marketing. Each of these secondary effects makes sense in and of itself, but putting them all together in one place reveals some important lessons.

The Facebook Effect

The World Wide Web was much simpler back in the day. You built a web site and published it on the web. Everybody who visited your creation saw the same site. If they wanted to load different content, they would click a link, and the page would refresh accordingly. Those were the days!

Then along came Facebook. Now, the notion of one page that looks the same for everyone is suddenly a thing of the past. Everybody has their own personal site at www.facebook.com – billions of them. And sure, you can click links, but even if you don’t, the content updates automatically.

In other words, Facebook is fully personalized and asynchronous – and because Facebook is so pervasive, now we’ve all come to expect the same personalization and asynchrony from every other site. Whether we like or use Facebook or not, it has forever spoiled us.

Personalization has been with us to some extent since the Web 1.0 days – but not like Facebook. Now, everybody’s Facebook page has unique content – different from everybody else’s, and different from moment to moment. It’s no wonder Facebook (or other sites that follow the same pattern) are so addictive.

The Amazon Effect

The Amazon.com ecommerce site has similarly spoiled us – although in this case, the Amazon effect is most dramatic in the business-to-business (B2B) world. It doesn’t matter if you’re shopping for electrical equipment, automotive parts, or aircraft – now everybody expects the ease of use, performance, and features of Amazon.

Recommendation engines? Check. Reviews? Check. Communities of sellers? Check. Similar or alternative products? Check. In fact, these “social metadata” are every bit as important as your product information itself – if not more so.

In fact, the Amazon effect is an important influence over the “bring your own device” (BYOD) trend. Because people want to use the same devices at home and at work, they want the same types of experiences in both environments. If purchasing/procurement is part of your job, therefore, it only figures that you would want an Amazon-like experience, regardless of the device you use.

B2B marketers must be especially creative to take advantage of the Amazon effect. On the consumer side, there is only one Amazon, but in the commercial and industrial worlds, there are numerous specialized marketplaces for everything from machine parts to cloud computing services.

Not only do you need to make sure your products and services are listed in all relevant marketplaces, but you must also ensure that you have the appropriate social metadata around those products, just as Amazon does.

The Wikipedia Effect

The Wikipedia effect is also most dramatic in the B2B world, as well as for B2C for companies whose products or services are complicated – for example, fishing gear or high-end stereo equipment.

If you’re selling such gear, or if you’re providing just about any B2B product or service, you might spend hundreds of person-hours and piles of money to build out the informational parts of your web site. Page after page of spec sheets, instructions, and detailed solutions that you’re hoping customers will identify with and leverage to make their purchasing decisions.

Only where do they go to learn about what you have to offer? Wikipedia, of course. Everybody knows Wikipedia isn’t perfect, but it’s more likely to have an impartial perspective on your product category than your web site.

Many Wikipedia pages for product categories conveniently list vendors who sell such products – so clearly you need to be on such lists. But don’t go overboard with your Wikipedia editing, as the secret Wikipedia Gestapo frowns upon heavy-handed commercial content.

Keep in mind that just as the Facebook and Amazon effects are not simply about Facebook and Amazon, the Wikipedia effect similarly goes well beyond Wikipedia. Fundamentally, it doesn’t matter what you’re selling or how complete and detailed your own web site is: your prospects and customers will do most of their research about your products on other sites.

In fact, you could think of the B2C version of the Wikipedia effect as the Yelp effect, as an increasing number of consumers use crowdsourced rating sites like Yelp for their product information.

For marketers, the Wikipedia effect is downright chilling. How can you expect to communicate your value proposition to your audience of potential customers if they’re more than likely to simply ignore you, instead choosing to get the information they need to make purchasing decisions from third-party sites you have little to no control over?

Sure, you can frantically run from one third-party site to another, editing a Wikipedia entry here and responding to a Yelp comment there. In the end, however, you’ll quickly realize that there are simply too many places for people to go for you to have much control over the information they find.

The Intellyx Take

The three examples of social media secondary effects above should get you thinking: what other such effects are out there? Clearly, the immediacy of Twitter, the ephemeral nature of Snapchat, or the anonymity of Yik Yak promise numerous such effects. What about Instagram, Pinterest, or the hundreds of others?

For digital marketers who still believe social media are little more than a new way to issue a press release, such secondary effects are yet another wakeup call that there’s more to digital than meets the eye. Even for the savviest of digital professionals, however, there are still subtleties to the practice of digital that bear careful attention.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers.

Bitcoin: The Queen of the Cyberwar Chessboard

In last week’s article for Forbes, I questioned the purpose of the radically innovative cybercurrency Bitcoin. Libertarians, criminals, speculators, and consumers all have an angle – but the business models focusing on each of these constituencies have serious flaws.

There is one purpose for Bitcoin I didn’t discuss in the Forbes article, partly because space didn’t allow, but also because it requires a higher level of speculation than Forbes may be comfortable with. The Intellyx Cortex, however, has no such constraints.

The final, and perhaps most significant, purpose for Bitcoin is cyberwarfare.

The Ongoing Cyberwar

Inside a secret Chinese Bitcoin mine

The role Bitcoin might play in a cyberwar is mostly speculative, but the ongoing cyberwar is all too real – and has been going on for years. I’ve written about this cyberwar before, both for DevX and ZapThink – and the cyberwar even made an appearance on ZapThink’s Enterprise 2020 poster.

And while terrorist organizations like ISIS command the headlines, the cyberwar has for the most part avoided the public consciousness. The reasons for this lack of attention are numerous: an unclear adversary, largely secret and technically complex attacks and defensive actions, and the simple fact that it’s been going on for so long.

But cyberwar there is. The latest skirmish? China’s hack of the US Government, stealing personal information about millions of government employees.

But simply saying our adversary is “China” is misleading. The Chinese government claims no responsibility, for what that’s worth. In reality, the fact that the attacks originated in China might simply mean that the bad guys are organized criminals separate from the government, or perhaps disorganized criminals with no particular political agenda. It’s difficult to tell.

And then there is the question of the goals of the attack. Personal information about government employees is quite different from, say, sensitive military information. Why bother?

Among the best theories I’ve seen is that the malefactors will use the information for spear phishing attacks – fooling other government employees into believing an email is coming from a colleague, in order to introduce malware onto government networks. But the fact is, we really don’t know what the hackers are up to.

Bitcoin’s Cyberweakness

And that brings us to Bitcoin. At the center of the Bitcoin industry are miners – people with powerful computers that manage the Bitcoin infrastructure and in return, generate Bitcoin rewards for themselves.

In the early days of Bitcoin, most any computer would do as a mining computer. Today, however, for Bitcoin mining to be cost-effective, miners must assemble large farms of specialized hardware in locations with low labor and electricity costs. A great place for such mines? You guessed it: China.

While China is an economical place to build toys and iPhones, Bitcoins are different, because any weakness in the Bitcoin infrastructure might impact the global economy.

The Chinese (as well as everyone else) are quite aware that perhaps the most serious weakness of the Bitcoin infrastructure is the 51% problem: if anyone controls 51% of the Bitcoin mining computers, then they can commandeer the entire Bitcoin infrastructure.

Now, before you sell all your Bitcoin in a panic, rest assured that Chinese mines account for substantially less than 51% of the total at this time. But don’t sit too comfortably, as miners group their mines into pools for greater efficiency – and if one of these pools approaches the 51% mark, then it too presents a risk to the entire system.
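
For a rough sense of why the 51% line matters, here is a minimal Python sketch of the gambler’s-ruin arithmetic behind the attack – the simplified form of the analysis in the original Bitcoin whitepaper, ignoring the Poisson refinement: the probability that an attacker controlling a fraction q of the mining power ever overtakes the honest chain from z blocks behind.

    def catch_up_probability(q: float, z: int) -> float:
        """Probability that an attacker with hash share q ever overtakes
        the honest chain from z blocks behind (gambler's ruin). At
        q >= 0.5, success is guaranteed."""
        if q >= 0.5:
            return 1.0
        p = 1.0 - q  # the honest miners' share
        return (q / p) ** z

    for q in (0.10, 0.30, 0.45, 0.51):
        print(f"q={q:.2f}: {catch_up_probability(q, z=6):.6f}")

    # The jump from vanishingly unlikely to certain as q crosses 0.5 is
    # the whole game: control 51% of the mining power, and the
    # infrastructure is yours.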

Assembling such dangerous pools, however, doesn’t really explain China’s interest in Bitcoin, at least not directly. Today, Bitcoin mining operations are ostensibly profit-generating businesses – and thus miners are simply entrepreneurs, regardless of whether they’re located in Dalian, China or Dayton, Ohio. And furthermore, taking over the Bitcoin infrastructure would hardly impact the global economy in any substantial measure.

Today.

Playing the Cyberwar Chess Game

Here’s where the speculation comes in. Hypothetically, if a nation state had a plan to either topple or take control of the global economy, and had the resources, patience, and technical capability to mount a long-term cyberwar with that goal in mind, how would they go about it?

Bitcoin would be a good place to start. If they could simultaneously support the spread of Bitcoin, thus making it a greater part of the overall global economy, while at the same time establishing control over enough of the Bitcoin mines to make a sophisticated play for the 51% target at some point in the future, then they may be able to wrest control over the global economy from the likes of the International Monetary Fund, the World Bank, the US Federal Reserve, and the other establishment institutions that have controlled it since World War II.

The US-led power base in charge of the global economy would, of course, have something to say about a theoretical Bitcoin-driven takeover, and at best would be putting in place safeguards, both public and covert, to prevent such a calamity.

However, because of its cyberspying ability, our hypothetical enemy would likely know of these efforts to avert the economic takeover. What, therefore, can our enemy do today to lay the groundwork to neutralize US-led efforts to prevent a Bitcoin-driven takeover of the world economy?

Mount spear phishing attacks with US government workers’ personal information, perhaps?

The Intellyx Take

The theory laid out above is purely speculative, of course. I have no more information than anybody else. Furthermore, I am loath to single out China as a potential perpetrator. Today’s cyberwar is being fought by a complex, anonymous coalition of parties, consisting potentially of multiple governments, organized crime elements, or others. China may be the largest, but is by no means the only participant.

Furthermore, any connection between the recent theft of government employees’ information and Bitcoin is tenuous at best. The point to this article is less to draw a realistic connection between the two, and more to illustrate how cyberwars work.

In fact, the ongoing cyberwar is subtle, nefarious, and complex. The aggressors and their motives may be difficult to determine. And furthermore, the tactics in a cyberwar may be akin to moves in a chess game, where the strategy centers on game play several moves ahead.

But make no mistake, in this ongoing cyberwar chess game, Bitcoin is no mere pawn. Bitcoin is the queen in the ranks of the cyberwarrior – potentially the most powerful piece on the board.

Play the game carefully, people.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: confidential.

DevOps Insights into Conway’s Law

Both digital transformation and devops are organizational and cultural transformations more so than they are technology changes – although in both cases, technology plays a large part in driving the organizational change necessary to achieve the business value for either effort. Just how the organizational changes and technology changes work together, however, is a difficult question.

Fortunately for us, this question is an old one – 47 years old, in fact. In a 1968 paper, computer scientist Melvin Conway wrote, “Any organization that designs a system will inevitably produce a design whose structure is a copy of the organization’s communication structure.”

Melvin Conway, circa 1968 and more recently.

Conway produced no evidence of this statement, and his paper was initially rejected as a result – but to this day, we refer to this statement as Conway’s Law.

Law, however, is decidedly an overstatement. Observation is actually more accurate. In fact, the Wikipedia page for Conway’s Law states that “Although sometimes construed as humorous, Conway’s law was intended as a valid sociological observation.”

Whether the statement be humorous observation or law, however, today Conway’s Law plays a central role in our efforts to break down organizational silos in order to improve business velocity and better meet the needs of customers – the purported goals of devops and digital transformation, respectively.

For Conway’s Law to be a useful tool, however, we need a better causal story. Can changing our organizational structures impact our technology? Or more importantly, how does technology impact our organizational structures? And why?

A Brief History of Conway’s Law

Melvin Conway was an early computer scientist whose most interesting accomplishment other than the law itself may have been his part in creating the compiler for the original 1984 version of Mac Pascal.

His eponymous law, however, apparently languished in obscurity until the early 1990s, when open source guru Eric Raymond included Conway’s Law in his Jargon File. He restated it with the snarky example, “If you have four groups working on a compiler, you’ll get a 4-pass compiler.”

Our story then gets more interesting when the Harvard Business School attempted to put some meat on the bones in a 2008 study. The researchers used an objective measure of the modularity of software to find evidence for Conway’s Law. They concluded:

  • A product’s architecture tends to mirror the structure of the organization within which it is developed.
  • New organizational arrangements can have a distinct impact on the nature of the resulting design, and hence may affect product performance in unintended ways.

It’s important to note two updates to the context of Conway’s Law by the time of the 2008 study: an organization’s “communication structure” becomes the structure of the organization itself, while “system design” becomes “product architecture” – with the restriction to product architecture reflecting the scope of the study, rather than any sort of judgment on the scope of Conway’s Law itself.

What the 2008 study essentially lacks, furthermore, is a discussion of the underlying causal principles behind the law. Instead, it is more or less taken for granted that siloed technology teams will create modular systems, with the modularity aligning to the team structure, simply because that’s the way people behave when they’re in teams.

The reason this question of causality is so important is that it goes to the heart of how we might use Conway’s Law to actually improve things. In particular, will changing organizational structures enable us to build better software?

Jonny Leroy and Matt Simons of ThoughtWorks explored this question when they coined the term “Inverse Conway Maneuver” in a 2010 article in the Cutter IT Journal. They state:

Conway’s Law … can be summarized as “Dysfunctional organizations tend to create dysfunctional applications.” To paraphrase Einstein, you can’t fix a problem from within the same mindset that created it, so it is often worth investigating whether restructuring your organization or team would prevent the new application from displaying all the same structural dysfunctions as the original. In what could be termed an “inverse Conway maneuver,” you may want to begin by breaking down silos that constrain the team’s ability to collaborate effectively.

The ThoughtWorks article takes the context of Conway’s Law from observational to normative: not satisfied simply to reflect on the way things work, they take the important step of opining on how things should – and should not – work. They also intentionally focus on the negative: rather than simply describing how to build good software, they note that Conway’s Law is about building dysfunctional software.

Reversing the Inverse Conway Maneuver

It’s no surprise that ThoughtWorks’ focus is on changing organizational structures in order to build better software – after all, ThoughtWorks is a software development organization, and most modern software development thinking, including both the Agile and Lean movements, focuses on organizational change in furtherance of better software.

For digital transformation efforts, however, we must reverse this discussion. Technology change is driving changing customer preferences and behavior, which in turn are driving organizational change across increasingly software-driven enterprises.

The causality question behind Conway’s Law, therefore, is less about how changing software organizations can lead to better software, and more about how companies can best leverage changing technology in order to transform their organizations.

Hints at how to answer this question surprisingly come from the world of devops – surprising because the focus of devops is ostensibly on building and deploying better software more quickly. Be that as it may, there’s no question that technology change is a primary facilitator and driving force for the devops cultural and organizational shifts.

Connecting the Dots to Conway’s Law

If we didn’t have the cloud computing example of fully automated deployment and operational environments, and if we didn’t have today’s dramatic innovations in continuous development, continuous integration, and continuous delivery tooling, then devops would never have left the whiteboard stage. There’s no question that the devops story is a tale of technology-driven organizational change.

The devops technology landscape, however, doesn’t have an end-to-end, seamless technology story. On the contrary, this landscape is cluttered with dozens of tools, many open source, all of which are in various stages of maturity. Therefore, it’s not readily apparent how to apply Conway’s Law, since we’re trying to leverage diverse toolchains in order to help evolve our organizational structures.

In fact, Conway’s Law describes how such a diverse tooling marketplace came to be in the first place, as open source teams are generally quite modular, and thus will produce modular software.

Once we’ve solved the organizational challenges of digital and devops, breaking down silos in order to deliver customer value at velocity, then we can expect Conway’s Law to kick in again, and predict the rise of end-to-end software solutions as the result of horizontally self-organized teams.

It’s the middle piece, however, we’re struggling with now: how do we leverage a diverse set of disparate technologies to facilitate the cross-cutting reorganization of our businesses, contrary to the observation of Conway’s Law? And do we really think going against Conway’s Law will actually work?

Not to fear. In fact, the technologies that underpin devops aren’t as diverse and disparate as the application of Conway’s Law might suggest. True, the creators of all the tools in our devops or digital tool belts are working mostly independently of each other, a pattern of behavior which in the past has led to incompatible software mishmashes.

Today, however, we’re seeing the rise of what we might call crowdsourced architecture, as all of these teams work within the context of mature communication protocols, RESTful interfaces, and other emerging architectural trends like containerization and microservices.

As a result, mostly independent doesn’t mean completely independent. Instead, we have the loosely connected communication structure that assures us that Conway’s Law is still alive and well. What’s changed is the context for this notion of an organizational communication structure.

Conway was referring to the communication structures of companies or software development organizations or perhaps individual software development teams. Today, however, we’re talking about the broader technology community itself.

The Intellyx Take

There was no way Melvin Conway could have observed such crowdsourced architectural maturity back in 1968, because the telephone and snail mail-based communication structures of the day weren’t conducive to crowdsourcing anything. In contrast, today we have a plethora of tools and processes for facilitating communication and collaboration across traditional projects, teams, and open source efforts.

The end result is essentially a two-level application of Conway’s Law: a collaborative extended community of technologists that creates not simply a collection of disparate tools but rather chainable tools that leverage crowdsourced architectural principles to facilitate a level of coordination and interactivity we’ve never seen before.

This coordinated technology environment, in turn, facilitates the reorganization within companies, as they now have the tools they need to break down organizational silos, and people within those companies self-organize along horizontal lines, connecting customer experience to back-office software development and operations.

Conway’s Law, therefore, does work both ways. Organizational structures impact system design, and system architectures impact organizational structures as well. In the final analysis, however, Conway’s Law remains stubbornly observational. The underlying causal story – why such observations are remarkably universal – remains to be told. Stay tuned!

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Melvin Conway.

Conway’s Law and the Emergence of Business Agility

In my last Cortex newsletter, I discussed the history of Conway’s Law, and took a close look at how this erstwhile law can help us understand the reorganizations and deeper cultural shifts behind devops and digital transformation.

The law – “any organization that designs a system will inevitably produce a design whose structure is a copy of the organization’s communication structure” – is more of an observation of correlations between system designs and communication structures, rather than anything resembling a law.

Nevertheless, in the 47 years since computer scientist Melvin Conway’s observation inadvertently entered the techie lexicon, there have been various attempts at deriving causal principles from the law. And while there are plenty of examples of correlations between our organizations and the software systems they create, the true causal story – the why of such correlations – has largely escaped us.

That is, until now.

Party Like It’s 1967

Remember that Conway came up with his law in 1967, when newfangled software organizational concepts like Agile and Lean were little more than a gleam in his eye. Instead, his context was the traditional command and control lines of communication within a standard hierarchical organization.

Within such hierarchical organizations, the fundamental difference between one twenty-person team and four five-person teams, for example, is the differing decision-making and enforcement structures.

Regardless of whether we have one large or four smaller teams, the resulting systems will clearly follow Conway’s Law – with the one large team creating a single system design, while the four smaller teams will end up producing modular, differing system designs.

When we look at the relationship between our software and our organizational structures within the context of digital and devops transformations, however, the Conway’s Law causal story gets decidedly murkier.

As I explained in the previous Cortex, the diversity within the continuous development, test, integration, and delivery toolchains that devops efforts use would contraindicate the cross-organizational cultural shift that devops represents.

Only when we recognize that the broader open source community is largely responsible for such toolchains can we look to the decision-making structure of that community at large to understand why such toolchains have a chance of working together properly. After all, only when our technology works as an integrated whole will Conway’s Law give us hope that we’ll be able to achieve the cultural change we desperately desire within our own organizations.

So too with digital transformation. If we take too narrow a view, perhaps focusing only on the fact that customers are demanding mobile interactions with companies, then Conway’s Law would suggest that the modularization of our system design around mobile technology will cause us to build mobile teams separate from other teams within the digital initiative.

Taking this modularized approach to digital initiatives, however, is a recipe for failure, as this view shortchanges the true nature of digital transformation. But many enterprises are falling into this trap nevertheless, as Conway’s Law would predict.

Self-Organization: The Missing Piece of the Puzzle

However, customers aren’t simply demanding mobile experiences. In reality, they desire omnichannel experiences – interactions with companies that cut across technology touchpoints and form factors to support coherent, long-term relationships – or customer journeys in digital marketing parlance.

Ideally, such omnichannel customer demand should drive crosscutting technology architectures, which in turn should drive crosscutting digital reorganizations. But if we understand Conway’s Law in terms of traditional command-and-control communication structures, we’ll inevitably end up with siloed technology that reinforces rather than breaks down our siloed organizations.

If we do away with such hierarchical thinking, however, an amazing thing happens. People will organize themselves into teams (if you even want to call them teams). Such self-organization, in fact, is behind the success of open source-driven devops toolchains.

The open source community at large is primarily self-organized, as individuals decide where and how to participate, based upon individual priorities and the priorities of the various projects – not 100%, as sometimes external forces drive the organization of such teams, but the community is sufficiently self-organized for sufficiently integrated toolchains to emerge.

Self-organization is also the key to digital transformation. If people self-organize around customer omnichannel priorities, then they will form cross-organizational communication structures that will drive end-to-end system architectures, as Conway’s Law predicts – but only in the presence of such self-organization.

Self-organization alone, however, is not a panacea. After all, in an environment of highly modular software, people are likely to organize by technical specialty around each package. To complement self-organization, therefore, we must introduce the appropriate constraints.

Such constraints should always include the strategic business priority, and should also include necessary governance and security limitations that any team, self-organized or not, must adhere to.

Perhaps the most important effect of establishing this model of self-organization within the context of external constraints is its fundamental adaptability. The constraints can be as dynamic as the situation requires, and people will naturally organize or reorganize as necessary to comply with those constraints – in marked contrast to the inflexibility of traditional command-and-control organizational structures.

Self-Organization, Emergence, and Complex Adaptive Systems

Self-organization within the context of external constraints isn’t a new idea – how Netflix delivers innovation and resilience is one example, and Zappos’ holacracy is another. But without a solid justification for such cutting-edge organizational models, enterprises rightly see them as little more than experiments.

Conway’s Law helps us move past this experimental context for self-organization. Once we have such inherently adaptable organizational structures, we will necessarily end up with inherently adaptable system architectures – the Agile Architecture I’ve been discussing in my research for years.

But most significantly, the shift from command-and-control to self-organization finally shines a light on the why of Conway’s Law: emergence.

The causal pieces to this story only fall into place once we realize the entire enterprise – people as well as technology – form a complex adaptive system. Depending on the constraints that govern the behavior and interaction of our component subsystems (the humans and their software), different emergent behaviors result – emergent in the formal sense of a property of a complex system that isn’t a property of its subsystems.

Furthermore, the nature of emergence explains Conway’s Law – giving us the answer to our basic why question, just as emergence explains why bees create hexagonal hive structures or why many galaxies are spirals.

And we can rest assured that Conway’s Law applies across the full spectrum of subsystem constraints, with traditional command-and-control at one extreme, and fully self-organizing teams at the other. Change the constraints, change the emergent behaviors.
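
Emergence is easy to demonstrate in miniature. Here is an illustrative Python sketch – my example, not Conway’s – of a one-dimensional cellular automaton in which every cell follows the same trivial local rule, yet changing that rule (the constraint) completely changes the global pattern that emerges.

    def step(cells, rule):
        """Advance an elementary cellular automaton by one generation."""
        n = len(cells)
        new_cells = []
        for i in range(n):
            left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            neighborhood = (left << 2) | (center << 1) | right
            new_cells.append((rule >> neighborhood) & 1)
        return new_cells

    def run(rule, width=64, generations=20):
        cells = [0] * width
        cells[width // 2] = 1  # a single seed cell
        for _ in range(generations):
            print("".join("#" if c else "." for c in cells))
            cells = step(cells, rule)

    run(rule=250)  # one constraint: a simple, predictable pattern emerges
    run(rule=110)  # another constraint: complex, unpredictable structure emerges

Same subsystems, same local interactions – only the constraint changes, and with it the emergent behavior.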

The Intellyx Take: Turning the Agility Dial

In fact, we now have new insight into the usefulness of Conway’s Law, as it helps us understand how to turn the business agility dial on our organizations.

Do we want highly controlled, predictable behavior, well-suited for optimizing traditional business metrics? Turn the dial toward the traditional command-and-control we find in most enterprises today – but don’t be surprised if the systems we end up building are inflexible at best or fully dysfunctional at worst.

On the other hand, if we turn our dial toward self-organization, we’ll end up with technology systems that cut across now-defunct organizational silos, responding to changing business priorities and supporting strategic innovation goals.

A desirable outcome to be sure – but easier said than done, as we must relinquish our preconceptions about how to run a business. Are you ready?

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Erica Joy.

Are Microservices ‘SOA Done Right’?

Given my years at ZapThink, fighting to help architects understand what Service-Oriented Architecture really was and how to get it right, it’s no surprise that many people ask me this question.

If you took my SOA course, you can probably guess my answer: it depends.

Even to this day we don’t have a universally accepted definition of SOA. But even if we did, we’d also have to figure out what “SOA done wrong” means, in order to contrast microservices with it.

However, the question “Are Microservices ‘SOA Done Right’?” itself is a mix of apples and, well, not oranges – more like confusion between apples and best practices for running an orchard.

What we really mean to ask is whether microservices architecture is SOA done right. But then, of course, we’d have to figure out what microservices architecture was. And if you think defining SOA is difficult, pinning down microservices architecture is unquestionably frying pan into fire time.

The last thing I want to do, however, is make this Cortex into a discussion of the definitions of terms. True, defining terms is what architects love to do best – even though they never end up agreeing. It’s no wonder that the collective term for these folks is an argument of architects.

SOA Done Wrong

In my Bloomberg Agile Architecture Certification course I paint the difference between what I like to call first generation and second generation SOA. The first generation centered on the role of the ESB, and sported Web Services as the primary type of service.

It might well be argued that this middleware-heavy, XML-laden approach to SOA was SOA done wrong, and to be sure, many times it was – but then again, on occasion a particularly hardworking and indubitably masochistic architecture team actually got this stuff to work.

Second-generation SOA is REST-based, favoring lighter weight approaches to moving messages around than the heavyweight ESBs that gave SOA a bad rep. Note, however, that the rise of REST-based SOA predates the microservices wave – so only through some convoluted revisionism might we call this approach microservices architecture.

We might therefore also ask whether this second generation, REST-based SOA was SOA done wrong as well. Once again, the answer is sometimes yes, sometimes no.

RESTful interfaces clearly cleaned up a lot of the mess that Web Services left behind. But implementing properly abstracted, governed service interfaces in the absence of traditional middleware often proved to be surprisingly difficult.

ESBs might be SOA crutches to be sure, but don’t forget, the alternative to crutches is often falling on your face.

Reinventing the Service

Fortunately, the microservices story isn’t about ESBs. It’s about services. Starting from the earliest days of first-generation SOA and running throughout SOA’s RESTful days, the word service meant a contracted software interface. In other words, a service abstracted the underlying software rather than being software itself.

Microservices, in contrast, rethink the notion of service altogether. No longer is a service a contracted software interface at all. Instead, a microservice is a unit of execution.

I like to define a microservice as a parsimonious, cohesive unit of execution. By unit of execution I mean that microservices contain everything they need – operating system, platform, framework, runtime, and dependencies – packaged together. And naturally, microservices are fully encapsulated, supporting interactions entirely through their (usually RESTful) APIs.
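
To make the unit-of-execution idea concrete, here is a minimal, illustrative sketch in Python (standard library only, hypothetical resource names) of the kind of small, fully encapsulated service in question. In practice, the whole thing – runtime and dependencies included – would be packaged into a container image.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    INVENTORY = {"widget-1": 42}  # the service's own private state

    class InventoryService(BaseHTTPRequestHandler):
        """A parsimonious, cohesive unit: one resource, one job."""
        def do_GET(self):
            sku = self.path.strip("/").split("/")[-1]
            if sku in INVENTORY:
                body = json.dumps({"sku": sku, "count": INVENTORY[sku]}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Interaction happens entirely through the RESTful API; nothing
        # else is exposed. Try: curl http://localhost:8000/inventory/widget-1
        HTTPServer(("", 8000), InventoryService).serve_forever()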

See my recent BrainBlog post for a deeper discussion of microservices (in particular, what I mean by parsimonious and cohesive) – as well as some insight into how microservices architecture is supposed to work. But even with my thought-provoking connected car scenario from that post, we still have the questions of the day: is such a microservices architecture SOA? And if so, is it SOA done right?

Microservices typically have RESTful interfaces, and we’re likely to have sufficient metadata that will qualify as a service contract. As a result, we end up with the awkward namespace collision that microservices expose services – but aren’t services themselves in the sense of service as a contracted interface.

But that distinction is neither here nor there. Maybe it’s time to update our notion of service to include units of execution. Especially if we’re using containers.

Microservice Architecture as Container-Oriented Architecture

It’s no coincidence, of course, that the definition of microservice makes them particularly well-suited for containers. And while you could certainly implement microservices without containers and vice-versa, there’s no question that the cool kids are combining these two approaches.

If you’ve been following the activity in the container world, you’ll have noticed that quite a bit of thought has been going into the question as to what container-oriented architecture best practices might be. Docker in particular is blazing this trail – but the work isn’t nearly ready for prime time.

Even in its formative state, however, container-oriented architecture doesn’t look much like SOA – done right or not. And yet, the questions of whether microservice architecture and container-oriented architecture will end up being the same thing – or even whether they should be the same thing – still remain to be answered.

Regardless of where you fall on these questions, there are aspects of container-oriented architecture that weren’t part of the SOA story – or at least, the first-generation SOA story: cloud architecture best practices.

What cloud computing brought to the SOA table are the principles of horizontal scalability and elasticity, automated recovery from failure, eventually consistent data (or more precisely, tunable data consistency), and a handful of other now-familiar architectural principles.
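
Of these principles, automated recovery from failure is the easiest to illustrate. Here is a minimal, hypothetical Python sketch of the retry-with-exponential-backoff pattern that cloud architectures lean on, rather than assuming any single call – or any single instance – is reliable.

    import random
    import time

    def call_with_retries(operation, max_attempts=5, base_delay=0.5):
        """Retry a flaky operation with exponential backoff and jitter:
        expect failure, recover automatically."""
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except ConnectionError:
                if attempt == max_attempts:
                    raise  # automated recovery failed; escalate
                delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
                time.sleep(delay)

    def flaky_downstream_service():
        """Stand-in for a call to some unreliable downstream service."""
        if random.random() < 0.5:
            raise ConnectionError("transient failure")
        return "OK"

    print(call_with_retries(flaky_downstream_service))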

And in fact, these cloud principles complement next-generation SOA, just as they form the basis of container-oriented architecture. And since microservices are container-friendly by design, our argument of architects might argue that microservices architecture is SOA plus cloud architecture done right.

Service Composition: The Missing Piece of the Puzzle

One important aspect of the SOA story is missing from this discussion, however: service composition. Central to first-generation SOA was the goal of composing services in order to implement business processes – a goal that led to the BPEL standard for Web Service composition, a debacle that to this day remains notorious for its abject failure.

Bottom line: if you want to talk about SOA done wrong, look no further than how service compositions failed to implement business processes.

As we moved to second-generation, REST-based SOA, the service composition story took an interesting turn, as REST does have an angle here: hypermedia. Roy Fielding’s original vision for REST was as an architectural style for building hypermedia systems – and lo and behold, what is a hypermedia system but an executable composition of RESTful services?

Still with me? No? Well, you’re not alone. Few people understood all this gobbledygook about hypermedia, in particular, REST’s hypermedia constraint – you know, hypermedia as the engine of application state? – so they decided to punt on the whole shebang. REST in practice became little more than an API style, which it remains to this day.

So now we’re discussing microservices architecture – which means we have to ask how best to compose microservices. The jury is still out on this question, but there’s one thing I can say about microservice composition: it must be parsimonious and it must be cohesive.

And given that microservices have RESTful interfaces, the architectural approach for composing them that is both parsimonious and cohesive is to treat microservice compositions as hypermedia systems.
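
Here’s a sketch of what that might look like: a hypermedia-style response whose links drive the composition – hypermedia as the engine of application state. The URLs and link relations are hypothetical.

```python
# Hypothetical hypermedia response from an 'order' microservice. The
# client discovers its next possible actions from the links alone.
order = {
    "id": "order-123",
    "status": "awaiting-payment",
    "_links": {
        "self":    {"href": "https://api.example.com/orders/order-123"},
        "payment": {"href": "https://api.example.com/orders/order-123/payment"},
        "cancel":  {"href": "https://api.example.com/orders/order-123/cancel"},
    },
}

def next_actions(resource):
    # No hardcoded workflow: the composition emerges from the links.
    return [rel for rel in resource["_links"] if rel != "self"]

print(next_actions(order))  # ['payment', 'cancel']
```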

Mark my words: microservices architecture will never be SOA done right unless it means building hypermedia systems of microservices. You heard it here first, folks.

The Intellyx Take: SOA Done Right

SOA has always been a rather loose collection of architectural best practices. And you know what makes a practice a best practice? You try a bunch of things, and the one that sucks the least is the best practice. At least, until a better one comes along.

Now that SOA has a few decades under its belt, the loose collection of best practices we call SOA continues to mature, as new practices gradually supplant earlier contenders. In this bucket we’re now adding various microservices best practices and container best practices – both unquestionably in their “whatever sucks the least” phase.

And we’ve also added cloud best practices and REST best practices to the mix – including the hypermedia best practices that have always been the raison d’être of REST. And to be sure, we have our tried and true SOA practices – not the ones that led us to failure, but the practices that through years of hard work and trial and error finally led us to successful implementations that supported our business agility drivers.

SOA done right, therefore, isn’t a fixed goal to aspire to. It’s a journey of architectural discovery, as we piece together the hard-fought lessons of enterprise system deployments, one practice at a time. And as long as we learn the lessons of the past, we will continue to make progress toward our ultimate business goals – sucking less as we go.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. At the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Aussie~mobs.

Bring the Omnichannel Purchase to the Digital Customer

Notwithstanding my exhortations against digital marketing creepiness, B2C digital marketers love to talk circles around the notion of the purchase. “It’s all about the customer journey!” they proclaim. Focusing on the purchase transaction, it seems, is too mercenary for today’s enlightened digital professional.

And yet, you can’t have a customer journey without a customer. And what turns an ordinary human being into a customer? A purchase.

The point to the customer journey is that the purchase isn’t the end of the relationship between customers and the companies serving them. After-purchase support is important to be sure. But the real value behind the customer journey? Follow-on purchases. Even with lifelong customer relationships, the purchase is where the rubber hits the road.

Marketing, especially in the B2C space, has always been about driving toward the purchase transaction. That’s what marketing funnels are all about, of course. Now with all this talk about customer journeys, perhaps marketers have lost their way. The complexities of today’s omnichannel, digital-infused world seem to have muddied the waters.

Today, marketers don’t want to be too forward with their focus on sealing the deal. Selling stuff is too crass, so let’s focus on building relationships. Yet what do we really mean by building relationships with customers? Selling more stuff, of course!

The Disingenuousness Trap

In fact, the broad ecosystem of customer experience technology is in reality all about driving customers toward the purchase, as the figure below illustrates. Even the customer journey plays its part in driving customers toward future purchases.

[Figure: Purchase diagram]

All too often, however, the result of this complex marketing shell game is a perception of disingenuousness. Consumers, after all, are a rather savvy lot. They know when they’re being sold to, even when the seller in question layers on all the modern digital relationship-building, customer experience hullabaloo.

The problem: successful relationships are by their nature two-way, but selling stuff online has always been one-way.

In particular, if you look at traditional ecommerce, you have several basic elements: Search. Catalog. Product information, perhaps with recommendations and reviews. And then you have the shopping cart and the financial transaction, and behind the scenes come logistics and fulfillment.

In other words, today’s ecommerce brings customers to the purchase transaction. What’s missing from this scenario is bringing the purchase to the customer.

Multichannel vs. Omnichannel

This “bring the customer to the purchase” mentality is central to multichannel marketing. Perhaps you bring the customer to the web site, or maybe to the store, or maybe to the telephone – all separate channels.

This approach is now so firmly entrenched in the way we conduct commerce, we don’t think twice about it. But from the consumer’s perspective, any effort on the part of the merchant to build a relationship comes across as phony.

Bringing purchases to the customer involves a rethink, even within separate channels. For example, the Apple Store doesn’t have cash registers. Instead, sales associates roam the floor with mobile devices, and are able to complete a purchase wherever in the store the customer is standing. In other words, Apple is physically moving the purchase to the customer.

Another single-channel example: the Garmin Vivo wearables site at http://sites.garmin.com/en-US/vivo/ . This web site has a fully featured ecommerce back end, where the customer can drill down into as much detail as they like. But notice the “buy now” buttons on the linked page. They bring the purchase to the customer, so customers don’t have to navigate a labyrinthine ecommerce site if they don’t want to.

Note that neither Apple nor Garmin makes any apologies for the fact that they’re selling stuff. In fact, both make it easier for customers to buy their products. But neither merchant comes across as disingenuous, because the purchase call to action is an explicit aspect of the customer relationship.

Omnichannel without Disingenuousness

While bringing the purchase to the customer is an optional bonus for single channels, taking this approach is absolutely necessary for omnichannel marketing. Take, for example, showrooming, the prototypical omnichannel interaction: a customer walks into the retail store, phone in hand, comparison shopping as they peruse the physical merchandise.

Now it’s up to the sales associate to interact with the customer on their terms, bringing any information or technology to bear to bring the purchase to the customer. If the associate drops the ball, they lose the purchase.

The best retail sales associates have always known when and how to bring purchases to customers in the in-person, retail setting. Now with digital-enabled omnichannel marketing, everyone in the organization from the CMO to the retail associate has to relearn this basic lesson.

In fact, if we extend these single-channel examples to unified omnichannel experiences, we’ll cross over mostly into the realm of fiction.

How about “buy now” buttons on, say, the clothing that television characters are wearing? Press a button on your TV remote and your garment arrives the next day.

Or perhaps a vending machine that cross-sells other merchandise. You pay for your Coke and a cookie in a single transaction at the vending machine, and a sales associate hands you your cookie.

These are both fictitious examples – today. But first, there are no technology limitations preventing such omnichannel purchases, and second, coming up with your own cool examples isn’t that tough. Why can’t you purchase from a digital sign in a mall or an airport? Why can’t you use your phone to purchase an item in a supermarket?

Omnichannel today usually means multichannel with a bit of glue added between channels. Tomorrow, in contrast, I predict entirely new, deeply disruptive omnichannel business models – even more disruptive than Apple Stores.

True, there is still some friction due to the newness of the technology. But the real market friction is a lack of imagination.

The Intellyx Take

Today’s marketing is all about building relationships. But the point to all this relationship-building isn’t to hide the fact you’re trying to make the sale. If you take that approach you’ll simply come off as phony.

Instead, embrace the fact that your customer actually wants to buy something. Make it easy for them. Build your relationship with them around their purchase transactions, because that’s what the relationship is really about. Customers will appreciate your honesty.

The confusing, dynamic, and unquestionably powerful range of digital technologies available to today’s marketer makes it easy to come across as creepy or disingenuous. Don’t fall into those traps. The same technologies also facilitate well-executed omnichannel strategies your customers will love.

And when you strip away all the digital marketing mumbo-jumbo, what do you get? People buying your stuff. Isn’t that what it’s all about?

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. At the time of writing, none of the organizations mentioned in this article are Intellyx customers.


Seven Promises of the Digital Brand

Brands are more than the sum of their brand elements – logos, colors, shapes, and the like. Brands are promises. Promises from a company to its customers that its products will deliver the value and experience customers expect.

Today, digital is transforming enterprises across numerous industries. As companies become software-driven organizations, their brands transform into digital brands. But if brands are promises, then what do digital brands promise – and how do those promises differ from traditional, non-digital brands?

Digital Extends Branding

Before I get into the list of digital brand promises, it’s important to point out that digital vs. traditional branding isn’t an either-or situation. Rather, digital technologies themselves – along with the broader sense of digital as the recognition that customer preferences and behavior drive enterprise technology decisions – extend and transform tried-and-true branding principles.

Such extension and transformation, therefore, isn’t a one-way street. You can’t simply take some existing brand and turn it into a digital brand. In reality, digital is transforming branding itself – so even your traditional brands will undergo a digital transformation, whether you like it or not.

With those points in mind, then, here are the seven promises of digital brands.

Promise of authenticity. Digital – social media in particular – transforms the brand interaction into a conversation. But the only way a person wants to have a conversation with a company is if the conversation is truly authentic. Technology can easily get in the way of such authentic interactions.

My last Cortex newsletter, Bring the Omnichannel Purchase to the Digital Customer, went more in depth in how easily brand interactions can be disingenuous. Don’t make this mistake.

Promise of respectfulness. Misuse of marketing technology can disappoint, shock, or anger customers. The more you know about a customer, the easier it is to stalk them and spy on them. Avoid the digital marketing creepiness factor and respect your audience.

Promise of coherence. Multichannel marketing recognizes that customers may favor one channel over another – for example, they may want to shop in a store one day or online another.

Omnichannel marketing extends the notion of multichannel by recognizing that from the customer perspective, all interaction touchpoints should be a single, coherent channel. If I want to use my phone to shop while I’m in a store, or if I want to interact with a brand via Twitter while I’m watching their commercial on TV, then so be it.

To keep the promise of coherence, brands should recognize customers across all touchpoints. If I get an email newsletter from a brand, I want their sales associates to know I get the newsletter and what it said when I go into the store. If I call an airline, I want the customer service rep to know I’m a premium flyer.
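
What does recognizing the customer across touchpoints look like under the covers? Here’s a toy sketch, assuming nothing more than a shared profile keyed by a single customer identity – all the names are hypothetical.

```python
# Toy sketch of the coherence promise: every channel reads and writes
# one shared customer profile. All names here are hypothetical.
profiles = {}

def record_touchpoint(customer_id, channel, event):
    profile = profiles.setdefault(customer_id, {"history": []})
    profile["history"].append((channel, event))

record_touchpoint("cust-42", "email", "opened newsletter #17")
record_touchpoint("cust-42", "store", "asked about the newsletter offer")

# The in-store associate's app sees the email interaction too:
print(profiles["cust-42"]["history"])
```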

Promise of individualization. Traditional marketing offers personalization and segmentation. Neither one goes far enough in the digital world.

When I go to the Amazon or Netflix web sites, I see personalized recommendations based upon my past purchase history and what other people with tastes similar to mine liked. But – if I ordered a kids’ movie for my grandson last week, they’re likely to recommend other kids’ movies today, even though my grandson isn’t visiting at the moment.

Other brands focus on segmentation. I’m a professional in my fifties, so AARP mails me stuff with annoying regularity. But – they have no way of knowing if I’d be interested in joining, and among all the perks of being an AARP member, they have no idea which ones I’d like. So directly into the trash it goes.

Individualization takes personalization and segmentation one huge step further. With individualization you essentially segment your target market so finely that each segment contains a single person. As long as you remain authentic and respectful, you can now analyze digital’s copious quantities of data to offer each individual customer precisely what they want, how they want it, when they want it.

Promise to stay current. Some brands promise consistency and stability. When I buy a bar of Ivory Soap or rent a room at the Hampton Inn, I expect it to be exactly like every other bar of Ivory Soap I’ve ever purchased or Hampton Inn room I’ve ever rented, respectively. Surprises with such products are almost always bad.

Digital brands, in contrast, have to keep up with the times. Customers always want the latest and greatest, whether it be current pricing on the web site or current merchandise in a store.

It’s important to note that a brand can promise to be consistent as well as current. When I go to the Hampton Inn web site, it had better be current. A hotel web site with out-of-date pricing or availability data is worse than useless. But the last thing I want from the room is a surprise.

Promise of performance. Performance overlaps the final promise, quality – but digital brands have a particular promise of performance that warrants calling out. With digital branding, speed is the name of the game. Every interaction must be in real-time, or as close to real-time as is practical for the type of interaction.

Customers don’t care that mobile networks are slower than their cable TV Internet at home – they want mobile apps and web sites to respond blisteringly fast regardless. When a customer emails a company, they want a personal response right away. And it goes without saying that I should be able to get a real human on the line when I call the call center at 3:00 AM on a Sunday.

Promise of quality. Quality, of course, has always been one of the most important brand promises since, well, the invention of commerce itself. I repeat it here because digital raises the bar on quality – because of the other six promises. As a brand strives to keep its other promises, the promise of quality cannot be allowed to suffer.

In fact, the quality promise overrides each of the others. You could have the most authentic brand in the world, but if your quality sucks, it doesn’t matter. You could have the most coherent brand in the world, but if your quality sucks, well, you get the point.

The Intellyx Take: The Promise of Delight

Few digital brands are able to keep all seven of the promises above, and truth be told, they are all somewhat negotiable, except for the promise of quality. Coherence and individualization, for example, are extraordinarily difficult to get right – but as a result, it’s unlikely your competition is going to get them right, either.

So the reality of digital branding is that it doesn’t have to be perfect. It just has to be better than the other guy’s.

On the other hand, today’s world is full of brands who suck at digital in one way or another, with some industries worse than others. Sure, cable companies, telcos, and auto dealerships have mostly gone digital. But everybody hates their cable company, mobile phone provider, and the dealership where they bought their car nevertheless (with only rare exceptions). There’s clearly room for improvement.

As brands figure this stuff out, in contrast, something magical happens. Customers actually end up liking the brands they interact with. Customer delight, in fact, is the overarching brand promise that all the other promises roll up into.

Sure, you can point to the fact that none of your competition is doing this any better than you are, but that’s just an excuse. Instead, focus on what it will take to get this digital branding thing right. Not only will you run circles around your competition, but you’ll delight your customers – and you can take delighted customers all the way to the bank.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. At the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: wackystuff.

Is Agile Killing Enterprise Architecture?

Earlier this month, author, IT strategist, and pyromaniac Charles Betz wrote a column for The Data Administration Newsletter that asked the question, “Is Agile killing enterprise architecture?”

Not satisfied with a minor campfire, I figured I’d fan the flames, so I posted links to the article to several EA-focused groups on LinkedIn. Little did I know at the time the conflagration that would result. Clearly this question found some dry combustibles.

Time to fetch the gasoline and provide my take on this question. What could possibly go wrong?

Definitions and Elbows – Everybody’s Got a Couple

Over the years I’ve written on numerous topics, but the one topic that’s sure to get the most comments is enterprise architecture (EA) – especially if the definition of EA is in question. It seems there’s nothing an enterprise architect likes to do more in this world than opine on what EA is and what it isn’t.

Perhaps EA is IT portfolio management. Yes, someone has got to keep track of all the applications, servers, and other miscellanea cluttering up the enterprise’s data centers and clouds. Even better, keep track of when everything is going out of date, and while you’re at it, make a note of how everything talks to everything else.

IT portfolio management is important to be sure, but is it EA?

Or perhaps the core of EA is some set of diagramming activities – or modeling, or planning, or visualizing, or documenting, or some other word that boils down to drawing pretty pictures. Everybody loves pretty pictures to be sure. If your three-year-old draws one for you, it’s definitely going on the fridge. But is it EA?

Or maybe EA is essentially governance. If someone from a line of business wants something from IT, they have to pass the request by the EA gatekeepers first. After all, nobody wants duplication or spaghetti integration, right? Been there, done that, got the T-shirt. So nothing gets done until EA gives it the stamp of approval.

If that’s the definition of EA, then the first thing we should do is burn that sucker to the ground. There’s no way any organization will ever be agile if there’s a whole department in charge of roadblocks. If Agile is up to the task, then more power to it.

My Kingdom for a Capital A

But wait, you say! In that last sentence, I used the word Agile in one place and agile in another, and the meanings were entirely different. The simple act of capitalizing the first letter took a basic business concept and turned it into a religious argument.

Agile-with-a-capital-A, of course, refers to the Agile Manifesto, and the software development methodologies like Scrum that follow its precepts. How do you know if your developers are following Scrum? Simple. If they have meetings standing up, they’re following Scrum.

These standups, as they’re called, are an important Scrum doctrine. And as with any religious mandate, if you screw it up you’re in for a world of hurt.

Or perhaps, worshipping the dogmata of the Church of Agile is missing the entire point. The aforementioned Manifesto, after all, called for an iconoclastic approach to software development.

If the rules and regulations and paperwork and all the rest of the folderol that comes with traditional software development are getting in the way, then chuck them out.

We have peeled this onion sufficiently to understand what we’re really asking when we wonder whether Agile is killing EA. If EA amounts to a bunch of documentation, and if Agile calls for chucking out all the documentation, then certainly Agile is calling for the demise of EA.

But those are two enormously hairy IFs. Certainly if some company’s EA means nothing more than a lot of paperwork that gets in the way of basic goals like working software that keeps customers happy, then we can only hope Agile drives a nail into that coffin.

On the other hand, sometimes paperwork is a good thing. Only an overly dogmatic reading of the Agile Manifesto would lead one to conclude that we don’t need no stinkin’ documentation.

Taking a more iconoclastic view of Agile, therefore, would indicate that rather than killing EA, Agile might actually help us separate the EA wheat (the good bits) from the chaff (all the paperwork or governance-laden bottlenecks that are doing us more harm than good anyway).

EA: The Good Bits

Fair enough. Let’s pour all of EA into our Agile sieve and see what comes out the bottom. Surely EA isn’t completely useless?

Here’s the rub: a lot of what passes for EA is in reality either useless, or may be useful but isn’t really EA (like IT portfolio management). So before we can theorize what our Agile sieve might yield, we must first define what EA should be.

Skip the gasoline, folks – let’s just chuck dynamite on that fire! There’s only one question that sends architects into a tizzy more than the “what is EA” question – and that’s the “what should EA be” question. And yet, in spite of years of arguing, we really have no consensus on this question whatsoever.

This question is so intractable because it has three dimensions. First, we must ask: of all the practices anybody in the enterprise might undertake, which are the ones they should undertake?

Second, of all those desirable practices, which ones do we want to lump under the EA banner, rather than falling into some other category?

The third dimension is the trickiest of all: the dimension of change. As business needs evolve, how should the practice of EA keep up?

Just because we might identify a particular practice today as something that EAs should do, that doesn’t necessarily mean that we’ll want to keep doing it – and it also doesn’t necessarily mean that we’ll want to keep calling that practice EA, regardless of whether someone should still be doing it.

No wonder so many people are calling for EA’s demise.

The Intellyx Take: Rethinking the Agile Sieve

In the end, ‘Is Agile killing EA?’ is the wrong question. We don’t really care what EA has been or should have been up to this point in time. Water under the bridge.

And the right question? Perhaps it’s ‘How can Agile help us transform EA into what we need it to be?’

Of course, that question falls short as well. Perhaps Agile can help, perhaps not. Clearly, dogmatic Agile can’t help us with this question. We must also ask what Agile should be as well.

Instead of thinking about Agile, we must seek to become agile. How can our companies deal better with change overall?

The true question, therefore: How do we transform EA to help our organizations become more agile?

Ladies and gentlemen, start your fires.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Erik Olson.

Digital Overload Mega-Rant Catharsis

I love all this digital hullabaloo, really I do. I love all these different trends and disruptions and turmoil. I especially love the confusion – it gives me something to write about.

But sometimes, I’ve just had enough. Especially when the digital disruption story starts repeating itself.

Take, for example, the unicorn meme. Unicorn, as in a VC-funded startup with a billion-dollar-plus valuation. You know, the Ubers and AirBnBs and Facebooks of the world.

It seems that every presentation, every talk, every press release compares whatever some DigiCloudDataTech startup is doing with Uber. Or AirBnB. Or Facebook. Like you’d ever come up with Uber. Or AirBnB. Or Facebook. Hearing about Uber or AirBnB or Facebook just one more time is going to make me pull what’s left of my hair out – and I still have most of my hair.

Unicorns aren’t rare, folks. They’re mythical. As in nonexistent. So stop talking about them already!

And then there’s digital marketing. Marketing, of course, is an important part of the digital story, since digital is customer-driven and marketing is supposed to be expert on everything customer-driven.

But if you look at what the digital marketers are doing, it seems that they’ve all reached a plateau where they’re all doing the same stuff. And a lot of it simply sucks.

Take retargeting, for example. Retargeting is where you look at some widget on some site somewhere, and then for the next three weeks ads for that stupid widget follow you everywhere online. Look up one of those super-realistic digital Japanese robotic sex toys? You’ll get nothing but ads for sex toys on every damn web site you visit. Better hope your mom doesn’t come over.

The problem with retargeting, of course, is that the marketers really have no clue whatsoever if you’re really in the market for that sex toy. Perhaps you already bought one. Or maybe you just thought that toy was hot and had to click through to see if there were more sexy photos of it. The marketers simply have no freaking idea.

It gets worse. What about digitalwashing? You know, like cloudwashing, where people pretended to do cloud because it was cool. Now digital is cool so people are digitizing this and that, in hopes of a cushy seat on the bandwagon.

How do you tell digitalwashing from the real thing? Try replacing the word digital with either of the words web or ebusiness and see if it’s something someone might have said back when we were partying like it was 1999.

And what’s up with cybersecurity, anyway? Welcome to the 2010s, hackers, it’s your decade! We have no freaking clue how to keep you out, so go ahead and hack us. We probably won’t even notice. Here are our credit card numbers. Have fun with all those iPads and Rolexes and Japanese sex toys you’re going to order.

The hacking problem has gotten so bad that the Chinese are complaining that we’re complaining too much about how much the Chinese are hacking us. Go ahead, read that sentence again. You can’t make this stuff up.

Digital transformation, of course, is more about the transformation than the digital. As in business transformation. As in, you need to reorganize everybody and run your entire business differently in order to be digitally transformed. Simple.

Well, good luck with that. About as far as anyone is really getting with their digital transformation initiatives is putting a marketing person on the dev team, or maybe putting a developer on the marketing team. Paste your favorite Dilbert cartoon here, seriously.

Oh, and what about the management consultants? You know, those highly paid MBAs who love to string together buzzwords into preformatted tomes of advice, only to sell them for a few mil to unsuspecting executives who will skim them, nod their heads, and go back to whatever they were doing?

Well, the consultants are all digital now. Every last one of them. Giving management advice to managers for how to be digital visionaries and drive their visions down the throat of their rank and file. After all, look at Uber! And AirBnB! And Facebook! You can be a unicorn too, Mr. Insurance Executive or Ms. Banker. Self-organization is the key to innovation, so tell all your people to self-organize or get the hell out.

Maybe the industry analysts will help? Not a chance. Gartner is recommending that you should go fast and slow at the same time. Fast as in all digital and devops, slow as in all that creaky old IT. Both. At the same time.

CIOs can breathe a sigh of relief – according to Gartner, this digital stuff is easier than they thought. No monkeying with all those arcane IT governance policies and mind-numbing procedures and legacy spaghetti. Just hire some Goths and Lumbersexuals and put them in charge of the new gear and you’re off and running.

The Intellyx Take

Hear that noise? That’s the music for this big digital game of musical chairs we’re all playing. There’s so much activity, so much vendor hype, so much enterprise spending, so much VC investment today that it seems this whole digital extravaganza is never going to stop. So round and round we go.

Well, I hate to break it to all you digital-native millennials reading this, but what comes up must come down. For all you old fogies like me who played our first game of dot.com musical chairs back in the 1990s, we’ve heard this music before. Take my advice: make sure you have a chair when the music stops.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Adam Rifkin.

What is Emergent about Emergent Architecture?

“The best architectures, requirements, and designs emerge from self-organizing teams.” – Principles behind the Agile Manifesto

I follow in the footsteps of so many people who have long wondered at the meanings of such simple words, as though they were dogma from on high. Emerge? Self-organizing? Profound, to be sure. But what do we really make of this sentence?

First let me throw water on that whole dogma thing. The whole point to the Agile Manifesto and its Principles is to be less dogmatic about how we build good software. So I don’t really care what the esteemed crafters of this bite-size morsel of profundity actually meant by emerge and self-organizing.

Instead, what I do really care about is how to help our organizations achieve their goals – especially how to be more agile. Building great software is part of this story to be sure. And as it happens, emergence and self-organization are fundamental principles that can move us forward, especially now that so many enterprises are struggling with digital transformation.

The Notion of Emergence Emerges

Let’s start with the surprisingly multifaceted definition of the word emergence, and its verb form, to emerge. The dictionary defines emerge as “to come into being through evolution; to rise from, come out into view.” So right off the bat we have several related concepts.

Perhaps emergent refers to coming into being, as in going from not existing to existing.

Or maybe to emerge means to evolve, as in going from a less mature or advanced state to a more mature or advanced state, as opposed to being static.

Then there’s the notion of coming out into view, as in going from hidden to visible. It was always there, but emerged from the shadows.

But there are other senses of emergence that the dictionary definition doesn’t quite capture. For instance, the notion of being assembled piecemeal. The photo on a jigsaw puzzle emerges as we put it together.

And then there’s a sense of emergence popular in discussions of emergent architecture: the notion of unintentional. In other words, there is a spectrum between emergent architectures on one hand and intentional ones on the other, where intentional architectures are essentially pre-planned and on purpose, while emergent architectures are somehow accidental.

In spite of all these subtle differences in meaning, what most people are apparently trying to say when they use emergent in the context of architecture or design is: by deferring important architectural and design decisions until the last responsible moment, you can prevent unnecessary complexity from undermining your software projects (a quote I found on IBM DeveloperWorks, but if you know who actually said this first, please fill us all in by commenting).

This software-building principle thus introduces yet another notion into the mix: the concept of deferred. Human decision making is responsible for driving architecture and design, so such architecture and design emerges simply by virtue of the fact that we don’t make up our minds about any of it until we have to.

Then perhaps it comes into view, or evolves, or comes into being. And while such decision making would clearly be intentional, at least it wasn’t intentional at the beginning of the effort, in the sense that the team didn’t pre-plan anything.

The Elephant in the Room Emerges

All of the subtle variations in definition above miss one important element: the role of self-organization. Sure, people would generally prefer to organize themselves than to have someone else do it for them, so perhaps a self-organized team might be more productive or more collaborative than a team that a manager organized for them.

However, if you’ve read some of my recent articles on self-organization – or my book The Agile Architecture Revolution for that matter – you’ll recognize a bigger picture here: emergent in the context of complex adaptive systems (CAS).

In this context, an emergent property of a CAS is a property of the system as a whole that isn’t a property of any of the sub-systems of that system.

Self-organization is one of the primary driving forces behind complex systems. Natural systems from beehives to galaxies all have self-organizing subsystems. Perhaps the original Agilists were thinking about this sense of emergence when they wrote the sentence at the top of this Cortex.

Or perhaps not. But regardless of whether the original Agilists were thinking of CAS or not, many people over the last 15 years since the Manifesto appeared have made this association, for better or worse.

On the surface the appeal of emergent design or architecture being the sort of emergence that complex systems exhibit is tantalizing, as though emergence were some kind of secret magic. All we need to do is have our teams self-organize, and behold! Emergent design and/or architecture springs up out of the nothingness!

If only it were that easy, right?

Unfortunately, making this jump from emergent-as-deferred-and-evolving to emergent-as-property-of-CAS has serious issues. First of all, in the CAS context, emergence applies to the properties of complex systems. In the case of a software team working on some software effort, it’s not clear where the complex system is, let alone what properties it has.

Furthermore, it’s a stretch to think of architecture or design as a single property of a system. Perhaps they represent a collection of properties of a software system – scalability, performance, and what not – but architecture represents more than simply the properties of a system. How one component talks to another could be thought of as an element of an architecture, but not a property in the way that scalability is a property of a system.

And in any case, there’s no general reason to consider software systems to be complex systems, as the properties of their architecture or design are manifest in their components. Even when a property of a software system is a property of the system as a whole, it may still very well be a property of the components of that system – and thus it isn’t an emergent property.

Recognizing a Complex Adaptive System

Here’s how I like to think of emergence in the context of complex systems: if you look too closely at a CAS, you can’t see the emergent properties. Instead, you must step back – sometimes way back – and look at the big picture of the system as a whole to see its emergent properties. In other words, the pattern emerges from the big picture.

If you study the behavior of individual bees you’ll never see the structure of the hive. If you look at individual stars you’ll never see the shape of the galaxy. If you examine water molecules you’ll never know what it means to be wet.

When we think about the sorts of software systems that self-organizing teams can build – that is, the two-pizza teams that the Agile world favors – we’re simply not stepping far enough away from the component level to get any sense of emergent properties.

Bottom line: it doesn’t matter how self-organized individual teams are, there won’t be anything particularly emergent about the software design or architecture they produce, in the CAS sense of emergence.

Now, don’t throw up your hands and conclude that I’m missing the point of the sentence at the top of this article entirely. In fact, I’m pointing out a subtle but critical aspect of the entire Agile Manifesto. It’s not really about software at all – or at least, not just about software.

The Agile Manifesto is in reality about people and how people interact with software – how developers, in collaboration with stakeholders, create it and ensure it meets the ongoing needs of the organization.

However, even if we look at the self-organizing teams themselves plus the software they create, we’re still too close to see any emergent properties. We must step away and look at the organization as a whole.

Just how big the organization must be is rather slippery to define. It may be the entire company or perhaps a large division or business unit. Large enough, however, for emergent properties to manifest themselves.

The Intellyx Take

When we look at our enterprise as a whole, we may note several emergent properties, both positive and negative. We’re not likely, however, to see an emergent design or architecture for the enterprise – at least, not without stretching our definitions for those terms well beyond their usual application.

In my opinion, however, it doesn’t matter that neither design nor architecture emerges. Instead, I see architecture as a set of intentional acts that seek to influence the organization to exhibit desirable emergent properties, of which business agility is the most important.

I like to call this approach Agile Architecture, a reinvention of enterprise architecture that influences the behavior of human and technology subsystems in an organization to shift its emergent behavior toward business agility. But business agility is the property that emerges, not the architecture.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Hubble Heritage.

VW Emissions Scandal: Death Knell for IoT?

One can hardly read a word about the recent Volkswagen emissions scandal without replacing our collective Fahrvergnügen with Schadenfreude. Massive German auto maker, caught red-handed falsifying emissions data. Heads are gonna roll!

While we have to give VW execs some credit for finally owning up to the deception, their scapegoating is a different story. According to the VW leadership, who’s at fault in this sorry tale? Three rogue software engineers.

Seriously? With billions of dollars at stake, who’s responsible for planning and executing a massive cover-up involving hundreds of thousands of vehicles? Three coders?

Implausible as this fingerpointing sounds, the specifics of who-did-what-when in this sordid tale have yet to be revealed. So from this point on, I’ll be speaking hypothetically.

Hypothetically speaking, then, let’s consider an automobile manufacturer we’ll call, say, XY. Are the programmers of the emissions device software at XY the likely perpetrators of such an escapade?

It is certainly possible to program software to yield incorrect results. After all, you can program software to give you whatever results you want. However, any good software quality assurance (SQA) team should be able to catch such shenanigans.

The basics of SQA are white box and black box testing. White box means the testers analyze the source code itself – which would usually catch any code that intentionally gives the wrong result.

However, even if the coders were subtle enough with their malfeasance to slip by white box testing, then black box testing should trip them up.

With black box testing, testers begin with a set of test data and run them through the software. They check the actual results against the desired results. If they don’t match, then they know there’s a problem. Since the whole point of the malicious code is to generate incorrect results, any competent black box test should call out the crime.
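
Here’s a minimal sketch of what such a black box test might look like – emissions_ppm is a hypothetical stand-in for the device software under test, and the expected values would come from controlled measurements.

```python
# Minimal black-box test sketch: run known inputs through the software
# and compare actual output to expected output, with no knowledge of
# the code inside.
def emissions_ppm(raw_reading):
    return raw_reading / 1000.0  # hypothetical device logic under test

def test_emissions_black_box():
    test_cases = [(3200, 3.2), (1600, 1.6), (0, 0.0)]  # (input, expected)
    for raw, expected in test_cases:
        actual = emissions_ppm(raw)
        assert abs(actual - expected) < 1e-6, (raw, actual, expected)

test_emissions_black_box()
print("all black-box test cases pass")
```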

We can only assume the code in question passed all of its tests. So at the very least, the testers at XY are either incompetent or in collusion with the three rogue engineers – and either of these situations indicates a broader problem than simply three bad coder apples.

The Insider Calibration Attack

So, are the perpetrators in XY’s sordid tale of deception a broad conspiracy involving engineers and testers? Perhaps, or perhaps not.

There is another approach to falsifying the emissions data altogether, one that wouldn’t have to involve the engineers that wrote the code for the emissions devices or the testers either. That approach is a calibration attack.

Calibration attacks are so far off the cybersecurity radar that they don’t even have a Wikipedia page – yet. Which is surprising, as they make for a great arrow in the hacker’s quiver, since they don’t depend upon malicious code, and furthermore, encryption doesn’t prevent them.

In the case of XY, their subterfuge might in fact be such an insider calibration attack. Here’s how it works.

There are emissions sensors in each automobile that generate streams of raw data. Those raw data must find their way into the software running inside the emissions device that is producing the misleading results. But somewhere in between, either on a physical device or as an algorithm in the software itself, there must be a calibration step.

This calibration step aligns the raw data with the real-world meaning of those data. For example, if the sensor is detecting parts per million (PPM) of particulate matter in the exhaust, a particular sensor reading would be some number, say, 48947489393 during a controlled test. Without the proper calibration, however, there’s no way to make sense of this number.

To conduct the calibration, a calibration engineer would use an analog testing tool to determine that the actual PPM value at that time was, say, 3.2 PPM. The calibration factor would be the ratio of 48947489393 to 3.2, or 15296090435.3125 (in real world scenarios the formula might be more complicated, but you get the idea).

The engineer would then turn a dial somewhere (either physically or by setting a calibration factor in the software) that represents this number. Once the device is properly calibrated in this way, the readings it gives should be accurate.

However, if the calibration engineer does the calibration incorrectly – or a malefactor intentionally introduces a miscalibration – then the end result would be off. Every time. Even though there was nothing wrong with the sensor data, no security breach between the sensor and emissions device, and furthermore, every line of code in the device was completely correct.
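
Here’s the whole scheme as a sketch, using the numbers above. The names are mine, not anything from an actual device – and note that every line of this code is “correct”; only the calibration factor has been tampered with.

```python
# Sketch of the calibration step, using the numbers from the text.
# No malicious code anywhere - only the calibration factor is off.
RAW_AT_TEST = 48947489393    # raw sensor reading during the analog test
TRUE_PPM_AT_TEST = 3.2       # value measured by the analog instrument

honest_factor = RAW_AT_TEST / TRUE_PPM_AT_TEST  # 15296090435.3125
tampered_factor = honest_factor * 10            # the attacker's quiet tweak

def reported_ppm(raw_reading, factor):
    return raw_reading / factor

print(reported_ppm(RAW_AT_TEST, honest_factor))    # 3.2  - accurate
print(reported_ppm(RAW_AT_TEST, tampered_factor))  # 0.32 - off, every time
```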

In fact, the only way to detect a calibration attack is by running an independent analog test. In other words, someone would have to get their own exhaust particulate measuring device and run tests on real vehicles to see if the emissions device was properly calibrated.

Which, of course, is how the dirty deeds at VW – oops, I mean XY – were finally uncovered.

The Bigger Story: External Calibration Attacks

So, why did I put “death knell for the IoT” in the title of this article? XY’s emissions devices weren’t on the Internet, and thus weren’t part of the Internet of Things. But of course, they could have been – and dollars to donuts, will be soon.

The most likely scenario for XY’s troubles is an internal calibration attack – but scenarios where hackers mount calibration attacks from outside are far more unsettling.

My Internet research on this topic turned up few discussions of this type of attack. However, there has been some academic research into external calibration attacks in the medical device arena (see this academic paper from the UCLA Computer Science Department as an example).

Here’s a likely scenario: your IoT-savvy wearable device sends diagnostic information to your physician. Physicians have software on their end that they use to analyze the data from such devices for diagnostic purposes.

If a hacker is able to compromise the calibration of the transmitted data, then the physician may be tricked into reaching an incorrect diagnosis – even though your wearable is working properly, the physicians’ software is working properly, and the communication between the two wasn’t compromised.

The conclusion of the UCLA report reads in part: “The proposed attack cannot be prevented or detected by traditional cryptography because the attack is directly dealing with data after sampling. Traditional cryptography can only guarantee the data to be safe through the wireless channels.”

In fact, as with the XY scenario, the only sure way to detect such an attack is to run an independent, analog test of the data. In the case of XY, there was a single calibration attack that impacted a large number of devices – and it still took years before somebody bothered to run the independent analog test.

In the case of the IoT, every single IoT device is subject to a calibration attack. And the only way to identify such attacks is to run an independent test on the data coming from or going to every IoT endpoint.

Even if there were a practical way of running such tests (which there isn’t), we must still ask ourselves whether we would rely upon IoT-enabled devices to run such tests. If so, we haven’t solved the problem – we’ve simply expanded our threat surface to include the devices we’re using to uncover calibration attacks themselves.

The Intellyx Take

Let’s say you just put on your fancy new fitness wearable. You go for a run and when you get back, you get a frantic call from your doctor, who tells you your blood pressure is 150 over 100 – a dangerous case of hypertension.

But then you ask yourself, how do you know the values are accurate? Well, you don’t. The only way to tell is to test your blood pressure with a different device and compare the results. So you borrow your spouse’s fancy new fitness wearable, and it gives your doctor the same reading.

If they’re the same model from the same manufacturer, then of course you’re still suspicious. But even if they’re different devices, you have no way of knowing whether your doctor’s software is properly calibrated.

So you get out your trusty sphygmomanometer (like we all have one of those in our medicine cabinets), and test your blood pressure the old fashioned way.

Then it dawns on you. What good is that fancy new fitness wearable anyway? You’d be suspicious of any reading it would give your doctor, so to be smart, you’d put on that old fashioned cuff for a trustworthy reading anyway. But if you’re going to do that, then why bother with the new IoT doodad in the first place?

This blood pressure scenario is simpler than the XY case, because we’re only worried about a single reading. In the general case, however, we have never-ending streams of sensor data, and we need sophisticated software to make heads or tails out of what they’re trying to tell us.

If a calibration attack has compromised our IoT sensor data, then the only way to tell is to check all those data one at a time – a task that becomes laughably impractical the larger our stream of IoT sensor data becomes.

Encryption won’t help. Testing your software won’t help. And this problem will only get worse over time. Death knell for the IoT? You be the judge.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Morgan.

Enterprise Architecture: Ripe for Digital Disruption

Ever since I published The Agile Architecture Revolution, people have been confused by what I mean by Agile Architecture. The crux of the confusion: the difference between architecting a software system and architecting a human/software system.

If our goal of following Agile is to build good software, the theory goes, then we should ask ourselves what kind of architecture our software requires, and by definition, such architecture is Agile Architecture. To this day, if you Google “Agile Architecture,” you’re likely to uncover discussions that presuppose that definition – unless, of course, your search turns up something I’ve written.

When I use the phrase Agile Architecture, in contrast, I’m talking about a style of Enterprise Architecture whose primary goal is to make our organizations more agile – in other words, better able to deal with change, and to leverage change for competitive advantage.

To accomplish that enterprisewide goal, we must architect the organization itself – and what is an enterprise but a human/software system?

Emergence and Architecting the Enterprise

The key to Agile Architecture is emergence. In fact, business agility is the emergent property we seek from the Complex Adaptive System (CAS) we call the enterprise. (See my recent Cortex newsletter for a discussion of emergence as it relates to architecture).

Agile Architecture is a set of intentional acts we as individuals should take in order to get our enterprises to exhibit this most important of emergent properties. The question of the day, therefore, is what are these intentional acts? How do we actually go about architecting an enterprise to be agile?

At this point many of the enterprise architects reading this will want to argue over whether the Agile Architecture I’m discussing is actually Enterprise Architecture (EA). Frankly, I don’t give a damn what you call it.

Arguing over what is or is not EA – or even worse, what EA should or should not be – is a complete waste of time, and happens to be one of the reasons executives wonder why they’re spending so much money on EA in the first place.

For the sake of argument, therefore, let’s just say that Agile Architecture is a reinvention of EA, which you can call EA if you want. But whatever you call it, it’s essential to understand the difference between architecting a software system and architecting a human/software system, in particular at the enterprise level.

City Planning: The Wrong Metaphor for EA

To make this distinction, let’s take a common metaphor for EA – the metaphor of city planning. Cities are made up of city blocks connected by streets, and within each block are buildings that contain homes, offices, etc. Those homes and offices are analogs for various software systems and applications. We might consider the blocks to represent the systems in a particular department or line of business.

City planners deal with city-wide issues like traffic, utilities, and the like, just as EAs should deal with enterprisewide issues like business/IT alignment, efficient business processes, etc. The tools planners use to influence their cities, including zoning regulations, public works investments, etc., are analogs for the tools of the traditional EA, namely the various artifacts and governance policies that are the EA’s stock in trade. So, is city planning a useful metaphor for EA?

EAs who appreciate the city planning metaphor will point out that there are plenty of people in the city to be sure, and many of their activities influence or otherwise deal with the citizens in their enterprise. They will also rightly claim that city planning focuses on how cities deal with change, rather than how to assemble a static system like a model railroad layout.

But EA-as-city-planning is not Agile Architecture. In fact, it’s just the opposite. The more planned a city is, the less agile it becomes. Why? Because city planning allows for change but not for emergence. The question we should be asking instead is: how do we produce the results we want from an unplanned city?

If we take the complicated problems we have today and seek to instill some sense of order and planning in order to achieve a particular final state, we’re heading in the wrong direction. Even if we were able to accomplish this Sisyphean task, we’d be no more agile than when we started.

Self-Organization: The Most Important Tool in the Agile Architect’s Tool Belt

If instilling order and planning is the wrong approach to EA, then clearly we must rethink our entire notion of EA. Once again, we can find the answer in complex systems theory, and the principle of self-organization.

My earlier Cortex also discussed the importance of self-organizing teams to achieving desirable emergent properties, with the important caveat that emergence won’t appear at the two-pizza team size favored by Agile-centric organizations. Nevertheless, self-organization is the key to emergence, just not at the two-pizza level.

In fact, in the context of the organizations in which they participate, the behavior of individual humans is never emergent. If we focus on influencing individual human behavior, therefore, we’re focusing on the wrong thing.

For example, if we can craft a test for the behavior we think we want and select for people who can pass the test, then we are selecting for non-emergent behavior. The better we get at selecting people who pass the test, the less agile our organization becomes.

Because every human being acts autonomously and is thus inherently unpredictable, the emergent properties of human/software systems are what CAS theorists refer to as strongly emergent, because you can’t derive the emergent behavior by more careful analysis or control over subsystem behavior.

For this reason the iteration central to applying Agile Architecture is absolutely essential, because you can influence (but not control) the emergent behavior by iterating the initial conditions or other constraints that lead to effective self-organizing teams.

In fact, there’s no way to know for sure ahead of time if some policy or process we might put in place to aid our self-organizing teams will actually result in better agility overall. Instead, we must try different things, see what emergent properties result, and feed back that information to improve our policies and processes.
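
As a toy illustration of that iterate-measure-feed-back loop – and nothing more, since the “agility metric” and the policy knob below are entirely hypothetical stand-ins for slow, noisy business feedback:

```python
# Toy sketch of the feedback loop: try a constraint, observe the
# emergent result, keep what helps, iterate. measure_agility() is an
# entirely hypothetical stand-in for real business insight.
import random

def measure_agility(constraint):
    return random.gauss(mu=constraint * 0.8, sigma=0.2)  # noisy feedback

best_constraint, best_score = None, float("-inf")
for constraint in [0.2, 0.4, 0.6, 0.8, 1.0]:  # candidate policies
    score = measure_agility(constraint)
    if score > best_score:
        best_constraint, best_score = constraint, score

print("iterate further from:", best_constraint)
```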

The better and faster our organizations can gather the necessary business insight, feed it back to the decision making processes, and make the decisions that will drive business agility, the more agile our enterprises become.

The Intellyx Take: Is It Architecture?

In my opinion, the iteration of constraints and initial conditions that drive and influence self-organization within the enterprise is the actual role of an architect who is architecting emergent behavior – in particular, business agility.

You may call such activities something else – management practice or some such – and to be sure, we must reinvent management practice along the same lines as EA. But whatever we call it, there needs to be an understanding that creating the conditions that lead to effective self-organizing teams is itself an architectural activity, an activity separate from the architectural activities such teams undertake when their goal is to implement a software system.

Furthermore, self-organization at the team level is insufficient. Emergent patterns never appear at the team level, after all. We must also architect self-organization across teams, remembering all the while that the people within the teams are making their decisions about how they should behave and interact.

Managers cannot manage this self-organization from outside the self-organizing teams – either at the team level or across teams. The reason for this impossibility is brutally obvious once you see it: managing a team from outside is part of organizing that team – and if an external party takes that role, then the team is no longer self-organized.

If you’re a manager and you think you’ll be out of a job as a result, not to worry. Managers can still be on the teams as participants. Even outside the teams, executives have three important roles: communicate the strategic goals of the organization, delineate the constraints, and get out of the way.

The secret to being an agile architect? Not architecting. The secret to managing an agile organization? Not managing. At least, in any traditional sense of architecting or managing.

The good news is that many organizations are already well on their way to implementing this vision of emergent business agility – enough of them, in fact, that the ones who aren’t with the program are increasingly at a competitive disadvantage.

This shift, in fact, is at the heart of digital disruption. Agile Architecture is the secret to weathering the storm. Disrupt or be disrupted – your choice.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Leeann Cafferata.

Seven Extreme Customer Centricity Tips for Digital Transformation

One of the most important tenets of digital transformation is that it’s customer-driven. In fact, the only reason technology is involved at all is because today’s customers demand technology-based interactions with the companies they do business with.

It’s no surprise, therefore, that we at Intellyx agree with Patrick Maes, CTO, ANZ Bank, when he said, “the fundamental element in digital transformation is extreme customer centricity.”

So true – but note the insightful twist that Maes added to the customer-driven digital mantra: extreme.

In the context of digital transformation, then, what are some examples of customer centricity we would consider to be extreme? Here’s our take.

Extreme Customer Centricity #1: Ditch the IVR

Quick show of hands: who likes interactive voice response (IVR)? You know, “press 1 for sales, press 2 for support,” etc. Anybody? The fact of the matter is, everybody hates IVR – that is, except for the call center bean counters who are looking to squeeze every last penny of cost out of the system.

In fact, IVR is all about reducing costs, rather than addressing customer preferences, let alone providing customer delight. Want extreme customer centricity? Ditch your IVR.

IVR aficionados are sure to object at this point and note that IVR has come a long way since the “press 1” days. Today’s high-tech IVR has human-like voice interactions and allows for a range of spoken inputs. So, should we cut IVR a break?

Not on your life. Even today’s most modern IVR system is far from customer-centric. I find the oh-so-friendly humanoid voices downright patronizing, as though a fake person will fool me into thinking I’m chatting with someone who really cares about why I’m calling. No thank you!

Extreme Customer Centricity #2: Free Shipping Both Ways

Zappos pioneered this policy, of course. Everybody said it would cost too much, that customers would abuse the privilege, blah blah blah. Zappos made it work and is now part of Amazon, where it continues to blow away conventional wisdom along with its competition.

Free shipping means smaller margins to be sure, but it also means more revenues – both from new customers as well as existing ones. Do the math.

Extreme Customer Centricity #3: No More No-Reply Email

We’ve all received these emails from our friendly neighborhood big companies – emails that don’t permit a reply, because they’re being sent from an “unmonitored mailbox” or some such.

Ask yourself: why doesn’t that big company want a reply? Answer: because it doesn’t want to make it easy for you to interact with it.

Now put on your extreme customer centricity hat. You’d love to have an email response from a customer, because establishing relationships with customers is your top marketing priority. The whole point to email is to facilitate human interaction, after all. So ditch the one-way emails!

Extreme Customer Centricity #4: Digital Signage Should Be Interactive Signage

When I was a kid back in the pre-digital days, moving billboards fascinated me. This early technology leveraged vertical triangular tubes. A motor periodically rotated a row of several tubes to show one of three faces, thus switching the entire billboard.

Today, signage is likely to be digital, from small, iPad-based signs by doors to the massive, Times Square-size screens that shine across entire city blocks. And yet, so many digital signs have no more interactivity than the analog moving sign of the last century.

There’s no excuse today for all of these digital signs not to have a level of interactivity appropriate for their purpose.

The small signs in Delta Airlines’ Sky Clubs show how long it has been since a restroom was serviced, and flag when one is out of service. Staff simply interact directly with the sign to make updates via its service management interface.

For larger signs, interactivity must either be a group exercise, or the interaction must take place away from the sign interface so it can be a personal experience. With beacon technology, signs can now know when people are nearby. Why can’t we interact with signs from our phones?

Extreme Customer Centricity #5: Mind the Customer Journey

A central facet of digital marketing is the customer journey. This journey takes an anonymous visitor (either online or in a physical location) through becoming an identified prospect, then to the purchase transaction, and then on to becoming either a happy customer (who might buy more) or an unhappy one (who needs some kind of special treatment to be happy again).

For every step in this journey, each customer requires interactions specific to that step. However, in many cases, the merchant in question either has no idea where particular individuals are in their journey, or even worse, does know where they are but disregards that information in its interactions.

Take retargeting, for example. I’ve lambasted retargeting before, but now let’s put a finer point on the problem.

Here’s how retargeting is supposed to work: let’s say you visit a web site for shoes because you’re shopping for shoes, but you remain undecided at the end of your visit. Thereafter, the retargeter wants to feed you ads for shoes similar to the ones you were looking at in hopes of moving you along your journey to the purchase transaction.

Alternatively, let’s say you completed the purchase. Now, serving you ads for shoes similar to the ones you purchased is not only pointless, but annoying. Instead, retargeting should recognize that you made the purchase and feed you ads for appropriate accessories instead.

Here’s another example. Let’s say there was a defect in your shoes, and you tweet your dissatisfaction. In this case the merchant should leverage sentiment analysis to identify your dissatisfaction, and take some kind of action like offering a free exchange to address the problem – maybe via a Twitter direct message, or perhaps an email or a text, depending on your preference. But feeding you more ads will only make you more frustrated with the merchant.
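Here’s what journey-aware interaction logic might look like as a sketch; the stage names and rules below are invented for illustration:

```python
# Hypothetical sketch: choose the next interaction based on where the
# customer is in the journey, rather than blindly retargeting.
def next_interaction(stage, sentiment=None):
    if stage == "browsing":                # undecided shopper
        return "show ads for similar shoes"
    if stage == "purchased":               # sale already closed
        return "show ads for accessories"
    if stage == "unhappy" or sentiment == "negative":
        return "offer an exchange via the customer's preferred channel"
    return "no ad"                         # when in doubt, don't annoy

print(next_interaction("purchased"))  # -> show ads for accessories
```

The hard part, of course, isn’t the branching logic – it’s knowing the customer’s stage in the first place, which is exactly the information so many merchants either lack or ignore.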

Extreme Customer Centricity #6: Let the Customer Choose the Channel

Sometimes customers like to interact with companies’ web sites. Other times they like to make a phone call. Perhaps interacting via social media is more your thing. Furthermore, some customers have one particular channel preference, while others use different channels for different purposes.

Extreme customer centricity requires that companies fully allow for such customer preferences. Does one customer only want to interact on Twitter? Then so be it. Maybe another prefers email? No problem.

One important caveat here: sometimes security or broader compliance issues limit a company’s (or government agency’s) ability to interact via certain channels. In those situations, it’s important to educate the customer on the given constraints, and to provide customer-centric alternatives (like a secure web-based messaging alternative to email). Educated customers will generally prefer appropriately secure and confidential interactions, after all.

Extreme Customer Centricity #7: Incentivize Self-Service without Penalizing Full Service

Ditching IVR? Dealing with responses to customer emails? Interacting via multiple channels? Sounds expensive, right?

Call centers in particular are surprisingly expensive to run – from $15 to more than $40 per customer call. It’s no wonder companies do what they can to keep customers from calling, and if they do call, to keep those calls short.

But that’s not how to be extremely customer-centric. However customers choose to interact with a company – even if they call in – they should have a truly delightful interaction. So make sure when a customer calls, a real human being who can truly address the customer’s need answers the phone – and answers immediately.

If you’re wearing your customer hat, you’re cheering right now. But put on your bean counter hat, and all you see are exploding costs. Fortunately, there’s an extremely customer centric approach to managing costs, even for call centers that actually delight customers.

The answer is to incentivize self-service interactions – but without penalizing full-service interactions. Most of the time, customers would rather use a self-service channel like a web site or mobile app anyway. So for those situations where self-service would be perfectly fine but customers call, email, or tweet anyway, think about what you can do to incentivize them to choose the self-service option.

For example, Comcast uses big data to predict when particular customers are about to call in to report a problem – and preemptively sends them a text letting them know it’s working on the issue (with the proper customer opt-in, of course). In this case, proactive information is the only incentive a customer needs not to call.
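We can only guess at how Comcast implements this behind the scenes, but the shape of the idea is simple. Here’s a sketch, with invented helper functions and an invented threshold:

```python
# Illustrative only: text opted-in customers who are predicted to call
# about a known issue, before they ever pick up the phone.
def proactive_outreach(customers, issue, predict_call_prob, send_text):
    for customer in customers:
        if not customer.get("opted_in"):              # honor the opt-in
            continue
        if predict_call_prob(customer, issue) > 0.7:  # invented threshold
            send_text(customer, "We're aware of the issue and working on it.")
```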

It’s important to note that this principle is all carrot and no stick. Ever call a company and the first thing you hear is how you can go to their web site? Well duh, why didn’t I think of that, right? Such disincentives are high on the annoyance factor and low on customer centricity.

Or consider banks like Bank of America, which offers an account that has no monthly fee if you never see a teller. Go into the branch once, however, and it zings you with a fee. B of A is on the right track to be sure, but penalizing customers for using a teller in those situations where only a teller will do serves only to antagonize them. Offer a reward for making the self-service choice instead.

The Intellyx Take

Customer service is no longer a euphemism in today’s digital world. It’s a mandate. Short-term cost considerations must give way to long-term customer delight. If you don’t figure this out, your competition will – and you’ll be toast.

Nobody is saying it’ll be easy. In fact, digital transformation is hard. But get it right and your customers will love you forever.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Jason Bloomberg is a current or former customer of Bank of America, Comcast, Delta Airlines, and Zappos. Image credit: Travis Wise.


Bring Anonymous’s War on Daesh to the Enterprise

As the war against Daesh (formerly ISIS or ISIL) heats up following the terrorist attacks in Paris, the hacker group Anonymous has taken a leadership position in the global effort. Its battleground isn’t in Europe, the Middle East, or any other location on earth. Anonymous is battling Daesh in cyberspace.

Anonymous has already taken down thousands of Twitter accounts and other social media assets that Daesh uses for communication, propaganda, and recruiting. Furthermore, the secretive group is actually leveraging this operation, dubbed #OpsParis or #OpsISIS, to fire up its own recruiting efforts.

The result: the membership of Anonymous is exploding. Hackers of all experience levels are joining the cause – without going to a recruitment center, signing any enlistment papers, or even identifying themselves. In other words, Anonymous is self-organized.

Welcome to the modern digital world. Not only is technology transforming our lives directly, but it is also transforming our organizational structures. Anonymous is merely a harbinger of further disruption on the horizon.

Today, as technological and organizational disruption hit the enterprise, more people are realizing that self-organization is the key to success in the face of such disruption. Should businesses take a closer look at Anonymous as an example of successful self-organization? What lessons – both positive and negative – can we learn?

Understanding the Lessons of Anonymous

Anonymous’s self-organization is unquestionably the group’s most important characteristic. No one person is in charge. Anyone can join, and in reality, anyone can call themselves Anonymous, and there is no one to officially dispute such a claim. (To learn more about the history of Anonymous, I recommend this Wired article by Quinn Norton.)

Because anyone in Anonymous can propose a goal, over the years its goals have been varied and occasionally contradictory. Some of its activities have been blatantly illegal, while other efforts, albeit often illegal, have a Robin Hood-like altruism to them. Perhaps its most successful efforts, however, have political motivations – as does the current battle against Daesh.

The rules it follows are likewise up for discussion – and different members or subgroups may follow different rules. Perhaps the only universal rules are the eponymous call for anonymity, as well as a Fight Club-like call not to talk about Anonymous. However, even these rules are made to be broken.

Anonymous’s self-organization gives it power, resilience, and above all, agility – in fact, far more than traditional organizations with vastly superior resources. On the other hand, its efforts are often capricious, and once a particular target loses its appeal, Anonymous’s attention tends to wander elsewhere.

Enterprises, in contrast, generally have clear, long-term goals – profitability, growth, customer satisfaction, and the like – while Anonymous is deeply anarchic. Understanding how the members of Anonymous choose their goals, however, provides a measure of insight into self-organization for enterprises.

Why, then, does Anonymous choose to organize around particular principles and not others? Why go after Daesh instead of, say, becoming a self-interested criminal organization intent on stealing money from the financial system?

While its internally-generated rules are always in flux, the reason Anonymous points in one direction rather than another is because of external constraints on the behavior of the organization. For example, enough members realize that if they pursue certain blatantly illegal activities, then law enforcement will actively pursue them.

In fact, the FBI turned Anonymous member Hector Xavier Monsegur, code name Sabu, into an informant, eventually bringing several members to justice. As a result, every member of Anonymous now realizes both that certain illegal activities will attract the attention of law enforcement – and that their desired anonymity may not protect them.

In fact, the behavior of any self-organizing team, as well as its efficacy, always depends upon its goals as well as its constraints. Compare, for example, Anonymous on the one hand with self-organizing groups like the Underground Railroad or the French Resistance in World War II on the other.

Because Anonymous decides on its own goals, its behavior tends to be both chaotic and unpredictable. In contrast, the Underground Railroad and the French Resistance had clear goals. What drove each effort to self-organize the way it did was its explicit constraint: get caught and you’d be thrown in jail or executed.

Such drastically negative constraints led in both cases to the formation of semi-autonomous cells with limited inter-cell communication, so that the compromise of one cell wouldn’t lead to the compromise of others. In the case of Anonymous, the fear of getting caught impacts the types of goals and activities each member is likely to pursue.

Bringing the Secret Sauce to the Enterprise

In the absence of externally imposed organization, groups of people will always organize themselves. Fundamentally, such self-organization is an inherent part of our behavior as social creatures. How self-organized teams behave, however, depends upon each team’s goals and constraints. Change the goals or the constraints, and you’ll change the behavior of the team.

The central challenge for enterprise executives, therefore, is to provide the appropriate goals and constraints without taking the common but unproductive additional step of organizing people. This task seems simple, but it is difficult to achieve in practice, as it goes against many commonly held beliefs about how to manage people.

Constraints in particular become the bane of self-organization, as traditional approaches to governance often limit or completely eliminate self-organizing behavior. Instead, people should think of constraints as boundary conditions: as long as behavior stays away from the boundaries, anything goes.

In practice, shifting governance from traditional approaches to maintaining compliance with constraints at the boundaries of allowed behavior becomes a core part of how we achieve business at velocity – an explicit goal of DevOps.

Today DevOps organizations focus on continuous development, continuous integration, and continuous delivery. To this list we must add continuous governance – leveraging automation to remove bottlenecks and other barriers to self-organization.
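What might continuous governance look like in practice? One plausible shape is an automated boundary check in the delivery pipeline. The sketch below is illustrative only – all the rules in it are invented – but it captures the inversion: rather than prescribing each step, governance blocks only what crosses a boundary.

```python
# Sketch of governance at the boundaries: everything inside the allowed
# envelope ships automatically. All rules here are invented examples.
BOUNDARIES = {
    "max_open_ports": 3,
    "allowed_regions": {"us-east", "eu-west"},
    "encryption_required": True,
}

def within_boundaries(deployment):
    return (
        len(deployment["open_ports"]) <= BOUNDARIES["max_open_ports"]
        and deployment["region"] in BOUNDARIES["allowed_regions"]
        and (deployment["encrypted"] or not BOUNDARIES["encryption_required"])
    )
```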

Anonymous, of course, doesn’t have the luxury of continuous governance – or any other kind of governance, for that matter. The result is a level of chaotic behavior that limits the organization’s ability to achieve its goals consistently.

But if we take the self-organization of Anonymous and add continuous governance, we have a model for enterprise self-organization that will empower large organizations to achieve their goals in the face of disruption.

The Intellyx Take

Self-organization is no longer optional, even in traditional enterprises – it’s becoming mandatory. Traditional hierarchical command-and-control organizational models simply do not perform in disruptive environments – and today, most enterprises are experiencing unprecedented levels of disruption.

Fundamentally, self-organization is adaptive behavior – and in environments that are experiencing dramatic levels of change, adaptation is the key to survival. It’s no surprise, therefore, that the twin efforts of digital transformation and DevOps both leverage self-organization to deal with disruption.

In the business sphere, companies that leverage self-organization to capitalize on disruption will out-innovate more traditional competitors, but such self-organization doesn’t mean the chaos of Anonymous. The difference is how organizations deal with the constraints to self-organized behavior – in other words, governance.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Danijel James.

Cybersecurity: Easy as Tiger Repellant?

A drunk was leaning up against a lamppost on an urban street corner, doing his best to snap his fingers.

“Why are you snapping your fingers?” I asked him.

“To keep the tigers away,” he said.

“But we’re in the middle of the city!” I pointed out. “There are no tigers here!”

“See, it’s working!” he replied.

An old joke, to be sure – but one that teaches a serious lesson for today’s challenging cybersecurity marketplace. What do jokes about drunks and tigers have to do with cybersecurity? The source of the humor in the joke above derives from the difficulty of inferring counterfactual causal relationships. And it’s just that type of causal relationship that lies at the heart of the cybersecurity value proposition.

Marketing the Prevention Value Proposition

Any product that promises the prevention of adverse events suffers from the problem of counterfactuals. To see the problem, let’s define prevention, in the context of cybersecurity:

Product X prevents intrusions – If you use Product X, the chance of an intrusion is smaller than if you hadn’t used Product X.

Let’s say we’re discussing a fictional company – call them Horizon. Horizon had a serious breach last month. But if they had bought our cybersecurity product, Product X, then they wouldn’t have had a breach.

This argument is fundamentally weak, for a few different reasons. First, it’s impossible to prove. Second, there’s no way to make the argument for Product X stronger than the corresponding argument for any competitor’s product.

And third, because no cybersecurity is 100% effective, we can’t say that Product X always prevents breaches. The most we can really say is that Horizon might not have had the breach, had they been using Product X – or more specifically, the chances that they would have had a breach would have been smaller, had they used Product X, as compared to not using it.

Adding such a probability factor strains the credulity of our marketing claim even further, because a skeptic cannot even come up with a counterexample. At least if we’re guaranteeing Product X will prevent a breach, then a single user of Product X who nevertheless experiences such a hack would prove us wrong – but not if all we’re saying is that Product X will reduce the chances of a breach.

We might as well just be snapping our fingers on a street corner.

Prevention vs. Deterrence

Fortunately, there’s more to cybersecurity than prevention. Let’s add deterrence to the mix:

Product X deters intrusions – If attackers believe you are using Product X, they are less likely to attack you than if they hadn’t believed you were using Product X.

The best thing about deterrence is that Product X doesn’t have to do anything real at all. Hackers simply have to believe it does. For example, you can put an alarm sign in your front yard, and as long as it causes burglars to believe you might have an alarm system, it will likely deter them from breaking into your house. As a result, they will head over to a house without such a sign.

In fact, if Product X successfully deters hackers, then it actually prevents attacks, as deterrence reduces the number of successful attacks, as compared to not using Product X – and that’s what we mean by prevention.

Of course, doing nothing but trying to bluff the hackers is not a cybersecurity strategy that any CISO should ever recommend. Nevertheless, deterrence is an important part of the value proposition of prevention – in spite of its counterfactual nature.

Fundamentally, if Horizon purchases Product X because it promises to prevent breaches, and in fact the number of breaches goes down, we really don’t care whether the reduction in breaches was because Product X was actually working, or because hackers poked around enough to see that Product X was there, and at that point simply chose to move on to easier targets.

This fact might make the security folks at Horizon breathe a bit easier about their choice of Product X, as it might deter hackers even if it’s not working properly – except for one problem: the deterrence value proposition diminishes as more companies rely on it.

After all, if everyone has an alarm sign in their yard, it won’t take long for burglars to realize that having a sign bears no relation to which houses actually have alarm systems, and at that point they’ll simply ignore all the signs. So, the more bogus signs there are in a neighborhood, the less of a deterrent the signs become.

As a result, if we use Product X and the number of breaches goes down, we actually do care whether that reduction is a result of Product X actually working, or simply the deterrence value of having it. Deterrence wears off over time without requiring the hackers to step up their game – so deterrence alone is a losing battle.

Prevention, Deterrence, and Mitigation, Oh My

Fortunately, prevention and deterrence are not the only cybersecurity value propositions. We must also add mitigation to the list. Here’s how we define mitigation:

Product X mitigates intrusions – If you use Product X, the damage intrusions cause will be less than if you hadn’t used Product X.

Mitigation still includes a counterfactual: with Product X, the damage of an intrusion is less than it would have been, had Horizon not been using the product.

To prove such a statement, we’d require the statistical analysis of the results of a controlled experiment: set up two identical scenarios, except that one has Product X and the other does not, allow statistically randomized attacks to occur, and compare the results.

In practice, however, this counterfactual is virtually impossible to prove, as real-world attacks are unlikely to bear much resemblance to statistically randomized ones, and setting up useful scenarios that are identical in all other aspects is also an unrealistic expectation.
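Still, it’s worth seeing what the in-principle experiment would look like. Here’s a toy Monte Carlo sketch; every number in it is invented purely for illustration:

```python
# Toy Monte Carlo version of the controlled experiment described above.
import random

def total_damage(mitigated, attacks=1000):
    damage = 0.0
    for _ in range(attacks):
        if random.random() < 0.05:           # assumed breach probability
            loss = random.uniform(10, 100)   # assumed loss per breach ($K)
            if mitigated:
                loss *= 0.3                  # assumed mitigation factor
            damage += loss
    return damage

random.seed(42)
print("Damage without Product X: %.0f" % total_damage(False))
print("Damage with Product X:    %.0f" % total_damage(True))
```

Even in this idealized setting, note that the comparison only holds because the two runs face statistically identical attacks – precisely the condition the real world never grants us.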

The mitigation value proposition, therefore, has its weaknesses. However, in spite of these limitations, mitigation is the strongest value proposition of the three, because it also includes a factual element: Product X will limit the damage of an intrusion.

Generally speaking, we can prove that this statement is true by analyzing what Product X actually does in the case of an intrusion – just as we would prove the efficacy of any product in any other category. This provability is why factual claims are so much stronger than counterfactual ones.

Furthermore, mitigation can also serve as a deterrent, as effective mitigation reduces the value proposition for the hackers to mount their attack. If attackers believe that even if they are able to successfully breach a system, there won’t be much of value to steal, then they probably won’t bother.

The deterrence value of mitigation is why burglars rarely if ever break into the locked blood sample boxes you see outside doctors’ offices – the ones with signs that say “specimens only, no cash or drugs.”

The boxes are still locked – but the locks alone are ineffective prevention of a breach. Instead, it’s the mitigation value of not putting anything valuable in the boxes, combined with the deterrence value of the sign itself, that protect the specimens.

The Intellyx Take

If you’re a vendor of a cybersecurity product and you’re hammering out your value proposition, you might assume that prevention is a stronger value proposition than deterrence, and mitigation is the weakest of the three. After all, mitigation presumes a successful attack, right?

In reality, however, the reverse is true. Mitigation is actually the strongest of the three cybersecurity value propositions, because it is not wholly counterfactual, and furthermore leads to deterrence and thus prevention as well.

Deterrence is the next strongest, because it doesn’t rely upon working technology, and leads to prevention.

Surprisingly, however, the prevention value proposition is the weakest of the three – which explains why the tiger joke is funny, after all.

For enterprises in the market for cybersecurity products, furthermore, there is an important lesson here. Look carefully for the counterfactuals in the value propositions the vendors present in their marketing.

The weaker a product actually is, the more likely the marketer will espouse counterfactuals – because they’re impossible to prove. Don’t fall for vendors who say they’re keeping the tigers away.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Ross Elliott.

Intellyx’s Three Digital Prognostications for 2016

Ah, yes, the holiday season. A surprising dearth of snow, crass commercialism, yet another skirmish in the perennial ‘war on Christmas.’ And sure enough, the pinnacle of punditry: tech predictions for the New Year.

At Intellyx, however, we’re adding a twist: in keeping with our long-standing tradition at ZapThink, we’ll score our results from the previous year, before gazing into the crystal ball once again.

And since this is our second annual installment of our Digital Prognostications, it’s finally time to see how we did.

2015 Retrospective: Scoring our Predictions

We made five predictions in our prognostications from last December. First, we predicted a disruption in the wearables marketplace, even going so far as listing eighteen entrants to this exploding corner of the Internet of Things.

Of the eighteen, only one – the Motorola MOTOACTV – appears to have exited the market. Disruption? More like a ripple. We can say that the rate at which new products enter this space has clearly tapered off, however. Fitbit seems to have turned a corner, while the other new players haven’t yet run out of runway. Will any of them reach the end of the line in 2016? We wouldn’t be surprised.

Next up: hyperscale data centers will become a hot topic for enterprises. For this prediction, our accuracy depends on what we mean by ‘hot topic.’ Enterprises are rapidly divesting their corporate-owned data centers for third party, hyperscale data centers, both for collocation and public clouds – so this transition is a hot topic to be sure.

But don’t expect too many enterprises to be building their own hyperscale data centers – or any other kind of data center, for that matter – any time soon.

Third, we expected a large enterprise customer to call Gartner’s digital transformation advice into question. The good news: my personal crusade to point out the weaknesses in Gartner’s bimodal advice has been joined by a cadre of other people. I rounded up a number of opinions for a recent Forbes article on the topic, and fellow pundit and all around curmudgeon Phil Wainewright from Diginomica grabbed the ball and ran with it.

A number of enterprise professionals have confided in me their dissatisfaction with Gartner’s poor leadership on this topic. Whether any of them will go public, however, will likely depend on the spectacular digital failures that result if they follow the advice anyway. Whether any have the courage to speak up is a question for 2016.

Fourth: A shakeup in the public cloud provider market, as IBM moves up and Amazon moves down. We almost nailed this one – but only if you replace Amazon with AT&T. ‘Moving down’ was the furthest thing from Amazon’s mind in 2015 to be sure – but in recent news, AT&T exited the managed application and managed hosting services business, handing over the keys to IBM, who will align these operations with their cloud portfolio.

Managed applications and hosting are not the same thing as cloud to be sure, but these lines of business represented AT&T’s best effort to move up the food chain from ‘dumb pipes’ to cloud nirvana – the digital transformation that telcos have been attempting for years. So the fact that they bailed on this business indicates they were unable to compete in this cutthroat market.

Finally, we predicted a critical mass of interest in user-controlled identity. We jumped the gun on this one. Instead of user-controlled identity, the market has seen a rise in alternative payment mechanisms, in particular Apple Pay and Google’s Android Pay. Now Target and Walmart are joining the mobile payment fray. Our updated prediction: we won’t see huge demand for user-controlled identity until such time as consumers get fed up with how these mobile payment systems handle their personal information.

Prognostications for 2016

Without further ado, here are our predictions for the coming year.

Blockchain Eclipses Bitcoin

As an alternative currency with no central clearinghouse, Bitcoin has achieved a modicum of success, even though it is neither the answer to Libertarian anti-banking lunacy nor a cost-effective alternative to credit cards. Perhaps it will gain traction as a currency of choice in the developing world – or perhaps not.

Blockchain, however, is another matter. This novel technology is how Bitcoin is able to achieve its decentralization while remaining secure, but potential applications go well beyond the alternative currency, as it can facilitate any sort of peer-to-peer or multi-party transaction.
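To see why the excitement transfers, consider a stripped-down sketch of the core mechanism: each block commits cryptographically to its predecessor, so tampering with any block breaks every hash after it. (Real blockchains layer a consensus mechanism such as proof of work on top; this toy omits all of that.)

```python
# Minimal hash-linked chain: each block's hash covers its data plus the
# previous block's hash, so history can't be silently rewritten.
import hashlib, json

def make_block(data, prev_hash):
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

genesis = make_block("genesis", "0" * 64)
payment = make_block({"from": "alice", "to": "bob", "amount": 5},
                     genesis["hash"])
```

Nothing in that structure is specific to currency – which is exactly why the technology’s applications range well beyond Bitcoin.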

Our prediction for 2016: the buzz around Blockchain will surpass that of Bitcoin itself. Perhaps 2016 is the year for a Bitcoin-free, Blockchain-based service to take off, or perhaps only the hype will predominate. But expect to see Blockchain’s star rise as Bitcoin’s loses its luster.

IoT Security Turns a Corner

Everybody’s gone gaga for the Internet of Things – except for one problem: security. Putting sensors and controls into everything from dishwashers to stoplights, and then putting all those gadgets on the Internet, is a hand-engraved invitation for hackers.

This Death Star-sized hole in the IoT has not gone unnoticed by cybersecurity vendors, of course – and there are plenty of products either on or approaching the market to address various aspects of the IoT security conundrum.

Our prediction: overall enterprise sentiment on IoT security will shift from too risky to do more than dabble to we’ve got this covered, so full speed ahead. Security won’t be perfect, of course. I’m sure there will continue to be breaches, well past 2016 in fact. But the perennial risk vs. reward scales will finally shift from risk to reward.

Open Source Web Scale Tech Hits the Enterprise

Hadoop was merely a harbinger. Today we have a plethora of open source initiatives that one way or another distill the best practices of web scale companies like Google, Salesforce, and others. On this list: the Cassandra NoSQL database, the Kafka message broker, the Solr enterprise search server, and the Storm real-time event computation system, among others (all from the Apache foundation).
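To give a flavor of how approachable this gear has become, here’s a minimal sketch of publishing an event to Kafka via the open source kafka-python client; the broker address and topic name are assumptions for illustration.

```python
# Minimal Kafka publish, assuming a broker at localhost:9092 and an
# existing 'orders' topic (pip install kafka-python).
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", b'{"order_id": 42, "status": "shipped"}')
producer.flush()  # block until the broker has the event
```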

However, one name is intentionally absent from this list: OpenStack. Compared to the projects mentioned above, we believe OpenStack’s best days are now behind it, and this effort will largely stall in 2016.

The four Apache projects above, in contrast, are in varying stages of maturity, with work going on at a frenetic pace. As a result, enterprises have largely taken a wait-and-see approach to them up to this point.

Our prediction for 2016, however, is that we’ll see an inflection point in enterprise adoption of various combinations of these web scale efforts, as well as other open source web scale initiatives that are also ramping up quickly.

The Intellyx Take

The three predictions above all center on inflection points – not just trends, but occasions where trends themselves reach some kind of milestone. Timing such predictions is difficult, but we’re never afraid to stick our necks out.

If we chose to predict the continuation of ongoing trends, however, our lives would have been much easier. Disruption? Yes, and more to come. Digital transformation hits and misses, especially misses? You bet. Crazy money pouring into even crazier businesses? Bring it on. Hype, hype, and more hype? Unquestionably.

One final prediction: we’ll be ramping up the promotion of my upcoming book Agile Digital Transformation. Don’t want to miss the fun? Be sure to subscribe to our Cortex newsletter. If you’ve already subscribed, then refer a few friends and colleagues. See you in 2016!

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: Steve Jurvetson.

Should You Fire All Your Techies?

I recently spotted a five-year-old blog post by Mike Gualtieri of Forrester, where he suggests firing your quality assurance (QA) team to improve your quality. He got the idea from a client who actually tried and succeeded with this counterintuitive move.

The thinking goes that without a QA team to cover for them, developers are more likely to take care of quality properly – or risk getting the dreaded Sunday morning wakeup call to fix something.

Gualtieri’s post generated modest buzz at the time, but since 2011 the world has changed. DevOps has turned a corner, representing an end-to-end rethink of how organizations handle the entire software development lifecycle.

Now that 2016 has finally arrived, it’s time to take a fresh look at the question. But why stop with QA? Now that we have DevOps – and digital transformation more broadly – whom else can we fire?

Pros and Cons: Firing Your QA Team

If developers’ butts are on the line, they are more likely to be careful to properly test their own code before deploying it. Furthermore, if you have already been doing test-first development (where developers write the tests), then a separate QA team makes less and less sense as you move to a continuous delivery DevOps culture.
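Test-first in miniature, as a sketch – the test comes first and fails until the developer writes the code that satisfies it (the function here is invented for illustration):

```python
# Step 1: write the test before the implementation exists.
def test_apply_discount():
    assert apply_discount(price=200.0, percent=25) == 150.0

# Step 2: write just enough code to make the test pass.
def apply_discount(price, percent):
    return price * (1 - percent / 100)

test_apply_discount()  # passes -- and quality stays the developer's job
```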

However, for those development shops that still have traditional organizational silos with the commensurate throw-it-over-the-wall thinking, simply crossing an entire silo off the org chart without making any other changes will inevitably cause turmoil.

Waterfall projects have enough problems with quality as it is without squeezing the QA effort further, after all. Better to transition gradually from waterfall to test-first Agile to the fully automated testing that DevOps efforts expect.

Pros and Cons: Firing Your Ops Team

If DevOps empowers us to fire our QA team, then who else can we fire? What about our operations team?

In a DevOps world, after all, ops should be fully automated, where developers (who now have newfangled titles like ‘DevOps engineers’) manage immutable, idempotent infrastructure – without touching any of it directly.

Not that DevOps shops should ever talk about actually firing anyone, least of all the ops folks. Instead we reinvent their roles, so that they deal with scripts and recipes and manifests rather than directly with servers and networks and software infrastructure. Regardless, one way or another, nobody ends up retaining a traditional ops role.
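As a sketch of what ‘immutable, idempotent’ means in practice: describe the desired state declaratively, and let an apply step converge to it, so that running the same manifest twice changes nothing the second time. The manifest fields below are invented.

```python
# Hypothetical desired-state manifest and an idempotent apply step.
DESIRED = {"web_servers": 3, "app_version": "2.4.1"}

def apply(manifest, state):
    """Converge state toward the manifest; safe to run repeatedly."""
    changes = {k: v for k, v in manifest.items() if state.get(k) != v}
    state.update(changes)
    return changes  # an empty dict means already converged

state = {}
print(apply(DESIRED, state))  # first run: applies every change
print(apply(DESIRED, state))  # second run: {} -- nothing left to do
```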

However, while this ‘DevOps-rules-the-world’ perspective may work for some web scale companies, traditional enterprises have plenty of technology that developers don’t generally monkey with.

What about all that legacy, COTS, and other not-invented-here tech? Someone has to manage all that gear – and that noble task still falls to traditional ops personnel.

Transforming traditional IT to the extent that we no longer need anyone in a traditional ops role may still be out of reach for most enterprises, but ‘traditional’ ops is unquestionably becoming an increasingly minor part of the overall IT operations picture.

Furthermore, as companies proceed with their digital transformation efforts, performance management increasingly becomes the responsibility of business stakeholders, and the application performance management market – now rapidly becoming digital performance management – is reflecting this shift.

In spite of this shift, we can’t expect line-of-business (LOB) stakeholders to manage hypervisor configurations or cloud autoscaling parameters or the like. Just because digital transformation slices across the organization horizontally doesn’t mean that we no longer need individuals with specialized skills. Instead, such transformation requires a rethink of how we organize such individuals.

Pros and Cons: Firing Your Developers

Even in today’s fast-paced, turbulent digital business environment, whether enterprises need developers at all is an intriguing question. After all, the low-code, declarative model for assembling software is rapidly maturing (especially for mobile apps), and LOB personnel are building increasingly sophisticated business applications using such technology.

The low-code approach has many advantages over traditional coding: it’s more business-focused, more iterative, lighter weight, and supports the business agility needs of the organization better than traditionally coded apps.

Of course, even in a low-code, drag-and-drop world, someone has to write the underlying software. But there’s no reason an enterprise development team should handle this heavy lifting. Instead, vendors should be responsible for building such ‘agility platforms.’

On the other hand, as enterprises become software-driven organizations, developers – real, hands-on coders – become more important, not less. While low-code tooling can serve an important role, enterprises that rely upon software for their market differentiation are unlikely to do away with their development teams.

For organizations adopting DevOps, furthermore, the newly transformed role of a ‘DevOps engineer’ is first and foremost a developer. The last thing we want to do is fire these folks!

Pros and Cons: Firing the Entire IT Organization

Enterprises have been outsourcing huge swaths of their IT organizations for years, of course. But that’s not really the question here. After all, if you’re working in IT and your company outsources the whole shebang, that rarely means you’re out of a job. It’s more likely that your job mostly stays the same, but you simply start getting paychecks from a new company.

The more provocative question, of course, is whether an enterprise can get rid of its IT organization altogether. With all this talk about bimodal IT – where LOBs drive fast, digital efforts, leaving the old guard IT to keep doing things the old, slow way – perhaps the solution is simply to get rid of slow IT completely.

After all, shadow IT is only shadow if there’s regular, non-shadow IT to compare it to. What if all we had was shadow IT? Could that ever be enough?

As enterprises gradually replace their dinosaur enterprise apps with cloud-centric, modern apps, there should come a time that the entire enterprise can run on a combination of such enterprise cloud apps and LOB-written apps using low-code tooling.

Sounds appealing – but I don’t think the big banks or insurance companies or manufacturers or any other large enterprise will be chucking their entire IT organizations, outsourced or not. In today’s enterprise environment, getting rid of IT is an unrealistic goal.

Instead, the challenge is to transform IT to support business at velocity – which means focusing on security, governance, and maintaining access to systems of record, but not in the traditional, slow ways that throw up roadblocks to digital success.

Such change won’t happen, however, unless companies also transform their organizations – starting with the hierarchical org chart. From the customer to the systems of record, new organizational patterns must slice across existing silos.

The Intellyx Take

The end result: we don’t have to fire anybody. Instead, we’re recommending an end-to-end rework of traditional hierarchical management thinking. After all, the entire premise of this article – fire QA to make quality better, then rinse and repeat – is more about getting rid of a hierarchically organized QA team structure than eliminating the QA people themselves.

Replace the traditional hierarchical organizational structure with a self-organizing, horizontal organizational structure – thus eliminating externally organized teams and the hierarchical management thinking that leads to them.

Eliminating our siloed QA team improves software quality. Eliminating siloed dev and ops leads to DevOps, which improves software deployment and drives software at velocity.

Eliminating a rigidly defined IT organization, it stands to reason, not only solves the bimodal problem – it is also the key to becoming a software-driven enterprise.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image credit: The US Army.

Trimodal IT Doesn’t Fix Bimodal IT – Instead, Let’s Fix Slow

The battle over bimodal IT is heating up. Now that there’s a reasonably broad consensus that Gartner’s advice about bimodal IT is deeply flawed – consensus everywhere except perhaps at Gartner – various ideas are springing up to fill the void.

The bimodal problem, of course, is well understood. ‘Traditional’ or ‘slow’ IT uses hidebound, laborious processes that would only get in the way of ‘fast’ or ‘agile’ digital efforts. The result: incoherent IT strategies and shadow IT struggles that lead to dispersed, redundant, and risky technology choices across the organization.

The battle, however, isn’t over the problem. It’s over what we should do about it. Perhaps we should add a third mode?

That’s the opinion of Simon Wardley, a longtime thought leader, business strategist, and all-around curmudgeon who has been beating the Value Chain Mapping (VCM) drum for many years. He and I are of similar mind with respect to Gartner’s bimodal IT advice – so much so that I quoted him in my recent Forbes article, Bimodal IT: Gartner’s Recipe for Disaster.

Where our opinions diverge, however, is how to fix the problem. Following the precepts of VCM, Wardley introduces an intermediate mode, leading to what he calls trimodal IT. From my reading of Wardley, however, I believe that applying his trimodal approach to the challenge of bimodal IT is improperly thought out.

It’s not that his ideas aren’t sound, but rather that how he applies them to this particular problem leads to unintentional misunderstandings and in the end, incorrect conclusions. It’s time to clear things up.

My Interpretation of Wardley’s Value Chain Mapping

Value Chain Mapping (VCM) is an approach to creating a business strategy that focuses on customer needs and the steps necessary to meet those needs, what Wardley calls a value chain. As the VCM for an organization matures, the activities in each value chain coalesce into three main groups of people, as shown in Wardley’s illustration below.

The organization step in Value Chain Mapping (Source: Simon Wardley)

In the diagram above, the ‘pioneers’ are driving innovation, the ‘settlers’ take the resulting innovations and turn them into products, and then the ‘town planners’ take those products and scale up production, thus driving profitability for the enterprise.

As an approach to business strategy, VCM makes plenty of sense. Enterprises from big pharmas to auto manufacturers have been following this recipe for decades in one form or another, after all. But does it solve our bimodal IT problem?

Wardley thinks it does. He sees bimodal IT as an application of the chart above, only missing the middle, ‘settlers’ section – thus casting the ‘fast,’ digital efforts as pioneers and ‘slow,’ traditional IT as ‘town planners.’

From Wardley’s blog: “The problem with bimodal (e.g. pioneers and town planners) is it lacks the middle component (the settlers) which performs an essential function in ensuring that work is taken from the pioneers and turned into mature products before the town planners can turn this into industrialised commodities or utility services. Without this middle component then yes you cover the two extremes (e.g. agile vs six sigma) but new things built never progress or evolve. You have nothing managing the ‘flow’ from one extreme to another.”

The Problem with Trimodal IT

The problem with the bimodal IT pattern, however, isn’t the need for an intermediary mode – the entire question is what we should do with slow IT. Fundamentally, this problem isn’t a VCM problem at all.

You can see why Wardley made this correspondence between bimodal IT and VCM. The ‘pioneers’ are the innovators, fast-moving and chaotic. If you’re building software, then these folks will likely use Agile/DevOps approaches. Sounds a lot like fast mode IT for sure.

The problem, however, is with the slow mode. The trimodal pattern recasts slow IT as ‘town planning,’ which is a poor fit at best.

If you applied the VCM model to IT, and the ‘town planners’ represented traditional IT run in traditional, slow, waterfall ways, that would be different. But in reality VCM characterizes the ‘town planner’ phase with commodity, utility services.

Just one problem: traditional, slow IT doesn’t deliver commodity services as a utility. In this context, IT reaches ‘town planning’ when it’s using the cloud. And we all know the cloud is quite different, both technologically and strategically, from traditional, slow IT.

In other words, VCM is a very good fit for describing the evolution of the maturity of IT services generally. In the early days all we had were bespoke applications running on manually configured infrastructure (‘pioneers’). Over time vendors productized the enterprise apps and we developed standard processes like ITIL for dealing with the gear (‘settlers’). Eventually we externalized and abstracted the infrastructure so that we could deliver it as well as software on demand as a service (‘town planners’).

If we bring this vision of VCM to enterprises struggling with bimodal IT, then the cloud bits of what they are doing are the ‘town planning,’ and the pioneers and settlers use the cloud as needed for their own tasks. In other words, ‘town planning’ doesn’t correspond to slow-mode IT at all.

Wardley’s trimodal IT model, therefore, makes perfect sense in the appropriate context – but applying the ‘town planners’ category to traditional IT is a complete mischaracterization. Instead, if we compare trimodal to bimodal’s ‘fast’ and ‘slow,’ the entire trimodal value chain – pioneers, settlers, and town planners – should all fit into ‘fast.’

Industrialization of IT: Flexible or Not?

One of the reasons why the third mode of trimodal is so confusing centers on the choice of terminology.

Note that in the diagram above, Wardley puts the town planners in the ‘industrialized’ band. ‘Industrialization’ brings to mind, say, Henry Ford’s assembly line. Certainly, we could say that Ford’s innovation was ordered, known, measured, stable, standard, dull, low margin, and essential – all characteristics of industrialization from Wardley’s Value Chain in the diagram above.

In retrospect, however, Ford’s market dominance was short-lived, because his business strategy wasn’t flexible enough. His “any color as long as it’s black” philosophy simply didn’t meet customer needs on a long-term basis, and as a result, Ford ceded leadership in the automobile market to General Motors – who not only offered other colors, but also a diversity of brands and perhaps the most important innovation of all, the model year.

When we talk about the industrialization of IT, therefore, do we mean an IT organization so slow and inflexible that it will increasingly struggle to meet customer needs over time? Given that the term generally has a positive connotation, this negative interpretation would not align with most people’s meaning of the word – and yet, such inflexibility is exactly what the bimodal slow mode entails.

There is, in fact, no clear definition of the industrialization of IT. Gartner, predictably, excludes any notion of flexibility from its definition: “The standardization of IT services through predesigned and preconfigured solutions that are highly automated and repeatable, scalable and reliable, and meet the needs of many organizations.”

The reason Gartner’s definition above is so predictably inflexible is that it aligns with their ‘slow IT’ worldview. The best we can expect from slow IT, according to Gartner, are standardized, repeatable solutions – but flexibility? Perish the thought! If you have a lot of inflexible legacy gear and you’re looking for an excuse to keep it around, then Gartner’s advice will likely appeal to you.

In contrast, take a look at CapGemini’s definition of industrialized IT, which includes new operational models like globalization, offshoring and IT shared service centers, new engagement models including cloud computing, and new technologies that leverage virtualization. Their definition also includes organizational changes like improved financial transparency and the shaping of new IT governance models.

In CapGemini’s worldview, therefore, there’s little room for inflexible legacy in industrialized IT. If you have such gear, then industrializing your IT according to this definition will take plenty of time and money to be sure, and clearly the good folks at CapGemini would be only too happy to help. That being said, CapGemini is unquestionably on the right track here.

The challenge with bimodal IT, after all, isn’t aligning fast with slow. It’s fixing slow. That doesn’t mean simply making it fast, an impractical straw man which underlies the bimodal canard that Gartner has been peddling. But it also doesn’t mean commoditizing slow and turning it into a service, either, as that approach would never support the agility goals of the enterprise.

The Intellyx Take

Any IT strategy that recommends transforming into an efficient but inflexible technology organization simply doesn’t make sense in today’s digital world, as companies strive to become software-driven enterprises.

Value Chain Mapping may be useful for improving business strategies for enterprises looking to scale their product offerings, but characterizing bimodal IT as nothing but trimodal IT without the settlers directs the focus away from the real issue, which is how to properly transform traditional IT organizations to support the enterprise’s agility requirements.

When it comes to such transformative modernization, CIOs have been dragging their heels for years. It’s too expensive or too difficult, they say. It’s too risky. And perhaps it was.

At some point, however, the risk of not transforming traditional IT surpasses the risks inherent in biting the bullet and moving forward with such transformation. Across the globe, today’s enterprises are hitting that critical inflection point now, if they haven’t already.

Making the right decision at the right time about transformative modernization will be a company-saving or company-destroying decision – and remember, no enterprise is too big to fail. Choose wisely.

Intellyx advises companies on their digital transformation initiatives and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers.
