Cloud Migration done right

Lessons on how Juniper Networks made the switch to the cloud and what you can learn

For many companies, transitioning to the cloud is a long process with many steps. At the 2016 Structure Conference in San Francisco, Bob Worrall of Juniper Networks explained his company’s strategy and provided some best practices.

Over the past four years or so, Worrall said, Juniper has closed 17 of its 18 data centers, and moved 85% of its applications to the cloud. By June 2017, he said, the company won’t have any corporate data centers remaining. Almost all of their apps will be running on Amazon Web Services (AWS), but their engineering assets will all be hosted on a private cloud, where they are adding their own tech and IP to make it a showcase for what they can do in networking, Worrall said.

One of the first considerations was the cost of moving to the cloud. On the engineering side, moving to the private cloud was a “no-brainer” in terms of its financial impact, Worrall said. But the transition away from their corporate data centers to the public cloud was a “fine line.”

Worrall said that Juniper, like many others, had to refactor its apps to run in the cloud, which was costly. So, he said, you have to pay attention to the cost model. Juniper designated some IT employees to review monthly billing statements to make sure the company is continually optimizing storage and getting the most for its money.
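The billing-review discipline Worrall describes can be partly automated. A minimal sketch, assuming billing data has already been exported as simple per-service monthly totals (the service names, figures, and 20% threshold here are hypothetical):

```python
# Flag services whose month-over-month spend grew beyond a threshold --
# the kind of check a monthly billing review might automate.
def flag_cost_growth(previous, current, threshold=0.20):
    """Return (service, fractional_growth) pairs exceeding `threshold`."""
    flagged = []
    for service, cost in current.items():
        prev = previous.get(service)
        if prev and cost > prev * (1 + threshold):
            flagged.append((service, (cost - prev) / prev))
    return flagged

last_month = {"object_storage": 1200.0, "compute": 5400.0}
this_month = {"object_storage": 1560.0, "compute": 5500.0}

for service, growth in flag_cost_growth(last_month, this_month):
    print(f"{service}: +{growth:.0%}")  # object_storage: +30%
```

In practice the inputs would come from the provider's billing export or cost API rather than hand-typed dictionaries.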

On the security side, Juniper created a team to think through security and compliance and to make sure those needs are being met. It took “grinding” to get the legal team comfortable with the cloud, Worrall said. But they’re also investing heavily in network monitoring, logging, and inspection so they can more readily detect and respond to any issues.

The cloud has “retooled” the Juniper organization, Worrall said. Juniper has made significant investments in the skillsets needed for designing apps for the cloud, and refactoring them to run in the cloud. According to Worrall, the firm has bifurcated its development team to focus on those issues and they have added new employees who understand the cloud environments of Amazon, Microsoft, and IBM.

The most critical skills, he said, are found among the people who know how to make these cloud platforms work together. So, they’ve hired more people in the US and India to look after that connection, and they use Oracle Fusion Middleware to connect all the clouds together.

Juniper’s vendor management team has also been strengthened to better look after SaaS contracts and vendor promises, to make sure their cloud investments are optimized.

While there has been a lot of positive reaction from employees, Worrall said, some people have left the company over time. As a response, Juniper is investing in existing employees to help them grow their skills. Companies should consider the impact that cloud will have on their workforce.

Craig Ashmole, Founding Director of London-based CCServe, stated, “There are commercial businesses that are taking advantage of the strategic moves that Juniper Networks has taken. These businesses can deploy greenfield data centre sites in a matter of weeks. This is real value.”

Picture above: Juniper Networks CIO, Bob Worrall, speaking with ZDNet's Stephanie Condon at Structure.
Image: Jason Hiner/TechRepublic

Having spent a majority of my career working with and supporting the Corporate CIO Function, I now seek to provide a forum whereby CIOs or IT Directors can learn from the experience of others to address burning Change or Transformation challenges.

Craig Ashmole

Founding Director CCServe

5G is Future of mobility

5G is coming and it is the future of mobile

In 5G, wireless will grow up into a true horizontal industry that provides a support system for literally everything

Scouting the news feeds on where we are going with respect to the world of mobility, I came across this interesting article written by Alan Carlton, who has 25 years in the wireless technology industry spanning 2G, 3G, 4G and beyond. The focus at this year’s 2016 Mobile World Congress is, of course, 5G and how it will take the use of mobility to new reaches for IoT.

It is fair to say that it is still early days for 5G, but research efforts have been rolling for some time and standardization is expected to start in the next few months. Perhaps the two most-cited requirements in 5G are the 1000x improvement in peak data rates (on LTE 2010) and a big reduction in end-to-end latency. These KPIs are important, of course, and keep us engineers pointed in the right direction. But really, they only tell a small part of the 5G story.

A better way to understand 5G is first through a historical lens. It is astonishing to reflect that this mobile industry adventure really only began a little over 20 years ago with the proliferation of GSM. In those days, peak data rate support was a massive 9.6 kbps! Today, deployed LTE systems have improved upon this metric by 100000x. Presented in this context, the 1000x goal of 5G doesn’t really seem so crazy, does it? GSM or 2G, of course, was not designed for data. 2G was designed really for only one thing: basic telephony applications. 3G raised the bar with a specification that supported more voice users and the beginnings of a mobile internet. 4G took this further with the first real system designed principally to support video. It is this evolution that has driven the 100000x. And further, it is this service roadmap that has also driven latency reduction on a parallel path. At the simplest level, 5G will certainly be about more of this. However, the true 5G vision is a lot more interesting.
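The generational multipliers above are easy to sanity-check with a few lines of arithmetic. A quick sketch (the ~100 Mbps figure for LTE circa 2010 is an assumption, not from the article):

```python
# Peak-data-rate arithmetic behind the 100000x and 1000x figures.
gsm_peak = 9.6e3                 # bps: early GSM, as quoted above
lte_peak = gsm_peak * 100_000    # the article's 100000x improvement
print(f"LTE peak: {lte_peak / 1e6:.0f} Mbps")

lte_2010 = 100e6                 # bps: assumed LTE peak circa 2010
target_5g = lte_2010 * 1_000     # the article's 1000x 5G goal
print(f"5G target: {target_5g / 1e9:.0f} Gbps")
```

This puts deployed LTE at 960 Mbps and the 5G aspiration in the 100 Gbps range, consistent with the scale of the claims above.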

In 5G, wireless will grow up into a true horizontal industry that provides a support system for literally everything. 5G is the first generation to target supporting the full array of vertical markets (e.g. automotive, transport, and health) that in themselves will define the so-called Internet of Things (IoT). This is the real 5G challenge, and in this respect 5G and the IoT are simply two sides of the same coin. Think about this challenge: what do a car and a thermostat have in common? They are both part of the IoT! So, how will 5G go about tackling this “everything” challenge?

Think flexibility. Think simplification. Think re-imagination. These concepts will permeate all aspects of 5G, from the services supported and how the network is designed, all the way down to the elemental new waveforms that may provide us with some new acronyms and labels for this fifth generation.

5G will be built on a foundation of established IT thinking. The cloud, network function virtualisation, and programmable networking (aka SDN/NFV) will provide the cornerstones. These technologies inherently deliver flexibility and, at least through the eyes of any IT professional, are a lot simpler than the legacy approach of telecom. 5G will, however, take these technologies to new levels and depths of integration, and in so doing will shape the 5G specification that will be defined in the months and years ahead.

“Reading my previous blog on the IEEE predictions for technology advancement in 2016, both 5G and network function virtualisation (NFV) are at the top of their list,” states Craig Ashmole, Founding Partner of London-based IT consulting firm CCServe. “The other item coming across is containers, which hold software application logic and all of its dependencies, run as an isolated process, and execute the same in any environment. This creates parity between dev and production, and enables developers to confidently break up apps into discrete chunks.”

SDN/NFV in telecom is a hot topic today, but where we are now only scratches the surface of its original vision. The focus of work in this area now is primarily Total Cost of Ownership and OPEX reduction through switch hardware commoditization and the efficient relocation of a subset of network functions.

In 5G, SDN/NFV concepts will be pushed much further, returning to the original value proposition, namely that of enabling true architectural innovation. In 5G, it will not simply be about virtualizing the network functions, but about entirely changing the way the network works internally. Network evolution will be the frontline in the realization of many 5G requirements (e.g. low latency). Today’s internet is simply not designed to support low latency. However, through programmable networking, new, more efficient approaches will become possible.

In 5G, virtualization will touch every element in the system, spanning backhaul, fronthaul, and radio access. It is within this flexible, dynamically configurable fabric that system resources will be optimally and instantaneously orchestrated to deliver the next generation experience to end users.

This article is published as part of the IDG Contributor Network

Having spent a majority of my career working with and supporting the Corporate CIO Function, I now seek to provide a forum whereby CIOs or IT Directors can learn from the experience of others to address burning Change or Transformation challenges.

Craig Ashmole

Founding Director CCServe

Business Changing IT Spend

How business outcomes are transforming IT spending

According to a recently released study by Datalink and IDG, business is playing a bigger role than ever in IT spending.

The relationship between business leaders and IT is equal parts necessary and contentious. More and more, though, decisions made on the business side are having an even greater impact on IT.

International Data Group (IDG) has just released the results of a study commissioned by Datalink, which showed just how closely linked IT investments are becoming with business results. The study polled more than 100 IT executives and senior-level managers from large U.S. organisations and took place in Q4 2015.

When asked where they wanted to invest their IT dollars currently, respondents listed the following areas as their top five considerations:

  1. Improving IT security – 70%
  2. Improving customer/client experiences – 59%
  3. Managing costs – 59%
  4. Boosting operational efficiency – 52%
  5. Mitigating risk – 44%

However, what may be more interesting is not where these organisations are making their investments in IT, but why. According to the report, 70% of respondents said it’s critical that they’re able to link IT investments to tangible business outcomes.

So, if an understanding of IT’s impact is this important, do these organisations feel that they are communicating it clearly enough? Well…not necessarily. Only 47% said that their organisations are doing an excellent or very good job at communicating how a particular IT investment impacted a business outcome. The remaining 53% said their organisation needs at least some, if not significant, improvement in doing so.

Not only did respondents say that identifying the impact on the business was important, but 68% of them said that, when making an IT investment decision, the business goals were more important than any of IT’s operational goals.

Since business is this important a consideration in each IT investment, let’s take a look at which initiatives are currently running among respondents. Here are the top five:

  1. Security
  2. Disaster recovery/business continuity
  3. IT governance/compliance management
  4. Cloud/virtualisation management
  5. Public cloud (including SaaS)

Of the projects currently in the build stage, the top three were agile development platforms, converged data center infrastructure, and process automation.

This, of course, raises the question of which IT initiatives are actually driving business outcomes. According to the report, process automation got top marks, and security, application performance management, and cloud/virtualisation management all got a nod as well. However, all four of these were also listed as the most difficult to deploy and maintain.

The IT lifecycle as a whole has its challenges, though. In building out an initiative, 41% said the planning stage was the most difficult, 36% claimed the building stage was the hardest, and 32% labeled testing the most challenging.

As businesses seek to tie in business success to IT investment, a few distinct roadblocks come up. Here is how respondents labeled the top five challenges in driving business outcomes through IT investments.

  1. Difficulty standardising/streamlining business processes – 34%
  2. Too many manual processes (need for more automation) – 33%
  3. Difficulty keeping up with demand for new application dev – 31%
  4. Poor communication between IT and lines of business – 29%
  5. Lack of support/sponsorship from executive management – 29%

Moving forward, as more of these leaders seek to connect the dots between their IT investments and how their business fares, most (56%) are looking to streamline the operational processes to make it more apparent. Others are increasing standardisation (38%) or moving away from legacy systems (37%) to accomplish the same.

Having spent a majority of my career working with and supporting the Corporate CIO Function, I now seek to provide a forum whereby CIOs or IT Directors can learn from the experience of others to address burning Change or Transformation challenges.

Craig Ashmole

Founding Director CCServe

Request For Proposal

CIOs Seeking Innovation – Should the RFP process be replaced by the innovative RFS?

There’s an innovative way to build and drive the RFP process as CIOs look to expand service capability and innovation. But should the RFP be replaced with the RFS (Request for Solutions)?

Many CIOs are tasked with replacing aging legacy systems and implementing efficient IT infrastructures and effective applications that can deliver an edge in a highly competitive business environment. Innovative IT outsourcing initiatives can address this challenge, but many businesses have failed to integrate supplier expertise and achieve real value or fresh ideas from their outsourced or technology relationships.

Rather than leveraging the skills and capabilities of third parties, CIOs find that their sourcing initiatives are often limited to staff augmentation, with suppliers essentially filling the role of pure order-takers and very little innovative ideas being brought to the table. For those corporations that are able to bring in relevant or specific or unique domain expertise, either through a third party or a captive operation, they are then faced with managing price which becomes an issue of unique skills value.

“For their part, outsourcers offer technical expertise but often lack the understanding of actual client business issues needed to offer a compelling solution that addresses a client’s hot buttons,” comments Craig Ashmole, Founding Partner of London-based IT consulting firm CCServe. “This is largely due to a lack of understanding of the business their client sits in, and of how to be a differentiator to new clients. There’s too much replication of services being provided in order to utilise economies of scale.”

Ultimately, clients struggle to articulate their requirements and providers struggle to articulate their value proposition – the result is a lose/lose proposition. The art of really differentiating services is being muddied in the waters.

Part of the problem may also lie in the manner in which CIOs define their objectives and select service providers. In a traditional RFP, clients articulate a specific set of requirements, and vendors respond by filling in the prescribed blanks. Increasingly, all parties are finding that this approach can stifle innovation, as it essentially defines the solution to the problem rather than soliciting new ideas.

An emerging alternative – the “Request for Solution” – takes a more open-ended approach and invites providers to show their creativity. Consider this analogy: a CIO demands that utility services such as BPO administration be outsourced to reduce costs, and vendors simply quote against that prescription. This is the basic dynamic that characterises the traditional RFP process.

Alternatively, a CIO provides a list of capabilities that need to be addressed along with a set of broad criteria: administration, HR, recruitment, payroll, training and disciplinary process review, for a budget not exceeding x amount of dollars. In this scenario, the vendor/outsourcer has the leeway to be creative, offer a variety of solutions, and even introduce innovative technology that could reduce staffing levels. This approach more closely resembles the RFS (Request for Solutions) process.

A similar re-think is taking place with regard to contracting. Rather than a highly detailed, voluminous document that takes months to prepare, review and complete, clients are seeking more flexible approaches that allow both parties to test the waters and develop the relationship further if it’s of mutual benefit. In describing this concept of “Evolutionary Contracting,” ISG’s Tom Young challenges the industry bromide that outsourcing relationships are like marriage, and that both require commitment over the long term. Young argues that, rather than viewing their service provider contracts as wedding vows, clients should think of outsourcing as more of a dating game.

We are by no means suggesting that traditional outsourcing RFPs and contracts are becoming irrelevant. Indeed, they remain essential to initiatives aimed at optimising existing operational models. But we are seeing more and more situations where clients have transformational requirements and face problems that have more than one right answer. Many CIOs struggle to make the most of opportunities presented by mobility, big data and other emerging technologies.

Perhaps it’s time to give the RFS and Evolutionary Contracting a closer look.

Having spent a majority of my career working with and supporting the Corporate CIO Function, I now seek to provide a forum whereby CIOs or IT Directors can learn from the experience of others to address burning Change or Transformation challenges.

Craig Ashmole

Founding Director CCServe

IDC 2015 Contact Centre Leaders

2015 IDC Worldwide Contact Centre CCaaS Vendor Assessment matrix

The IDC MarketScape study examines the key players in the worldwide contact center infrastructure and software (CCIS) market, analyzing current capabilities as well as longer-term strategies.

The CCIS market includes voice and digital media contact distribution, management, and agent-software clients, as well as self-service solutions for voice, web, and mobile devices used to offer customer service solutions as part of a customer experience strategy. IDC also examines the ecosystem and cloud (public/private) deployment, customer experience solutions, and mobile customer care solutions, as well as the go-to-market models vendors use to achieve these.

Key criteria that contribute to a successful CCIS offering include:

  • The ability to present a strategy that comprises key technologies that focus on the 3rd Platform of IT, including cloud (public and private), Big Data and business analytics, mobility, and social business functionality.
  • Vendors that present innovative strategies around partner management, pricing, and product packaging.
  • Vendors that can provide flexible delivery options for partners and customers as part of their video portfolios (on-premises, managed, hosted, cloud).
  • Business partnerships and sales channels that open up new markets for the vendor’s offering, yet still maintain a high level of support and customer care.

Twelve of the leading worldwide contact center infrastructure and software vendors profiled in the report are:

  • ALE (formerly Alcatel–Lucent Enterprise)
  • Avaya
  • Cisco
  • Genesys
  • Interactive Intelligence
  • Intelecom
  • Loxysoft
  • Mitel
  • NEC
  • SAP
  • ShoreTel
  • Unify

Some of the key challenges for customers investing in contact center infrastructure and software are the identification of technologies, features, and applications that are most appropriate for their organisations, and more importantly, which source(s) they should turn to for deployment and expertise.

CCaaS leaders 2015

“Although there were 12 key vendors evaluated, it is my opinion that the leader of the pack – Genesys – showed more diversification with regard to capabilities and ability to move with market demands, so this report has focused on the overall capability of Genesys,” stated Craig Ashmole, Founding Partner of London-based IT consulting firm CCServe.

The three primary sources of CCIS functionality are:

  1. IP PBX/unified communications and collaboration (UC&C) vendor solutions and the enterprise network, such as Cisco, Avaya, ShoreTel, Unify, ALU Enterprise/Huaxin, NEC, Mitel, and Huawei.
  2. Standalone contact center solution environments from vendors such as Genesys, Interactive Intelligence, and SAP.
  3. Hosted/managed and cloud service provider solutions offered by facilities-based providers such as Genesys, inContact, Verizon, and 8×8.

Since there is no one-size-fits-all solution for contact center solutions, customers can choose from an assortment of features from these sources, which may require a little, or a lot, of integration to make the solution run on customers’ network infrastructure and/or within the bounds of their existing services/carrier contracts.

  • Many organisations find CCIS solutions complex and are not sure how they would go about managing and maintaining the environment. Therefore, having a solution managed by a third-party provider would help remove the complexity for them and alleviate the need to make internal investments in hiring appropriately skilled IT staff to manage and maintain it.
  • Businesses are looking at ways to reduce the amount of real estate to lessen operational costs and lower their carbon footprint generated by existing premises-based equipment. As a result, businesses are reducing the amount of hardware equipment they have on-premises.
  • Cloud environments can provide greater levels of automation, orchestration, provisioning, and deployment. Transitioning to the cloud can help organisations reduce operating costs, improve application performance, and better allocate their resources. However, contact centers are generally more strategic than, for example, unified communications (UC) solutions so the transition is slower and the ability for customisation can be less than a system on-premises or hosted by a service provider.
  • Businesses reliant on high levels of security will be more inclined to move existing solutions to hosted and private cloud deployments. In addition, many providers still need to do more work in terms of updating or bringing inadequate security policies to reassure companies that the transition to a cloud-based environment will provide them with the proper level of security.

In Summary:

“The CCIS market includes functionality that runs on standards-based equipment or purpose-built systems such as PBX. It has revived itself over the past three years with vendors active in several acquisitions, divestments, and partnerships,” said Jason Andersson, program director, IDC Nordics. “The movement to cloud is clear as investments in both hosted solutions and cloud solutions are beginning to make global headway.”

IDC expects 9.4% revenue growth in worldwide CCIS in 2015. Although premises-based solutions have garnered high attention in recent years, enterprise evaluations, trials, and ultimately adoption of hosted solutions (single-tenant) and cloud (multitenant) CCIS solutions will contribute significant growth predicted for the global market this year. Revenue growth will be driven by enterprises looking to retain capital, reduce costs, and improve customer experience, as well as by service providers refining their contact center strategies and product portfolios.

The full report covering all the vendors can be found on the IDC website, but should you want to see the deep dive on Genesys, covering their premises-based platform as well as the cloud-based contact centre offering, then you can read that report here.

Having spent a majority of my career working with and supporting the Corporate CIO Function, I now seek to provide a forum whereby CIOs or IT Directors can learn from the experience of others to address burning Change or Transformation challenges.

Craig Ashmole

Founding Director CCServe

Taming the Internet of Threats

Internet security continues to plague us, with revelations of expanding malware introduced through advertising on the internet

The shocking truth of an unbelievable 325% rise in malware-infected advertising hitting our email, PCs, smartphones and tablets.

A recent report by security firm Cyphort Labs has revealed a dramatic rise in the amount of malware sent through advertising, known as ‘malvertising’. It is fast becoming one of the most popular types of drive-by attack for cyber criminals, who can easily corrupt the legitimate ad supply chain, targeting consumers directly and infecting their machines with malware.

Malvertising works by hackers placing seemingly legitimate or ‘clean’ ads on sites, then altering them or executing secretly embedded code that can force a computer to load malicious software. According to Cyphort, cyber criminals are choosing this method because it offers little or no resistance when attacking networks.

Some of these infected ads need to be clicked on in order to release the malware, but an increasing number of cases are appearing where the ads are instead covertly embedded with code that can exploit browser vulnerabilities, thus not even requiring the victim to click on anything before falling under attack.

There is even an element of sophistication in the development of malvertising, as cyber criminals are able to conduct attacks with some degree of selective targeting – much in the same way that legitimate ads can.

During 2014 alone, Cyphort saw a colossal 325% rise in malvertising, with cybercriminals costing global advertisers an estimated $6.3 billion this year through the use of automated programs and click-through ads on third-party sites.

With the continued increase of websites using cookies to produce targeted ads as well as our own growing online habits, malvertising looks set to rise further still. The challenge then is for ad networks to keep a hold of their ability to control and monitor each and every ad that is being cast out into the cyber-sphere.

And we are moving rapidly into the IoT (Internet of Things), as many devices, and even toys, that we now buy have WiFi, Bluetooth or USB connectivity.

“As the world connects more and more smart devices to the internet, the number of potential vulnerabilities will increase in linear fashion,” comments Craig Ashmole, Founding Partner at London-based IT consulting firm CCServe. “I’m not one to give ammunition to the doomsayers about the Internet of Things, as I believe that on the whole it’s going to be a major change in what we do and see, but someone recently described the IoT as the ‘Internet of Threats’!”

Open by default?

Many smart devices that are ubiquitous throughout the manufacturing and processing industries have in fact turned out to have been installed with no security protocols. They were originally commissioned with the expectation that they would only be used in a closed, secure loop. Recent cyber security breaches have taught us that even humble industrial (and office) equipment could be subverted for malicious purposes.

Therefore, it’s only fair to suggest that we should certainly be looking to protect the corporate data centre from generic attacks, and the best way of doing that is not to leave the security door wide open.

Internet security advice is so often aimed at IT but we should also be considering other areas. So, for data centre and facility professionals, here are five basic things that will help protect your company and its reputation. Other than time and employee costs, many of these actions are “free”.

Basic fixes

  1. Simplify: Complexity increases the number of attack surfaces. An easy way to reduce this is to turn off default functionality that is not being used, and disconnect equipment that is not in use.
  2. Strengthen: Adopt the view that published default usernames and passwords are 100 percent compromised and should be changed. Eliminate default credentials (passwords, SNMP community strings, etc). Replace them with strong passwords and, wherever possible, use different usernames and passwords for different people.
  3. Partition: Isolate the facility network from the enterprise network. If possible build a separate physical network for the data centre and hide it behind a physical firewall to keep hackers away from mission-critical equipment.
  4. Update: Ensure that all devices have the latest firmware, and revisit this regularly to keep up with security patches. Do not make it easy to exploit known vulnerabilities.
  5. Lock down: Physically secure critical equipment, create an access control plan and be sure to use it. Some protocols used on equipment are 30 years old, developed at a time when we didn’t have security concerns. Putting equipment behind closed doors with access control goes a long way to making them secure.
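Points 2 and 4 in the list above lend themselves to a simple automated audit. A minimal sketch, assuming a hand-maintained device inventory (the field names, device models, and credential list are all hypothetical):

```python
# Audit a device inventory for two items from the checklist above:
# unchanged default credentials (point 2) and stale firmware (point 4).
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def audit_devices(devices, latest_firmware):
    """Return (device_name, issue) findings for the inventory."""
    findings = []
    for dev in devices:
        if (dev["username"], dev["password"]) in DEFAULT_CREDENTIALS:
            findings.append((dev["name"], "default credentials"))
        if dev["firmware"] != latest_firmware.get(dev["model"]):
            findings.append((dev["name"], "outdated firmware"))
    return findings

inventory = [
    {"name": "pdu-01", "model": "PDU-X", "username": "admin",
     "password": "admin", "firmware": "1.0"},
    {"name": "crac-02", "model": "CRAC-Y", "username": "ops",
     "password": "s3cure!", "firmware": "2.3"},
]
latest = {"PDU-X": "1.2", "CRAC-Y": "2.3"}

for name, issue in audit_devices(inventory, latest):
    print(name, issue)
```

Running this flags `pdu-01` on both counts; a real audit would, of course, pull the inventory from discovery tooling rather than a literal list.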

It is assumed that active scanning tools (network scans, intrusion-detection and penetration logs, email scanners and antivirus software) will have been implemented by IT as part of sensible enterprise protection measures, but if you work in the data centre and are unsure about this, you should definitely check.

To read the report from Cyphort Labs that shows the dramatic rise in the amount of malware sent through advertising, known as ‘malvertising’, fill in the form on the left to access it.

Comments also from: Soeren Jensen, VP Schneider Electric.

The IT Clock Speed in 2015

How CIOs can raise their ‘IT clock speed’ as pressure to innovate grows

CIOs are facing pressure from the board to roll out IT projects increasingly quickly. How can they do that without running unacceptable risks?

“I came across this article in Computer Weekly, based on cutting-edge research among leading businesses by CEB – formerly the Corporate Executive Board – and other industry experts, which sums up the need for IT speed and agility very well indeed,” stated Craig Ashmole from IT consulting firm CCServe Ltd.

Businesses are under pressure to radically rethink the way they manage information technology and the ability to introduce new technology quickly has become a boardroom issue.

More than three-quarters of business leaders identify their top priorities as developing new products and services, entering new markets and complying with regulations, a CEB study of 3,000 business leaders has found. And the speed at which the IT department can respond to those demands is critical, says Andrew Horne, a managing director at CEB, a cross-industry membership group for business leaders.

“We are increasingly hearing from CEOs that their biggest concern is how organisations can change faster. The thing slowing them down and helping them speed up is technology,” he tells Computer Weekly.

Agile is not enough

The CEB – formerly the Corporate Executive Board – argues that traditional ways of managing IT are no longer able to meet the demands of modern businesses. Even agile programming techniques, which do away with long-winded development cycles in favour of rapid programming sprints, are not enough to get companies where they need to be.

A new model for IT is beginning to emerge, which is helping organisations speed up project roll-outs and free up reserves from maintenance to spend on innovative, high-impact IT projects.

Why the IT department needs a faster clock speed

IT departments have always been under pressure to respond more quickly to business demands.

Ten or 15 years ago, they hit on the idea of standardising their IT systems to cut down development time. Rather than roll out multiple enterprise resource planning (ERP) systems in different areas of an organisation, it made much more sense to roll out a single ERP system across the whole organisation.

That worked for CIOs back then, says Horne, but the emergence of cloud computing, analytics and mobile technology means IT can no longer keep pace with the demands of the boardroom. “Now the environment is so competitive. You have legacy systems, big data, analytic tools, technology for customers. You can’t standardise that. You can’t globalise that,” he says.

The rise of the two-speed IT department

In response, companies have opted for a two-speed approach, carving out specialist teams within the IT department to work exclusively on urgent projects.

The fast teams, often dubbed “tiger teams”, focus on innovation, use agile programming techniques and develop experimental skunkworks projects. CEB’s research shows the idea has worked, but only up to a point. Once more than about 15% of projects go through the fast team, productivity starts to fall away dramatically.


“You start off with agile tiger teams, top people, and they deliver at speed. They are glamorous, they get a lot of accolades from senior management,” Jaimie Capella, managing director for CEB’s US IT practice, told a masterclass for CIOs in October 2015. “And then we hit the valley of despair.”

Fast teams cannot work in isolation. They rely on other IT specialists in the slow team to get things done, and they need to work with other parts of the business that do not work in an agile way. Once their workload grows, the teams find themselves dragged back by inertia in the rest of the organisation.

There is another problem too. As Capella pointed out, no IT professional with any sense of ambition will want to work in the slow team. “You create the fast team, give it a cool name, put it in new offices. Then everyone on the slow team wants to be in the fast team. That creates morale issues,” he says.


The emergence of adaptive IT

The answer that is beginning to emerge from this growing complexity is called adaptive IT. It allows the whole organisation to respond quickly to projects, if it needs to. CEB’s Horne describes it as “ramping up the IT clock speed”.

IT teams will either work at a fast pace or a slow pace, depending on the needs of their current project, and they will be comfortable changing between the two different modes of working.

What is IT clock speed?

IT clock speed is the overall pace at which IT understands business needs, decides how to support those needs and responds by delivering capabilities that create value.

“In any given situation the team can make a call on whether speed is the most important thing, and trade that off against cost or reliability,” says Horne. Building an adaptive IT department is difficult, and needs a radical rethink of the way CIOs manage their own part of the organisation and their relationships with the rest of the business.

Research by CEB suggests that 6% of organisations have made the transition, while some 29% are taking active steps towards it.

Eliminate communication bottlenecks

Companies can achieve the biggest improvements in IT clock speed by systematically identifying bottlenecks in the IT development cycle. Typically, the sticking points occur when agile IT teams collide with other parts of the organisation.

“Sometimes the problem is conflicting timelines. The IT team needs an architecture decision next week, but the architecture team only meets once a month. Or it needs an urgent risk review, but there is only one person in the organisation doing risk reviews,” says Horne.

The next step is to minimise the red tape. As Horne points out, IT has become extremely process-orientated. Techniques such as ITIL and agile development are in fashion. “And for good reason,” he says. “It helps IT departments keep control.”

But how much process do CIOs really need? Adaptive companies have found they can manage with much less than they might think. They make streamlined processes the default. And if developers want something more rigorous, they have to argue the business case for it.

How CIOs can become faster

CEB’s research shows that CIOs can encourage a culture of speed by delegating more decisions. That does not mean losing control of IT, but it does mean setting clear goals and guidelines that allow other parts of the IT department to make their own decisions.

At the same time, CIOs need to think about the way they communicate with their IT teams. That might mean congratulating people on rapid delivery of projects, rather than focusing on the quality of the project. Horne poses the question: “What are the things on top of the IT score card – is it speed or reliability?”

If that sounds like a lot to do, it is. But companies can start off in a small way and still achieve some of the benefits. “It does not have to be big bang,” says Horne. The transition is not always easy. As one CIO put it: “The co-existence of agile with waterfall projects means we can’t devote people 100% to agile. People work two hours on waterfall, then two hours on agile. It’s hard to manage it.”

One mistake IT departments frequently make is to create a fast-track IT process and then “forget” to tell other parts of the business about it. “I can’t tell you how many times I have heard CIOs say they have a fast-track, but don’t tell business about it because then everyone will want it,” says Capella.

In other cases, IT departments have created such an onerous process for businesses to request fast-track IT – often involving answering questionnaires that can be hundreds of pages long – that business leaders simply don’t bother applying.

The benefits of IT triage

Those companies that have been successful at introducing adaptive IT have taken a cue from the medical world and are triaging their IT projects into streams of urgency.

One US company, for example, assesses each IT project against the following criteria:

  1. The value the business will gain if the project is rolled out quickly.
  2. Whether the risk of the project is contained.
  3. Whether it affects multiple areas of the company.
  4. How frequently the requirements are likely to change.
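A triage policy like the one above can be sketched as a simple routing function. The field names, scales and thresholds below are hypothetical illustrations of the idea, not criteria taken from CEB’s research or the US company’s actual process:

```python
from dataclasses import dataclass

@dataclass
class Project:
    """A project request scored against the four triage criteria."""
    name: str
    fast_rollout_value: int   # 1-5: value gained if rolled out quickly
    risk_contained: bool      # is the project's risk contained?
    multi_area: bool          # does it affect multiple areas of the company?
    requirement_churn: int    # 1-5: how often requirements are likely to change

def triage(project: Project) -> str:
    """Route a project to a fast, standard or full development track.

    Wide-impact or uncontained-risk work gets the full rigorous process;
    urgent or fast-changing work with contained risk goes to the fast track.
    """
    if project.multi_area or not project.risk_contained:
        return "full"      # business-critical, wide impact: full testing
    if project.fast_rollout_value >= 4 or project.requirement_churn >= 4:
        return "fast"      # urgent or volatile requirements: streamlined process
    return "standard"

print(triage(Project("campaign portal", 5, True, False, 4)))  # fast
print(triage(Project("core billing", 5, False, True, 1)))     # full
```

The point of encoding the rules is the same one the article makes about fast-track processes: the routing decision becomes explicit and repeatable, rather than a negotiation restarted for every request.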

It has been able to speed up projects by creating self-service portals for urgent, frequently requested projects. They allow marketing people, for example, to create new marketing campaigns as they need them, without the need to wait for a developer.


Business professionals can also use pre-defined checklists to set specifications for commonly requested IT projects, which then go through fast-track development, with the minimum testing. Only the most business-critical systems which affect wide sections of the business go through a full rigorous development and testing process.

In another case, a large energy company has decided to do away with formal business cases for all but the most complex 10% of projects. It approves urgent, high-value projects as a matter of course. Non-urgent, low-value projects are simply not approved. A large consumer goods company has taken a different approach. It prioritises only projects that have the greatest impact on customer experience.

Adaptive IT increases efficiency

For those companies that have been able to take up the adaptive model, CEB’s research shows the benefits can be huge. Businesses can typically roll out a project 20% more quickly – a saving of one month on a six-month project.

They also have more freedom to re-allocate cash to the most urgent projects. Traditional IT departments can re-allocate about 15% of their budget if a new project comes up. For adaptive organisations, the figure is 40%.


“Adaptive organisations are also more efficient. If you have a £100m IT budget, they are spending £2m less than average on IT legacy systems, which means they are spending more on collaboration, cloud and big data,” says Horne.

The pressure for IT to become faster is coming down from the CEO and their fellow board directors. “They are frustrated at how slow the company is changing. When they ask why it’s slow, technology is coming up as the answer,” says Horne.

But to succeed, organisations require a strong CIO, with strong leadership skills. “It needs someone willing to lead change rather than just keep IT stable; someone who can work well with business leaders and communicate with their teams. If you are an old-style CIO, interested in technology, keeping the lights on, you are going to be in trouble,” he concluded.

Having spent a majority of my career working with and supporting the Corporate CIO Function, I now seek to provide a forum whereby CIOs or IT Directors can learn from the experience of others to address burning Change or Transformation challenges.

Craig Ashmole

Founding Director CCServe

Tech TBM drives M&A


Technology Business Management Drives M&A

Throughout the entire M&A lifecycle, the CIO is poised to assess opportunities, mitigate risk and develop an executable IT plan rooted in a multidimensional, 360-degree view of the process.

“As merger and acquisition activity heats up, IT leaders need to be prepared to take a more active role in ensuring that value is properly assessed before the transaction is completed,” comments Craig Ashmole, founding partner of London-based IT consulting firm CCServe. “The key is ensuring that boards recognise the need for CIO involvement throughout the integration process that follows.”

With tens of thousands of M&A transactions expected this year, Steven Hall, a partner in the Emerging Technologies Group at Information Services Group (ISG), makes some clear observations about the focus of technology within the M&A framework.

CIOs will be called on to help the board determine how the deal will grow revenue, decrease costs, and mitigate risk. The effective use of Technology Business Management (TBM), and the collaboration between IT and lines of business that it entails, will not only inform the M&A transaction during the due diligence phase, but it will also lead to a more efficient transition and a better understanding of the costs and anticipated benefits of the deal. Because TBM is a holistic framework that positions IT in a collaborative business role—one that provides data-driven assessments of its value and role—it is the IT leaders themselves, engaged at the outset of the M&A process, and through its completion, who are best positioned to help fully articulate the M&A rationale and deliver a successful process and outcome.

If one considers several M&A scenarios and their associated business objectives, it becomes clear that IT integration approaches are not one-size-fits-all. In an M&A case driven by cost reductions and improved efficiencies within the context of eliminating a competitor, the absorption rationale—although not without its challenges—is straightforward. The acquiring entity provides all IT systems, but even then the implementation or advancement of a TBM strategy requires attention during the integration. If the primary M&A goal is based on R&D, the parallel but bridged IT coexistence is straightforward, too.

An M&A rationale based on geographic expansion to increase market share also seems straightforward, but IT professionals prepared for this M&A challenge are delivering quality data to best guide the M&A strategy development, as well as execute it. That geographic expansion likely requires a hybrid response to IT systems integration with elements of the absorption model, but it also relies heavily on a “best of breed” approach that retains or recombines superior process or functionality provided by the acquired company. Making optimal decisions requires IT leaders to assess the strengths and weaknesses of the target firm’s IT systems, evaluate the portfolios of both, and deliver cost-effective solutions that drive future growth.

Throughout the entire M&A lifecycle, the CIO is poised to assess opportunities, mitigate risk and develop an executable IT plan rooted in a multidimensional, 360-degree view of the process. The CIO operates as an integral part of the organisational team within that lifecycle, but the critical and active role of IT emerges as the parties move from the acquisition and divestiture phase to executing the separation-integration process. Here, the two issues at the forefront of M&A transactions from the IT perspective – a perspective connected in the TBM framework to all other business units and processes – are ensuring that different platforms work together post-acquisition, and optimising the new environment and the costs involved. Doing so is not always easy, and resistance from either or both M&A parties is a given. It becomes easier when the IT leader communicates a core business case throughout the process.

Metrics are the basis for building that case, and for creating a meaningful baseline that anticipates growth and measures savings before the deal closes. But metrics are in service to more than the creation of benchmarks, and quality data is as important a communication tool as it is a standard measure for quantifying success. Data tells a story for stakeholders, and provides a common language for discussing and evaluating the M&A process to both its internal and external constituencies. When, inevitably, at some point in the process stakeholders on either side step in and say, “We used to do it better,” data provides a shared understanding between the M&A parties’ expectations and cultures.

Ideally, that data isn’t meant for looking backward or creating reports that do. Identifying data that supports TBM priorities—and designing that data collection into the toolkit—offers the enterprise an opportunity to revisit how it measures success, and maximises the foresight value of that data.

The traditional approach in M&A circles has been all about percentages, and how to achieve a certain percentage as a measure of cost—but we know now that is impractical, and subject to far too many contingencies to offer the most realistic portrait. Reducing costs to a percentage—as opposed to reducing costs with performance metrics in place for a greater understanding—will only lead to greater risks elsewhere. This strategy, moreover, often fails to accurately evaluate specific, real-life scenarios.

If the costs for services are truly understood through a TBM process both comprehensive and focused, then the “big picture” can supplant the percentage mandate and lead to more intelligent decisions and the M&A discussions that surround them. For example, if the costs are understood but still seen as too high, then the discussion is no longer framed as, “Reduce it by another half percent.” More important, the strategic possibilities and conclusions are no longer derived from conversations that fail to capture critical information that the TBM model makes available. The discussion about where to cut or eliminate becomes more focused, and is a more reliable measure of very specific services, or a better evaluation of operational IT choices to move more on-premise systems to an as-a-service environment. These approaches deliver value during the M&A process, but it’s important to design them to extend beyond the moment.

Steven goes on to state that, from a TBM standpoint, the M&A process itself is not a departure from “normal business” but rather an expression of it. The CIO and other organisational team members leading through each M&A phase need to design plans that are reusable, support continuous improvement and remain flexible enough for agile and intelligent responses in the changing business environment of the future. Just as IT leaders are tasked with the systems integration that the M&A process demands, so too is a leadership team whose basis in TBM positions the M&A experience itself within the wider data-driven future of the enterprise.

Technology Business Management

The biggest challenge in M&A is achieving that common language, and this requires a rapid adoption of TBM principles: knowing what is being spent in both organisations, and how they align. Achieving the desired M&A outcome with a clear shared understanding, not just from an IT perspective but within a TBM framework, creates the optimal environment for success during the M&A cycle.