With businesses becoming increasingly reliant on applications to generate revenue, it’s essential that downtime and glitches are kept to a minimum.
Research from IDC and AppDynamics has shown that infrastructure failure costs $100,000 per hour on average in today's 24-hour service environment. Although it's practically impossible for businesses to prevent application failures completely, the time taken to predict and fix them is one factor that can be improved.
Yet, in the same report, 35 percent of businesses estimate that IT failures take up to 12 hours to fix, and 17 percent of respondents say that critical application failures can take several days to resolve.
As revenue losses mount rapidly when applications fail, it is clear that a significant number of businesses need to improve their mean time to resolution. However, the main cause of the problem is often not sub-standard hardware or infrastructure, but the processes that businesses use to identify and fix faults.
Within legacy enterprise organizations, where there is a clear distinction between the development and operations teams, the working relationship between these two departments often leaves a lot to be desired. When an application issue occurs, the responsibility for fixing it initially lies with the operations team, who will typically work through a myriad of monitoring and management tools, examining specific and siloed infrastructure components such as CPU, memory and network, before often discovering that the development team needs to be engaged.
The development teams often use their own diagnostic tools, which leads to multiple sources of data, multiple sources of information and, ultimately, multiple sources of confusion. The end result is that finding the root cause of the problem takes much longer than it should, and the impact on the business is almost certainly a loss of customers and revenue. Moreover, as development teams often work different shift patterns and in different locations, even getting hold of the right person on the development team can prove challenging.
But it doesn't have to be this way. The adoption of a DevOps culture, which aims to develop a collaborative relationship between the development and operations teams, has been shown to improve flexibility within the IT department and to speed up application fixes too.
With DevOps, the responsibility for application performance is shared between both departments, meaning that the development team is more involved from the start. Rather than simply developing code and handing it over to the operations team to put into production, the whole process is shared more equally, with steps taken to make sure both teams have visibility of application performance.
By implementing application performance monitoring (APM), for example, and giving all stakeholders visibility of it, IT departments can pinpoint problems much more quickly. In addition, using a source control system, which records and tracks every change to the code, allows members of both teams to understand the other's way of working and why certain changes have been made.
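The root-cause benefit of that shared visibility can be illustrated with a small sketch. The change log, team names and incident time below are entirely hypothetical; the point is simply that when both teams can see a time-ordered record of changes, the commits nearest to an incident become obvious suspects.

```python
# Illustrative only: shared visibility of code and config changes
# speeds up root-cause analysis. Given a time-ordered change log
# (hypothetical data), find changes made shortly before an incident.
from datetime import datetime

changes = [
    ("a1f3", "ops", datetime(2015, 6, 1, 9, 0),  "increase cache TTL"),
    ("b7c2", "dev", datetime(2015, 6, 1, 13, 30), "refactor checkout flow"),
    ("c9d4", "dev", datetime(2015, 6, 1, 14, 10), "add payment retry"),
]

incident_start = datetime(2015, 6, 1, 14, 0)

def suspects(log, incident, window_hours=2):
    """Return changes committed within window_hours before the incident."""
    return [c for c in log
            if 0 <= (incident - c[2]).total_seconds() <= window_hours * 3600]

for commit, team, when, msg in suspects(changes, incident_start):
    print(commit, team, msg)   # only the 13:30 checkout refactor qualifies
```

In practice this correlation would come from the source control system and APM tooling rather than a hand-built list, but the principle is the same: one shared timeline, visible to both teams.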
A DevOps approach also allows bug fixes to be found and added to the production environment far more easily and quickly. Within a traditional set-up, where the development environment is kept separate from the production environment, new features and bug fixes occasionally fail to work correctly when released into the real world due to differences between the two environments.
This increases the likelihood of errors once the code is deployed and can increase the Mean Time To Resolve (MTTR) application errors. The differing environments also mean that configuration often needs to be altered by the operations team before code is placed into production, which again lengthens deployment. The resulting inconsistency between development and production environments slows down the process of deploying releases and changes, which invariably means increased cost to the business.
By introducing automation as part of DevOps adoption, however, many of these problems can be overcome. Automating code testing and the provisioning of environments considerably reduces the time spent on these two tasks and ensures the two environments are based on the same configurations.
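As a minimal sketch of that environment-consistency check, consider comparing two configuration records before a release is promoted. The configuration keys and values here are invented for illustration; a real pipeline would pull them from its provisioning tooling.

```python
# Hypothetical configurations for two environments; in a real pipeline
# these would be read from provisioning or configuration-management tools.
DEV_CONFIG = {"python": "3.11", "db": "postgres-15", "cache": "redis-7"}
PROD_CONFIG = {"python": "3.11", "db": "postgres-14", "cache": "redis-7"}

def config_drift(dev: dict, prod: dict) -> dict:
    """Return the keys whose values differ between the two environments."""
    keys = dev.keys() | prod.keys()
    return {k: (dev.get(k), prod.get(k)) for k in keys
            if dev.get(k) != prod.get(k)}

drift = config_drift(DEV_CONFIG, PROD_CONFIG)
if drift:
    print("Blocking release; environment drift detected:", drift)
else:
    print("Environments match; release can proceed.")
```

Running a check like this automatically on every release removes the "works in dev, fails in production" class of surprises the previous paragraph describes.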
This means developers can write code in smaller chunks and place these into the development environment much more rapidly, which dramatically decreases the time it takes to deploy new features, changes and fix any bugs in applications. In fact, the IDC and AppDynamics report has shown that businesses expect that DevOps will accelerate the delivery of capabilities to the customer by 15-20 per cent, indicating the improvements in speed that the set-up can bring.
However, as DevOps also accelerates the delivery of new applications and features to optimize employee productivity and delight customers, it is important that businesses ensure this increased pace of change does not lead to more application downtime.
Gartner, for example, has predicted that in 2015, 80 per cent of outages will be caused by people and process issues, and that more than 50 per cent of these will be caused by change, configuration and release integration issues.
By giving both operations and development teams end-to-end application visibility, however, businesses can use APM as a way to feed pertinent information back and forward into the software development lifecycle (SDLC). This helps to ensure that new releases and changes don't generate business-impacting problems in live or production environments.
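One common way to "feed back" APM data into the release process is a performance gate: a release is only confirmed if post-deployment metrics have not regressed against a pre-deployment baseline. The function name, samples and 20 per cent threshold below are all illustrative assumptions, not any particular vendor's API.

```python
# A hedged sketch of an APM-driven release gate: compare mean response
# time after a deployment against a baseline taken before it.
from statistics import mean

def release_gate(baseline_ms, post_deploy_ms, max_regression=0.20):
    """Allow the release only if mean response time has not regressed
    by more than max_regression (20% by default)."""
    return mean(post_deploy_ms) <= mean(baseline_ms) * (1 + max_regression)

baseline = [120, 115, 130, 125]   # ms, samples before the release
after = [180, 175, 190, 185]      # ms, samples after the release

if release_gate(baseline, after):
    print("Release passes the performance gate.")
else:
    print("Roll back: response times regressed beyond the threshold.")
```

The same idea also works "feed forward": running the gate against a staging environment before the release ever reaches production.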
In order to make DevOps work, CIOs will need to overcome a significant number of cultural challenges in both the working style of the business and in terms of acquiring the right IT talent to support DevOps adoption.
For instance, organizations will now need developers who understand, or can even undertake, complex infrastructure automation tasks, and operations staff with the software engineering skills to know at least the basics of application coding.
Having said that, the necessity of keeping applications running smoothly, and the cost savings that can be achieved as a result, mean these changes are essential for meeting customer needs and generating revenue.
John Rakowski is a solutions evangelist at AppDynamics.
Published under license from ITProPortal.com, a Net Communities Ltd Publication. All rights reserved.