The business case for IT Asset Lifecycle Automation

This article is a collaboration between AJ Witt of The ITAM Review and Paul King, Senior Consultant at DeskCenter Solutions. In it, we discuss how IT Asset Lifecycle Automation delivers business value, reduces the burden on IT, and delights your users.

Executive Summary

Digital Transformation brings IT assets ever closer to your customers. With this shift comes the need to protect the value chain through IT Asset Lifecycle Management – a critical business function in a digital-first, cloud-first technology landscape. In the physical world, organisations typically hold detailed specifications for the components making up a product or service; this article argues that a similar rigour should be applied to digital assets, and that the best way to achieve it is to leverage automation. It presents a three-step approach to IT Asset Lifecycle Automation – Discovery & Standardisation, Self-Service, and Process Automation – with a particular focus on managing and controlling the digital supply chain and avoiding headline-grabbing cyber-attacks such as those that befell Equifax, TicketMaster, and British Airways.

Introduction

This article is in three sections. Section one looks at Discovery & Standardisation, a critical aspect of lifecycle management. Section two shows how the output from Discovery & Standardisation enables opportunities for Self-Service. This is critical in managing the consumerisation of IT, enabling the democratisation of solution choice and reducing the desire for shadow IT.

Once we have discovered what we're using, standardised, and enabled Self Service, we can then look at working with stakeholders such as IT Security on patching and process automation. This is where IT Asset Lifecycle Management and Automation can lift the burden of repetitive management tasks such as patch management, helping teams focus and prioritise so that the right applications are patched at the right time. We discuss this in section three of the article.

1. Discovery & Standardisation

We are in an era in which substantial IT spend exists outside the IT department, and therefore outside IT's control. The genie is out of the bottle, and yet IT still has governance, risk, and compliance responsibilities. The consumerisation of IT has changed many IT professionals from being hands-on engineers and technical specialists to being process engineers, stakeholder managers, and supplier relationship managers.

In order to regain control, IT needs to deploy Discovery technology – because you can't manage what you can't see. Given the sheer volume of different platforms and technology approaches, Discovery needs to be automated and repeatable just to keep our heads above water.

Addressing Diversity

The diversity of technology platforms – desktop, datacenter, tablets, mobile, cloud, embedded devices, Internet-of-Things, IP-enabled industrial control systems – means we need diverse Discovery techniques. It's no longer about querying Add/Remove Programs on a Windows PC to find out what's installed. Now we need to be querying Active Directory, scanning IP ranges, accessing firewall and router logs, and ingesting container logs and data from a multitude of other Discovery sources. Multiple sources with different perspectives of an asset can then be combined to provide a rich source of Discovery data. Equally, flexibility is required: if the only record of an asset is an IP address then so be it – that's still vital information for painting a true picture of your environment.
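
By way of illustration, the Python sketch below shows how records from different discovery sources might be merged into a single view of each asset, matching on the strongest identifier available. The source names and fields are illustrative, not any particular tool's schema.

    # Minimal sketch: merging asset records from multiple discovery sources.
    # Source names and fields are illustrative, not a specific tool's schema.
    from collections import defaultdict

    def merge_discovery(sources):
        """Combine per-source records into one view per asset, matched on
        the strongest identifier available: MAC, then hostname, then IP."""
        assets = defaultdict(dict)
        for source_name, records in sources.items():
            for record in records:
                key = record.get("mac") or record.get("hostname") or record.get("ip")
                if key is None:
                    continue  # nothing usable to match on
                merged = assets[key]
                merged.setdefault("seen_by", set()).add(source_name)
                for field, value in record.items():
                    merged.setdefault(field, value)  # first source to report a field wins
        return dict(assets)

    if __name__ == "__main__":
        sources = {
            "active_directory": [{"hostname": "LAPTOP-042", "os": "Windows 11"}],
            "network_scan": [{"hostname": "LAPTOP-042", "ip": "10.0.4.17"},
                             {"ip": "10.0.9.201"}],  # an IP-only record is still kept
        }
        for key, asset in merge_discovery(sources).items():
            print(key, asset)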

Optimisation & Standardisation

With broad discovery in place delivering rich information about our environment we can begin to gain insights and a better understanding of our IT estate. For example, we may have discovered certain line-of-business apps that are business-critical and may therefore be part of a standard build or used widely in the organisation.

There are myriad options for optimisation once we have Discovery in place. We might look at standardisation where we discover we have many apps doing the same thing, or many different versions of the same app, or duplicate installations on the same device. We may find apps we don’t want, or apps that potentially add risk. This enables us to rationalise and standardise offerings without impacting service.

We may also discover the outliers in the asset lifecycle – the devices out of support, the software that’s end-of-life. We can build a picture of our known-unknowns – devices that we know are out there, perhaps because their computer accounts are active in Active Directory, or they are in DHCP logs, or we can see them in a workgroup, but that we’ve never been able to inventory.
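
As a simple illustration, a "known-unknowns" report can be little more than a set difference between the devices that directory and DHCP data say exist and the devices for which we hold a full inventory. The device names below are invented.

    # Minimal sketch of a "known-unknowns" report: devices seen in directory
    # or DHCP data that have never returned an inventory. Names are invented.
    directory_devices = {"LAPTOP-042", "LAPTOP-101", "SRV-PRINT-01"}    # active AD computer accounts
    dhcp_devices      = {"LAPTOP-042", "LAPTOP-077", "MEETINGROOM-TV"}  # recent DHCP leases
    inventoried       = {"LAPTOP-042", "LAPTOP-101"}                    # devices with full inventory data

    known_unknowns = (directory_devices | dhcp_devices) - inventoried
    for device in sorted(known_unknowns):
        print("Seen on the network but never inventoried:", device)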

Future Asset Types

We need to think beyond the traditional operating systems, desktops, and data in service, and consider assets that might not be on our own networks at all – assets in use in other people's networks, such as public cloud, Software as a Service, Infrastructure as a Service, and the Internet of Things. We need to discover all of them and understand what they are in order to manage them.

Technology Portfolio Management

The final part of the Discovery and Standardisation process is Technology Portfolio Management. The aim of this is to present a standard menu of applications and solutions to the business. One approach is to build a consensus that ends up with a list of services and devices that everyone finds acceptable. The democratisation of the consumerisation of IT, if you will.

Why do this? Technology Portfolio Management significantly reduces the burden on the business. It reduces support overheads because there's less diversity of devices. It reduces licensing costs. It reduces security issues and the patch burden, and it allows us to deliver services consistently.

The Technology Advisory Board

How do we go about building that list of standard applications and services? One approach is to build a steering group or Technology Advisory Board. The purpose of this group is to decide which apps, devices and services form the technology stack for the organisation. This needs to be a cross-functional effort, and ideally driven from outside IT. It may be a standards group, an architecture group, or it might be areas of the business that all contribute to the decision-making process.

In forming its decision, the Advisory Board needs to look at a diverse set of parameters. You need the licensing cost, the security concerns, and the business demand – what the business actually needs – but you also need the more traditional IT input: if we move to that version, what does that mean for the other applications, services, and infrastructure that work with it to co-create the service offering?

Whilst this discussion and collaborative approach is important, for the Advisory Board to be successful it also needs to be decisive. The output needs to be “This is our standard portfolio of applications and services”. Getting to that output will require policies, processes, and procedures. You need to define how often the technology stack is reviewed, how applications are added and removed, and ultimately who decides and arbitrates. Whether you also have sanctions for using unapproved technology will depend on your company culture. Clearly defined policies and processes that everyone agrees with help here.

Summary – Discovery & Standardisation

In this section we’ve explored how Discovery is the key enabler for IT Asset Lifecycle Automation. We’ve seen that with strong discovery in place we can explore opportunities for optimisation, optimisations that in turn feed into standardisation, driven by a Technology Advisory Board. The next section of this article looks at how a standardised technology stack is an enabler for Self Service.

2. Self Service

Self Service is not just about requesting software. It's also about requesting processes to be carried out, devices to be provisioned, or services to be delivered. For example, the entire onboarding process for a new employee could be packaged and automated via Self Service. This could include the creation of credentials, ordering of a laptop, installation of software, and provisioning of a mobile phone – all fully authorised by the required stakeholders and authorities.
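
To make that concrete, the sketch below shows one way an approved onboarding request might be chained together. The step functions are hypothetical placeholders for whatever provisioning systems your organisation actually uses; the point is the ordered, auditable chain of tasks behind a single request.

    # Minimal sketch of an onboarding workflow triggered by an approved
    # Self-Service request. Step functions are hypothetical placeholders.
    def create_credentials(employee):
        print(f"Creating account for {employee['name']}")

    def order_laptop(employee):
        print(f"Ordering {employee['laptop_model']} for {employee['name']}")

    def install_standard_software(employee):
        print(f"Queuing standard software bundle for role '{employee['role']}'")

    def provision_mobile(employee):
        print(f"Provisioning mobile phone for {employee['name']}")

    ONBOARDING_STEPS = [create_credentials, order_laptop, install_standard_software, provision_mobile]

    def onboard(employee, approved_by):
        if not approved_by:
            raise PermissionError("Onboarding request has not been approved")
        audit_trail = []
        for step in ONBOARDING_STEPS:
            step(employee)
            audit_trail.append((step.__name__, "done"))  # feeds the requester's tracking view
        return audit_trail

    new_starter = {"name": "J. Doe", "role": "Sales", "laptop_model": "standard laptop"}
    print(onboard(new_starter, approved_by="line.manager@example.com"))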

Having a strong approach to Self Service supports the consumerisation of IT. By giving power to users you encourage them to use the process (which is enabled by and supports Technology Portfolio Management) and this cuts the desire to do things “under the radar” which leads to Shadow IT. A Self-Service strategy lets a requester provision their new employee’s IT footprint remotely – at 3am in their pyjamas if they wish. It has to be seamless, easy and intuitive to use, and as frictionless as possible. Making it frictionless doesn’t mean abdication of control – Self Service can still have checks and balances in terms of workflow approvals for example.

Self Service is really self-preservation for the IT Department, because it shuts down alternative routes of provisioning. Your users want things done efficiently. They don't want to jump through lots of hoops, and they want an audit trail to show progress against their request. Think of how online retailers provide customers with tracking information on the status of their order. Self-Service within IT can provide that same level of assurance that your shiny new MacBook Pro is on its way. Having a repeatable and reliable process available discourages users from shortcutting the system, because they know that there's no faster way of getting the tools they need to get their job done.

Self Service & Technology Portfolio Management

Technology Portfolio Management backed by Discovery & Standardisation enables Self Service to be fully automated. It does this by confining the overhead of deploying applications to the standard list of apps. It also reinforces users' willingness to use those applications, because they know they can get a standard app almost instantly, compared with waiting perhaps several weeks for a non-standard application to be approved. That almost instant delivery can be enabled through software delivery tools which automatically install applications once the request has been approved. Those installations will be of the appropriate version of the software, depending on hardware, role, and user profile. Because delivery tools require applications to be packaged – a time-consuming, skilled, and resource-intensive process – standardisation ensures that the "to be packaged" list is kept to a manageable length.
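
As an illustration of that last point, selecting the right packaged version for an approved request can be reduced to a simple lookup against role and hardware criteria. The catalogue below is invented, not a real packaging repository.

    # Minimal sketch: choosing the appropriate packaged version of a standard
    # app for an approved request. Catalogue entries are invented examples.
    CATALOGUE = {
        "design-suite": [
            {"version": "2024.2",      "min_ram_gb": 32, "roles": {"Design"}},
            {"version": "2024.2-lite", "min_ram_gb": 8,  "roles": {"Design", "Marketing"}},
        ],
    }

    def select_package(app, role, ram_gb):
        for package in CATALOGUE.get(app, []):
            if role in package["roles"] and ram_gb >= package["min_ram_gb"]:
                return package
        return None  # no standard match: falls back to a manual, non-standard request

    print(select_package("design-suite", role="Marketing", ram_gb=16))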

Packaging, automated deployment, and standardisation also reduce the burden on the patch management process, a process critical to the security and reliability of your IT stack. If an application has been packaged and deployed automatically, it can also be patched automatically. We know which version of that application we're running, meaning that there's less burden on the IT Security team to govern the patching of different versions. This helps them to focus on assets that are known to be high-vulnerability or high-priority. More on this in section three of the article.

Automation of these routine tasks also frees staff from repetitive delivery work, meaning they can focus on more productive things. Routine, repetitive tasks tend to be error-prone – and that's before we consider factors such as whether our team members are enjoying their jobs. Automated tasks reduce error rates because there is no thinking to be done or checklist to follow. Spending the time to create repeatable, accurate, consistent processes is time well spent, because they can be automated without having to ramp up lots of resource to deliver them.

3. Working with IT Security

Parts 1 and 2 have laid the foundations for this section looking at how ITAM-led processes can reduce the burden on IT.

We’ve discovered what was in use across diverse platforms. We’ve begun to standardise, so that we’ve got a consistent menu that we’re offering our business that’s relevant and easier to manage. Then we’ve introduced Self Service, so users can pick items from an Enterprise Approved List and get things they want as quickly and efficiently as possible, and that considerably frees up IT resource because it’s automated. It also enables close collaboration with stakeholders. One such key stakeholder is IT Security and this is where ITAM Lifecycle Automation can reduce threats to the business.

ITAM Lifecycle Automation & IT Security

The Equifax, TicketMaster, and British Airways cyber-attacks saw these household names suffer damage to their networks, their businesses and brands, and their share prices because of security breaches and a lack of technology portfolio management. Where ITAM helps is that once we have done our Discovery, Standardisation and Self Service, we're in a much stronger position to say, "Well, these are the apps and services that are actually in use, we've standardised on these, this is the patch level, and these are the priorities," and so on.

The IT Security team will have worked in unison with the ITAM team as part of the Technology Advisory Board, where they will have considered and answered questions such as "If we standardise on this app, how do we go about patching it? Where is it in its support lifecycle?" With standardisation, say, on Dropbox for file storage and sharing, you only need to monitor and patch that one application rather than Dropbox, Box, OneDrive, and Google Drive all at once.

Standardisation and automation support the patch strategy too. Patching becomes less of a Wild West – we stop responding to whoever is shouting the loudest for something to be patched, and equally we avoid that "tumbleweed" moment where vulnerabilities are being overlooked. Rather than reactively responding to threats as they occur, we can prevent them in the first place with accurate patching. To continue the Wild West theme, we're locking the stable doors before the horse has bolted. Standardisation and automation enable this strategic vulnerability management. We know which apps are standard, and the value they bring to the organisation. We know how many users they have. And we know the level of sensitive data accessed by those applications, and we can patch accordingly. Similarly, we can purge non-standard apps that are a potential security risk, or at least make an informed decision as to when they are patched. Taken together, all these outcomes enabled by IT Asset Lifecycle Automation protect the business from external threats.
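
To illustrate what "patch accordingly" might look like in practice, the sketch below ranks applications by a simple risk score built from user numbers, data sensitivity, and known unpatched vulnerabilities. The weightings and data are illustrative assumptions, not a formal scoring model.

    # Minimal sketch of risk-based patch prioritisation. Weightings and data
    # are illustrative assumptions only.
    apps = [
        {"name": "Standard file sharing", "users": 4200, "sensitive_data": True,  "known_cves": 2},
        {"name": "Niche reporting tool",  "users": 35,   "sensitive_data": True,  "known_cves": 1},
        {"name": "Screensaver utility",   "users": 900,  "sensitive_data": False, "known_cves": 0},
    ]

    def patch_priority(app):
        score = app["users"] / 100                   # breadth of exposure
        score += 50 if app["sensitive_data"] else 0  # value of the data at risk
        score += 25 * app["known_cves"]              # known, unpatched vulnerabilities
        return score

    for app in sorted(apps, key=patch_priority, reverse=True):
        print(f"{patch_priority(app):7.1f}  {app['name']}")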

Process Automation

The ISO standard for IT Asset Management comprises a number of processes. We have already explored specification in the context of the Technology Advisory Board. That leaves acquisition strategy, the acquisition process, the development process, release, deployment, operation, and retirement to explore from an automation perspective.

One such candidate for process automation would be the employee onboarding process, as discussed above. On the flip side, another candidate would be offboarding. For offboarding, we need to detect that an employee has left the organisation, identify the assets they were using, know which SaaS apps they have logins for, and keep an audit trail to track all of that – ensuring that we don't accidentally leave them with access to Salesforce instances now that they've moved to a competitor.
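
As a rough illustration, an offboarding checklist can be generated automatically from the hardware assignments and SaaS logins that discovery already holds for the leaver. The data below is invented.

    # Minimal sketch of an automated offboarding checklist built from asset
    # and SaaS-login data held by discovery. All data is invented.
    leaver = "j.doe"
    hardware_assignments = {"j.doe": ["LAPTOP-042", "PHONE-7731"]}
    saas_logins = {"j.doe": ["CRM", "Expenses", "File sharing"]}

    def offboarding_tasks(user):
        tasks = [f"Disable account for {user}"]
        tasks += [f"Recover device {device}" for device in hardware_assignments.get(user, [])]
        tasks += [f"Revoke {app} access for {user}" for app in saas_logins.get(user, [])]
        return tasks

    for task in offboarding_tasks(leaver):
        print("TODO:", task)  # each completed task is written to the audit trail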

All this comes back to where we started with Discovery. If we've standardised an application for a particular capability and discover non-standard applications in use, we can request that those applications are removed. Once removed, Discovery closes the loop by confirming the removal. This is why Discovery is an ongoing discipline, not a point-in-time action.
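
Closing that loop can be as simple as comparing the latest discovery data against the approved portfolio and any outstanding removal requests, as in this illustrative sketch (all names invented):

    # Minimal sketch of closing the loop: flag removal requests not yet
    # confirmed by discovery, plus newly found non-standard software.
    approved_portfolio = {"Office suite", "Standard file sharing", "CRM client"}
    removal_requested  = {"Legacy file sync"}

    latest_discovery = {
        "LAPTOP-042": {"Office suite", "Legacy file sync"},
        "LAPTOP-101": {"Office suite", "CRM client", "Freeware PDF tool"},
    }

    for device, installed in latest_discovery.items():
        for app in sorted(installed & removal_requested):
            print(f"{device}: removal of '{app}' requested but not yet confirmed")
        for app in sorted(installed - approved_portfolio - removal_requested):
            print(f"{device}: non-standard app '{app}' discovered")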

Conclusion

This article has explored how IT Asset Lifecycle Automation frees up scarce IT resources to deliver greater value whilst improving supply chain security. We’ve demonstrated how rich discovery, coupled with standardisation, is a key enabler for self-service, which discourages Shadow IT and delights your users. And we’ve seen how automation is possible for other ITAM processes.

Recommended Next Steps

Take a close look at what you have in place. Are you using multiple tools giving different results, or do you have anything in place at all? Do you really, definitively know what’s on your network and what your users are using?

Can you be confident you fully understand the risk inherent in your environment, and that you are able to react to challenges and risks? Have you maximised the benefits and cost saving opportunities for standardisation and automation?

What do you need to do differently, compared with what you do today, to build that confidence?

Further Reading:

For more from DeskCenter on these subjects please see the following articles:

Getting Started with Discovery & Inventory

Discovery & Inventory, what next?

Working with IT Security

About Paul King

Paul King is a Senior Consultant at DeskCenter Solutions with 25+ years' experience in many IT roles. Starting as a software developer on IBM mid-range systems, Paul has worked for end-user organisations, software vendors and consultancies in senior management and operational roles focused on ERP implementations, eCommerce solutions, infrastructure projects, software development and more. It is this broad understanding of the challenges faced by business and IT that brought Paul to DeskCenter. When not working with customers, writing papers or looking for new ways to leverage the power of the DeskCenter Management Suite, Paul's main aim in life is to support and enjoy his family. Based on the South Coast of England, between the South Downs National Park and the coast, he finds that this brings sanity back into his life and a welcome break from technology.
