
Keeping Your Asset Data Alive

Is your Asset Database 'Match Fit'?

The ISO SAM Standard refers to the concept of ‘Trustworthy Data’.

As the name suggests, it is data that the organization trusts.

Whilst no enterprise-level asset database is 100% accurate, we need to have a sufficient level of confidence in our asset data so we can make the right decisions.

We want to trust this data and have the confidence that it won’t come back to bite us later.

A few weak reports with dodgy results and teams across the organization begin to doubt the validity of your data.

In order to keep asset data alive, and therefore trustworthy, we need to think about what makes it go stale in the first place. How does the data become bloated with duplicates, riddled with errors and discrepancies?

First of all, let’s look at the inputs and outputs: what is likely to be flowing in and out of your asset database?

Inputs

New additions to your asset database are likely to be new software installations, software upgrades, new devices connected to the network, rebuilt machines, upgraded hardware and configuration changes.

Outputs

Outgoing changes are likely to be software uninstallations and devices placed in storage, stolen, retired or otherwise removed.

In addition to these network based changes, you then might have your financial, contractual and time based changes:

  • FINANCIAL: New purchases are made, new invoices are received
  • CONTRACTUAL: Terms of agreements and contractual arrangements change
  • VIRTUAL: Virtual machines and logins spawn like rabbits, disks get full, space and services get consumed
  • TIME: Time goes by and maintenance contracts and leases expire
  • POLITICAL: Users come and go, change departments, acquire second devices etc.

Housekeeping Plan

An inventory and discovery solution will take away a lot of this heavy lifting in terms of changes. However, inventory and discovery technology is never a plug-and-play experience. Even the most sophisticated automated technology requires ‘babysitting’ and ongoing upkeep.

Critically, the technology needs to be on the same page as the people and processes. Remember that an inventory and discovery solution is network based: it can only tell you what is happening on the network. It can tell you Bob’s machine was last audited on Tuesday; it can’t tell you Bob left the company on Wednesday and took his laptop with him. One of the most common causes of a bloated database full of out-of-date records is a disconnect between staff leaving the company and their assets being redistributed or lost, or machines being rebuilt and reconnected to the network without the old record being updated.
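By way of illustration, a minimal sketch of that leaver check might look like the following Python snippet. The HR feed, the asset records and every field name here are hypothetical; the point is simply to reconcile the two lists before the records quietly go stale.

```python
# Hypothetical reconciliation of an HR leavers feed against asset assignments.
# Neither data source nor the field names come from a real system.
leavers = {"bsmith"}  # user IDs from this week's HR leavers list

assets = [
    {"tag": "LT-0412", "assigned_to": "bsmith", "status": "in use"},
    {"tag": "LT-0413", "assigned_to": "ajones", "status": "in use"},
]

# Any asset still marked "in use" by a leaver needs chasing now,
# before the record goes stale.
orphaned = [a for a in assets
            if a["assigned_to"] in leavers and a["status"] == "in use"]
for asset in orphaned:
    print(f"{asset['tag']} is still assigned to leaver {asset['assigned_to']}")
```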

Regular software auditing and reconciliations for your major vendors will help keep you up to date with a lot of the software changes. In terms of hardware, you may wish to consider the following:

Sample housekeeping duties and routine checks:

  • Have we captured all devices? Do we know them? Can they be identified and allocated to the appropriate department or cost centre?
  • Are they responding in a timely manner? Have any devices gone AWOL?
  • Are they reporting accurate data? Can we cross-reference our asset data with other sources, e.g. Active Directory, in order to identify anomalies? (A sketch of this check follows the list.)
  • Which machines are not responding or not communicating properly?
  • Which machines are duplicates?
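Here is a minimal sketch of that cross-referencing check, assuming two hypothetical CSV exports: inventory.csv (hostname, last_seen, cost_centre) from your inventory tool, and ad_computers.csv (hostname) from Active Directory. The file names, column names and 30-day threshold are illustrative only, not from any particular product.

```python
# Cross-reference an inventory export against an Active Directory export
# to surface duplicates, AWOL devices and unallocated machines.
import csv
from collections import Counter
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=30)  # illustrative threshold; pick your own

with open("inventory.csv", newline="") as f:
    inventory = list(csv.DictReader(f))
with open("ad_computers.csv", newline="") as f:
    ad_hosts = {row["hostname"].lower() for row in csv.DictReader(f)}

inv_hosts = [row["hostname"].lower() for row in inventory]

# Duplicates: the same hostname recorded more than once.
duplicates = [h for h, n in Counter(inv_hosts).items() if n > 1]

# Anomalies: known to AD but missing from inventory, and vice versa.
missing_from_inventory = ad_hosts - set(inv_hosts)
unknown_to_ad = set(inv_hosts) - ad_hosts

# AWOL: devices that have not reported within the threshold.
now = datetime.now()
awol = [r["hostname"] for r in inventory
        if now - datetime.fromisoformat(r["last_seen"]) > STALE_AFTER]

# Unallocated: no cost centre means nobody owns the device.
unallocated = [r["hostname"] for r in inventory if not r.get("cost_centre")]

print(f"{len(duplicates)} duplicates, {len(awol)} AWOL, "
      f"{len(missing_from_inventory)} in AD only, {len(unknown_to_ad)} in "
      f"inventory only, {len(unallocated)} with no cost centre")
```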

Finally, your team needs to maintain the health of the asset database infrastructure itself: checking server logs, verifying backups, and reviewing alerts and messages to ensure the whole system maintains a healthy heartbeat.
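For instance, a minimal heartbeat script might look like the sketch below. It assumes a SQLite copy of the asset database and a local backup directory; the paths, the 24-hour threshold and the alerting hook are all hypothetical, not prescribed by any tool.

```python
# A routine "heartbeat" check: can we reach the database, and is the
# newest backup recent enough? All paths and thresholds are illustrative.
import os
import sqlite3
import time

DB_PATH = "assets.db"             # hypothetical database file
BACKUP_DIR = "/var/backups/itam"  # hypothetical backup location
MAX_BACKUP_AGE = 24 * 3600        # alert if newest backup is over a day old

def newest_backup_age(path: str) -> float:
    """Seconds since the most recent file in the backup directory changed."""
    files = [os.path.join(path, f) for f in os.listdir(path)]
    if not files:
        return float("inf")
    return time.time() - max(os.path.getmtime(f) for f in files)

def heartbeat() -> list[str]:
    problems = []
    # Can we reach the database at all and run a trivial query?
    try:
        with sqlite3.connect(DB_PATH, timeout=5) as conn:
            conn.execute("SELECT 1")
    except sqlite3.Error as exc:
        problems.append(f"database unreachable: {exc}")
    # Is the backup recent enough?
    if newest_backup_age(BACKUP_DIR) > MAX_BACKUP_AGE:
        problems.append("no backup in the last 24 hours")
    return problems

if __name__ == "__main__":
    for p in heartbeat():
        print("ALERT:", p)  # in practice, route to your alerting channel
```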

Your view? How do you maintain the level of accuracy in your asset data system? What is a good level of accuracy to aim for?

About Martin Thompson

Martin is owner and founder of The ITAM Review, an online resource for worldwide ITAM professionals. The ITAM Review is best known for its weekly newsletter of all the latest industry updates, its LISA training platform, Excellence Awards and conferences in the UK, USA and Australia.

Martin is also the founder of ITAM Forum, a not-for-profit trade body for the ITAM industry created to raise the profile of the profession and bring an organisational certification to market. On a voluntary basis Martin is a contributor to ISO WG21 which develops the ITAM International Standard ISO/IEC 19770.

He is also the author of the book "Practical ITAM - The essential guide for IT Asset Managers", a book that describes how to get started and make a difference in the field of IT Asset Management. In addition, Martin developed the PITAM training course and certification.

Prior to founding the ITAM Review in 2008 Martin worked for Centennial Software (Ivanti), Silicon Graphics, CA Technologies and Computer 2000 (Tech Data).

When not working, Martin likes to ski, hike, motorbike and spend time with his young family.

Connect with Martin on LinkedIn.

2 Comments

  1. Cary King says:

    I conceive that there are two dimensions that companies might wish to consider – completeness and accuracy.

    There seems to be good cause for companies to evaluate their needs based upon risks – corporate and operational.

    It seems to me that not all data items are of equal value. I apprehend that, based upon the risk analyses, companies can choose the data items that need high completeness and accuracy, and those items that do not.

    Our experience has been that customers should seek to build a system with not less than two sigma accuracy (roughly 95%) for the important data items.

    For some data, like SAM data, companies may wish to consider the advantages of proactive, or preventative, processes. SAM is, alas, often organized around after-the-fact counting and true-up. Those companies that institute preventative controls on distributions, tied to software license entitlements and service-request fulfillment, seem better prepared to demonstrate end-to-end controls that avoid both underspending and overspending issues.

    We see that those organizations that are strictly clear about which organization is responsible for which data item’s accuracy do better. Those organizations that maintain measurement records (with work orders) of the individuals who are accountable for the data item update do better still.

    Asset Management as a service is often a centralized part of the integrated IT business office. Much of the actual work is completed with “subcontracted” work or systems (even if internal). When IT organizations consciously organize their services to be modular, with loosely-coupled integrations and measured responsibilities, they achieve higher effectiveness and are better prepared should they decide to outsource some pieces of the work.

  2. Yes, the issue of keeping your data alive is critical and there are statistical methods that can ensure that the Asset Data zombies (PCs that shouldn’t be counted!!) are kept out too!

    One method to increase PC representation – and determine the reason for lack of device attendance – is to overlap multiple ‘high frequency’ inventories. To explain via an example where we *know* we have 100 PCs (and there are issues where you simply don’t know what the absolute count *should* be):

    Inventory #1: you get 90 PCs, and 10 are ‘AWOL’ (Absent Without Official Leave: the PC wasn’t found during the inventory; maybe turned off because the user is on vacation or leave, or a laptop out on the road).

    Next day, the inventory gets 90 PCs again, but you find 6 new PCs, and 6 of the PCs from the previous inventory are now AWOL. Overlay the two inventories and you see 96 unique PCs.

    Day #3: you find 95 PCs, of which 3 are new ‘unique’ PCs. You overlay the data and get 99 unique PCs.

    In this very simple example, you keep finding new PCs – and you sometimes ‘lose’ PCs too – but the rate of finding new PCs starts to drop. When the discovery of NEW PCs drops to zero, you’ve established what your stable ‘operational’ population is (and the absentee count in those subsequent inventories becomes your ‘death rate’). All of this follows a simple approach of statistical sampling and data analysis of a ‘moving’ population.
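    A toy reconstruction of those three days in Python might look like the sketch below: each daily inventory is a set of device IDs, unioned day by day until no new PCs appear. The figures mirror the example above; the device IDs themselves are made up.

    ```python
    # Replay the three-day overlap example: union daily inventories and
    # watch the "new PC" count fall toward zero.
    def pcs(*ranges):
        """Build a set of fake device IDs from (start, stop) ranges."""
        return {f"pc{i:03d}" for start, stop in ranges for i in range(start, stop)}

    daily_inventories = [
        pcs((0, 90)),            # day 1: 90 PCs respond
        pcs((6, 96)),            # day 2: 90 respond, 6 of day 1 AWOL, 6 never seen
        pcs((0, 92), (96, 99)),  # day 3: 95 respond, 3 never seen before
    ]

    known = set()
    for day, inventory in enumerate(daily_inventories, start=1):
        new = inventory - known
        known |= inventory
        print(f"day {day}: {len(inventory)} responded, "
              f"{len(new)} new, {len(known)} unique so far")

    # When the "new" count stays at zero over several runs, the stable
    # operational population has been established.
    ```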

    And now about the Asset Data Zombies…
    As we deal with longer periods of time, some of those PCs that are AWOL become persistently AWOL. It’s up to the SAM team initially to include AWOL PCs in the asset inventory, but the SAM team must also set the length of time after which AWOL gets converted to MIA or retired, so that they can confidently *not* count it. Obviously, a short time like two weeks would create havoc (a lower-than-‘real’ count); you don’t want to exclude a PC when employees take a two-week vacation!

    But a long time – say 1 year – would create costly, higher-than-‘real’ counts, especially if the PC lifespan is 2 or 3 years; your inventory count could be 33% to 50% larger than it really is. Isn’t statistics fun?

    If you don’t set a rule for when AWOL PCs are removed from the count (and we at AssetLabs suggest 45 days for our clients), then those AWOL PCs become zombies: counted as alive when they shouldn’t be.

    We have found that 8 weekly inventories – limited to devices that have been AWOL less than 45 days – generate a confidence level greater than 95%. The 8 inventories create an incredible overlap where the PC count can stabilize, and the 8-week period (56 days) helps filter the zombies via the 45-day limit.
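    A minimal sketch of that zombie filter, given date-stamped sightings per device, might look like the following. The data structure and dates are illustrative, not AssetLabs’ product; only the 45-day cutoff comes from the discussion above.

    ```python
    # Count only devices seen within the last 45 days; anything older
    # is treated as a zombie and excluded from the live population.
    from datetime import date, timedelta

    AWOL_CUTOFF = timedelta(days=45)  # the 45-day rule suggested above

    # hostname -> dates on which an inventory actually saw the device
    sightings = {
        "pc001": [date(2023, 9, 1), date(2023, 10, 20)],  # recently seen
        "pc002": [date(2023, 6, 2)],                      # persistently AWOL
    }

    def live_devices(sightings, today):
        """Devices whose most recent sighting falls within the AWOL cutoff."""
        return {host for host, seen in sightings.items()
                if today - max(seen) <= AWOL_CUTOFF}

    today = date(2023, 10, 25)
    alive = live_devices(sightings, today)
    print(f"{len(alive)} live, {len(sightings) - len(alive)} zombies: "
          f"{sorted(alive)}")
    ```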

    The good news is that if your inventory data is coming from a product like SCCM, Altiris or another network management product, chances are that your network administrator has already resolved this issue and knows how to build the query that shows the PCs that have appeared within a certain date range (we offer this query to the SCCM team of our clients). Network administrators have to deal with dead PCs, re-imaged PCs and AWOL PCs on a daily basis (IMAC, patch deployment, etc.), so they typically have the issue of ‘alive’ and ‘zombie’ data resolved. Your SAM team should ‘exploit’ the scenario where the IT team is resolving the same fundamental issue.

    But if you’re running a 3rd-party inventory tool – and you want to replicate this approach that we use at AssetLabs – you have to ensure that the tool can retain multiple ‘date-stamped’ inventories of each PC (as opposed to just recording ‘deltas’ or only retaining the most current instance of device inventory). If the inventory tool can do this, you can replicate the ‘query’ approach that an SCCM admin has already built.

    To recap, this is an agnostic, statistical approach to create a high level of confidence in device ‘population’ only. It’s what we consider the first step in creating ‘Asset Data’ confidence (but not the last).
