ARTICLE: Agent Vs. Agentless?
I recently spoke with BDNA about the upcoming release of BDNA Insight.
BDNA offer some interesting technology around their core IT discovery capability.
Their 'BDNA Catalog' offers business intelligence around the IT assets discovered, such as detailed product information and power consumption. 'BDNA Maps' manages application dependencies, virtualization relationships and storage dependencies.
They seem to be working towards a CMDB-meets-IT-Asset-Management offering, which I assume will dovetail nicely into the service management-focused PS Soft product line they acquired late last year.
Unlike most IT Asset Management vendors, BDNA do not use an agent. At some point during an IT audit project the issue of agents will arise: if we want to collect an accurate inventory of the assets on our network, we need to discuss how we collect the required information from each machine.
Using Existing Systems Management Tools
Some organisations might choose to harness their existing systems management tools, such as Microsoft SMS, to collect the data. This is a sensible shortcut, since the infrastructure is already in place, but it commonly fails because the tool was not deployed with this purpose in mind.
In my experience, audit data from such systems is either inaccurate, takes far too long to generate, or is simply not fit for purpose. This is often as much a political debate within an organisation as a technological one, as people fight the cause of their respective tools.
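If you do go this route, a useful first sanity check is simply to query the management tool's database directly and see what comes back. The sketch below is a minimal illustration in Python against a ConfigMgr/SMS-style site database; the server, database and view names are examples that vary by version and site configuration, so treat them as assumptions to verify:

```python
# Illustrative only: pull a rough software inventory from an
# SMS/ConfigMgr-style site database. Server, database and view
# names are assumptions -- verify against your own site schema.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sms-site-db;DATABASE=SMS_ABC;Trusted_Connection=yes;"
)

query = """
SELECT sys.Name0 AS machine,
       arp.DisplayName0 AS product,
       arp.Version0 AS version
FROM v_R_System sys
JOIN v_GS_ADD_REMOVE_PROGRAMS arp
  ON sys.ResourceID = arp.ResourceID
ORDER BY sys.Name0, arp.DisplayName0
"""

for machine, product, version in conn.execute(query):
    print(f"{machine}\t{product}\t{version}")
```

Comparing a dump like this against the output of a dedicated audit tool is often the quickest way to discover whether the incumbent tool's data is fit for purpose.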
Using Agents
When it comes to collecting data using a dedicated audit and inventory tool, I believe it is fair to say that the majority of inventory tool vendors have chosen to use agent technology, whereby a central system monitors and collects audit results from remote agents deployed on networked desktops, servers, laptops or any other networked device you wish to audit.
There are arguments for and against using agents, some IT Asset Management vendors offer an agent, some offer agentless, some offer their clients both. It could be argued that agentless technology is a good tactical tool and using an agent is a better long term solution, but equally it could be argued that the reverse is true. Ultimately it boils down to what will work for your organisation, what sort of information you wish to collect and the unique challenges you are facing.
FOR AGENTS
- Depth of Inventory – It is said that using an agent can offer a 'deep dive' in terms of depth of data. For example, it might be difficult to record a daily account of what software is being used on a machine without an agent in place (see the usage-sampling sketch after this list).
- Remote Machines – Agent-driven vendors argue that it makes more sense to deploy to a machine that only periodically connects to the network if you wish to maintain an accurate inventory.
- Network Bandwidth – This will depend on the strength of your network connections and remote locations, but it is argued that it is more network-friendly to have an agent transmit its audit results over the network.
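To make the 'deep dive' point concrete, here is a minimal sketch of the kind of usage sampling an agent can do from inside the machine. It uses the third-party psutil library, and the central collection endpoint is hypothetical; real agents are considerably more sophisticated:

```python
# Minimal sketch of agent-side usage sampling: snapshot which
# processes are running and post the sample to a central server.
# The collector URL is hypothetical; psutil is a third-party library.
import json
import socket
import urllib.request

import psutil

COLLECTOR_URL = "http://inventory-server.example/api/usage"  # hypothetical

def take_sample() -> dict:
    """Snapshot the set of running process names on this machine."""
    names = sorted({p.info["name"] for p in psutil.process_iter(["name"])
                    if p.info["name"]})
    return {"host": socket.gethostname(), "processes": names}

def report(sample: dict) -> None:
    """Send one usage sample to the central collector as JSON."""
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    report(take_sample())  # in practice, run on a schedule (e.g. daily)
```

Because the sampler lives on the machine, it keeps working between network scans, which is precisely the 'deep dive' argument.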
FOR AGENTLESS
- Fewer Political Hurdles – The main benefit of going the agentless route is that there are fewer political hurdles to leap in order to get the system deployed.
- Fewer Change Management / Build Process Concerns – No code is being deployed to machines and no changes to builds are occurring, so less overhead is required for the deployment.
- Non-Intrusive – No agent resides on the local machine, and deployment commonly does not require administrative access to machines, a common hiccup in agent-based deployments (see the sweep sketch after this list).
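For flavour, the simplest possible agentless pass is a sweep of a subnet for reachable hosts, with nothing installed on the targets. The subnet and probe ports below are example values; production tools layer richer protocols (WMI, SSH, SNMP) on top of this kind of reachability check:

```python
# Illustrative agentless sweep: find live hosts on a subnet without
# deploying any code to them. Subnet and ports are example values;
# probing is sequential here for clarity.
import ipaddress
import socket

SUBNET = "192.168.1.0/24"      # example subnet
PROBE_PORTS = [135, 22, 445]   # RPC/WMI, SSH, SMB

def probe(host: str, port: int, timeout: float = 0.3) -> bool:
    """Return True if the host accepts a TCP connection on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for addr in ipaddress.ip_network(SUBNET).hosts():
    open_ports = [p for p in PROBE_PORTS if probe(str(addr), p)]
    if open_ports:
        print(f"{addr} responds on ports {open_ports}")
```

The open ports also give a crude first classification of each device, which is how agentless tools begin to type printers, switches and servers they have never seen before.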
Have I missed any benefits for either method? What are your experiences of deploying agents or using agentless technology?
About Martin Thompson
Martin is also the founder of the ITAM Forum, a not-for-profit trade body for the ITAM industry, created to raise the profile of the profession and bring an organisational certification to market. On a voluntary basis, Martin contributes to ISO WG21, which develops the ITAM international standard, ISO/IEC 19770.
He is the author of "Practical ITAM - The essential guide for IT Asset Managers", which describes how to get started and make a difference in the field of IT Asset Management. In addition, Martin developed the PITAM training course and certification.
Prior to founding the ITAM Review in 2008 Martin worked for Centennial Software (Ivanti), Silicon Graphics, CA Technologies and Computer 2000 (Tech Data).
When not working, Martin likes to ski, hike, motorbike and spend time with his young family.
Connect with Martin on LinkedIn.
One other pro for agentless discovery is that it detects and classifies all devices that have an IP address (such as VoIP, network, storage and printer equipment, in addition to the traditional Windows/Unix/Mac OS discovery). And for the Windows estate it also shows the machines which are not part of the domain (even though it won't get a complete inventory without the right credentials, the machines do show up for further investigation).
Regards,
Ruud Hartog (BDNA Specialist, The Netherlands)
It's horses for courses. I would recommend both, but they need to be deployed in appropriate situations, e.g. agent-based for desktops and some servers, agentless for servers where the complexity of the change management process makes agents impractical.
The trap many organisations fall into is believing it is all or nothing. It isn't. You can and should use both.
The biggest single one that I get all the time is that the agentless approach actually delivers business value... faster!
In the current economic climate, with enterprise vendors pressurising CFOs for license compliance, fast and accurate information is required in days, not months. Agent-based is a folly of the boom times; the tool that delivers the fastest and largest business value wins, and that's agentless!
Great article, Martin. I appreciate the fair and balanced perspective.
As a vendor of agent-based SAM technology, I'd like to respond to Pat Durkin's comments. First, he implies that there's something inherently "slow" about agent-based technology. This is a common misconception based on a long history of large, bloated agent-based framework solutions. Fortunately, most agents today are small, streamlined, and run silently on end users' systems. And while every situation is different, once clients are deployed (yes, this can take some time), the collection and delivery of data is no slower than agentless methods.
The more important point relates to the delivery of business value: we have yet to encounter an agentless technology that offers accurate software usage statistics. Accurate application usage data is nothing short of critical when it comes to driving down licensing and support costs, and to ensuring that money isn't being spent on software that isn't being used. Speed, while certainly important, does not necessarily equate to business value; it's the *depth* and *quality* of information that delivers the most tangible and significant business benefits.
All that said, there is no "right" answer. The right technology solution depends on your goals, resources, and environment. Just make sure to do your research! Good luck!
My view is that agent-based solutions only offer a benefit where the asset is not connected to the infrastructure. Consider what agents bring with them: the problems caused by code incompatibility and unexpected side effects; the need for lab testing against core builds; change control issues; the resource impact on the users of the client machines and on the network handling the "returned data" files passing back to the server (not to mention the deployment hit); the fact that those data files must be staggered in delivery to avoid heavy network impact; the consequent lack of a definitive "now" in answer to the question "what is the status now?"; the tendency of agents to fail; the requirement to reboot the host; the labour-intensive and potentially business-disruptive nature of the deployment; agent failures on full-to-capacity C: drives; and so on. All of this makes agent-based a thing of the past.
But of course the agent-based vendors will always draw attention to "the lack of detail" from agentless solutions or, as has been mentioned, their inability to provide real-time usage statistics. I disagree on the detail issue: there is no reason why agentless cannot gather the detail, other than that it takes time and would slow down what is already considered to be a slow process, i.e. agentless auditing.
However, these questions and issues are what motivated us many years ago to create a high-speed agentless solution: one which can access in excess of 250 network nodes per second and return data at a rate of 6,000+ computers per minute. Because of this speed we can run audits agentlessly and repeatedly, on a per-minute basis, and so we can not only gather any information that an agent-based solution can, but also create real-time usage statistics. All without deployment, change control, lab testing, impact on users or impact on the network. In addition, the speed gives us a very precise definition of "now".
To my mind (and of course I am biased) this would seem like a magic-bullet solution. I must confess that if the asset is not connected to any network anywhere then we cannot audit it; but if it is, we can, anywhere in the world, at over 1 million boxes per hour.
Personally, I do not understand why customers would invest in hardware to provide maximum performance for business tasks, only to handicap that investment by lobotomising it with various agents driven into its memory. In the constant pursuit of maximising assets, one would think that any solution which provides the same results without a performance cost or degradation of that investment has to be the better option; but then, as I say, I am biased.
For me the question is not agent-based or traditional agentless; both are passé.
Some agentless offerings are now surpassing the completeness of data produced by agent-based solutions.
We evaluated Centennial, Express Metrix and Altiris for agent-based, and for agentless we evaluated Spiceworks, xAssets and ServiceNow. The Altiris agent was so big it screwed up our entire network. The other agent-based products were also problematic: users were getting ridiculous crash reports like "TrueUpdate 2.0 client encountered a problem and needed to close", and we couldn't get the agents off when the eval finished. A nightmare. The agentless products were much better. Centennial actually thought iTunes was licensable; the agentless products all seemed to get this right as a free product.
So forgive me, but if you are still considering agent-based solutions for the completeness of data, that's wrong. The agentless companies know who they are competing against, and they are doing a better job of it.
Thanks Bill, I appreciate your input.
I would be interested to know which product you finally chose, and if you have a few minutes an end-user review would be great too. You can find the link here:
http://www.surveymonkey.com/s.aspx?sm=5M4OlHcfRFM6QFjayDFdhQ_3d_3d
Hi Bill,
How are you collecting usage data? Our technology (Express Software Manager) has the option of being deployed as an agentless solution, but most of our customers prefer to deploy the client so they can collect comprehensive software usage statistics and control application launches. As a vendor, I'm interested in hearing which agentless solution you chose, and what kind of usage data you're able to collect. Thanks!
Jeff Kelsey,
VP of Products and Services
Express Metrix
The biggest weakness of agentless tools is laptops. What happens when an agentless tool scans the network and 30% of your PCs are off the network, only connecting on an irregular basis? If you want an audit of every computer then you need an agent.
To reply to Ruud Hartog above: there are agent-based tools that can report on all types of IP devices too.
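The agent-side answer to the laptop problem is typically a store-and-forward design: the agent keeps auditing while the machine is off the network and uploads when connectivity returns. Below is a minimal sketch of the idea; the spool path and collector URL are hypothetical, and delivery is at-least-once (a real agent would deduplicate on the server):

```python
# Sketch of store-and-forward: spool audit results locally while
# offline, flush them to the central server when the network returns.
# Spool path and collector URL are hypothetical examples.
import json
import pathlib
import urllib.request

SPOOL = pathlib.Path("C:/ProgramData/agent/spool.jsonl")     # example path
COLLECTOR_URL = "http://inventory-server.example/api/audit"  # hypothetical

def spool_result(result: dict) -> None:
    """Append one audit result to the local spool file."""
    SPOOL.parent.mkdir(parents=True, exist_ok=True)
    with SPOOL.open("a") as f:
        f.write(json.dumps(result) + "\n")

def flush_spool() -> None:
    """Try to upload every spooled result; keep the spool on failure."""
    if not SPOOL.exists():
        return
    try:
        for line in SPOOL.read_text().splitlines():
            req = urllib.request.Request(
                COLLECTOR_URL, data=line.encode(),
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=10)
    except OSError:
        return  # still offline; results stay spooled for the next attempt
    SPOOL.unlink()  # everything delivered, clear the spool
```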
I totally agree with Bill Patterson (although as an agentless vendor I am biased). When you weigh the potential risk of adding yet more code to the endless mix on users' desktops against the (debatable) marginal data-detail benefits over products like ours, I fail to see the benefit of agent-based.
We offer solutions that will scan thousands (unlimited, in fact) of nodes at a rate of 250 nodes per second. They are browser-based and do not install on any machine on the network, not even the host. They cannot conflict with anything, because they are not installed anywhere on the network at all!
Where is the future for agent-based? Where can the technology go? Regardless of how you do it, you still have to push code around the network in large contiguous blocks just to deploy, and the returned data is moved likewise. The argument that agent-based gets results in real time is entirely bogus: the agent collecting the information in real time is irrelevant if you must wait for the data to reach you, where it gets acted upon. Who wants to hear "I could have told you that an hour ago"? We provide data at your fingertips in seconds, and only generate a 1.5% hit on a 10Base-T link (and who is on that little bandwidth these days?).
Then there's change control and compatibility testing. After all, if you are going to deploy code into your estate you have to be sure it won't do what it did to Bill Patterson above. With agentless there is no code deployment and therefore no incompatibility: the whole change control and testing issue, and the costs that go with it, is eliminated.
The days are numbered for agent-based solutions, whereas agentless is still in its infancy. At 50,000 nodes in 10 minutes over simple broadband, from a laptop, over wifi (in tests), we could audit the internet!
Sorry, did I get carried away then?
Interesting dialogue with good points of view from many. I would support the need for having both agent and agentless technology, to ensure that varied, in-depth information (that can be trusted over a period of time) can be collated with confidence.
It would also be an idea to think about the future and the technologies that will address the shortcomings of today's solutions. Will one type of technology (agent) do better than the other (agentless) across different assets? I don't think so; hence the need for both. The future, in my view, will be driving consistent, safe automation via business rules/policies that trigger an action. Example: software assets not used for a specific time period need to be identified, and the action would be to remove the installed software and place the license record back in the pool of spare licenses (a sketch of such a rule follows below). Sounds simple, but a solution would need good discovery, inventory, software metering, software delivery and license management functions. If these were all supplied by different vendors I am not convinced that the example above could be realised; integration between different vendors would be too difficult. Thus, in my view, having separate products today may hinder the opportunities of the future. Just one example, I know, but hopefully food for thought regarding future-proofing any solution you choose. Good luck... Cheers... John
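John's reclaim rule is straightforward to express once the metering and license data sit in one place. Here is a minimal sketch, where the Install record and the uninstall() hook are hypothetical stand-ins for whatever a real ITAM suite exposes, and the 90-day threshold is just an example policy:

```python
# Sketch of John's reclaim rule: flag installs unused for N days,
# remove them, and return the license to the spare pool.
# Install and uninstall() are hypothetical stand-ins for a real suite.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Install:
    machine: str
    product: str
    last_used: date

UNUSED_AFTER = timedelta(days=90)  # example policy threshold

def uninstall(machine: str, product: str) -> None:
    """Placeholder for the software delivery tool's removal action."""
    print(f"uninstalling {product} from {machine}")

def reclaim_unused(installs: list[Install], spare_pool: dict[str, int],
                   today: date) -> None:
    """Apply the rule: remove stale installs and refund the licenses."""
    for inst in installs:
        if today - inst.last_used > UNUSED_AFTER:
            uninstall(inst.machine, inst.product)
            spare_pool[inst.product] = spare_pool.get(inst.product, 0) + 1

pool: dict[str, int] = {}
records = [Install("PC-042", "Visio", date(2010, 9, 1))]
reclaim_unused(records, pool, today=date(2011, 3, 1))
print(pool)  # {'Visio': 1}
```

As John notes, the hard part is not the rule itself but getting discovery, metering, delivery and license management to share the data the rule depends on.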
John (Lunt), I couldn't agree with you more; there is a need for both.
Unfortunately, for some time now, agent-based has been viewed as the solution, with agentless the also-ran. I think that, as there is nowhere for agent-based as a solution to go, to evolve to, whereas for agentless there is, the roles will reverse in the future. All this debate over what is best is somewhat academic, as there is a place for both; but the continued improvements in agentless technology reduce the overhead of development, deployment and use, and as a consequence costs drop. No longer are there justifications for the fantastical sums charged by many an agent-based vendor when companies like Spiceworks and ourselves provide free-to-use solutions.
As for John's comments about fully integrated "intelligent" solutions, whereby the solution brings all parts of the puzzle together and actually manages the situation: this is exactly what we have been pushing to achieve for the past 10 years, and we have received nothing but opposition from naysayers within the industry. John, watch this space; we are doing it.
This article and these comments were a very good read. Regardless of what you choose, the bottom line is what works for you. So far I have used Spiceworks, Altiris and Asset Navigator, and none of them is giving me back accurate data. There are always duplicates or extra computers that find their way into the inventory.
My question is more about how you use these solutions to make sure your inventory is accurate. If all of your machines are on your domain, is Active Directory the best inventory? That is, of course, if you are using AD. Also, if you use AD to police machines for your inventory, what steps do you take to make sure AD remains accurate?
Agentless vs. agent doesn't matter if the data they are both pulling does not remain accurate... that's what I am going through right now. I'd love to hear comments on how to maintain the accuracy of the data.
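On the AD accuracy question above: one common hygiene step is to flag computer accounts that have not logged on recently, since stale accounts are the usual source of duplicates and ghost machines. Below is a minimal sketch using the third-party ldap3 library; the server, credentials, base DN and 90-day threshold are all placeholders to adapt:

```python
# Sketch: flag stale Active Directory computer accounts by
# lastLogonTimestamp. Server, credentials and base DN are placeholders.
from datetime import datetime, timedelta

from ldap3 import Server, Connection, SUBTREE  # third-party library

STALE_AFTER = timedelta(days=90)  # example threshold

def filetime_to_datetime(ft: int) -> datetime:
    """Convert a Windows FILETIME (100ns ticks since 1601) to datetime."""
    return datetime(1601, 1, 1) + timedelta(microseconds=ft / 10)

conn = Connection(Server("dc01.example.local"),
                  user="EXAMPLE\\auditor", password="...", auto_bind=True)
conn.search("dc=example,dc=local", "(objectCategory=computer)",
            search_scope=SUBTREE, attributes=["name", "lastLogonTimestamp"])

cutoff = datetime.utcnow() - STALE_AFTER
for entry in conn.entries:
    values = entry.entry_attributes_as_dict.get("lastLogonTimestamp", [])
    raw = values[0] if values else None
    # ldap3 may decode the value to a datetime already; handle raw ints too
    when = filetime_to_datetime(raw) if isinstance(raw, int) else raw
    if when is None or when.replace(tzinfo=None) < cutoff:
        print(f"stale or never logged on: {entry.entry_dn}")
```

Note that lastLogonTimestamp is only replicated every couple of weeks by design, so treat the output as a candidate list for investigation rather than a delete list.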
DBA here. While troubleshooting a problem I found that the SNOW inventory agent had been pushed to our SQL Servers and was consuming approximately 231MB per instance. Multiplying that by the number of SQL VMs in our database estate, I see about 40GB of overhead. That’s just plain unacceptable.