It's a dream dating from the very beginnings of computing: complete automation. Early science fiction told of computers that would take care of every dull workaday task and perform any request, no matter how complex, instantly. Such automation has also been the goal of developers the world over, and for the last sixty or seventy years they have worked diligently to bring it about. Forty years after the original Star Trek series, we're still watching reruns and wondering when we'll ever reach that level of computing; but many great strides have been made, and many more are to come.
In the corporate world, where computers long ago took a central role, automation has made serious progress. In most places, databases have replaced the endless filing cabinets and clerks of yesteryear. Customer service and order processing can now be fully automated. Complex financial-market tasks are performed by computers, and automated office-building security and environmental management are commonplace.
Meanwhile, back in the datacenter, it's not quite so easy, for all such processes must be implemented, tested, debugged, and maintained. Unseen by end users, highly trained and skilled people work endless frustrating hours to make it all happen. It's a constant battle, and it includes another factor that users take completely for granted: the computer platform itself. That platform consists of hundreds or thousands of electronic components that must be kept in working order, and dozens of system processes and software applications that must work flawlessly all the time.
Knowing the burden they carry, software developers, IT directors, and their staffs have not neglected their own work when it comes to automation, and have seen to the automation of many system routines. System and network analysis, reporting, updates, and other chores can now be automated so that datacenter personnel can concentrate on bringing the latest and greatest computing services online.
One such task seems to have been left behind, however, and it is so mundane and routine that the omission is hard to believe: defragmentation. Many sites are still using a scheduled approach, meaning an entire site must be analyzed for disk traffic and defragmentation runs must be scheduled so that access to volumes remains consistently fast. This not only burns up IT hours needlessly; in today's frantic computing environments it is no longer even effective. Fragmentation continues to build up and impact performance between scheduled runs, and on some very large volumes it isn't defragmenting at all.
Datacenters should add defragmentation to their list of completely automated tasks, and now they can. Transparent, automatic defragmentation, requiring no scheduling and working in the background using only idle resources, is now available.
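To make that contrast concrete, here is a minimal sketch of the idle-resource approach, not any vendor's actual engine: a background loop that performs one small, interruptible unit of defragmentation work whenever the system is otherwise idle, and backs off the moment real load returns. The third-party psutil package is assumed here for load sampling, and defragment_one_extent() is a hypothetical stand-in for real on-disk work.

import time
import psutil  # third-party package; assumed here for CPU load sampling

IDLE_CPU_THRESHOLD = 10.0  # percent; below this the system is treated as idle
BUSY_BACKOFF = 5.0         # seconds to wait before re-checking a busy system

def defragment_one_extent() -> None:
    """Hypothetical placeholder: relocate one small run of file extents."""
    time.sleep(0.1)  # stands in for a short, interruptible unit of disk work

def run_in_background() -> None:
    """Defragment continuously, but only with otherwise-idle resources."""
    while True:
        # Sample actual CPU usage over one second.
        if psutil.cpu_percent(interval=1) < IDLE_CPU_THRESHOLD:
            defragment_one_extent()   # system is idle: do one unit of work
        else:
            time.sleep(BUSY_BACKOFF)  # system is busy: yield to real workloads

if __name__ == "__main__":
    run_in_background()

The essential design point is that the work is divided into small units, so the loop can yield immediately when users or applications need the machine; no site-wide traffic analysis and no maintenance window are ever required.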