It’s staggering to think that just over 20 years ago, a standard corporate server had a hard drive of around 100 megabytes and an astonishing 8 megabytes of memory. Much of the time the system drive and the data drive were one and the same unless the system manager was somewhat progressive; hard drives were simply too expensive to add willy-nilly. And the discovery that file fragmentation (a technique originally devised to make full use of scarce disk space) heavily impacted performance was so new that one major hardware manufacturer, Digital Equipment Corporation (DEC), officially denied that it was even a problem.
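To see why early file systems fragmented files on purpose, consider a toy gap-filling allocator: by splitting a new file across whatever free extents remain, not a single block of an expensive disk goes to waste. The sketch below is purely illustrative; the first-fit policy and extent sizes are assumptions for demonstration, not the behavior of any particular file system.

```python
# Toy model of gap-filling allocation: a new file is split across
# whatever free extents exist, so no disk space is wasted -- at the
# cost of the file ending up in multiple fragments.
# (Illustrative only; not the allocator of any specific file system.)

def allocate(free_extents, blocks_needed):
    """Allocate blocks_needed blocks from a list of (start, length)
    free extents, splitting the file across gaps as required.
    Returns the list of (start, length) fragments used."""
    fragments = []
    for start, length in free_extents:
        if blocks_needed == 0:
            break
        take = min(length, blocks_needed)
        fragments.append((start, take))
        blocks_needed -= take
    if blocks_needed:
        raise RuntimeError("disk full")
    return fragments

# Free space is scattered in small gaps left by deleted files.
free = [(10, 3), (40, 5), (90, 8)]
print(allocate(free, 12))  # -> [(10, 3), (40, 5), (90, 4)]: three fragments
```

Every gap gets used, but a 12-block file now lives in three separate places on the platter.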
System managers were way ahead of the “official denial,” however, and were spending nights and weekends handling fragmentation the only way they could: by backing up and then restoring their drives. Backup and restore was a long, arduous task, and the techie community began clamoring for some sort of solution to fragmentation.
Enterprising software developers were listening, and several rushed defragmenters to market. Those first products were unsafe and corrupted data, which made for an uphill battle for the developers who arrived a short time later with safe defrag programs that actually did the job. But the battle was short: once system managers found that safe defragmenters existed, sales went through the roof. Even though these early defragmenters had to be run manually, they were still a far better solution than backup and restore.
Development kept progressing, and defragmenters that could be scheduled soon followed. That meant that poor, overworked system administrators could finally go home at the end of the day and didn’t have to come in on weekends!
While defrag technology itself made great advances over the years, scheduling held on as the standard way of running it. As microcomputers (PCs) replaced mainframes and minicomputers, and disks became far cheaper, drives proliferated. Instead of terminals connected to a single computer, each employee had a PC with its own drive. The system administrator now had to schedule defragmentation on each drive of each computer. It was far more work, but it was the only way to proceed.
We’re now at a point where standard drives have well over 100 times the capacity of those of 20 years ago. Beyond each user having a computer with one or more drives, server storage has changed as well: farms of drives are now commonplace for server data and applications. And thanks to the number and variety of devices, as well as the globalization of commerce, access to these volumes is constant and frantic. Where does that leave fragmentation and its solutions?
Fragmentation itself occurs more than ever before. It is not uncommon for a file to be split into hundreds or even thousands of fragments, slowing performance to a crawl. Scheduling defragmentation has become harder than ever: IT time to set up the schedules is at a premium, and the time slots during which users aren’t on a system have all but vanished.
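Why do thousands of fragments slow things to a crawl? On a mechanical disk, every fragment costs an extra head seek before any data moves. The back-of-the-envelope sketch below uses assumed figures (roughly 12 ms per seek and 100 MB/s sequential transfer; placeholders, not measurements) to show how fragment count, rather than file size, comes to dominate read time:

```python
# Back-of-the-envelope cost of reading a fragmented file on a
# mechanical disk. The figures are illustrative assumptions:
# ~12 ms seek plus rotational delay per fragment, ~100 MB/s transfer.

SEEK_MS = 12.0           # assumed cost to reach each fragment
TRANSFER_MB_PER_S = 100  # assumed sequential throughput

def read_time_ms(file_mb, fragments):
    """Estimated read time: one seek per fragment, plus transfer."""
    transfer_ms = file_mb / TRANSFER_MB_PER_S * 1000
    return fragments * SEEK_MS + transfer_ms

for frags in (1, 100, 1000):
    print(f"{frags:>5} fragments: {read_time_ms(50, frags):8.0f} ms")
```

Under these assumptions, the same 50 MB file takes about half a second to read in one piece, but over twelve seconds in a thousand pieces.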
Thankfully, technology has now arrived that allows defragmentation to occur fully transparently and automatically. Peak performance is maintained and is never impacted by the defragmentation process itself. And scheduling is no longer required. Defragmentation has caught up and is keeping abreast of the amazing progression of technology.