Monday, November 4, 2002
Jaguar Cache Cleaner rocks! Ugh. I was stuck in "IE really, really insists on being the default browser" mode. Nothing would unstick the damned thing; I couldn't use OW or Chimera as my default browser. The solution was simply to run Jaguar Cache Cleaner and clean all caches in 'lite' mode. I made sure the browser I wanted was set as the default prior to cleaning the caches. No more IE! Wheeee!
JCC does cause the system to reboot, and it does so from a low enough level that you will not be given a chance to save unsaved changes. That's fallout from a feature: it has to reboot from that low a level to prevent any in-memory caches from being written to disk as part of the normal logout-and-reboot process.
Fall Desktop images While working in the yard [therapeutic bush removal], I found a few things that I thought would make nice desktop patterns. Unfortunately, the Sony F505 [first in the 505 / 707 series] did not do a good job of focusing in the way that I needed.
However, a few of the pictures came out nicely enough to act as an icon placemat. I'm currently using one of the leaf shots as my desktop.
OS X and Fragmentation... After writing about prebinding, a number of folks responded with questions about what else you can do to optimize OS X. In particular, I received many questions as to whether optimizing [defragmenting] the hard drive using a tool like Norton Utilities or DiskWarrior (which includes PlusOptimizer) would actually improve performance. I happen to own a copy of DiskWarrior, and if there was ever a machine that should be suffering from performance degradation due to fragmentation, my TiBook is certainly it. Since rebuilding it with 6c115, it has gone through multiple updates (system, dev tools, etc.), has a number of commercial apps on it that are actively used, and serves in a very active development role that involves recompiling a number of very large source trees on a repeated basis. In other words, my system perpetually writes and deletes thousands of tiny files while also launching and running several monolithic [bloated] applications.

Before diving in, a little about the system configuration. It is a TiBook 667 with 512MB of RAM and a 30GB 4,200 RPM hard drive. The drive is partitioned into a 9.1 GB partition that contains OS X and Fink and a 19 GB partition that contains my user account. The system also has WebObjects installed. The system partition occupies 4.7 GB and my user account weighs in at 13 GB. None of the 13 GB is music-- it is all source code, disk images, or backups of various other things (my wife's iMac, for example). The Fink installation is a from-cvs installation and, as such, everything is compiled and packaged on the fly. This involves unpacking, compiling, and deleting hundreds of thousands of files on a regular basis.

Scenario set, now how to test? I decided I would go with a really basic test script that could be easily repeated and wasn't too terribly complex. The steps:

- Reboot: starting from the login screen, time how long it takes to return to the login panel after clicking the restart button.
- Login: time from the moment enter is pressed in the password field of the login panel until the Finder displays the entire contents of a single window pointed at my [rather bloated] home directory.
- Launch Terminal: time it takes to launch Terminal, display the first shell window, and display the shell prompt (including the cursor).
- Launch Word: time it takes for Word to launch and display the templates GUI to the point where it actually accepts a click on the 'OK' button.
- Launch iCal: time it takes to launch iCal and display the calendar, including the window becoming key.
- Compile a Java project: time it takes to use the pbxbuild command line tool to compile a fairly decent sized Java based WebObjects project. The build products are actually written to /tmp/.
- Copy a 250MB disk image: time it takes to copy the disk image from the Data volume to /tmp/ (on the boot volume).

I rebooted the machine once without timing simply to ensure that the machine booted into a clean state. During the boot process, the system cleans /tmp and the old virtual memory swapfiles. This takes a bit of time and I didn't want to skew the results (not that the results aren't likely skewed anyway -- it isn't like I gave a lot of thought to this).

So, the results. All times are in seconds. Optimization was done using DiskWarrior and PlusOptimizer.
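As an aside, the two command-line tests (the compile and the copy) are easy to time with a trivial wrapper; here's a rough sketch in Python (the commented-out command lines are illustrative stand-ins, not my exact invocations):

```python
import subprocess, time

def wall_time(cmd):
    """Run a shell command and return elapsed wall-clock seconds."""
    start = time.time()
    subprocess.run(cmd, shell=True)
    return time.time() - start

# Illustrative invocations (project and image path are made up):
# wall_time("pbxbuild")                           # Java WO project build
# wall_time("cp /Volumes/Data/Image.dmg /tmp/")   # 250MB disk image copy
print(f"{wall_time('true'):.2f}s elapsed")
```

The GUI tests (login, app launches) were timed by hand with a stopwatch, which is part of why the results should be taken with a grain of salt.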
Interesting. Reboots are slightly faster, but not enough to make a difference. However, almost 60 seconds of the reboot was consumed by the spinning-disc-and-Apple-logo screen-- long before the filesystem comes into any kind of real play. Subtract 60 seconds from all of those numbers and the drop is much more significant. Login times drop fairly significantly. I suspect that the 45 second number was a bit inflated due to other issues (apps that launch and rebuild a cache periodically). App launches are obviously faster on an optimized system. Compilation didn't change but, in hindsight, compiling a Java based WebObjects project was not the best choice of tests: the build process is so grossly inefficient via JAM that it was likely CPU bound. Too late to do a different test now (I probably should have built something like Python). However, in compiling a couple of projects that I'm intimately familiar with, the defragmented system definitely feels faster-- but that may just be a placebo effect. The image copy didn't change-- not surprising, as it is just one huge file and the writing of the file is so much slower than the reading that the reads probably have plenty of time to deal with fragmentation. The defragmentation process took well over six hours.

Conclusion: fragmented filesystems suck time from disk I/O intensive tasks. For an app that is launched once and left running, the cost is mostly paid at launch; unless the app has the world's worst caching algorithm, it shouldn't need to go back to disk much after that. For disk intensive applications that frequently have to read files, it is likely that performance will degrade over time as more and more of the filesystem becomes fragmented. Given that the system rewrites a slew of system files during upgrades-- both as the upgraded files are written and as the system updates the prebinding information-- the degradation will be worse than if the system were never touched. Of course, optimizing the filesystem requires that the system be taken completely offline during the optimization process.
This is a major disadvantage for a system in a server role. It makes me wonder how well UFS performs on a day-to-day basis. Maybe I could switch to UFS and store the handful of applications that break on it (Office, for example) on HFS+ disk images. None of this was a terribly scientific analysis. Take it with a grain of salt, and if anyone wants to fill in gaps in my thinking/analysis, please do. 12:19:17 AM
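An aside on the one-huge-file versus thousands-of-tiny-files distinction above: the tiny-file workload is the sensitive one because each file adds its own open/seek overhead on top of the raw data transfer. A rough illustration (this measures per-file overhead on whatever filesystem it runs on, not HFS+ fragmentation specifically-- the file sizes and counts are arbitrary):

```python
import os, tempfile, time

def timed_read(num_files, file_size):
    """Write num_files files of file_size bytes, then time reading them all back."""
    d = tempfile.mkdtemp()  # temp dir is left behind; fine for a throwaway test
    payload = b"x" * file_size
    paths = []
    for i in range(num_files):
        p = os.path.join(d, f"chunk{i}")
        with open(p, "wb") as f:
            f.write(payload)
        paths.append(p)
    start = time.time()
    total = 0
    for p in paths:
        with open(p, "rb") as f:
            total += len(f.read())
    return time.time() - start, total

big_secs, big_bytes = timed_read(1, 1_000_000)    # one 1MB file
small_secs, small_bytes = timed_read(1000, 1000)  # 1000 x 1KB files
print(big_bytes == small_bytes)                   # same total data either way
```

On a freshly written (and so cached) set of files the gap is mostly syscall overhead; on a fragmented disk the small-file case also pays a seek per file, which is where the degradation really bites.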