I was initially against using Blosxom to generate static pages, as it would have meant duplicating information on my hard drive. Why should I have countless HTML and RSS files in addition to the TXT versions of my weblog entries? I then thought it would be nice to generate static pages which played nicely with HTTP HEAD and the If-Modified-Since header. I still didn’t want the space overhead when, chances are, people are only interested in conditional downloads of the HTML and RSS pages in the category directories. Why not do both static and dynamic downloads?
I have wiped out my older rewrite rules in my Apache configuration for this weblog in favour of the following monster rule:
RewriteRule ^(([a-zA-Z0-9]+/)*[0-9]+(/[0-9]+)*/?(index\..*)?$) blosxom.cgi/$1 [L]
The first part matches archive entries by year, month and day in every category, possibly followed by an index page. (True, someone could post a URL which repeats a string of slash-separated numbers and it would still be processed, but it would have been under the old rules as well. Maybe I will change that to force a “YYYY/MM/DD” format for the archives at a later date.) The “blosxom.cgi/$1” dictates that any request matching the first expression is served dynamically, while the “[L]” portion tells the rewrite engine not to process any more rewrite rules.
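For what it’s worth, the stricter “YYYY/MM/DD” variant might look something like the following. This is an untested sketch, not the rule I actually run; it assumes four-digit years and two-digit months and days, and allows at most one date per request:

```apache
# Untested sketch: at most one YYYY/MM/DD date component per request,
# rather than any run of slash-separated numbers.
RewriteRule ^(([a-zA-Z0-9]+/)*[0-9]{4}(/[0-9]{2}(/[0-9]{2})?)?/?(index\..*)?)$ blosxom.cgi/$1 [L]
```

It would still hand category pages without a date straight to the static files, since the year component remains mandatory.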
The above rule means that anyone requesting category pages not archived by date will receive a statically generated page. (I now have a script which will statically generate any modified pages and then remove any pages archived by date in order to save space.) Anyone requesting archives by date will receive a dynamic page.
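The interesting half of that script is the clean-up step. A minimal sketch of it follows; the directory tree here is a throwaway example, not my real static directory, and the regeneration call to blosxom.cgi is omitted since it depends on your own installation:

```shell
#!/bin/sh
# Demo of the space-saving step: remove statically rendered date
# archives, keeping only the category index pages. The tree below is
# a hypothetical example standing in for Blosxom's static directory.
STATIC_DIR=$(mktemp -d)
mkdir -p "$STATIC_DIR/linux/2004/08"
touch "$STATIC_DIR/index.html" \
      "$STATIC_DIR/linux/index.html" \
      "$STATIC_DIR/linux/2004/08/index.html"

# Prune any directory whose final path component is all digits --
# those are the date archives Apache now hands to blosxom.cgi anyway.
find "$STATIC_DIR" -type d -regex '.*/[0-9][0-9]*' -prune -exec rm -rf {} +

ls -R "$STATIC_DIR"   # only the category index pages remain
```

Note that `-regex` matches against the whole path, so only components made up entirely of digits are pruned; a category named, say, “2004ish” would survive.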
I’m certain the above rule can be further optimised, but it’s good enough for now. If you use it and optimise it, please let me know.