On Monday, the people behind the four websites that collected TTC website suggestions from the Toronto community through their weblogs released their findings to the TTC and the general public. They did so in the worst way possible: a spreadsheet. These are four fairly successful websites, all running popular weblogs and collecting their suggestions from local web users in their comments sections, also on the web. Wouldn’t it have made sense to also release their findings on, oh I don’t know, a web page? Apparently that would have made way too much sense.
Let’s ignore that not everyone has Microsoft Excel on their machines. (I don’t, and instead had to wait for that ugly behemoth, NeoOffice, to sputter to life and display the data.) Let’s also ignore that if a spreadsheet were the answer, it could have been released in CSV format. Instead, let’s look at the data. (Those of you without Microsoft Excel will have to follow along using the image below.)
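To show how little effort a CSV release would have taken, here is a minimal sketch using Python’s standard csv module. The file name, column headings, suggestion text, and comment references are all invented for illustration; they are not from the actual findings.

```python
import csv

# Hypothetical rows: a suggestion plus the comments that proposed it.
# Site abbreviations and comment numbers are placeholders.
rows = [
    ("Post service alerts online", "SPC 3, RT 4"),
    ("Publish schedules as open data", "TI 7, BT 12"),
]

with open("ttc_suggestions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Suggestion", "Supporting comments"])
    writer.writerows(rows)
```

A CSV file opens in Excel, NeoOffice, any text editor, or a five-line script, with nothing proprietary required.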
The open letter to Adam Giambrone describes the spreadsheet as “easy to use”. When you say something is easy to use, it had damn well better be. Personally, I had to stare at the spreadsheet for a few minutes before I could make heads or tails of it. Describing something as easy to use when it isn’t has the adverse effect of making anyone who doesn’t find it intuitive feel like an idiot.
First, the data is not a table and the first column seems to have absolutely no relation to the matching rows in the other columns. I have no idea what that first column is supposed to represent. Is it the cities from which the comments originated? Who knows.
Second, the acronyms and numbers you see (SPC 3, RT 4, etc.) are supposed to identify the website and comment number behind the suggestion in that column. There’s no legend, so I can only assume that “TI” means Torontoist.com, “RT” means Reading Toronto, “SPC” means Spacing, and “BT” means blogTO. No website addresses or comment URLs are given, so when this spreadsheet is printed or passed around the internals of the TTC, nobody will know what those letters and numbers mean or where to go for more information.
Third, the spreadsheet isn’t even using spreadsheet functionality. It looks like a table, but it doesn’t behave like one. The comment references are comma-separated across multiple rows, so there’s no way to do anything meaningful with the data, such as summing the columns and graphing the results, which is exactly what spreadsheets are for. To find comment totals, you have to count the references by hand.
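To see why the comma-separated cells defeat aggregation, here is a sketch of the workaround they force on you: parsing each cell apart just to get per-site totals. The cell values below are invented examples in the spreadsheet’s apparent format, not the actual data.

```python
from collections import Counter

# Hypothetical cell values as they appear in the spreadsheet: each cell
# packs several references together ("SPC 3, RT 4") instead of holding
# one reference per row, so totals must be reconstructed by splitting.
cells = ["SPC 3, RT 4", "BT 12", "TI 7, SPC 9, BT 2"]

totals = Counter()
for cell in cells:
    for ref in cell.split(","):
        site = ref.strip().split()[0]  # "SPC 3" -> "SPC"
        totals[site] += 1

print(dict(totals))
```

Had each reference been its own cell, a one-line SUM or COUNT formula would have done this inside the spreadsheet itself.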
Fourth, you can’t see it from the image but my website is misspelled in row 22. It’s crazedmonkey.com, not crazymonkey.com. Thanks, guys.
How would I have captured the website findings? First, I wouldn’t have made the mistake of releasing them in spreadsheet form. A simple HTML page would do the trick. The data can be expressed as a list, with CSS styling wherever appropriate:
<ol>
  <li>Suggestion 1
    <ul>
      <li><a href="link to comment">website and comment author or number</a></li>
      …
    </ul>
  </li>
  …
</ol>
With the above format, every comment has a link to check. The file or link can still be passed around with no loss of information.
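Generating that page doesn’t even require hand-writing HTML. As a rough sketch, a few lines of Python can build the nested list from structured data; the suggestion text, site labels, and URLs below are placeholders I made up, not the real comments.

```python
# Sketch: build the nested-list HTML from structured data.
# Every suggestion, label, and URL here is an invented placeholder.
suggestions = [
    ("Post service alerts online", [
        ("Spacing, comment 3", "http://example.com/spacing#comment-3"),
        ("blogTO, comment 12", "http://example.com/blogto#comment-12"),
    ]),
]

parts = ["<ol>"]
for text, comments in suggestions:
    parts.append(f"  <li>{text}\n    <ul>")
    for label, url in comments:
        parts.append(f'      <li><a href="{url}">{label}</a></li>')
    parts.append("    </ul>\n  </li>")
parts.append("</ol>")

html = "\n".join(parts)
print(html)
```

The result is a single file anyone can open in a browser, and every reference stays one click away from its source comment.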
All of this raises the question: why bother with a findings document at all? The comments sit in an open forum that everyone can read and draw their own conclusions from. Releasing a difficult-to-understand requirements document does nothing to help; it actively gets in the way. Who is going to take seriously anyone who creates a spreadsheet like that and passes it off as helpful? It’s well-intentioned, but there’s a reason the requirements-gathering process, particularly in the software and usability fields, is left to the experts, or at least to those with domain experience.