Having been a Mac user for slightly more than a year, and having worked on several monitors with quite different specs, I've come to see the absolute necessity of testing your design on a variety of monitors before rolling it out (leaving aside the challenges posed by designing for mobile web devices).
I’ve lost track of the number of times I’ve been on a website that I can tell instantly was not field-tested in this way, a problem that manifests itself most obviously (and annoyingly) in the prevalence of unreadable font/background-color combinations. I may be (barely) able to read that #f7f7f7 font on #ffffff background on my Mac laptop, but there’s no way I’m seeing anything on the 25-inch CRT + Windows XP + non-ideal lighting conditions I’m using at my day job.
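That #f7f7f7-on-#ffffff problem can actually be quantified: WCAG defines a contrast ratio between two colors, and a quick sketch shows just how far below the readable threshold that combination falls (the formula is WCAG 2.0's; the helper names are my own):

```python
# Sketch of the WCAG 2.0 contrast-ratio formula; helper names are mine.

def srgb_to_linear(c8):
    """Linearize one 8-bit sRGB channel (per the WCAG 2.0 definition)."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    """Relative luminance of a '#rrggbb' color."""
    h = hex_color.lstrip('#')
    r, g, b = (srgb_to_linear(int(h[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: 1:1 (identical colors) up to 21:1 (black on white)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio('#f7f7f7', '#ffffff'), 2))  # barely above 1:1
print(round(contrast_ratio('#000000', '#ffffff'), 2))  # 21.0
```

WCAG AA asks for at least 4.5:1 for normal body text; light gray on white comes in at roughly 1.07:1, which is why it vanishes on a washed-out CRT.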
Moreover, I generally find that anything that looks just the right size on the Mac laptop will probably look just a bit too big on most PC monitors. And that light-green background for the sidebar? Chances are that if it looks anything like green on my work monitor, it's going to more closely resemble a gray-green on my Mac, unless I get the screen at just the right viewing angle.
The way I generally try to overcome this is to use the same approach I take to web development: develop in the best-possible-world scenario, and test extensively in the real-world scenario. For my web development, this means developing with Firefox and/or Opera and testing with IE6 and 7 (with Safari somewhere in there).
For design, this means developing with a 19-inch LCD screen intended for PC use plugged into my Mac (with obligatory DVI adapter). You get a rough and ready idea of how your design is going to look on two different and fairly common types of monitors. Then, of course, test again under less-ideal conditions (older CRT monitors with some bad office lighting).
I’m sure there are much more rigorous approaches to this sort of testing, but this is a good baseline that my years of web browsing suggest is not nearly as common as it should be. Strangely, the worst offenders are usually design-oriented websites by people who should know better. I’ve never understood why “looking cool” is more of a design priority than “being usable”.
But first, a word about plugin security. Unfortunately, WordPress plugins have a bit of a reputation for being insecure, due largely (though not exclusively) to improper sanitization of user input. Neglecting to check whether a user has entered malicious code into a form field, for example, or tacked it onto the end of a query string, can leave your server vulnerable to SQL injection and similar attacks. With that in mind, it's prudent to check around for any known security issues with a plugin before you install it. If you have the PHP skills, you can also check the plugin yourself for any code that might leave your system open to compromise.
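The standard defense against SQL injection is to never splice user input directly into a query and instead let the database driver bind values through placeholders (in WordPress-land that's what `$wpdb->prepare()` is for). Here's the idea sketched in Python with the stdlib sqlite3 module; the table and the hostile value are made up for illustration:

```python
# Toy illustration of parameterized queries vs. string concatenation.
# The table and data are invented; in WordPress PHP, $wpdb->prepare()
# plays the role of the placeholder query shown below.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE posts (id INTEGER, title TEXT)')
conn.execute("INSERT INTO posts VALUES (1, 'Hello world')")

user_input = "1; DROP TABLE posts"  # hostile value tacked onto a query string

# Unsafe: splicing user input straight into SQL invites injection.
#   conn.execute("SELECT title FROM posts WHERE id = " + user_input)

# Safe: the driver binds the value as data; it is never parsed as SQL.
rows = conn.execute('SELECT title FROM posts WHERE id = ?',
                    (user_input,)).fetchall()
print(rows)  # no match found, and the table survives intact
```

The unsafe version hands the attacker a piece of your SQL statement; the safe version hands the database a value, which is the whole trick.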
But that aside, there are many secure and well-tested WordPress plugins, as well as many (perhaps most?) that do not introduce any user-interaction features beyond the WordPress core and thus aren’t even really candidates for opening up additional security holes. The following is a list of just a few.
Three images, drawn from research by Elaine Toms (citation and all images from the PDF above), comparing the “recognizability” of three different versions of the same document, in this case a Chinese restaurant menu. The first two versions were recognized most often by study participants.
However, the third, while recognized less often, was recognized twice as fast by participants.
In another experiment by Toms that Lombardi touches on, content from one genre (e.g. content from a menu genre) was formatted in a fashion typical for a different genre (in Lombardi’s example, as glossary entries).
When participants were asked to identify the genre, they selected the genre of the format, not of the content. So in this case they would have said the page came from a glossary. This again reinforces the impact that form has on our understanding of a document.
The take-away for web design is that when the information you’re presenting has a “native shape” — one that users will be familiar with from the real world — don’t overlook it as a powerful and intuitive way of conveying meaning.
I’ve had a passing interest in the semantic web since I first heard the term a few years ago, but hadn’t explored it much beyond using the hCard microformat for contact info on a few websites I’ve done. It sounded like an interesting idea, but in the absence of significant, working applications beyond the academic world, it didn’t really capture my attention. Not to mention that there were (and are) some very vocal opponents of the idea, with a wide-ranging set of criticisms (some well-taken, some just strange).
But it recently popped back into mind when I chanced across a post entitled Semantic web comes to life from Joel Alleyne’s blog, Knowledge Rocks. Given that I still have access to the e-journal database of one of my former universities, I logged in and grabbed the full Scientific American article Joel had linked to, The Semantic Web in Action. The article spills a good deal of ink highlighting various real-world applications of semantic web (or at least semantic web-ish) technologies, mostly from the medical field, where practical applications abound. (The stories shared by the article authors reminded me of a story I read a while ago about a researcher without any actual health training who made a significant cancer treatment-related discovery just by linking together existing research that hadn’t yet been put together — I wish I could remember whether it was on the web, in Harper’s, or possibly a CBC Ideas episode.)
So I’m on a bit of a semantic web kick now… I’ve FOAF-a-matic‘ed myself, and am reading all I can. It’s a fairly timely rediscovery as my workplace (Canadian Education Association) moves towards implementing a new website. We’re sitting on a goldmine of content (particularly from our magazine, Education Canada) that really needs indexing and some good metadata, and it will be interesting to see if RDF or something like it can fit into the emerging picture.
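At its core, RDF is nothing more exotic than subject–predicate–object triples, and even a toy version shows why that model suits content like a magazine archive. A sketch with plain tuples (the URIs and vocabulary here are invented for illustration; real metadata would lean on established vocabularies like Dublin Core or FOAF):

```python
# Toy illustration of RDF's triple model using plain tuples.
# All URIs and property names are invented for this example.
triples = [
    ('ex:article42', 'dc:title',   '"Rethinking the Classroom"'),
    ('ex:article42', 'dc:subject', 'ex:AssessmentReform'),
    ('ex:article42', 'dc:creator', 'ex:author7'),
    ('ex:article99', 'dc:subject', 'ex:AssessmentReform'),
]

def related(subject):
    """Find other subjects sharing any predicate/object pair --
    the kind of cross-linking good metadata makes possible."""
    mine = {(p, o) for s, p, o in triples if s == subject}
    return sorted({s for s, p, o in triples
                   if s != subject and (p, o) in mine})

print(related('ex:article42'))  # ['ex:article99'] -- linked via shared subject
```

Once every article carries triples like these, “show me everything else on this topic” falls out of a simple query rather than a manual index.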
It seems like the meanings of terms like “information architecture” (IA) and “user experience” (UX) have been contested since their introduction, with the result that web design neophytes, intrigued by the fancy titles “information architect” or “user experience designer” and eager to learn more, are typically exposed to a bunch of loud and sometimes fairly unprofessional debates that shed more heat than light on the topic.
Which is why I was glad to come across two visualizations recently that help make it easier to explain IA and UX.
Drawing an analogy with a similar chart in Geoffrey Moore’s book, Living on the Fault Line, Morville characterizes IA as a deep, layered field with the holy trinity of “Users, Content, Context” at the bottom (something readers of his Information Architecture for the World Wide Web will recall), and the more tangible deliverables like wireframes at the top.
The other visualization, from Peter Boersma’s blog, is even more compelling (for me) because it clearly and somewhat contentiously demonstrates the difference between UX and IA, without drawing an artificially rigid boundary between the two.
This revised T-model led to the coining of two new terms: “armpit IA” (for someone who works at the intersection between shallow IA and UX) and “shoulder IA” (for someone who bridges UX and business IA).
As you go deeper in the IA column, you get into really technical, nerdy things like controlled vocabularies (how do you define when “pool” refers to a swimming pool or a game played in a bar?), while a bit higher you have the kind of IA that every decent web designer engages in (coming up with link labels and content organization schemes). If I had to place myself somewhere on this chart, it would probably be in the armpit. Being in the armpit is more glamorous than it sounds (but only slightly).
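That “pool” problem is exactly what a controlled vocabulary is for: ambiguous terms map to qualified concepts, and context decides which one applies. A toy sketch (the data structure and terms are entirely invented, just to make the idea concrete):

```python
# Toy controlled vocabulary: an ambiguous term maps to qualified
# concepts, and surrounding context words steer the choice.
# The structure and all terms are invented for illustration.
vocabulary = {
    'pool': {
        'Pool (swimming)':  {'swim', 'water', 'lifeguard'},
        'Pool (billiards)': {'cue', 'table', 'bar'},
    },
}

def disambiguate(term, context_words):
    """Pick the qualified concept whose context terms overlap most."""
    senses = vocabulary.get(term, {})
    if not senses:
        return term  # not in the vocabulary; pass it through untouched
    return max(senses, key=lambda s: len(senses[s] & set(context_words)))

print(disambiguate('pool', ['bar', 'cue']))     # 'Pool (billiards)'
print(disambiguate('pool', ['water', 'swim']))  # 'Pool (swimming)'
```

Real vocabularies (thesauri, SKOS concept schemes) are far richer, with broader/narrower relations and scope notes, but the qualified-term trick is the heart of it.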
Amazon Web Services invites you (founders and leaders of start-up/early-stage companies and VCs) to join us on the upcoming AWS Start-Up Tour. Learn how to cut fixed infrastructure costs while increasing reliability and scalability by using the AWS cloud, and hear from other successful start-ups about their use of AWS. The half-day event runs from 2-5pm, ending with a cocktail/networking reception from 5-7pm.
I was at the last official AWS presentation in Toronto (last fall, maybe?) and it was decent, though I didn’t stick around for the whole thing, as it was mostly typical “this is why you need to worry about scaling” big-picture stuff without a lot of technical detail. I’m betting this session will go deeper into solutions and architecture, so I’m looking forward to it.
It’s especially good timing, as I’m just starting to dig into EC2, though I’ve been working with S3 for more than a year.
Reading 37signals’ Signal vs. Noise blog today, I came across what I think is a great example of how to deal with trolls on your blog (would also work on a forum/bulletin board).
The problem with one of the most common responses to trolls (i.e. banning them) is that they just come right back and return to grabbing people’s attention with inflammatory messages and starting flame wars. This is a much more elegant solution: rather than getting rid of the offending user, clearly identify them so others won’t feed the troll, shame them with the dunce cap, and make their messages slightly harder to read.
Apparently phpBB, a common PHP web forum package, comes with some similarly creative ways of dealing with trolls, including silencing them so that it looks to the troll like all their posts are going through, but no one is reacting. In the case above, the offending user was apparently being impersonated, so I’ve blurred the username (not that it really matters much), but I don’t imagine a troll would stick around very long under such circumstances.
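The silencing trick boils down to one filter at display time: the silenced user still sees their own posts, so everything looks normal to them, while everyone else gets the thread troll-free. A minimal sketch (the field names and data are invented):

```python
# Sketch of phpBB-style "global ignore": a silenced troll still sees
# their own posts, while everyone else sees a thread without them.
# Field names and all data here are invented for illustration.
silenced = {'troll99'}

thread = [
    {'author': 'alice',   'text': 'Great post!'},
    {'author': 'troll99', 'text': 'You are all idiots.'},
    {'author': 'bob',     'text': 'Thanks for sharing.'},
]

def visible_posts(thread, viewer):
    """Hide silenced authors' posts from everyone except themselves."""
    return [p for p in thread
            if p['author'] not in silenced or p['author'] == viewer]

print(len(visible_posts(thread, 'troll99')))  # 3 -- the troll sees everything
print(len(visible_posts(thread, 'bob')))      # 2 -- nobody else sees the bait
```

From the troll’s side nothing has changed except that, mysteriously, no one ever takes the bait, which is usually enough to make them wander off.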
And as an aside, the SvN blog post itself was pretty interesting, discussing how 37signals had taken advantage of an interesting RoR “performance management solution” called New Relic to optimize their server configuration and cut response times drastically.
If only there were something similar for some of the PHP frameworks (maybe there is?).