As the web gets smarter, will our anonymity evaporate?

One of the most exciting things going on in webland today, I think, is the myriad of technologies, user experiences, and computer-to-computer interactions that typically pass under the monikers of “Web 3.0” or “the semantic web.” There isn’t a lot of general agreement on what precisely these terms mean (though I think the latter is more concrete), but what many people envision as the future of the web is an online environment in which data, text, and various forms of information and media are structured in ways that are machine-readable (if not machine-interpretable), leading to all sorts of new possibilities for interoperability between websites, new forms of user-agent interaction, and generally a web experience that is less characterized by “dumb” websites.

All of this, in addition to its manifest benefits, would of course also present new opportunities for abuse, invasive marketing techniques, and threats to users’ privacy.

A glimpse of this last concern was provided recently by a paper from some Google researchers (“Could your social networks spill your secrets?”) that details how data from two different social networking sites (e.g. LinkedIn and Myspace) could be linked together to reveal the single person behind two different public profiles, despite the profiles being relatively anonymous and not directly linked. From the NewScientist article:

That approach is dubbed “merging social graphs” by the researchers. In fact, it has already been used to identify some users of the DVD rental site Netflix, from a supposedly anonymised dataset released by the company. The identities were revealed by combining the Netflix data with user activity on movie database site IMDb.
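
To make the general idea concrete, here’s a toy sketch (my own illustration, not the researchers’ actual method) of what “merging” two profiles might look like in code. The profiles, handles, and similarity threshold are entirely made up:

<?php
// A toy illustration (not the researchers' actual method) of how two
// "anonymous" profiles might be linked through overlapping public data.
// The profiles, handles, and threshold below are entirely hypothetical.

$profileSiteA = array(
    'handle'  => 'jsmith_to',
    'city'    => 'Toronto',
    'friends' => array('akumar', 'lchen', 'mgarcia', 'tnguyen'),
);

$profileSiteB = array(
    'handle'  => 'anon_runner_82',
    'city'    => 'Toronto',
    'friends' => array('akumar', 'lchen', 'pdube', 'tnguyen'),
);

// Jaccard similarity of the two friend lists: |intersection| / |union|.
function friend_overlap($a, $b) {
    $intersection = count(array_intersect($a['friends'], $b['friends']));
    $union = count(array_unique(array_merge($a['friends'], $b['friends'])));
    return $union > 0 ? $intersection / $union : 0;
}

$score = friend_overlap($profileSiteA, $profileSiteB);

// A high friend overlap plus a matching city is circumstantial evidence
// that the two handles belong to the same person.
if ($score > 0.5 && $profileSiteA['city'] == $profileSiteB['city']) {
    printf("Probably the same person (friend overlap: %.2f)\n", $score);
}
?>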

December 2009: As an addendum to this article, I direct your attention to “project gaydar”.

If it can’t be shared, it doesn’t count

Kevin Kelly on the future of the web, which he sees basically in terms of a movement towards the semantic web, or a web of linked data.

[Embedded video: blip.tv, posts_id=1448228]

Kelly unfortunately comes across as a bit naive, as he discusses our inevitable dependence upon, and surrender to, the envisioned “web 10.0” without any critical hesitation or indication of cause for concern.

Consequences of bot-mediated reality

I have a lot of catch-up listening to do with regard to The Long Now Foundation’s excellent Seminars About Long-term Thinking (SALT) lecture and podcast series. I’m a charter member of the Foundation, which gets you a sweet membership card and access to video of their lectures, among other less tangible things like knowing you’re helping inject some much-needed awareness of long-term thinking and planning into public discourse.

One of the lectures I’m particularly looking forward to downloading is the recent Daemon: Bot-Mediated Reality by Daniel Suarez, which I think has particular relevance given the recent and rather large f-up in which Google’s news crawler inadvertently “evaporated $1.14B USD”.

Unfortunately, I think that in the near future, as more and more processes are automated, we will see more screw-ups of this scale. I can’t help but think that this one might have been avoidable, though, if the indexing engine had been able to take advantage of semantic data rather than relying on scraping and evaluating natural language.
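
For a rough sense of the difference, here’s a purely hypothetical sketch contrasting a crawler that guesses an article’s date from scraped text with one that reads a date the publisher states explicitly. The field names and sample markup are invented:

<?php
// A hypothetical contrast between guessing an article's date from scraped
// text and reading a date the publisher states explicitly. The field names
// and sample markup are invented for illustration.

$scrapedHtml = '<p>Chicago - The airline filed for bankruptcy protection today...</p>';

// Scraping approach: there is no year in the visible text, so a crawler
// might fall back to the date it first saw the page, which can be wildly wrong.
if (preg_match('/\b(19|20)\d{2}\b/', strip_tags($scrapedHtml), $matches)) {
    $guessedYear = $matches[0];
} else {
    $guessedYear = date('Y'); // dangerous default: "I saw it today, so it must be new"
}

// Semantic approach: the publisher states the publication date outright in a
// machine-readable field that the crawler can simply trust.
$metadata = array(
    'headline'      => 'Airline files for bankruptcy',
    'datePublished' => '2002-12-09',
);
$statedDate = $metadata['datePublished'];

echo "Guessed year: $guessedYear; stated date: $statedDate\n";
?>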

Some reflections on Aurora, browser of the future

Let me say first that this is some amazing conceptual work. Coming up with something that is genuinely new (or, depending on your metaphysics, at least seems so) is difficult work. It is rare that something comes along in the world of desktop software in general, and web browsers in particular, that can be called revolutionary, but I think Aurora fits the bill. I don’t want to get all hyperbolic (Aurora isn’t going to change political systems or rid us of our oil dependency), but I think you have to give respect where it’s due, and the team at Adaptive Path has clearly done some top-notch work on this project of imagining the browser of the future.

Rather than try to explain it, here’s part one of the video (link rather than embed because Vimeo’s embed code isn’t valid XHTML).

What I like most about it is how clearly it demonstrates the power of the semantic web. Data tables, event listings, and so on are all (presumably) marked up to be both computer- and human-readable, and Aurora is able to combine them with data from other relevant sources, whether user-defined or automatically generated.
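
For a rough idea of what that kind of extraction might involve (this is only my guess at the plumbing, not how Aurora actually works), here’s a sketch that pulls event data out of hCalendar-style markup from an invented page:

<?php
// A rough sketch of the kind of extraction an Aurora-style agent depends on:
// pulling structured event data out of microformat-style markup. The class
// names follow the hCalendar convention; the sample page itself is invented.

$html = '<div class="vevent">
           <span class="summary">City council meeting</span>
           <abbr class="dtstart" title="2008-09-15T19:00">Sept 15, 7pm</abbr>
           <span class="location">City Hall</span>
         </div>';

$doc = new DOMDocument();
$doc->loadHTML($html);
$xpath = new DOMXPath($doc);

$events = array();
foreach ($xpath->query('//*[contains(@class, "vevent")]') as $event) {
    $events[] = array(
        'summary'  => $xpath->query('.//*[contains(@class, "summary")]', $event)->item(0)->textContent,
        'start'    => $xpath->query('.//*[contains(@class, "dtstart")]', $event)->item(0)->getAttribute('title'),
        'location' => $xpath->query('.//*[contains(@class, "location")]', $event)->item(0)->textContent,
    );
}

// Once the data is structured, combining it with other sources (a calendar,
// a map, a contact list) is just ordinary data handling.
print_r($events);
?>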

The visual effects are undoubtedly sweet, but it’s the interaction design choices that really make the video interesting.


Using WordPress as a CMS – Part 3

In the first two “WordPress as CMS” posts, I discussed the benefits of WordPress as compared with other free, open source CMSs, and how to take advantage of recent WordPress improvements when using it as a CMS. In this installment, I’ll go into detail on a few plugins that are a “must” if you want to use WordPress as a CMS.

But first, a word about plugin security. Unfortunately, WordPress plugins have a bit of a reputation for being insecure, due largely though not exclusively to a lack of proper sanitization of user input. Neglecting to check whether a user has entered malicious code into a form field, for example, or tacked it onto the end of a query string, can leave your server vulnerable to SQL injection and similar attacks. With that in mind, it’s prudent to check around for any known security issues with a plugin before you install it. If you have the PHP skills, you can also check the plugin yourself for any code that might leave your system open to being compromised.
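
To illustrate what to look for, here’s a minimal before-and-after sketch using made-up table and column names: raw request data dropped straight into a query, versus a query parameterized with $wpdb->prepare():

<?php
// A minimal sketch of the kind of input handling to look for in a plugin.
// The table and column names are made up; $wpdb->prepare() and
// sanitize_title() are standard WordPress functions.

global $wpdb;

// Vulnerable: raw request data interpolated straight into SQL.
// $items = $wpdb->get_results(
//     "SELECT * FROM {$wpdb->prefix}myplugin_items WHERE slug = '{$_GET['slug']}'"
// );

// Safer: sanitize the value on the way in and let prepare() escape it.
$slug = isset($_GET['slug']) ? sanitize_title($_GET['slug']) : '';
$items = $wpdb->get_results(
    $wpdb->prepare(
        "SELECT * FROM {$wpdb->prefix}myplugin_items WHERE slug = %s",
        $slug
    )
);
?>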

But that aside, there are many secure and well-tested WordPress plugins, as well as many (perhaps most?) that do not introduce any user-interaction features beyond the WordPress core and thus aren’t even really candidates for opening up additional security holes. The following is a list of just a few.


The semantic web gets friendly

I’ve had a passing interest in the semantic web since I first heard the term a few years ago, but hadn’t explored it much beyond using the hCard microformat for contact info on a few websites I’ve done. It sounded like an interesting idea, but in the absence of significant, working applications beyond the academic world, it didn’t really capture my attention. Not to mention that there were (and are) some very vocal opponents of the idea, with a wide-ranging set of criticisms (some well-taken, some just strange).
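
For anyone who hasn’t run into it, that hCard markup looks something like the output of the trivial snippet below; the contact details are invented, and this is just the sort of throwaway template code I mean:

<?php
// The sort of hCard markup mentioned above, emitted by a trivial snippet of
// the kind a theme template might use. The contact details are invented;
// the class names (vcard, fn, org, email) come from the hCard microformat.

$contact = array(
    'name'  => 'Jane Example',
    'org'   => 'Example Consulting',
    'email' => 'jane@example.com',
);

echo '<div class="vcard">';
echo '<span class="fn">' . htmlspecialchars($contact['name']) . '</span>, ';
echo '<span class="org">' . htmlspecialchars($contact['org']) . '</span> ';
echo '<a class="email" href="mailto:' . htmlspecialchars($contact['email']) . '">'
   . htmlspecialchars($contact['email']) . '</a>';
echo '</div>';
?>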

But it recently popped back into mind when I chanced across a post entitled Semantic web comes to life from Joel Alleyne’s blog, Knowledge Rocks. Given that I still have access to the e-journal database of one of my former universities, I logged in and grabbed the full Scientific American article Joel had linked to, The Semantic Web in Action. The article spills a good deal of ink highlighting various real-world applications of semantic web (or at least semantic web-ish) technologies, mostly from the medical field, where practical applications abound. (The stories shared by the article authors reminded me of a story I read a while ago about a researcher without any actual health training who made a significant cancer treatment-related discovery just by linking together existing research that hadn’t yet been put together; I wish I could remember whether it was on the web, in Harper’s, or possibly a CBC Ideas episode.)

So I’m on a bit of a semantic web kick now… I’ve FOAF-a-matic’ed myself, and am reading all I can. It’s a fairly timely rediscovery as my workplace (Canadian Education Association) moves towards implementing a new website. We’re sitting on a goldmine of content (particularly from our magazine, Education Canada) that really needs indexing and some good metadata, and it will be interesting to see if RDF or something like it can fit into the emerging picture.
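
For what it’s worth, the kind of metadata I have in mind might look something like a Dublin Core description in RDF/XML. Below is a throwaway snippet that assembles one for a made-up Education Canada article; whether we end up with RDF proper or something lighter remains to be seen:

<?php
// A rough sketch of the kind of machine-readable metadata I have in mind for
// the magazine archive: a Dublin Core description in RDF/XML, assembled by a
// throwaway snippet. The article details below are made up.

$article = array(
    'uri'     => 'http://example.org/education-canada/sample-article',
    'title'   => 'A Hypothetical Article Title',
    'creator' => 'A. Author',
    'date'    => '2008-09-01',
    'subject' => 'assessment',
);

echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
echo '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"' . "\n";
echo '         xmlns:dc="http://purl.org/dc/elements/1.1/">' . "\n";
echo '  <rdf:Description rdf:about="' . htmlspecialchars($article['uri']) . '">' . "\n";
echo '    <dc:title>' . htmlspecialchars($article['title']) . '</dc:title>' . "\n";
echo '    <dc:creator>' . htmlspecialchars($article['creator']) . '</dc:creator>' . "\n";
echo '    <dc:date>' . $article['date'] . '</dc:date>' . "\n";
echo '    <dc:subject>' . htmlspecialchars($article['subject']) . '</dc:subject>' . "\n";
echo '  </rdf:Description>' . "\n";
echo '</rdf:RDF>' . "\n";
?>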