Archive: web

An infuriating security question: "Your favourite shape"

Is there anything more annoying than those security questions you need to answer to log in to certain websites? I cannot understand how they are supposed to make websites more secure.

I understand that passwords can be cracked and the security question is a safety net. But let’s face it. All the advice on passwords is that they are not to be real words. You should insert numerals, use mixed case, special characters; the works. If a password like that can be brute forced, a “security” answer made up of dictionary words, and based on known facts about your life, will be a piece of cake.
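To put rough numbers on that argument, here is a back-of-the-envelope entropy comparison. The pool sizes are illustrative assumptions (about 94 printable ASCII characters for a password, a 10,000-word vocabulary for answers), not measurements of any real site:

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Bits of entropy for a secret of `length` symbols drawn uniformly
    from a pool of `pool_size` possibilities."""
    return length * math.log2(pool_size)

# A 10-character password drawn from ~94 printable ASCII characters
password_bits = entropy_bits(94, 10)

# A "security answer" made of two words from a 10,000-word vocabulary
answer_bits = entropy_bits(10_000, 2)

print(f"password: ~{password_bits:.0f} bits")  # ~66 bits
print(f"answer:   ~{answer_bits:.0f} bits")    # ~27 bits
```

Every 10 bits is roughly a factor of 1,000, so on these assumptions the dictionary-word answer is many orders of magnitude easier to brute-force than the password it is supposed to back up; and that is before the attacker narrows it down with known facts about your life.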

Facts like my mother’s maiden name, my hometown or my first primary school are not exactly secret. They can be easily answered by anyone with the slightest knowledge about me.

As far as I am concerned, it is the security equivalent of sticking a Magic Eye puzzle in your porch just in case someone manages to break down your door.

Worse still, a bad security question can lock you out of a website for good. I have seen a security question that was actually impossible for me to answer because it was asking about a life situation that simply did not apply to me. It was offensive as well as being shockingly unusable. I decided not to register for that particular website after all.

What am I supposed to do in that situation? Maybe I could just make an answer up. But how could I remember it? The only way is to write it down. Then it will only get lost in an obscure drawer, or maybe some criminal hacker’s pocket.

Then there are those questions on topics that you simply don’t care about. One particular website that I tried to log in to recently left me stumped. It’s the sort of website I might only log in to once every few years. So my answers to questions like these really could be anyone’s guess:

What was the surname of your favourite teacher?
I’m not sure I had a favourite teacher. Certainly, the person who immediately sprang to mind was not who I would call my ‘favourite’. And my favourite teacher five years ago might not be the person I remember most fondly now; the teacher I liked best as a school pupil is probably totally different to the one I would pick today. As it is, I have absolutely no idea how I answered this question.
What is your most memorable place, but not where you were born or live?
What on earth? What is a ‘memorable place’? Not only do I struggle to have any interest in such a question whatsoever, but I cannot even tell what sort of place it might be. Could it be Edinburgh? The local park? Behind the bike sheds? No idea.
What is your favourite musical instrument?
To play or to listen to? It depends on so many things. It could be piano, marimba, vibraphone, Omnichord… It could be anything, depending on my age or mood.

When you add in the fact that answers are case-sensitive, and that you don’t get repeat attempts at the same question, it soon becomes clear that I am not going to get access to this website. There is no way for my password to be reset.
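The case-sensitivity problem, at least, is avoidable on the site’s side. A minimal sketch of normalising an answer before hashing it, so that trivial variations in capitalisation and spacing still match (the function names, the fixed salt and the iteration count here are my own illustrative assumptions, not any real site’s implementation):

```python
import hashlib
import unicodedata

def normalise_answer(answer: str) -> str:
    """Canonicalise a security answer so trivial variations still match:
    Unicode-normalise, lowercase, and collapse internal whitespace."""
    text = unicodedata.normalize("NFKC", answer)
    return " ".join(text.lower().split())

def hash_answer(answer: str, salt: bytes) -> bytes:
    # Store only a salted, slow hash of the normalised answer,
    # never the answer itself.
    return hashlib.pbkdf2_hmac(
        "sha256", normalise_answer(answer).encode(), salt, 100_000
    )

salt = b"per-user-random-salt"  # illustrative only; generate with os.urandom
assert hash_answer("Mrs  Smith", salt) == hash_answer("mrs smith", salt)
```

A site doing this would still reject wrong answers, but would stop punishing users for forgetting exactly how they typed “Kirkcaldy” five years ago.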

Apparently my only recourse is to use the electric telephone. But unless they subject me to a similar barrage of obscure questions, I don’t see what advantage this offers from a security perspective. I can picture it now.

“You are Duncan Stephen?”

“Yes! Yes I am!”

“And you have changed address?”


“OK! No problem at all! On the basis of this phone call we will now send your new password through the post!”

I was pretty excited to learn this week about Domesday Reloaded. The Domesday project aimed to take a snapshot of British life in 1986. 25 years on, the BBC are looking to update it to document the changes that have taken place since then.

I have been interested in the Domesday project for a while: a snapshot of Britain was taken, in the form of maps, photographs and text, yet the data was unavailable to most people.

The Domesday project was as much an ambitious experiment with technology as anything else. The technology was just about available, but a lot of pioneering work had to be done, and the hardware required for it was prohibitively expensive, leaving many of the contributors somewhat miffed.

Since then, it has become one of the most famous examples of digital obsolescence. This was due to a combination of the technology required to read the discs becoming increasingly rare, and idiosyncratic code.

The Domesday project came at a time when the technology was available, but the standards were not yet there to make it stable enough for long-term preservation, or even easy access in the short term. It’s a reminder that digital technologies are hugely enabling, yet frighteningly fragile.

Then there are the copyright issues surrounding both the content and the technology.

Joys of browsing Domesday Reloaded

The BBC should be applauded for finally managing to open up some of the data to the public on the web. The Domesday project was created before the web was invented. This isn’t how the content was designed to be viewed, so navigation is a bit cumbersome.

But aside from this gripe, the Domesday Reloaded website is turning out to be a fascinating resource.

I was born in 1986, the same year in which the Domesday project disc was published. So the Britain described here is a place that I don’t remember. But enough of it is familiar for it to feel incredibly relevant to me. It’s almost like being given a little upgrade to my memory, so that I can have snippets of knowledge from just before I was born.

Take the photographs for D-block GB-328000-690000 — the centre of Kirkcaldy, my hometown (D-block being one of the 4km by 3km areas the UK was divided into). It took me a little while to recognise “Kirkcaldy’s busy High Street”. But once I spotted British Home Stores, I was right there.

Yet, despite the familiarity, it is almost a completely different world. My memory of the High Street before it was pedestrianised is very limited. But it is just within touching distance of my memory, close enough for me to feel a strong connection with it.

The text entries are also fascinating. Most of the contributions were provided by primary schools. A decision was taken by the Domesday project not to edit the contributions, so the quality and style of writing varies from area to area.

As such, what strikes me the most is that it informs you as much about the prejudices of the school pupils and their teachers as it does about the area. It also retains their poor spelling and strange grammar.

For instance, an entry from Dundee (D-block GB-336000-732000) called ‘Traffic in and out’ is a basic survey of vehicles travelling on a road, with guesses as to where the vehicles are going and why. It lacks the academic rigour you would ideally want from a historical document.

But while some of the entries may seem banal, it was designed to be this way. The aim was to genuinely document society by capturing children’s curiosity with everything. This way it wouldn’t leave out what adults perceive as being obvious, when it wouldn’t necessarily be so obvious to someone in 1,000 years.

Missing D-blocks in Dundee on Domesday Reloaded

The really big shame is that not every part of Britain was documented. I could understand remote rural areas not being included. But sadly some highly populated areas have also been missed out. For instance, two D-blocks that cover the centre and east of Dundee lie blank, as does much of London.

But what exists is a joy. Even in the little amount of scanning I have done, I have already learned new information about the area I live in, which has set my mind racing and inspired me to investigate further.

Challenges for the modern day equivalents

What also struck me is how we already have readily accessible modern-day equivalents of the Domesday project, almost by accident. The BBC is asking users to update the content for D-blocks that were documented in 1986, to take an equivalent snapshot of 2011. I may go out and take some photographs for that.

But this sort of local information is staggeringly well documented already. We have Wikipedia, which can be edited by anyone but retains an academic approach that the Domesday project lacked. As such, it is a treasure trove of local information that can probably be relied on more.

Meanwhile, Google Earth and Google Maps provide masses of images of all corners of the country. It absolutely dwarfs what’s on Domesday Reloaded.

But the big question, which can’t be answered at the moment, is whether the wealth of information available on the web can be packaged up into a Domesday-style snapshot and preserved forever. The challenges of web preservation are massive.

Like the Domesday project, we could find the digital information almost slipping through our hands. The BBC knows that itself. With the stroke of a pen, it was decided that a significant chunk of British web heritage would be lost when the BBC removed some of its archived pages from the web.

According to the BBC News website, there has been a nasty outbreak of “man” in the Tayside and Central area.

Screenshot of the BBC News website with four headlines in a row beginning with the word 'man'

Google has teamed up with some awesome organisations to offer budding young scientists some great prizes for coming up with neat ideas.

Along with Cern, Lego, National Geographic and Scientific American, Google is asking for people aged between 13 and 18 to submit their projects to the Google Global Science Fair. They describe it as “the world’s first online global science competition”.

The scheme also harnesses the power of the internet, with entrants being asked to submit their projects by building a website using Google Sites.

With some great prizes on offer, I would probably be tempted if I was eligible! And what a cool video too.

Note: I am being paid for this post, but I still think it’s a great scheme so it’s all good. :-)

I have now begun writing for yet another blog. The University of St Andrews web team blog is being relaunched after an almost two-year hiatus. So if you’ve ever wanted to learn about what I do at work (!) then you are in luck!

My first post is about the Institutional Web Management Workshop 2010, which I attended along with my colleague a couple of weeks ago.

We are planning on blogging more in the near future. Certainly, it won’t be almost two years until the next post.

Incidentally, if you are interested, you can also follow the University of St Andrews web team on Twitter @stawebteam. My tweets are marked ‘^dbss’ (my University username).

I don’t often write about myself here these days. Despite the fact that I went to all the effort to set up a personal website, I do think it is a tad self-indulgent to bang on about myself. However, some readers may be interested in recent developments in my life.

Regular readers will know that I haven’t had the best year when it comes to work. After graduating from university last year, I struggled to find employment. Then I lost my part-time job when Woolworths closed down. I had done bits and pieces of freelance work, but not much else.

A few months ago I decided to bite the bullet and look for unpaid work. I saw an internship at the office of Willie Rennie MP advertised, and went for it. It made sense in a lot of ways. The Liberal Democrats have long been the party I sympathise with the most.

Plus, Willie Rennie’s constituency of Dunfermline and West Fife is just next door to mine, so there is the local connection too. I liked the fact that he beat Labour in an area that is so left wing that it was once represented by a Communist MP — a great achievement.

I spent a few months helping out there doing a variety of tasks, and I enjoyed it so much that I will still help out from time to time. It is worth pointing out, in the interests of transparency and what-not, that I have joined the Liberal Democrats.

But I no longer catch the bus to Dunfermline to work there. That is because I have finally found a proper job — one that involves being paid and everything.

I am now working as the Web Editor at the University of St Andrews. When you read this, I will have started my second week there. As you may imagine, I’m really pleased to have got the job.

Despite the recent navel-gazing about the value and future of blogging, which I wasn’t very positive about, getting this job is a vindication of the time and energy I have spent running websites.

All the knowledge that enabled me to get the job was gathered as a result of my hobby running websites. I have no other background or qualifications in editing content for the web. Mind you, I gather that this is no barrier.

There is another way in which this blog helped me get the job. I was originally alerted to the position by a reader of this blog. Then, despite expressing my initial reluctance, she encouraged me to apply. That person has proved difficult to get in contact with since. But if you happen to still be reading, you know who you are — thanks so much!

I am not yet sure what this means for the future of this blog. While I have been busier over the past few months, my already-infrequent updates have become even less frequent. I will spend the winter months experimenting to see what works.

Hopefully I will be able to continue updating, but maybe with a different focus. Less about sin taxes, and more about syntax? Less about dealing with the DSS, and more about dealing with CSS?

Whatever, stay tuned. I’ll be back with more posts soon.

In my previous article, I argued that the problems that are hitting journalism are more to do with the quality of the content than with the fact that it’s difficult to charge for content these days. “Why pay to read Telegraph Digg-bait when you can read BBC churnalism for free?”, I asked.

I am sure plenty of journalists realise this if they stop to think about the situation. The fact that so many professionals blame bloggers for the industry’s ills says it all. Despite journalists’ qualifications, experience and resources, their entire business is supposedly being dismantled by a bunch of hobbyists who spend the odd hour of their spare time opining on the internet.

A few weeks ago I met a journalist at a party and engaged him in a conversation about the future of his industry. He told me he hates bloggers (whoops! — I kept schtum). But he told me that in his view the biggest problem was people scooping him on web forums! If the professionals see online discussion forums as not only competing with them but doing better than them, that surely must make them wonder if the product they are asking people to buy simply is not good enough.

Anyone who thinks that bloggers and the mainstream media are competing is wrong. If they are competing, the media simply isn’t doing its job properly. Let us face facts. For the most part, bloggers don’t have the contacts, the resources or the expertise to do, for instance, a big investigative story.

If the media is worried about amateur bloggers, it is a pretty bad reflection on the professionals. Perhaps to distinguish itself, the media should be focussing on those aspects of content production that bloggers cannot do.

The supply of mediocre content is too high. Too much of the same sort of content is as readily available to news junkies as sea water is to beach-goers. In effect, for the past decade or so newspapers have been driving up to the beach with a tankful of sea water, then pumping their water into the sea. Later they started stretching out their hands like beggars wondering, “why won’t these beach-goers pay us for all this seawater we’re providing them?!”

So what is the answer? In my view, less is more. What newspapers need to do is offer something distinctive and different. They should specialise more and differentiate their content from everyone else’s. They need to offer less, but better, content.

Newspapers should forget about reporting all the same hard news as every other outlet. It is a crowded marketplace, so there is no money to be made there. Instead, they should work on more exclusives, investigative reporting, analysis and features.

Actually, there is a problem with that idea, which is that it won’t save all newspapers as we know them at all. It points to a future where many daily newspapers may wither. But weeklies, monthlies and specialist publications are more likely to thrive. It wouldn’t stop the press from having a difficult period of job losses and paper closures. But it would mean those who could get it right would be able to charge for content quite comfortably.

Evidence suggests that this shift may already be happening. Speaking personally, there is not one daily newspaper that I would be happy to pay for. But up until recently I was perfectly happy to pay for the weekly Economist (and in truth, I only stopped because I didn’t have the time to read it). As for specialist publications, I still like to read the monthly F1 Racing if I get the chance.

It may be the same for other people too. Recent evidence seems to suggest that many specialist publications are doing well at the moment, even amid all the turmoil in the press and the worst recession in living memory. According to Malcolm Coles, 216,000 people are perfectly happy to pay £7.75 per month for an online subscription to Which?.

Yesterday I also read about two major news websites relaunching — with less emphasis on news. On the new LA Times website, Hamilton Nolan at Gawker wrote:

Scroll down from the top of page at the new LAT site and you find: Health, Food, Education, Technology, Sports, Blogs, Columns, Opinion, Photos & Video, Summer Hot List, and “Your Scene, Your Comments.” Did you miss the, say, ‘International news’ section? It is way up at the top in tiny tiny type. Below the top fifth or so of the page, there is no “hard news” at all.

As for the new Newsday website… well, just take a look.

Someone still has to do the worthy news stories though. Maybe that can be better left to agencies or major broadcasters. But maybe a simple reduction in the number of newspapers would suffice. Iain Hepburn recently estimated that as many as 17 major media outlets are all aiming at the same audience in Scotland. We make do without 17 major supermarket chains — five or six different ones satisfy most consumers. So do we need more than five or six major news outlets?

A merger here, a takeover there and even the odd shutdown or two might be a good thing. Fewer outlets can have a higher market share, more resources, more of the best journalists — and they’ll produce a better product as a result. Five or six excellent news sources would be much better than 17 so-so ones, which is more or less what we’ve got at the moment. Surely that is what’s needed to make news a viable business going forward.

I’ve recently been doing little bits and pieces helping out with the organisation of a very interesting event called ScotWeb2. It will take place on 31 October from 1000 to 1600 at the Holyrood Campus of Edinburgh University.

It will be an informal barcamp / unconference-style event. It’s being organised by Alex Stobart who works at the Scottish Government. Dave Briggs is also helping out and the event will be backed by BT.

I’ve mostly been trying to drum up interest among bloggers because it could also be a good opportunity for some bloggers to meet up and talk shop a bit. But it will be about much more than that. It will be about the application of web 2.0 technologies in general, in government, in the private sector and in the ‘third sector’.

Among the speakers will be Simon Dickson of Puffbox; Ross Ferguson of Dog Digital; Iain Henderson from MyDex; Stewart Kirkpatrick, former editor, now at w00tonomy; James Munro of Patient Opinion; and someone from BT to talk about Tradespace.

The best news is that attendance is open to anyone who is interested and it is free. All you have to do is sign up through Eventbrite and print out the ticket.

If you’re interested, keep an eye on the ScotWeb2 website. It’s not quite finished yet but it will be fleshed out soon enough.

More information from the Eventbrite page:

Web 2 seminar hosted by Edinburgh University, supported by BT and for all those interested in learning about Web 2 from practitioners, government and business users.

An informal, bar camp style event allowing participants to listen, network and share experiences with those who have designed and are managing Web 2 services.

Speakers and workshop leaders from Health, Business, Web design, Colleges and Universities, Social Enterprises, Social Media, Journalism, Government and Civic Society…

Others from Web 2 companies, Web 2 social enterprises, Web 2 designers (public and private sector), Not for Profit organisations, Academia, Business and the public sector will be there to run workshops and explain their experience of Web 2…

There will be talks, opportunities to break out into discussions and to mix with those speakers who have used and built web 2 applications, and who are wishing to see change in the way users interact with their service providers and elected representatives.

There is an e-mail list here

If you are interested in web 2 as a subject covering communications, marketing, consultation, participation, engagement or service provision then this event will be of interest.

(Yes, every post I write about RSS must contain the hilarious “‘RSS’ sounds a little bit like ‘arse'” pun.)

I have a request for those people who publish RSS feeds. Make them full feeds!

I know there is supposedly a debate about whether partial or full feeds work best. Well, that is not really the right way to put it. Everybody knows that full feeds work better than partial feeds. I mean, it is like saying that a sandwich is better than the crumbs. It’s just obvious.

But some website owners are, for some reason, sniffy about full feeds. Some people publish partial feeds for relatively superficial reasons, for instance because they can’t bear for any readers to be reading it in an environment other than their lovingly handcrafted web page design. Others have more serious suspicions: that full feeds rob them of page views and rob them of advertising revenue.

Earlier this year, the rather good Freakonomics blog moved to The New York Times website. At the same time, the full feeds were snatched away from the blog’s many readers. Apparently, it is NYTimes policy.

Immediately there was an angry reaction from readers. It (mostly) wasn’t from readers concerned about the move to the NYTimes itself, or even the fact that the URLs had changed, or that there was an entirely new navigation system to get accustomed to, or anything like that. They were almost all from people who were angry that the full feed had overnight turned into a partial feed. Many readers even said they were unsubscribing.

The comments on the initial post were just the start of it. Several subsequent threads descended into similar “outraged of Bloglinesville” mobs, and it has become a recurring topic on the blog ever since. There is one plus side: at least the authors are open about the problems and the reasons why they can no longer offer a full feed.

While I wouldn’t go so far as to get angry, I would guess that I have read a lot less of the Freakonomics blog since the move. This is entirely down to the fact that it no longer offers a full feed.

I am aware that a lot of people simply cannot believe that (or understand why) full feeds generate as many clickthroughs as (or sometimes even more clickthroughs than) partial feeds do. It doesn’t seem to make sense, right? If people can read the entire content without leaving their RSS reader, why on earth would they visit the website?

But it doesn’t work like that. FeedBurner say so — and they would know. To me, it is just common sense. I have been reading RSS feeds for a few years now, so I think I have a pretty good idea of the reasons why partial feeds just do not work.

Think about why people use RSS feeds as opposed to visiting the different web sites all the time. It’s obvious: people who use RSS feeds do so because it makes it easier and quicker to read everything they want to read.

So immediately we have run into the problem with partial feeds — they do the precise opposite of what the reader wants. They make it more difficult and slower to read what you want to read. If you have begun reading and want to read the rest of the content, it involves clicking through and waiting for the (probably bloated) web page to load. It is a needless, unwanted, time wasting, inefficient hassle.

That explains why readers generally don’t like partial feeds. But what about the clickthrough rate? First of all, it is worth pointing out that page views are falling out of favour as a meaningful web metric thanks to the increasing use of Ajax and other kinds of magic. In a funny way, more page views usually means it’s a worse website. (Ask users of MySpace and Facebook about the navigation of those sites, and see which site has the happiest users.)

But let us say that page views (and certainly visits) are a good thing. So why should you use full feeds? Once again, for me it is down to convenience. I use RSS feeds because they allow me to squeeze more reading into a shorter space of time. Imagine sitting there in front of Google Reader. You have a list of items waiting to be read. So you get on with it and start scrolling through, scanning for anything interesting.

By now, you may have realised why partial feeds do not automatically generate clickthroughs. It is because there is less of the content for me to scan-read and evaluate. Typically, a partial feed will contain the headline and the first couple of dozen words. This simply is not enough to give me as a reader an idea of how good the rest of the article is. Neither is it long enough for the author to sell the article.
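The difference between the two kinds of feed is even visible mechanically. As a rough illustration (the heuristic and the 500-character threshold are my own assumptions, not any standard), a feed whose items carry only a short description, rather than a long body or a content:encoded element containing the whole post, is almost certainly partial:

```python
import xml.etree.ElementTree as ET

# Namespace used by the common RSS content module for full-post bodies
CONTENT_NS = "{http://purl.org/rss/1.0/modules/content/}"

def classify_feed(rss_xml: str, full_threshold: int = 500) -> str:
    """Crude heuristic: call a feed 'full' only if every item carries
    a body longer than the threshold; otherwise call it 'partial'."""
    root = ET.fromstring(rss_xml)
    items = root.findall("./channel/item")
    for item in items:
        encoded = item.find(CONTENT_NS + "encoded")
        description = item.find("description")
        body = (encoded.text if encoded is not None
                else description.text if description is not None
                else "") or ""
        if len(body) < full_threshold:
            return "partial"
    return "full" if items else "empty"

sample = """<rss version="2.0"><channel><item>
<title>Example</title>
<description>Just the first couple of dozen words...</description>
</item></channel></rss>"""
print(classify_feed(sample))  # partial
```

A reader scanning in an aggregator is, in effect, running exactly this test by eye on every item: a couple of dozen words is simply not enough to evaluate.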

There is one site that falls victim to this more than any other, if you ask me: that of Tim Worstall, one of the most widely respected British bloggers. His RSS feeds simply do not do his blog justice.

I will sit there with Google Reader and scroll through the many posts he has written that day, and all too often I find myself not being enticed by a single one of them. That is not because they are not interesting. It’s because his partial feeds simply do not give me any confidence that clicking through to read the rest of the post will be worth my time.

If Tim Worstall writes ten posts in a day (which is my conservative estimate of what he averages), he is asking me to read ten summaries, click ten times, wait for ten web pages to slowly load, then read ten full posts. What a waste of time!

This is especially annoying if the partial feed stops in the middle of a sentence, which it does almost every time. When the partial feed stops at the end of a sentence, there is confusion over whether I have read the full post (just a really short one), or whether it was just a fluke that the feed finished in a neat position.

If Tim Worstall provided full feeds in the first place, I could have just read them all there instead of going through all of that hassle. Who knows, I might even have clicked through and left a comment. I might have bookmarked one of his posts in Delicious, letting other people know how good the post is. I might even have blogged about it. I might even have clicked on an advert!

As it is, I just scroll through the summaries and ignore them all. I have, in the past, unsubscribed from his blog because of the frustration over this. I recently subscribed again, but can’t say I read a good deal more of his blog as a result.

Some other blogs provide “summaries” instead of partial feeds. This is where, instead of the first few words of the post, the author has instead specially written a summary designed for the feed. The problem with this is that sometimes it is made up of a random paragraph taken from the middle of the article. Even worse, it might give away the conclusion before I have even read what it was the conclusion for!

If I am enticed by such a summary, I will click through and find myself reading the post and thinking, “This isn’t what I thought I was reading.” Then I will come across that paragraph in the middle. Ah, and that introduction in the summary? I have found out that it was actually a conclusion. It is like forcing somebody to read the last page of the novel before reading the rest of it!

There is another more fundamental reason why people should offer full feeds. It is just plain rude not to. RSS subscribers are your most dedicated readers. They are people who have decided that your content is good enough to have it effectively delivered straight to them on a regular basis.

Yet, how are these dedicated readers paid back? By getting a mangled fraction of the content that they asked for. It is like subscribing to your favourite magazine only to find the publisher sending out cuttings rather than the whole magazine. What a way to treat your regular readers!

I can hear the howls already: “What about all of the beautiful adverts that I have lovingly placed on my blog / newspaper / whatever? If I offer full feeds, nobody will look at the adverts and I won’t make any money!” Again, there are several responses.

I have already explained why full feeds do not lead to a reduction in clickthroughs. So people will see your adverts just as much as they always did.

There is an even more obvious answer: what is stopping you putting adverts on your feed? Plenty of big websites already do this. It is perfectly possible. People who are refusing to offer full feeds because “they don’t contain my adverts” are simply shoving their heads in the sand.

Even if there was a legitimate concern about adverts, it has to be remembered that your regular readers (the sort who would subscribe to your RSS feed) are the very people who are the least likely to click on the adverts anyway.

Let us not forget also that a lot of adverts are not even designed for human eyes as much as they are designed for SEO. These kinds of adverts would not even mind not being seen (just as long as Googlebot sees them).

Maybe you are concerned about stats. Let’s face it, as bloggers we all are. We want to know how many people are reading. What would be the point if you had no way of knowing whether people were reading or not? Gordon McLean (whose recent post on RSS is an interesting read) falls into this group.

Admittedly, this is one downside to RSS as it becomes impossible to find out precisely how many people are reading. Mind you, web stats are not generally the most reliable things anyway. Run four different stats counters and you are bound to get four different — sometimes wildly varying — figures. RSS further muddies the waters.

As it happens, I recently moved over to having this blog’s feeds provided by FeedBurner (combined with the absolutely vital FeedSmith WordPress plugin), partly because it would give me some fairly accurate (but not precise) statistics. I was pleasantly surprised to find that around 140–150 people are subscribed to this blog. (Hello to you good people. I hope you are enjoying the full feed!)

Beforehand I had vague ideas of who was reading this blog’s webpages and why. But I had no idea of how many people were actually subscribed to this blog’s RSS feed. But now I do have some fairly interesting and meaningful stats about my RSS feed. So even the stats issue with RSS feeds is resolved to an extent.

All of this is not to say that partial feeds do not have their place. For instance, they are perfect for news websites. This is because of the way they work. We are used to just scanning through a front page containing only a headline and a (very) brief summary of each story. From here we choose which stories we want to read. This is how news websites work, and partial feeds can reflect this.

Blogs, however, do not work in this way. Very few blogs offer just a summary of each post on the front page. The blog format does not usually lend itself well to this approach. Rather, the vast majority of blogs’ front pages contain either the full content of the most recent posts, or at least a huge chunk of them.

As far as I can see, there is no reason why the vast majority of web sites should be forcing their most dedicated users to put up with shoddy, sub-standard partial feeds. For me, the fears that website owners have surrounding full feeds are mostly unfounded.