The Problem With Digg


Digg is hip, Digg is fun; every geek likes to compare their list of dugg stories, and gets a thrill from a submitted story hitting the homepage. But Digg has a problem. As its user base has grown, and the site’s design has become more refined (the latest iteration was released today), the perceived quality of the stories reaching the Digg front page, and of the comments individual Diggers leave on stories, has declined. The most common explanation for this decline is the dilution of quality that attends increased popularity: as Digg becomes less exclusive, it attracts a broader, less technically literate, and younger audience. Digg’s democratic structure leaves it open to collective dumbing down (and deliberate spamming) in a way that Web 1.0 social news sites (Slashdot etc.) were not. However, let me suggest another potential explanation for the variable quality of news on Digg.

Digg has an identity crisis

Let’s compare Digg to delicious, another popular social bookmarking site, one with a much smaller emphasis on the social, and a much greater emphasis on bookmarking. As a regular user of both sites (in the case of Digg, primarily through RSS-aggregating intermediaries like Netvibes), I find myself using Digg and delicious in radically different ways. Digg I treat as one news source among many, checking it daily along with dozens of other such sources (TailRank, Techcrunch, Boingboing etc.) via a tab on my Netvibes page. I use Netvibes rather than competing page aggregation services (e.g.: Pageflakes) because it allows me to read the guts of a story (or at least that portion of it contained in its RSS feed) before deciding whether to go directly to the source site. If the story is something I think I’ll need later, or just something intrinsically interesting or of note, I’ll frequently add it to my delicious bookmarks.

By contrast, as a casual Digg user, I rarely digg stories. I’d like to avail of the much more sophisticated social features included in Digg (the digging meme itself, integrated comments on each post, richer social networking, better user statistics), so why don’t I? Two reasons. Firstly, the nature of Digg means that by the time I’ve read a story I’ve left the page dedicated to it (which cuts out my motivation to ‘digg’ it). Secondly, the operations involved in using Digg have greater costs – originality, search, voting, and exposure to voting. Let’s look at the process of adding links, as a logged-in user of either delicious or Digg.

Submission to delicious

    • Hit tag button (an extension or bookmarklet)
    • Tag (with many fields autotagged based on my previous bookmarks)
    • Save

Output: An online collection of bookmarks, social in the sense that they are aggregated with the bookmarks of other users, and can be copied or shared with other delicious users, but primarily distinct and isolated in the space of my individual delicious page and its attendant RSS feeds. Let’s contrast this with Digg.
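Those three steps are thin enough to fit in a bookmarklet, or in a few lines against delicious’s REST API. Here’s a sketch that just builds the v1 `posts/add` request; the endpoint and parameter names follow the public del.icio.us v1 API, but the actual HTTP call and Basic authentication are omitted:

```python
from urllib.parse import urlencode

def build_add_request(url, description, tags):
    # delicious expects tags as a single space-separated string
    params = urlencode({
        "url": url,
        "description": description,
        "tags": " ".join(tags),
    })
    # The v1 API authenticates with HTTP Basic auth over this endpoint
    return "https://api.del.icio.us/v1/posts/add?" + params

print(build_add_request("http://example.com/story", "A story", ["news", "web"]))
```

The contrast with Digg’s flow is the point: there is no duplicate search and no exclusive topic, so the whole operation is one call.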

Submission to Digg

    • Search for duplicates to the url I’m submitting
    • Submit the link including description, sans tags, but including one exclusive topic.
    • Save

Output 1: Story posted to Digg, where it can be commented upon, ‘dugg’ or ‘buried’ (though only from inside the post itself, rather than any of the overall site views) by logged-in users, potentially promoting it to the Digg front page.

Output 2: Alternatively, my submitted URL is rejected. Should the URL I am posting already have been uploaded to Digg, my post will be rejected with the error message “This URL has been reported by users and cannot be submitted at this time.”
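That rejection presumably rests on some normalisation of the submitted URL against previous submissions. I don’t know Digg’s actual logic, but a minimal sketch of the kind of comparison involved might look like this:

```python
from urllib.parse import urlparse

def normalize(url):
    # Strip scheme, 'www.' prefix, case, and trailing slashes so trivially
    # different URLs compare equal (a guess at the logic, not Digg's code)
    parts = urlparse(url.lower())
    host = parts.netloc[4:] if parts.netloc.startswith("www.") else parts.netloc
    return host + parts.path.rstrip("/")

def is_duplicate(url, previous_submissions):
    seen = {normalize(u) for u in previous_submissions}
    return normalize(url) in seen

print(is_duplicate("http://www.example.com/story/",
                   ["http://example.com/story"]))  # True
```

Note that this check is a cost the submitter pays manually (the search step above) before the site performs it again automatically.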

Conflicted Roles

Digg fulfills two distinct roles, roles that in its current iteration are in conflict: the first as a social news site, and the second as a social bookmarking site.
I can use my posted Digg stories as bookmarks, laboriously searching for duplicates before I post, and digging rather than posting if they already exist; or I can decide to post only notable stories which I hope will be original. If I do the former, I add several steps to my bookmarking (with diminished navigability due to the lack of tagging); if I do the latter, then my motivation for submission is unclear, and my reward (successful posting, popularity of the post) uncertain. Submission of notable stories might be done out of social honor, in search of popularity, as a product or service announcement, or in support of a meme, organisation or product. To time-pressed adults (rather than the pimply uber-geek / tech-teen contingent), spending time on such submissions – rather than posting a blog entry, or writing a story for a more fully developed news site (such as Newsvine) – simply doesn’t make sense. Hence we see a small number of dedicated hobbyists supplying the majority of news on Digg, and a much greater number of ‘casual’ readers who ignore the site’s social features.

The problem is that Digg’s identity is indistinct. As I’ve tried to demonstrate, Digg is ill-suited for use as a social bookmarking site, due to its insistence on novel posts. Digg is also not designed to allow for detailed discursive posts. The site is a news platform, with a great incentive (in terms of traffic and exposure) to be linked from, but a small incentive to post to. As a social network, Digg rewards frequent successful posters, but does little to build community around individual topics or users. As Digg’s popularity increases, a decreasing proportion of its growing user base is likely to contribute to the site’s content – due to the increasing difficulty of posting original content, and the increasing likelihood of successful posts failing to be promoted to the front page as the overall rate of posting rises – resulting in a transition from a Slashdot-like authoritative (sic) news site, to a Fark-like entertainment site.

This is fine as far as it goes. There’s a lot of advertising revenue to be made from being the Web 2.0 Fark or the tech College Humor. But it’s a disappointing outcome. Digg has the potential to be more: to compete as a delicious replacement, to provide sterling competition to sites like Newsvine and TailRank as a hub for news and current affairs discussion, and to rival sites like Reddit for rapid news discovery and dissemination.

What’s the alternative?

A few minor changes could improve the quality of links submitted to Digg, while keeping its core discursive structure intact.

  1. Tag support
  2. Rather than spitting back “guidelines to make digg a better place” when a previously submitted link is posted, automatically provide users with the option to digg the previous submission of a non-novel link
  3. Allow article submission in addition to link submission
  4. Encourage submission and digging via bookmarklet and extension
  5. Increase the personalization and connectivity features of individual profiles
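Proposal 2 is essentially a one-branch change to the submission flow: on a duplicate, offer a digg of the original rather than an error. A hypothetical sketch (the store and return shape are invented for illustration):

```python
def handle_submission(url, stories):
    # stories: hypothetical mapping of canonical URL -> existing story id
    key = url.rstrip("/").lower()
    if key in stories:
        # Instead of "This URL has been reported by users...", hand the
        # user a one-click digg of the earlier submission
        return ("offer_digg", stories[key])
    return ("accept", None)

print(handle_submission("http://Example.com/story/",
                        {"http://example.com/story": 42}))  # ('offer_digg', 42)
```

The duplicate search the user currently performs by hand becomes a feature rather than a tollbooth.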

At a stroke, Digg could compete as a social bookmarking site, social news site, social network and even basic blogging platform; building on its existing connectivity and popularity engine, and highly granular news selection features, and allowing more detailed discussion of individual topics. These features are so complementary that they gain in utility through aggregation. If Digg does not become the place to offer such feature cross-pollination, others (anywhere from Facebook to delicious) may. Finally, if users are given the opportunity to use Digg to store bookmarks, in the way they currently use services like delicious, the ranking of links ‘dugg’ will bear a much greater relationship to their utility than at present – akin to the difference between asking people what products they like and monitoring the ones they actually purchase (minus the confound of economic scarcity) – and the number of novel Digg submissions will rise, by virtue of the increased volume of postings to the site.

Disappearing Future

After re-listening to many of the excellent podcasts from 2005’s Accelerating Change conference, available from IT Conversations, I got a hankering to read Charlie Stross’s highly recommended, and Hugo award nominated, post-singularity novel Accelerando. The book is available to download under a Creative Commons license. Or rather, the book was available for download. The site hosting it is down, and although the site itself can be accessed for now via Google’s cache, the PDF of Stross’s novel is unavailable. So too is the site which originally seeded the novel’s torrent, and the torrent itself. Cue wailing and gnashing of teeth re: the unsustainability of torrents.

Bittorrent, a protocol which provides an excellent method of ‘appropriating’ the latest episode of Lost, sans advertisements, direct from the USA, is rather unsuited to maintaining the availability of media on the long tail. A naive, non-programmer’s explanation of why this is the case follows… For a file to be available to download via Bittorrent, at least one seeder must maintain availability of a complete copy, dynamically providing portions of the file to a potential downloading ‘swarm’. Additionally, for a file to be practically quick to download, pieces of it must be available from a wide range of sources (so that individual clients can trade them directly, greatly accelerating the process), and the file must be listed on a Bittorrent tracker server, which brokers communications between clients, and between clients and seeders.
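The fragility is easy to quantify. Torrent clients report ‘availability’: the number of complete distributed copies in the swarm, i.e. the minimum, across all pieces, of how many peers hold each piece. Once any single piece has zero holders and no seeder returns, the file is unrecoverable no matter how many partial peers remain. A toy illustration:

```python
def availability(piece_count, peers):
    # peers: list of sets of piece indices each peer currently holds
    counts = [sum(1 for peer in peers if i in peer) for i in range(piece_count)]
    return min(counts) if counts else 0

# A 4-piece file held only by two partial peers: piece 3 is gone,
# so the swarm can never reassemble the file.
swarm = [{0, 1}, {1, 2}]
print(availability(4, swarm))  # 0

# One seeder with every piece keeps availability at 1.
print(availability(4, [{0, 1, 2, 3}]))  # 1
```

Popular releases keep availability high through sheer swarm size; long-tail works depend on a lone seeder staying online.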

Dispersed hosting is both a weakness and a strength of Bittorrent as a distribution medium. Say what you will about the printing press: it takes far longer for paper-based novels to disappear completely than for their digital equivalents to become network-isolated, or unreadable due to the march of incompatibility.

There’s a lot of buzz right now about building Bittorrent (or torrent like) functionality into consumer devices, set top boxes and the like; and little awareness of the bandwidth costs that such distribution transfers to the end user.

There have been a variety of attempts to establish an open directory of Creative Commons works, but as of right now no exhaustive list exists, and existing search methodologies are ineffectual. This is not a criticism of CC per se, which I find both useful and commendable, both as a creator (almost without exception, everything on this site is made available under a Creative Commons license) and as an ethical (sic) user, but rather of the assumption that the internet automagically provides publishing methodologies equivalent or superior to those of traditional media.

Right now, as far as I can tell, it is essentially impossible to find a (PDF) copy of Accelerando online; as far as the internet is concerned, the novel no longer exists. Similarly, the archive of episodes of Technolotics will effectively disappear forever into the ether if I ever fail to pay a hosting bill (already rather overdue, I’m afraid).

Update: After some further searching, I did manage to find a lone floating copy – download here – of Accelerando, which neatly solved my immediate problem. Astute readers will note that this doesn’t invalidate my original point. To ensure the novel’s continuing availability (I’m going to go out on a limb here and assume the original host’s servers have been consumed by some sort of singularity), I’m hosting the file myself. Download link, and copyright notice, after the break.

Download: Accelerando – by Charlie Stross.

This work is Copyright © Charles Stross, 2005.

The text of this novel is made available, with the kind consent of the publishers, under the terms of the Creative Commons deed, Attribution-NonCommercial-NoDerivs 2.5. You are free to copy, distribute, display, and perform the work under the following conditions: Attribution. You must attribute the work in the manner specified by the author or licensor.

Noncommercial. You may not use this work for commercial purposes.

No Derivative Works. You may not alter, transform, or build upon this work.

For any reuse or distribution, you must make clear to others the license terms of this work. If you are in doubt about any proposed reuse, you should contact the author via:


The Unavoidable Future of Entertainment


Anyone interested in the future of television – or, more accurately, the post-televisual future of web-distributed original video content – would do well to check out Channel 101. We’ve reviewed the site before on Technolotics, but I think it’s worthy of a more in-depth look, as it seems to be currently flying under the radar.

While sites like YouTube and Guba may or may not have a future primarily as redistributors of broadcast content, they’ve done little to foster the creation of original work. In fact, by restricting the length and size of files which can be uploaded (ostensibly to reduce copyright infringement), YouTube have diminished their chances of becoming a hotbed of original content. Google Video, although bravely eschewing any restrictions on the length of uploaded content (whilst foolishly restricting video quality to an extremely low bit rate), does little to foster the community creation or pooling of talent needed to inspire the development of original shows and films. Note, it’s far from clear that it was ever Google’s intention to become a generator of new IP, so Google Video shouldn’t necessarily be seen as a failure – however, judging from the inability of Orkut to develop beyond a cookie-cutter (and rather primitive) social network, it’s not certain that Google ‘gets’ Web 2.0.
Finally, Apple’s iTunes store, whilst encouraging the success of individual (usually pre-existing) IPTV shows like Diggnation, TWIT, and Tikibar TV, has seen, and will continue to see, itself primarily as a marketplace for network television in portable (DRM’d MPEG-4) formats.

To Channel 101. What is it, and what makes it important? 101 is far from your everyday nascent net-TV channel. Despite the name, the site is primarily the web distribution element of a monthly LA film festival, where participants primarily from LA (though the contest is open to anyone), of greatly varying talent and experience, submit brief (5-minute max) pilots, the best of which are selected to compete by the site’s founders, and subsequently killed or given life as returning shows according to the whims of a live audience. Audience-selected pilots are then titled ‘Prime Time’ shows, and return to compete again. Here’s how the site’s creators explain the process. The system is entirely democratic, as even initially rejected pilots can be submitted to the festival (with the likelihood of audience derision), should the creator choose to call ‘a Chauncey’. The clincher is, all videos that make it to the festival are subsequently made available to download, with full RSS support – so unpopular pilots and cancelled shows can have a second life as downloadable hits.

It’s not so much that Channel 101’s shows are good – as with broadcast television, the majority are unwatchable – but the format provides an incentive, in terms of exposure, creative cross-pollination, and the excitement and pressure of a live event, for the creation of top-quality shows; and top quality is the only description for some of Channel 101’s most successful offerings. Shows like the hilariously deadpan, and subliminally confrontational, ‘House of Cosbys‘, or the disturbing and original fusion of CG animation and pantomime that makes up ‘Twigger’s Holiday‘, are original and brave in a way that network television (either side of the Atlantic) hasn’t been since David Lynch’s ‘Twin Peaks‘ back in 1991. The channel has attracted some major talent (see below), and helped foster a couple of interesting careers – Rob Schrab, the writer and star of ‘Twigger’s Holiday’ (and a co-founder of Channel 101), has gone on to co-write ‘Monster House‘, the latest Robert Zemeckis-produced potential summer blockbuster.

I’m not suggesting Channel 101 itself will ever be a major player in internet video creation (although I wouldn’t rule such success out), or that its model will become typical, but it certainly represents one methodology with creative, and perhaps as importantly, financial potential. The site currently subsists on merchandise and DVD sales, but could easily be modified to a subscription-first model (à la the delayed public release of Revision3’s Diggnation), or to include advertising. My hunch is that something like 101 is big enough to keep its creative momentum going, while in the long run vidcast / vodcast / IPTV shows without an affiliation will disappear due to viewer / creator apathy, and the low signal-to-noise ratio of casual YouTube and Google Video style services.

Channel 101 Points of Note:

1) Intersects with real world creative and audience community
2) Each show is an actual vidcast – with RSS feed and iTunes listing
3) 100% original IP
4) Regularly scheduled content updates
5) Not Web 2.0 – Not a community driven web presence in the traditional sense
6) Blurs the line between short films, and regular series
7) Shows are freely distributed – but not under a creative commons or similar licence.

Channel 101 Highlights:

* House of Cosbys (hosted offsite since Cosby lawsuit)
* Chad Vader
* Twigger’s Holiday

Celebrity Appearances:

* Drew Carey
* Jack Black
* Sarah Silverman
* Jimmy Kimmel (I hear he’s sleeping with someone talented)
* Cute blonde from Scrubs

Notable Creative Geniuses

* Rob Schrab
* Justin Roiland

The Need For Feed

Techcrunch has an article up on the state of online feed readers, which I think is as interesting for what it lacks as for what it includes. None of the feed readers reviewed seems to have feed-grazer functionality. That is to say, while most will import and export OPML, none allow the direct surfing of publicly available OPML feeds (with inclusions). Each web-based feed reader seems, to a greater or lesser extent, to be attempting to create a proprietary RSS walled garden.

Techcrunch have a nice little graphic table indicating the capacities of the existing web-based services. Let’s see if I can go one better, and look at the capabilities of future methods of RSS aggregation. There are several potential methods of aggregating RSS content, and I’ve tried to consider them all. Open up the screenshot below – apologies for the size, but it should just fit in a Firefox tab at 1024×768. Take a gander, then continue below.


Welcome back. Astute readers will notice that what I’ve described as a feed syndicator does not yet exist. It would contain elements of an online OPML editor à la OPML Manager or OPML Editor; elements of social bookmarking like delicious; media support like a podcatcher; and could optionally include social networking or even P2P elements (but that’s for another day).
The important part is that, as well as providing an additional social navigation paradigm – which could (depending on implementation) make possible the navigation and summation of many more RSS feeds than is currently practical, and remove the need for separate podcatcher applications (at least for the 80% of us who are not transferring content to portable devices) – such a model would break down the walled gardens created by current RSS aggregation models.

In the feed syndicator model, the aggregation is two-way, with user- or service-hosted, user-modifiable OPML feeds providing the basis for both live aggregation and syndication. With countless potential methods of collecting and navigating feeds (check out Rowen Nairn’s OPOD for the first steps toward one), there’s room for many such feed syndicators, whether at the browser, extension or web level, all interoperable via RSS and OPML.
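The grazing half of such a syndicator is simple to prototype: walk an OPML outline, collect feed URLs, and note inclusion nodes pointing at further OPML files (which a real grazer would fetch and recurse into). A sketch, using the common `xmlUrl` attribute and a `type="include"` convention for inclusions; the sample URLs are illustrative:

```python
import xml.etree.ElementTree as ET

def graze(opml_text):
    feeds, inclusions = [], []
    for node in ET.fromstring(opml_text).iter("outline"):
        if node.get("xmlUrl"):
            feeds.append(node.get("xmlUrl"))
        elif node.get("type") == "include":
            inclusions.append(node.get("url"))  # another OPML to graze
    return feeds, inclusions

sample = """<opml version="1.1"><body>
  <outline text="Tech">
    <outline text="Boing Boing" type="rss"
             xmlUrl="http://boingboing.net/rss.xml"/>
  </outline>
  <outline text="More" type="include" url="http://example.com/more.opml"/>
</body></opml>"""

print(graze(sample))
```

Because the outline itself is just XML at a URL, anyone can host, edit and republish one – which is exactly what breaks the walled garden.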

Link: Previous post on the future of the browser.

delicious OPML


Wouldn’t it be cool if you could access an OPML of your delicious tags? This would let you navigate feeds not as lists of links, but as tag-defined outlines. Danny Ayers has already created a neat mashup, using a delicious-to-OPML XSL stylesheet and the W3C XSLT parser, to create reading lists. But this only hints at the flexibility which would come from OPML navigation of user tag clouds.


Update: Dan at Yabfog has done this at a local level, which proves its practicability; all that remains is for delicious or a third party to provide this functionality as a live service.


More: EirePreneur points out another method, making a static dump through OPML Utils. Right now it just creates a flat list of links, but a new version due soon intends to outline by tag.
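The transform all of these mashups perform is tiny: group bookmarks by tag, then emit one outline node per tag with one child per link. A sketch with an invented input shape (delicious’s actual export format differs):

```python
import xml.etree.ElementTree as ET

def tags_to_opml(bookmarks):
    # bookmarks: list of (title, url, [tags]) tuples (hypothetical shape)
    opml = ET.Element("opml", version="1.1")
    body = ET.SubElement(opml, "body")
    tag_nodes = {}
    for title, url, tags in bookmarks:
        for tag in tags:
            if tag not in tag_nodes:
                tag_nodes[tag] = ET.SubElement(body, "outline", text=tag)
            ET.SubElement(tag_nodes[tag], "outline",
                          text=title, type="link", url=url)
    return ET.tostring(opml, encoding="unicode")

print(tags_to_opml([("Accelerando", "http://example.com/a", ["books", "scifi"])]))
```

A bookmark with two tags appears under both outline nodes – which is precisely the multi-path navigation a tag cloud gives you and a flat folder tree doesn’t.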

A Novel Paradigm for the Web


I think I just got it. For the past few weeks I’ve been puzzling over what the OPML, RSS, AJAX alphabet soup will ultimately end up tasting like.

I’ve intuited for a long time that the whole gestalt is far more significant than most programmers or technology commentators realise; and of far more ultimate utility than as a succinct method of information categorisation. I now realise, OPML (or an OPML like outliner standard in XML) underlies the future of both the browser and the web.

Firefox 3, or its equivalent, won’t function primarily as a traditional link / URL -> page display browser; rather, users will navigate through outline directory trees to reach their ultimate content destination – which may be any of a whole variety of open document types, inclusive of audio, video, and traditional text / graphic / interaction models.
Nodes will be linked dynamically, and updated at numerous trusted hubs (tomorrow’s equivalents of delicious). Such links will create sub-webs, navigable and discoverable through reputation systems, tagging and recommendations.
Further, users will not merely navigate such OPML trees laterally, but through any of a whole variety of interface paradigms.

Where today each link on a site sits in relative isolation, the browser of tomorrow will aggregate all links on a given page in real time, constructing and meaningfully ‘geographically’ categorising link feeds, which will provide both an additional outliner navigation layer and a new means of scanning the content laid out within a document. This will be the hardest element to get right, as it departs most radically from how the web works today. My guess is that the ultimate solution will be something like Newsvine, dynamically constructed, parsed through link, feed, and generator templates (e.g.: blogging engine, CMS) from any given page, site or outliner – both in real time by the browser, and by next-generation sitemaps (in reality linkmaps). Think Google News, for every site on the web (and its linked sub-pages and sites).
Today’s feed grazers could be the templates for tomorrow’s browsers. Such browsing paradigms may finally provide an advantage for three-dimensional interfaces – though my guess is two dimensions will remain more comprehensible and intuitive.

A few more interface ideas before I lay down the crystal ball. Pre-cached feed branches displayed as graphical document previews in a mouse-over ‘mind map’. A home feed bucket which rises from the browser bottom to catch feeds, pages and documents dragged and dropped (think OS X’s dock, with icons representing not programs, but outlines in your home OPML). Or how about a dynamically generated zooming interface like Jef Raskin’s Archy project.

The best part is, such novel methods of navigation could be implemented today in AJAX as a proof of concept, sitting on top of the web as a hotkeyed interface – which is arguably what the Flock guys are positioning themselves to do – but ultimately such technologies are unlikely to be fast enough to produce a robust solution.

RSS, OPML and Feed Grazing

Grazr 1

Inspired by Tom Raftery’s recent interview with EirePreneur’s James Corbett at the Irish Blog Awards, I’ve been messing around with OPML this evening. OPML is an ‘XML format for outlines‘ – in layman’s terms, a sort of meta-feed, allowing the consolidation of URIs and RSS feeds.
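For the unfamiliar, an OPML file is just a nested outline of attributed nodes. A minimal example with one folder and two feeds (the URLs are illustrative, not real feeds):

```xml
<opml version="1.1">
  <head><title>My feeds</title></head>
  <body>
    <outline text="Irish blogs">
      <outline text="EirePreneur" type="rss"
               xmlUrl="http://example.com/eirepreneur.xml"/>
      <outline text="Technolotics" type="rss"
               xmlUrl="http://example.com/technolotics.xml"/>
    </outline>
  </body>
</opml>
```

Nothing more exotic than that: folders are outlines containing outlines, and feeds are leaf outlines carrying an `xmlUrl`.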

As we all gradually transition from getting our news and information via a series of site visits to subscribing to tailored feeds of postings, podcasts, vidcasts and media streams, methods of rapidly, accurately, and inclusively navigating the morass of information will become increasingly important.
Already I’m finding it difficult to track my feeds through a unified single-window web service. There are lots of alternatives: Netvibes and Pageflakes will let you keep a live front page of headlines – but Pageflakes can’t yet browse deeper into the feed, and Netvibes takes up too much space displaying headlines to allow more than a dozen feeds to be easily tracked.
Bloglines allows you to create a publicly accessible page listing all your feeds (check), and lets you easily keep track of numerous feeds (check) – but won’t display linked stories or enclosure clips within the ‘frame’ of its interface. No (online) service yet seems to be everything I’m looking for: essentially a less ugly version of Feed Show, which lets me offer a public front end, eats its feed live from my own OPML XML, and can display audio and video content (ideally with live conversion to Flash) – why should I have to log in just to read my feeds, and shouldn’t I also be able to easily present a link to them on my website?

Nonetheless, aggregating feeds has gotten easier. With a service like OPML Manager, you can (for the moment painstakingly) create an OPML feed containing all your RSS and URL links, ready to be thrown into the feed grazers which are almost ready for prime time.

Thanks to OPML Manager, and the insanely cool OPOD JavaScript OPML viewer widget, you can now view my OPML feeds live on this site (see sidebar – below Digicasts) [Link via EirePreneur!].