Joost not good enough?


The Venice Project, a mysterious beta application from Kazaa and Skype creators Niklas Zennström and Janus Friis, has been variously hailed as the future of television, and as the application that may finally bring the internet’s aging last-mile bandwidth to its knees.

Renamed ‘Joost’ in mid-January, The Venice Project is an IPTV application. That is, a program designed to carry high resolution video directly to consumers via the internet, rather than through satellite, cable or terrestrial broadcasts.

On the technical end, Joost uses both UDP (to stream video directly to viewers) and TCP (to share shows between users), creating a hybrid peer-to-peer streaming network carrying MPEG-4 H.264 video, on demand and (currently) free.
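The distinction matters: UDP’s fire-and-forget datagrams suit live streaming, where a retransmitted frame arrives too late to be useful, while TCP’s reliable, ordered byte stream suits swapping stored show chunks between peers. Here’s a minimal loopback sketch of the two transports (illustrative only; Joost’s actual wire protocol is unpublished):

```python
import socket
import threading

def udp_demo(payload: bytes) -> bytes:
    """Send one datagram over loopback: no handshake, no retransmission."""
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))            # OS picks a free port
    addr = recv.getsockname()
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send.sendto(payload, addr)             # fire and forget
    data, _ = recv.recvfrom(4096)
    send.close()
    recv.close()
    return data

def tcp_demo(payload: bytes) -> bytes:
    """Send the same bytes over a TCP connection: handshake, then reliable delivery."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    addr = srv.getsockname()
    result = {}
    def serve():
        conn, _ = srv.accept()
        result["data"] = conn.recv(4096)
        conn.close()
    t = threading.Thread(target=serve)
    t.start()
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(addr)                      # three-way handshake
    cli.sendall(payload)
    cli.close()
    t.join()
    srv.close()
    return result["data"]

if __name__ == "__main__":
    print(udp_demo(b"video frame"))
    print(tcp_demo(b"show chunk"))
```

On the real internet (unlike loopback) the UDP datagram may simply vanish, which is exactly the trade-off a live video stream accepts.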

While Joost does live up to its promise to deliver full screen, uninterrupted streaming video at a watchable quality, a variety of potentially insurmountable challenges stand between the company and its goal of subverting broadcast television.
1. No Premium Content

Whilst Joost’s founders may ultimately aim to provide licensed ‘A Plus’ broadcast content, current offerings are thin indeed, and there is little to suggest that the company has the industry connections or expertise necessary to lure rights holders into making such content available.

“Watch for sci-fi shows, rock videos, sports, comedy — anything with a testosterone angle. Deals are in the works with the three music majors, plus top US broadcasters and cable channels. For the rest of the world, there’s a modified PBS strategy: classic reruns, documentaries, and independent dramas.”

Wired Magazine

While there’s clearly a market for cheap, low quality television, creating a new distribution channel for low value content is unlikely to bring about the paradigm shift in multi-hour-a-day TV watching articulated by Joost’s founders in the latest issue of Wired Magazine.

As Zennström points out:

“You have to put together a whole consumer offering, a great instantaneous experience. A simple service that fills an obvious need and can be offered for free.”

Right now there’s no compelling reason to think that Joost can deliver on the ‘obvious need’ element of such aspirations. Kazaa and Skype were technological and infrastructural achievements, not content deals.

2. No User Content

It may be too early to tell, but early indications are that Joost, like the iPhone, will be a closed platform. Although the developers, unlike Mr. Jobs, welcome independent extension development, it appears they will avoid distributing user-uploaded content entirely.

With online-only content and content production companies, from the Revision3, TWiT and Podshow networks to Channel 101, growing in quality and diversity all the time, this has become the golden age of high quality, short format, web friendly media. At the other end of the scale, YouTube’s survival in the wake of massive takedown notices and competition from far more piracy-friendly alternatives signals a huge, unanticipated interest in the lower end of ‘user’ generated media: from inventive indie band videos, to postmodern soap operas, to emerging comedians. According to Wired, the founders have no desire to tap this particular wellspring of talent.

“Content that few people want to see — what Leiden engineers call “the too-long tail” — crimps a P2P network’s advantage.”

Whether you accept the sincerity of this reasoning or not (in fact, shows like ‘Diggnation’ and ‘This Week in Tech’ regularly beat the viewership of much cable television), if Joost’s founders follow their stated plan, they will fail to do for syndicated internet video what Apple managed for the podcast, and in the process potentially waste their biggest advantage over ‘content providers’ like Apple, Microsoft and the major vertically integrated media corporations: rich, freely generated, ‘sticky’ media.

3. Computers don’t deliver the TV experience

In contrast to active clip grazing or movie watching, computers are ill suited to the casual, background parsing of TV. There’s a missing piece of the IPTV puzzle that Joost cannot, in its current form, solve: a link between the ever-expanding, ever-thinning television and the computer. The key here is that connectivity must be bidirectional. It’s no use connecting your laptop to your plasma via a composite cable (and in the process distracting your computer from any useful task) if changing channels necessitates a return to the keyboard. Technical solutions to this problem abound these days, with devices from companies like Sling Media and Cisco prepared to carry high resolution audio and video to next generation HDTVs. These are, however, likely to remain niche products compared to the omnipresence of media centers like Microsoft’s Xbox 360 and the Apple TV, both of which offer the potential to bring pay (and pay-per-view) programming to the television, in the form of all important big media licensed content. Arguably these players are in a far better position to ‘migrate broadcast television’s mass audience to the Web’.

4. User Bandwidth

With grave doubts proliferating as to the ability of the internet, as it now exists, to manage the load created by the growth of streaming video, and considering the effective and explicit bandwidth limits placed by ISPs on their customers, it is by no means obvious that Joost can succeed in the mass market.

According to Joost’s detailed FAQ (beta customers only), in one hour’s use of the service “approximately 320Mb data will be downloaded and 105Mb uploaded”.

In Ireland, the broadband poor man of Europe, common download and upload ‘allowances’ start at around 10 GB download / 1 GB upload per month. That’s less than 10 possible hours of Joost watching, hardly enough for the program to replace broadcast television. Whilst this represents the worst peak of corporate bilking (leading to one of the worst broadband take-up rates in the EU), bandwidth limits are an uncomfortable reality throughout Europe, and daytime and application-targeted bandwidth throttling is common in the US and UK.
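A quick back-of-the-envelope check, using Joost’s own FAQ figures and a typical entry-level Irish allowance, shows it’s actually the upload cap that bites first:

```python
# Joost's FAQ figures: ~320 MB down and ~105 MB up per viewing hour.
DOWN_PER_HOUR_MB = 320
UP_PER_HOUR_MB = 105

# Typical entry-level Irish allowance: 10 GB down / 1 GB up per month.
cap_down_mb = 10 * 1024
cap_up_mb = 1 * 1024

hours_by_download = cap_down_mb / DOWN_PER_HOUR_MB   # 32 hours
hours_by_upload = cap_up_mb / UP_PER_HOUR_MB         # under 10 hours

# The smaller of the two is the real monthly viewing budget.
monthly_hours = min(hours_by_download, hours_by_upload)
print(round(monthly_hours, 1))  # → 9.8
```

In other words, the P2P upload contribution, not the download itself, is what exhausts a capped Irish connection.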

5. Too Complicated

The Joost interface itself is a model of parsimony and slickness. It is, however, neither familiar nor obvious. Control of the program is via a video recorder / PVR interface metaphor, but in its current form it is more complex and ‘mystery meat’ than either. Elements of the interface, like the channel library used to manage channel subscriptions, and the ‘mychannel’ link used to open specific program items, remain unintuitive. A higher level interface design may prove necessary to ease the public into a world of fullscreen online television.

6. Advertising Specificity

Whilst Joost will offer advertisers a variety of viewer demographics (geographic, temporal, and viewing profile), this is not the kind of information which is necessarily most valuable to mass market advertisers. Whilst broadcast television ratings provided by companies like Nielsen are deduced from representative samples rather than IP headcounts, they provide rich demographic information, including income, employment and interests, for each data point. The difference, crudely put, is between ‘what kind of person’ and ‘what time and place’. Additionally, while Joost may allow advertisers to know which programming is viewed when, and which ads are ‘flicked’, this is a capability shared by existing PVRs, and one Zennström and Friis may well choose to leave disabled.

7. Visual Quality

Thanks to James Corbett, I’ve had a chance to sample the Venice Project over the past few weeks, and I can report that, under XP at least, the visual quality is not up to the DVD standard claimed. Whilst the resolution may be as high, some channels have a washed out, muddy look more often associated with Flash video.

Check out a screen shot here (NOTE: this is a 2 meg uncompressed BMP file!): a static camera, full resolution shot from the Green Day documentary ‘Bullet in a Bible’. You’ll notice it’s much lower quality than the reference images provided by Joost (check out a second screen grab here, from a different video, exhibiting higher quality). While this may be a function of the low end laptop I’m using, it’s a noticeable reduction in quality compared to watching a DVD or other H.264 content on the same machine.

For comparison purposes, here’s another screenshot: a frame from an Apple trailer played fullscreen in 480p, roughly equivalent to DVD resolution (again, 2 meg!). Perhaps image quality increases on a higher speed connection? Our home machine is connected to Digiweb DSL XTRA (a theoretical 3 meg down, 384k up), about two miles from our exchange.

In summary, picture quality is certainly more impressive than rival streaming web formats, but not quite up to DVD. Does this matter? That depends a lot on point three. Sitting two feet from the screen it’s OK, but I wouldn’t want to see a movie this way.

8. Not a Magic Bullet

Much like Zennström and Friis’s previous projects, Joost is not a radical advance in the state of the art, but rather a more reliable implementation. The technology to distribute high quality video via IP already exists, but is currently poorly implemented. What governs future uptake in this area is likely a mixture of legal (IP and regional distribution restrictions), economic (distribution costs), and psychological (ownership preference, tolerance of DRM) issues, rather than a best of breed technological race.

All in all, there are many things to like about Joost. Its interface, while likely too complex for casual users, is efficient and geek friendly; its expandability, image quality, almost instantaneous channel streaming, and the promise of ads as short as ‘one minute per hour’ are all commendable. However, its ability to compete against free, burnable, downloadable content, and against branded services with a hook to the television, remains to be seen. There do exist, however, at least two scenarios which could spell huge success for the upstart company.

1. Pay per view

If Joost can snag studio support, their service could provide a terrific (if lower resolution) alternative to ‘legal’ movie downloads, which, if marketed correctly (read: rented cheaply), could create a whole new market for lower resolution video, or cannibalize existing DVD sales.

2. Selling Out

The rapid, skipless streaming technology and piracy resistant (sic) encryption behind Joost could make it an attractive purchase for a worried major TV network, a suffering media giant, or a net enabled set top box maker. Perhaps licenses to à la carte, device or service tailored versions of the software, and the backend network which supports it, could be sold to multiple providers?

In the absence of either circumstance, Joost may well end up the Betamax of streaming video.

Why eMusic doesn’t suck


I try to stay in touch with the quirky world of American indie music via the excellent Brooklyn Vegan blog. Normally its dishes of alt-country, New Weird America, post-punk and other musical gumbo are served just the way I like them. However, a recent post has left a bad taste in my mouth.

Brooklyn Vegan links to an article on Axehole (another music blog) entitled “Why Bloggers Don’t Run Record Companies“. The article is itself a response to the excitement surrounding the announcement that eMusic (a DRM free digital music store) have reached 100 million songs sold, and suggests that this figure is irrelevant next to the awe and majesty of music sales through the iTunes music store.

I feel it’s important to tackle many of the points made in this article, because they represent the same misconceptions that are held around online music by the mainstream media. Before I begin, I’d like to point out that I have no affiliation with eMusic.

Axehole’s main points (and my rebuttals) are as follows:

1. Most music fans don’t care about DRM – only Linux geeks do.

DRM is not currently a political issue, because consumers aren’t yet banging their heads against its limitations. As Joe Sixpack migrates to his new MP3 player (e.g. a Zune), which fails to play tracks from his old online marketplace (e.g. the iTunes store / PlaysForSure), it becomes a problem. The apparent consumer acceptance of DRM is primarily based on ignorance. With even Bill Gates this week admitting that DRM is essentially borked and recommending people rip CDs instead, and the major labels and movie studios attempting to reduce end user privileges and increase prices in their renegotiations with Apple, DRM is appearing increasingly cumbersome.

2. eMusic doesn’t really sell songs, they sell subscriptions

Only they don’t, because unlike every major label backed subscription service, if you stop subscribing to eMusic, you keep your music.

3. Labels aren’t going to get rich from eMusic

The majors are only important for their back catalogs and their ability to promote artists. Artists get a smaller cut from iTunes sales than from CD sales, so it’s in their interest to divest themselves from the majors, unless they’ve grown large enough to renegotiate their cut. Indeed, much of Canada’s recording artist community has just severed its ties to the international recording industry. Of course artists still need promotion, but that’s creating market pressure toward the development of leaner, ‘middleware’ style deal brokers, focused on the provision of services (from recording, to promotion, to tour booking) rather than the acquisition of a stable of artists.

4. eMusic’s sales are less than 1% Apple’s

Just as iTunes downloads dwarf eMusic downloads, they are themselves dwarfed by P2P downloads (impossible to calculate specifically, but with 10 million concurrent connections to P2P networks, discounting Bittorrent, likely enormous), and *drumroll* CD album sales. True CD album sales figures are difficult to estimate, as the recording industry counts units shipped rather than units sold, and releases only a limited amount of sales data to the press. Nonetheless, according to US data from the RIAA’s own figures [PDF], CD albums made up 87% of consumer music purchases in 2005 (pretty constant as far back as 2000), compared to 5.7% for digital music sales. In terms of units shipped [PDF], CD albums (705 million units, at a 10.5 billion dollar value) still outsell digital singles (367 million units, at a 363 million dollar value, or 498 million dollars including digital albums) almost two to one; at approximately 14 songs per album, that’s roughly 27 times more songs sold on CD album than digitally.
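It’s worth redoing the arithmetic on those cited shipment figures; by my count the songs-shipped ratio comes out at roughly 27 to 1:

```python
# Units-shipped comparison from the RIAA figures cited above (2005, US).
cd_albums_shipped = 705e6        # CD album units
digital_singles_shipped = 367e6  # digital single units
songs_per_album = 14             # rough average used in the text

cd_songs = cd_albums_shipped * songs_per_album
ratio = cd_songs / digital_singles_shipped
print(round(ratio))  # → 27
```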

Conclusion: the market for digital music is far from mature, and has not yet replaced the high street. It’s perfectly possible that the majors could in future cripple iTunes with wireless music distribution points in shops, vending songs to Zune 2s and 6th generation Wi-Fi iPods. Alternately, simple, open, DRM free music stores run by smaller labels and artists themselves may overtake the iTunes model, which owes much of its success to the ubiquity of the iPod, and the iPod certainly won’t remain in vogue forever.

5. eMusic’s traffic has flatlined over the last year
(with attendant Alexa stats)

Alexa rankings are a joke; no one clued in enough to be an opinion leader runs the Alexa toolbar on which the rankings are based. Alexa is both easily gamed and subject to systematic biases. And since both Apple and eMusic primarily sell their songs through client software, their popularity cannot be tracked by Alexa’s antique metrics anyway.

According to Forrester Research, iTunes sales growth is slowing. Forrester are currently trying to distance themselves from that report’s implications, but the figures still stand. Here’s the diamond quote: “The ability to obtain pirated music is now so widespread the DRM looks to consumers more like a problem than a benefit.”

6. Musicians aren’t going to get rich at eMusic

This is a direct result of broad, flat music sales along the long tail, and is one of the reasons music is increasingly a service rather than a product. It’s not really possible to make money from CD sales as an artist unless you have a major following; if you do, your fans will buy your music from wherever it’s available, which will like as not be your website or concerts, or direct from your band to iTunes / eMusic etc.

Let me conclude by quoting the Register on Forrester Research’s iTunes figures again:

“Some 3.2 per cent of online households (around 60 per cent of the wider population) bought at least one download, and these dabblers made on average 5.6 transactions, with the median household making just three a year. The median transaction was slightly under $3.

No one in music gets rich from a stampede of interest like this.”

Gmail receives email from other accounts


Google have just begun rolling out a terrific feature which allows users to grab email from other accounts (work, Yahoo etc.) via POP3. This could be a godsend for users glued to horrible proprietary corporate email accounts with ineffective spam filters, or for anyone tired of multiple simultaneous email logins who, for whatever reason (multiple desktops, mobile access etc.), needs to use web email rather than a stand alone client. Combined with Gmail’s existing ‘Send mail as’ option, this allows Gmail to be used as your central email account.
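For the curious, the mechanics of POP3 retrieval are simple enough to sketch with Python’s standard poplib module. This is a hypothetical illustration of what any POP3 fetcher (Gmail’s included) broadly does: the host and credentials are placeholders, and Gmail’s actual implementation is of course not public.

```python
import poplib
from email.parser import BytesParser
from email.message import Message

def parse_retr_lines(lines: list) -> Message:
    """Join the list of byte lines that poplib's retr() returns into a parsed message."""
    return BytesParser().parsebytes(b"\r\n".join(lines))

def fetch_all(host: str, user: str, password: str) -> list:
    """Connect over SSL, authenticate, and pull down every waiting message."""
    conn = poplib.POP3_SSL(host)          # POP3 over SSL, port 995 by default
    try:
        conn.user(user)
        conn.pass_(password)
        count, _mbox_size = conn.stat()   # number of waiting messages
        return [parse_retr_lines(conn.retr(i)[1]) for i in range(1, count + 1)]
    finally:
        conn.quit()

# Usage (placeholder credentials, would need a real POP3 server):
# messages = fetch_all("pop.example.com", "user@example.com", "hunter2")
```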

To access the feature, log into your Gmail, click ‘Settings’, and open the ‘Accounts’ tab. The feature hasn’t yet reached my account, so don’t be too surprised if you don’t see it on yours.

Via: Techcrunch.

Addendum: Normally I wouldn’t repost a story from such a widely read source, but I actually received news of this in an email and had the post written before I did a citation search, so what the hey!

TCD email users may still forward their email, ridding themselves of the horrible kludge of Trinity email altogether.

Data migration on the web as platform


Discussing online communities today with one of Trinity FM’s up and coming editors, the problem of data migration came up. Web 2.0 services are fantastic, but what happens when we want to leave their walled gardens? As it stands, there exists no feasible way of, say, carrying an identity from Bebo to MySpace, complete with user information, photographs and, more importantly, ‘friends’ (correct me if I’m wrong, but such a service would undoubtedly violate the TOS of one or both sites).

So far, so minor; a problem easily soluble, or at least survivable: on the user end through the duplication of accounts, on the social network provider end through competition. Just as Yahoo and Google trump other web based email services by allowing free forwarding and address book export (effectively providing a jumping off point for users, one I found convenient recently when switching from the increasing bloat of the new Yahoo mail to Gmail, privacy be damned), and just as websites which link to other sites gain links in return, so a truncated form of universal Darwinian selection will eventually cull social networks based on the criteria of connectivity and selectivity. To rephrase: only the most flexible networks, allowing (and encouraging) interconnectivity with other social networks, mashups, linking with mobile / cellphone accounts, and connecting users to their butcher, baker and candlestick maker, will survive in the long term as social networks become truly mainstream. [Thankfully the web currently lacks an implementation of the bandwidth protectionism (read: the absence of net neutrality) which has allowed Murdoch media to become an exclusive supplier of digital satellite television in Ireland and the UK. But don’t be surprised if, down the road, services like MySpace lobby to restrict access to ‘unpoliced’ social networks.] Think about it: who’d use a telephone which could only call one set of friends? By contrast, services which provide selective connection will also find an evolutionary niche; humans love elites, and expertise is quantitatively valuable.

However, this prediction doesn’t apply to services which gain their value from user generated data. Famously, Gracenote, the ubiquitous database powering music lookups from iTunes album cover provision on down, effectively stole the labour and data generated by its original user community, as user satisfaction became the marginal utility of all that juicy data. Similarly, services like Flixster, Last.fm and All Consuming, built around the value of user generated data, have little to gain from interoperability, and have no enterprise users to insist upon it. The open source community, though capable of replicating the functions of such services, lacks the drive to recreate them under a more open model; and these services are gradually becoming more ubiquitous and useful.

So, what happens next? Do we all gradually slip into shiny Ajax powered Web 2.0 sink wells, whilst waiting for the most popular social networks / web software providers to absorb them or adopt their functionality (the latter alternative fails to solve the problem, as it doesn’t get at the knot of preexisting data, of enormous value to individual users)? Does customer demand and ‘outrage’ ultimately trump economics, forcing companies to wear their most valuable assets on their sleeves? Such an outcome is constantly predicted as the fall of DRM, but there can be no Bittorrent equivalent for web service databases. The irony here is that the very limitations of the traditional software model, and the limits isolation imposed on the capability of such software, kept data in users’ hands and ownership.

One could argue that services which don’t allow access to ‘base data’, as Tim O’Reilly refers to it, aren’t Web 2.0 in the true sense of the term, but that’s irrelevant, as in the web-as-services model few companies fit all such criteria. Certainly one may export a document from Writely (now Google Docs), but try exporting all your documents at once. What happens when you’ve got a hundred, or, two years from now, a couple of thousand? What about your revision history? What happens when your office suite sits online, allowing collaboration as never before, but with value added services tied to your current provider? What happens when the web becomes the OS? As O’Reilly points out, ‘The race is on to own certain classes of core data’, and in this race users may be the ultimate losers.

The Next Bebo


The omnipresence of broadband, combined, at least in Trinity, with a paucity of computing resources, has resulted in a massive increase in student ownership of laptops, and with it a new student familiarity with the web; ditto the use of email as a communications tool. Five years ago you’d never have arranged a party with an email: you couldn’t have been sure any of your guests would check their messages that month, never mind the geeky stigma clinging to the medium. These days, you can post an event invitation on Bebo, or on rival social networks like Facebook, send a group email, and be guaranteed an RSVP by the end of the day.

As I mentioned in my previous Bebo article, social networks like Bebo can act as addiction paradigms. This was especially effective during Bebo’s recent burst in popularity, as the site lacked any search engine, meaning that users had to click through the ‘friends lists’ of their friends, or manually surf hundreds of pages of students from their school or college, in order to attain the ‘reward’ of finding a friend (or a cute stranger), effectively conditioning repeated clicks. In another way too, Bebo acts to condition its users: the ‘response cost’ of not checking the latest comments, hits, and ‘white board’ drawings added to one’s profile acts to weaken the inclination to spend time logged off. This explains why Bebo LLC has been so proactive in working to stop users receiving emails when they get ‘hilarious’ forwarded messages through Bebo’s internal mail system; each annoying message decreases the likelihood that users will bother to check next time. In the words of the godfather of behaviorism, B.F. Skinner: “behavior is followed by a consequence, and the nature of the consequence modifies the organism’s tendency to repeat the behavior in the future”.

From an evolutionary perspective, services like Bebo already provide a mechanism for the measurement of peer group status, mate seeking display, and the attainment of social regard. Teenagers love this stuff: they can count their hits, proudly display conversations and compliments which indicate their popularity (or the popularity / attractiveness of their friends) while deleting negative comments; and can list the interests and friendship allegiances which of course define them (sic).

As fun as this is, it will all become cliché rather quickly. We’ll soon see more distinctive and highly developed web presences start to emerge, with increased social connectivity, integrated video clips of nights out uploaded from mobile devices, games, and unified identities users can carry across sites. As it stands, social networks are difficult to monetize, and relatively unattractive to advertisers. The smart thing would be for mobile providers to produce such services, in open, cheap and omnipresent forms. They, however, are in love with the expensive, limited access, walled garden approach, with which they hope to pay off their vast investment in 3G licenses. So the next step is unlikely to come from them.

This week’s LA Times reports that a bill has been introduced to the US Congress seeking to ban social networking sites like Bebo from public libraries and school computer labs. Simultaneously, the FCC, always eager to expand its remit to the internet, has released a consumer alert about the dangers of social networks. These responses are reminiscent of the boy with his finger in the dike, and as likely to succeed. The nature of the web is changing, returning to the interactivity and content creation potential envisioned by Tim Berners-Lee: sites like Wikipedia, Flickr, and Creative Commons are handing the knowledge generation process back to users. Social networking sites (including Bebo) are but the least exciting examples of this trend, which is fast becoming ubiquitous. Tabloid panics be damned; best of luck banning social networking as every site moves to incorporate its fundamental elements.

Bebo, MySpace and the like are primitive second generation social networks, of trivial impact to a few students by comparison to the networks we’ll see developing in the near future. Bebo’s successors will make use of the omnipresence of the GSM mobile networks to ‘hook up’ users based on real world location. Already, services like ‘Meetro’ are bringing a knowledge of real world context to online messaging, letting users know about like minded individuals within a given proximity. Imagine this functionality paired with GSM triangulation and the location reporting built into 2G and 3G mobile networks, concepts already tested by Finland’s project CELLO. According to Demos, ‘location awareness’ will be a significant feature of future mobile phone services, and it’s easy to see why; the usefulness (and potential dangers) of automatic updates of the locations of friends and offspring are instantly apparent, never mind the numerous business opportunities for ‘what’s on nearby’ services.

If social networks like Bebo are raising hackles, imagine the tabloid hue and cry when parents realise their offspring are broadcasting their physical locations! In reality, such mobile social networks will use permissions to restrict location data to close friends and family; the real problems (aside from the obvious privacy implications) will be in the negotiation of location data between parents and their children.

It’s likely services will emerge using statistical analysis of proximity and messaging data to produce much more sophisticated friend of a friend networks. Imagine receiving a ‘gossip subscription’ message on your integrated Bebo / mobile service: ‘Jill and Jim have been spending a lot of time together this week…’. Lists of the most popular (read: interconnected) users, most likely couples, etc., will make social networks vastly more addictive and omnipresent than at present. Once location aware social network data becomes a common social currency, the ‘response cost’ of not being a member will rise accordingly.
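To make the idea concrete, here’s a toy sketch of the kind of analysis such a service might run: tally how often pairs of users are sighted in the same cell at the same hour. All names, cell IDs and sightings below are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Invented sample data: (user, cell_id, hour) sightings reported by the network.
sightings = [
    ("jill", "cell-42", 10), ("jim", "cell-42", 10),
    ("jill", "cell-42", 11), ("jim", "cell-42", 11),
    ("jill", "cell-7", 15), ("anna", "cell-7", 15),
]

def colocation_counts(sightings):
    """Count how often each pair of users appears in the same cell in the same hour."""
    by_slot = {}
    for user, cell, hour in sightings:
        by_slot.setdefault((cell, hour), set()).add(user)
    pairs = Counter()
    for users in by_slot.values():
        for pair in combinations(sorted(users), 2):
            pairs[pair] += 1
    return pairs

print(colocation_counts(sightings).most_common(1))  # → [(('jill', 'jim'), 2)]
```

A real deployment would obviously weight by time of day, filter out shared workplaces, and so on; the point is only that the raw material for ‘gossip subscriptions’ is a trivial aggregation away.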

D.U. Digicast Society


Starting Wednesday, we’re making a brave and foolhardy attempt at setting up a digicast society in Trinity. All are welcome to come to the inaugural meeting, and non-Trinity students are more than welcome to join, if and when the society gains approval from the college’s Central Society Committee. Here’s the spiel.

The proposed ‘D.U. Digicast Society’ would….

1) Provide students with an opportunity to learn how to produce podcasts, vidcasts, and blogs.

2) Work with other groups to distribute their creative work to a world wide audience.

3) Create an online forum, where visitors from around the world can hear, see, read and critique great original content.

Essentially the society would commission and host podcasts and vidcasts – providing members with the training and equipment necessary to enable them to both produce their ideas and bring them to a wider audience.

What’s in it for you? If you are interested in getting content online, we’d like to talk to you. It could be a recital of out of copyright music, a performance of original songs, a written article, or a documentary or feature film you have produced. The list is almost endless.
While we aim to produce our own content, we’re also very enthusiastic about opening up access to the publicity and distribution resources enabled by the internet.

Our initial meeting to gather input and interest in the society will be on Wednesday 3rd of May at 7.30pm in the Swift Lecture Theatre, in the Arts Block in Trinity College Dublin.

Open educational models


Someone needs to build a decent open source 3D brain model in Shockwave, VRML or a stand alone OpenGL application (not pseudo-3D QuickTime). As far as I can see, none exists. Believe me, when you’re trying to understand functional neuroanatomy, such a thing could not be more useful.
This is the sort of thing that educational software does better than books or lectures, greatly accelerating the practical comprehension of complex 3D systems. Educational institutions would do well to develop such software for engineering, medicine, and physics modelling.

Start-ups currently thinking about building yet another social bookmarking app or Firefox extension might find a market for an advertising sponsored (or paid institutional subscription) folksonomy application, which could allow lecturers to build three dimensional tours to accompany their lectures, or mail students links which would open specific interactive 3D representations of the models discussed.

The whole copyright in education thing really came home to me in first year, when one of our lecturers (a brilliant speaker) delivered our lecture notes in PDF form with all of the accompanying diagrams (most of them of complex brain regions) removed, making the notes literally useless for exam revision or essay writing. This is an area that needs to be urgently addressed, both legally (with intelligent copyright exemptions for educational use) and through the development of common open learning platforms.

Also, isn’t it obvious that universities should do their best to enable the creation of educational resources by their computer science, education and psychology departments? Not in a service provision role, but in a pragmatic, brain trust style solution development role: encouraging and financially facilitating interdepartmental projects where divergent expertise can be used to mutual benefit; a sort of intra-faculty open source movement.

Down the Tube


Didn’t notice this till now, and apparently Michael Arrington hadn’t noticed it either in his recent coverage of video sharing sites: YouTube has (presumably under pressure from copyright paranoids in Hollywood) limited videos to ten minutes in length.

Way to kill the future of your site, guys. Seriously, this is a moronic move – they are doing several very foolish things. The first is effectively bowing to Hollywood on the upload of ‘pirated’ material to their site, opening the door to further concessions and litigation down the road – think Napster. The second is discouraging the few users of the site (like Technolotics) who upload wholly original content, already constrained by the 100MB file limit. The third is positioning themselves as the MySpace of digital video clips (brash, unfriendly and money-grabbing) – a bad move, because two things differentiated YouTube from, say, eBaum’s World: its Flash transcoding, and its identity as a ‘credible’ community hub for original content, the Flickr of video.

But if, as Ars Technica implies, YouTube exists purely as a fly-by-night, hype-and-flog operation, its behaviour becomes a lot more comprehensible.

Update: Thanks to Dave of Dave’s Rants for pointing out that YouTube has launched a ‘Directors’ programme, allowing the upload of longer content, and a variety of other benefits, to those willing to sacrifice anonymity. It seems a very reasonable compromise, and though I’m no lawyer, the terms of the agreement seem fair too.

In other news, Rowan Nairn, creator of the world’s first (and best) feed grazr, ‘Opod‘, was kind enough to stand in for me on this week’s ‘Technolotics’. May I be the first to congratulate him on being a threateningly effective replacement.

Inspired by Aric McKeown’s hilarious ‘Make Me Watch TV‘, I’ve become addicted to watching the curmudgeonly House M.D., played with immense satisfaction by long-time Stephen Fry collaborator Hugh Laurie. House is the Chuck Palahniuk novel of TV drama: grotesque, charming, stylish, and utterly formulaic; perfect downtime television.

Also, can’t stop reading this poem.

Ugh, back to the grindstone. Or maybe watch a final episode of ‘House’.

The Need For Feed

TechCrunch has an article up on the state of online feed readers, which I think is as interesting for what it lacks as for what it includes. None of the feed readers reviewed seems to have feed grazr functionality. That is to say, while most will import and export OPML, none allows the direct surfing of publicly available OPML feeds (with inclusions). Each web-based feed reader seems, to a greater or lesser extent, to be attempting to create a proprietary RSS walled garden.
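To make the "grazing" idea concrete, here’s a rough sketch (in Python; function names and URLs are my own invention, not any real reader’s API) of what surfing an OPML reading list with inclusions involves: walk the outline, collect the RSS feeds, and recursively follow `type="include"` outlines that point at other OPML documents.

```python
# Sketch: "grazing" an OPML reading list by recursively resolving
# inclusion outlines (type="include"). fetch() stands in for an HTTP
# GET, so the example stays self-contained and offline.
import xml.etree.ElementTree as ET

def graze(opml_text, fetch, depth=0, max_depth=3):
    """Return the RSS feed URLs reachable from an OPML outline,
    following type="include" links to further OPML documents."""
    if depth > max_depth:  # guard against circular inclusions
        return []
    feeds = []
    root = ET.fromstring(opml_text)
    for outline in root.iter("outline"):
        kind = outline.get("type", "")
        if kind == "rss" and outline.get("xmlUrl"):
            feeds.append(outline.get("xmlUrl"))
        elif kind == "include" and outline.get("url"):
            feeds.extend(graze(fetch(outline.get("url")), fetch,
                               depth + 1, max_depth))
    return feeds

# Usage: one personal reading list that includes someone else's.
shared = ('<opml version="2.0"><body>'
          '<outline type="rss" xmlUrl="http://example.org/b.xml"/>'
          '</body></opml>')
mine = ('<opml version="2.0"><body>'
        '<outline type="rss" xmlUrl="http://example.org/a.xml"/>'
        '<outline type="include" url="http://example.org/shared.opml"/>'
        '</body></opml>')
feeds = graze(mine, {"http://example.org/shared.opml": shared}.get)
```

The point is how little machinery this takes: an included reading list is just another OPML document, so any reader willing to follow `include` outlines can surf other people’s subscriptions instead of walling them in.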

TechCrunch has a nice little graphic table indicating the capacities of the existing web-based services. Let’s see if I can go one better and look at the capabilities of future methods of RSS aggregation. There are several potential methods of aggregating RSS content, and I’ve tried to consider them all. Open up the screenshot below – apologies for the size, but it should just fit in a Firefox tab at 1024×768. Take a gander, then continue below.


Welcome back. Astute readers will notice that what I’ve described as a feed syndicator does not yet exist. It would contain elements of an online OPML editor (à la OPML Manager or the OPML Editor); elements of social bookmarking; media support like a podcatcher’s; and, optionally, social networking or even P2P elements (but that’s for another day).
The important part is that, as well as providing an additional social navigation paradigm – one which could (depending on implementation) make practical the navigation and summarisation of many more RSS feeds than is currently possible, and remove the need for separate podcatcher applications (at least for the 80% of us who are not transferring content to portable devices) – such a model would break down the walled gardens created by current RSS aggregation models.

In the feed syndicator model, the aggregation is two-way, with user- or service-hosted, user-modifiable OPML feeds providing the basis for both live aggregation and syndication. With countless potential methods of collecting and navigating feeds (check out Rowan Nairn’s Opod for the first steps toward one), there’s room for many such feed syndicators, whether at the browser, extension or web level, all interoperable via RSS and OPML.
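The "two-way" half is the easy part to sketch: whatever a syndicator does internally, it should be able to serialise a user’s subscriptions back out as plain OPML for any other reader to pick up. A minimal illustration (function name and feed URLs are invented for the example):

```python
# Sketch: exporting a subscription list as a standard OPML 2.0
# document, so aggregation is two-way rather than a walled garden.
import xml.etree.ElementTree as ET

def subscriptions_to_opml(subs, title="My subscriptions"):
    """subs: iterable of (feed_title, xml_url) pairs.
    Returns an OPML document as a string."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for feed_title, xml_url in subs:
        # type="rss" plus xmlUrl is the conventional subscription outline
        ET.SubElement(body, "outline", text=feed_title,
                      type="rss", xmlUrl=xml_url)
    return ET.tostring(opml, encoding="unicode")

out = subscriptions_to_opml([("Technolotics", "http://example.org/tl.xml")])
```

Host that document somewhere user-modifiable and you have exactly the live, re-aggregatable OPML feed the syndicator model depends on.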

Link: Previous post on the future of the browser.

OPML


Wouldn’t it be cool if you could access your social bookmarking tags as OPML? This would let you navigate feeds not as lists of links, but as tag-defined outlines. Danny Ayers has already created a neat mashup, using a tags-to-OPML XSL stylesheet and the W3C XSLT parser to create reading lists. But this only hints at the flexibility which would come from OPML navigation of user tag clouds.
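To show what a tag-defined outline actually looks like, here’s a toy version of the transform (in Python rather than XSL; the bookmark structure and function name are invented for illustration): each tag becomes a top-level outline, with the bookmarks carrying that tag nested beneath it.

```python
# Sketch: turning tagged bookmarks into a tag-outlined OPML document,
# in the spirit of the tags-to-OPML transform described above.
import xml.etree.ElementTree as ET

def tags_to_opml(bookmarks):
    """bookmarks: iterable of (title, url, [tags]) tuples.
    Returns OPML where each tag is a top-level outline containing
    link outlines for the bookmarks carrying that tag."""
    opml = ET.Element("opml", version="2.0")
    ET.SubElement(opml, "head")
    body = ET.SubElement(opml, "body")
    by_tag = {}
    for title, url, tags in bookmarks:
        for tag in tags:
            by_tag.setdefault(tag, []).append((title, url))
    for tag in sorted(by_tag):  # one outline per tag, alphabetical
        node = ET.SubElement(body, "outline", text=tag)
        for title, url in by_tag[tag]:
            ET.SubElement(node, "outline", text=title,
                          type="link", url=url)
    return ET.tostring(opml, encoding="unicode")

doc = tags_to_opml([("Opod", "http://example.org/opod", ["opml", "tools"])])
```

A bookmark with two tags appears under both outlines, which is exactly the point: the same links can be browsed along whichever facet of the tag cloud you care about.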


Update: Dan at Yabfog has done this at a local level, which proves its practicability; all that remains is for the service itself, or a third party, to provide this functionality live.


More: EirePreneur points out another method, making a static dump through OPML Utils. Right now it just creates a flat list of links, but a new version due soon intends to outline by tag.