Berkman’s TagTeam is an open source feed aggregator and tagging system

TagTeam is an RSS / Atom / RDF aggregator with the ability to filter and remix its input feeds with a high degree of flexibility.

Items can be added directly to TagTeam “bookmarking collections” via the provided delicious-like bookmarklet, and these items can be remixed and filtered like any other item.

TagTeam can aggregate content from anything that emits RSS, Atom, or RDF. This includes Delicious, Zotero, WordPress, Twitter, MediaWiki, Connotea, Blogger, GitHub, and too many other applications and services to mention. It uses the feed-abstract gem, written as part of this project, to create a better way of dealing with structured feeds. feed-abstract understands some generators and does magical things – like turning Twitter hashtags into actual tags on aggregated items.
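The hashtag-to-tag trick can be sketched in a few lines of Python. This is an illustrative stand-in for what the Ruby feed-abstract gem does, not its actual code:

```python
import re

def hashtags_to_tags(text):
    """Extract #hashtags from an item title or body and return them as
    plain, lowercased tag strings -- similar in spirit to how a
    Twitter-generated feed item can yield real tags on aggregation."""
    return [m.lower() for m in re.findall(r"#(\w+)", text)]

title = "Reading about #RSS and #OpenData tonight"
print(hashtags_to_tags(title))  # → ['rss', 'opendata']
```

A real aggregator would run this (or a generator-aware equivalent) over each incoming item and store the results alongside any tags already present in the feed's own metadata.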

TagTeam can import Delicious and Connotea backups directly into a bookmark collection, and will support more formats soon.

Remixed feeds are available as RSS 2.0, Atom, and jsonp output and can be viewed directly in a hub. Feeds, FeedItems, and Tags can be added and removed from a Remixed feed contextually within the application.

tagteam/README.rdoc at master · berkmancenter/tagteam · GitHub

5 open source RSS feed readers | Opensource.com

When Google Reader was discontinued four years ago, many “technology experts” called it the end of RSS feeds.

And it’s true that for some people, social media and other aggregation tools are filling a need that feed readers for RSS, Atom, and other syndication formats once served. But old technologies never really die just because new technologies come along, particularly if the new technology does not perfectly replicate all of the use cases of the old one. The target audience for a technology might change a bit, and the tools people use to consume the technology might change, too.

But RSS is no more gone than email, JavaScript, SQL databases, the command line, or any number of other technologies whose days, various people told me more than a decade ago, were numbered. (Is it any wonder that vinyl album sales just hit a 25-year peak last year?) One only has to look at the success of the online feed reader Feedly to understand that there’s still definitely a market for RSS readers.

Source: 5 open source RSS feed readers | Opensource.com

BYAR! Building Yet Another Reader!

With the looming demise of Google Reader (unless, of course, they change their minds) I’ve been casting about for a reader. For a number of years I ran my own aggregators, including UserLand Radio, Feed-on-Feeds, and currently Dave Winer’s River2 in the OPML Editor. Each had (or has) its own pros and cons that I’m not going to get into. I had moved to Google Reader because it had features I liked and it was available everywhere at any time. I don’t think it was perfect and there were things about it I found annoying (again not getting into that) but on the whole it was more useful than not.

What are the choices for replacing Reader? Well, there are several, the best probably paid services that provide a host of features now tailored to remind you of Google Reader, good or bad. Interestingly, the self-hosted and desktop choices have not evolved much in the past five years. This is most likely due to the ‘Google effect’: once Google moves into a space, the general sense is that that’s it, Google wins, and innovation tends to cease or at best slows to a crawl. So it seems to have been with news aggregators.

With Google now moving out of the space there is some sense of opportunity in the reader/aggregator space. The void left by Google Reader’s exit will no doubt spur some innovation as developers begin to look at RSS news feeds as a field worth exploring. I know I’m thinking about it.

What I’m thinking of is a system I can run from one of my Linux servers, that I can access from any device in a reasonable format, and that I can share with friends. On the backend, PHP uses cURL and SimpleXML to access an OPML store of feeds, then retrieves, parses, and archives items for display in a responsive frontend built on JavaScript and Bootstrap. Display options include a river-of-news format and various sorts. Items are marked for permanent archiving or held for a certain number of days. Publishing features allow for sharing items via social networks like Twitter, Facebook, and LinkedIn. Multiple personas allow for profile setups that let users create collections that can be handled separately. User management is handled internally or through various social network APIs.
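The first step in that backend is reading the OPML subscription store. Sketched in Python rather than the PHP described above, and with placeholder feed URLs, it might start like this:

```python
import xml.etree.ElementTree as ET

# A stand-in OPML subscription list; real feeds would live in a file on disk.
OPML = """<?xml version="1.0"?>
<opml version="2.0">
  <body>
    <outline type="rss" text="Example" xmlUrl="https://example.com/feed.xml"/>
    <outline type="rss" text="Other" xmlUrl="https://example.org/rss"/>
  </body>
</opml>"""

def feed_urls(opml_text):
    """Pull every feed URL out of an OPML subscription list."""
    root = ET.fromstring(opml_text)
    return [o.attrib["xmlUrl"]
            for o in root.iter("outline")
            if o.attrib.get("type") == "rss"]

print(feed_urls(OPML))  # → ['https://example.com/feed.xml', 'https://example.org/rss']
```

From there each URL would be fetched on a schedule, parsed, and its items written to the archive for the frontend to render.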

Of course it all gets open sourced. Hey, watch for it on GitHub. The goal here is to create something that lets me read the info I’ve decided I want to see, and share it as I will. I guess we’ll see what happens.

The Report of Current Opinions: Santa Comes Early to the Open Law Movement

Public.Resource.Org will begin providing in 2011 a weekly release of the Report of Current Opinions (RECOP). The Report will initially consist of HTML of all slip and final opinions of the appellate and supreme courts of the 50 states and the federal government. The feed will be available for reuse without restriction under the Creative Commons CC-Zero License and will include full star pagination. This data is being obtained through an agreement with Fastcase, one of the leading legal information publishers. Fastcase will be providing us all opinions in a given week by the end of the following week. We will work with our partners in Law.Gov to perform initial post-processing of the raw HTML data, including such tasks as privacy audits, conversion to XHTML, and tagging for style, content, and metadata.

via The Report of Current Opinions – O’Reilly Radar.

On Sunday, Dec. 19, Carl Malamud made the startling announcement quoted above. And you did read it correctly: “The Report will initially consist of HTML of all slip and final opinions of the appellate and supreme courts of the 50 states and the federal government.” To say that this is huge would be the understatement of the year.

From personal experience I can tell you that the “slip and final opinions of the appellate and supreme courts of the 50 states and the federal government” have never all been freely available in HTML before. Not even close. At best you could probably wrangle 75% of these opinions in PDF using a mountain of code to scrape sites and parse feeds. To have all this available as a single feed is a game changer.

As a researcher and builder of tools for legal research and education, having access to a single feed that contains all of this data is just the thing I’ve been looking for (and occasionally trying to build) for the past 15 or so years. I have no doubt that the availability of this feed will spark a flurry of development to use the data in new and interesting ways. I will certainly be incorporating it in the CALI tools I’m currently working on.

Of course there are a couple of caveats here. First, we haven’t seen the feed yet. It won’t be available for a few weeks, so right now I’m still just waiting to see what it will look like. Second, there are two “timeouts” built into this service: direct government involvement by July 1, 2011, and a general sunset of private-sector activity in creating the feed at the end of 2012. The timeouts underscore the belief that providing free and open access to primary legal materials is a duty of the government, plain and simple. As citizens we are bound to follow the law, and our government should be obligated to provide us with free and open access to that law.

I know I’m certainly looking forward to a new year that brings greater free and open access to the law. Thanks, Carl.

9 Tools for Live Streaming Just About Anything

So now, for example, brainstorming can be done with a wiki-like tool, and notes from a meeting or background research can become a blog post. Instead of saving bookmarks as private “favorites” in a web browser, you can publish them as social bookmarks. Ideas and discussions can be expressed as blog posts or as status updates on social networks.

via MediaShift . 9 Tools to Help Live-Stream Your Newsroom | PBS.

An excellent set of tools to let you add live streaming to just about anything. It is easy enough to see that these sorts of tools would be useful in legal education and, certainly when used with Classcaster, allow for all sorts of possibilities. I think the idea here is to open the room, removing the walls from the class/session/seminar/presentation to allow access for the broadest possible audience. Certainly, applications in legal education could make use of any combination of these tools to enhance the classroom or open the symposium.

One thought here also is that the law.gov movement should really be using a tool set like this to open their discussions.

And keep in mind that using live streaming tools works best when usage is planned in advance.

RSS Isn’t Dead, People Just Don’t Get It

I’m sorry, but RSS feeds are way too slow. I know this first-hand. As part of my job here at TechCrunch, I monitor a lot of RSS feeds for breaking news. We also produce our own feed and I can see how quickly it propagates to various feed readers and feed-powered news aggregation services. The lag time between posting a story and seeing it pop up in the RSS feed is usually a few minutes, and then it can take another 10 to 15 minutes or so for it to appear in something like Google Reader.

via Speeding Up RSS .

The problem with this whole article is that it totally misses what RSS is and how it works. RSS is a data format, a very simple XML file. It isn’t slow or fast; it’s just a chunk of well-formatted data. RSS feeds are produced by many systems at the very moment of publication. The feed for this very site will be updated as soon as I click the publish button. But the RSS feed is just an XML file. No speed is involved, just a file. Got it yet?
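To make the “it’s just a file” point concrete, here is a minimal sketch of emitting an RSS 2.0 document at publish time (titles and URLs are placeholders):

```python
import xml.etree.ElementTree as ET

def render_rss(channel_title, items):
    """Emit a minimal RSS 2.0 document. The whole file is regenerated
    in full the moment something is published -- no speed involved."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    for link, headline in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = headline
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

print(render_rss("My Blog", [("https://example.com/1", "First post")]))
```

A blogging system runs something like this when you hit publish; whatever “lag” follows belongs to the clients fetching the file, not to the file itself.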

The lag the writer above seems to see is just a function of the systems that consume RSS feeds and really has nothing to do with RSS itself. As commonly implemented, RSS is used as part of a ‘pull’ system: a remote client pulls the XML file periodically from the server. Servers and clients usually limit how often the files are pulled in order to conserve bandwidth. The idea behind RSS pull systems is that the client decides how often to poll servers for new updates. If you want faster updates, just crank up how often your client hits the server. Of course, many servers will ban you if you try to update too often.
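The pull model boils down to a loop like the following toy sketch, where `fetch` stands in for an HTTP GET of the feed URL and the interval is the knob you crank up for faster updates:

```python
import hashlib
import time

def digest(feed_xml):
    # Cheap change check: hash the body so we only reparse when it changed.
    return hashlib.sha1(feed_xml.encode("utf-8")).hexdigest()

def poll(fetch, interval=900, rounds=3):
    """Minimal pull loop: call `fetch` (a placeholder for an HTTP GET of
    the feed URL) every `interval` seconds and count how many polls
    actually saw new content."""
    last = None
    changes = 0
    for _ in range(rounds):
        d = digest(fetch())
        if d != last:
            changes += 1
            last = d
        time.sleep(interval)
    return changes

# Simulated server: the feed changes once across three polls.
updates = iter(["<rss>one</rss>", "<rss>one</rss>", "<rss>two</rss>"])
print(poll(lambda: next(updates), interval=0))  # → 2
```

The tradeoff is visible right in the loop: two of the three fetches here are wasted work, and shrinking the interval to catch updates sooner only multiplies those wasted requests, which is exactly why servers throttle aggressive pollers.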

Now, I can easily imagine a system where the RSS feed is pushed out to known clients. Because the RSS specification is open and under a Creative Commons license and extensible through XML namespaces, it would just take a bit of design and programming to get RSS into shape and come up with a scheme that pushes out the updated RSS to a client that parses the XML into those stories we love. The client should be something lightweight and widgety. Once installed it needs to register with the server, giving its IP, and declare what feeds it wants pushed out. Then something new gets published and it gets sent right out to the client, queuing the update if the client is not available. As a bonus, the server can still dish out the very same RSS feed to any pull clients that want to consume it.
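The register-and-push scheme described above can be sketched as a toy in-memory hub. This is illustrative only: a real system would do the registration and delivery over the network (the later PubSubHubbub/WebSub protocol works along broadly similar lines), and the client IDs and feed names here are placeholders:

```python
from collections import defaultdict

class PushHub:
    """Toy sketch of the push scheme: clients register for feeds,
    published items are delivered immediately, and deliveries are
    queued while a client is unavailable."""

    def __init__(self):
        self.clients = {}              # client_id -> callback, or None if offline
        self.subs = defaultdict(set)   # feed name -> subscribed client_ids
        self.queues = defaultdict(list)

    def register(self, client_id, feed, callback):
        self.clients[client_id] = callback
        self.subs[feed].add(client_id)

    def set_offline(self, client_id):
        self.clients[client_id] = None

    def set_online(self, client_id, callback):
        # Reconnect and drain anything queued while the client was away.
        self.clients[client_id] = callback
        for item in self.queues.pop(client_id, []):
            callback(item)

    def publish(self, feed, item):
        for cid in self.subs[feed]:
            cb = self.clients.get(cid)
            if cb:
                cb(item)               # push immediately
            else:
                self.queues[cid].append(item)  # queue for later

got = []
hub = PushHub()
hub.register("c1", "blog", got.append)
hub.publish("blog", "post 1")       # delivered immediately
hub.set_offline("c1")
hub.publish("blog", "post 2")       # queued while offline
hub.set_online("c1", got.append)    # queue drains on reconnect
print(got)  # → ['post 1', 'post 2']
```

And, as the post notes, nothing here stops the server from also serving the same RSS file to ordinary pull clients.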

See, nothing to it. RSS isn’t slow; it just isn’t doing what you want it to do, so go ahead and fix it.