Comet — My ADN hack project

Today is an App.net hack day, and my project is an embeddable comments widget built on top of App.net Posts. It’s actually embedded on this page, so feel free to test it out and give me feedback @voidfiles. You will need an App.net account to see the comments (for now), and you will certainly need one to leave a comment.

If you are interested in using this widget yourself, just embed this code somewhere on a webpage. There is one bug right now: on Mobile Safari the browser scrolls to the top of the comments. Other than that it should work.

This is alpha software, so things like the URL will change.

I’m the newest contributor to The Changelog

I am a big fan of The Changelog. There is a folder in my reader dedicated to raw sources just so I can find new open source projects, but it was always nice to read The Changelog because it was curated. When I finished the first version of my brand new open source project, Lark, the first site I thought to contact was The Changelog.

Over the Christmas break I approached them to see if they would be interested in covering not just Lark but a few other open source projects. That initial contact evolved into a discussion about covering the open source community in general, and eventually we talked about the prospect of having me join. I jumped at the opportunity. After talking with Adam and Jerod about it and the future of the project, I am thoroughly excited to be a part of what they are doing. If you don’t already subscribe to The Changelog, you should.

My first contribution went up this morning. It’s about Sandman: Give any SQL database a REST interface with Sandman.

FeedShare: an OPML sharing service

Ask, and ye shall receive. Brent Simmons posted his OPML file on Saturday1 and also wrote, “I wish there were an OPML sharing service.” I am sure more than a handful of people thought, “I can do that,” but FeedShare beat us all2.

I added my feeds and I hope you do too.

  1. http://inessential.com/2014/01/04/my_feeds

  2. http://inessential.com/2014/01/06/feedshare_net

Introducing Lark, a RESTy interface for Redis

Lark is a Python library that provides a generic method for transforming an HTTP request into a Redis command. If you know what Webdis is, then you’ll have a rough idea of what this is. It does a couple of things right now:

  • It uses REST as a guideline without getting too pedantic.
  • It has built-in support for per-user key prefixes.
  • It automatically JSON-encodes Redis values (where appropriate).
  • It has lots of tests (and Travis CI all set up).
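
To make that concrete, here is a rough sketch of the idea, not Lark’s actual API: a tiny dispatcher that turns an HTTP method and path into a Redis command, with a per-user key prefix and JSON-encoded values. The function name and the prefix format are my own inventions for illustration.

    import json

    import redis

    r = redis.StrictRedis()

    def handle_request(method, path, body=None, user_prefix="user:42:"):
        """Illustrative only: map an HTTP request onto a Redis command."""
        key = user_prefix + path.lstrip("/")
        if method == "GET":
            raw = r.get(key)
            return json.loads(raw) if raw is not None else None
        if method == "PUT":
            r.set(key, json.dumps(body))  # values are stored as JSON
            return {"ok": True}
        if method == "DELETE":
            return {"deleted": r.delete(key)}
        raise ValueError("unsupported method: %s" % method)

    # handle_request("PUT", "/greeting", {"text": "hi"})
    # handle_request("GET", "/greeting")  ->  {"text": "hi"}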

I have been slowly ramping up this project so that it could be my holiday project1, because I have a host of things I want to add quickly. Webdis has authentication right out of the box. Lark is set up for a scope-based authentication scheme, but there are no real examples of how to do that. The next thing I am going to work on is creating an authentication/authorization layer that people can use.

So, if you are interested, check out the GitHub page and fork it. Tell me how crazy I am; tell me where this thing just doesn’t work at all like I say it does.

  1. For more on why holidays are a great time to work on projects, check out Macdrifter’s post Seasonal Tasks.

The Story of ADNpy: An App.net API Client

This is the story of ADNpy, a Python App.net API client. From the earliest days of the App.net API I wanted to build a client. In the beginning, though, the possible approaches were infinite. As many times as I started an API client, I stopped because I couldn’t figure out how to begin.

The fact is, though, we are an API company, and that meant we had to consume our own API. We didn’t have the luxury of mulling over our options; we just had to start building. Time went on, and in a series of small steps we created a Python API client. So, a couple of months ago I started taking those pieces and built a client library.

The Parts

In July 2012, we started to work on the API. We decided the best way to showcase it was to build a client. Thus Alpha, from day one, was going to be an API consumer. But what did that mean for us?

For one thing, we run on Django, albeit with many pieces replaced. Our common data access pattern was to query the database using Django’s ORM and then operate on the models. That pattern was going to turn into making REST API calls and getting back JSON objects.

This made the first part pretty clear. For the sake of letting old code work roughly the same as the new, we wanted to use dot notation to access the properties of our API objects instead of dict notation. This has the added benefit of being more aesthetically pleasing. The way we ultimately solved it also lets you fall back to dictionary lookups instead of using hasattr or getattr (even though those still work as expected).

You can see this piece in adnpy.models.
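
As a minimal illustration of that idea (not the actual adnpy.models code), you can wrap the JSON dict so that attribute access and dictionary access both work:

    class APIModel(object):
        """Sketch of dot-notation access over an API's JSON payload."""

        def __init__(self, data):
            self._data = data

        def __getattr__(self, name):
            # Only called when normal attribute lookup fails.
            try:
                value = self._data[name]
            except KeyError:
                raise AttributeError(name)
            # Wrap nested dicts so post.user.username works too.
            return APIModel(value) if isinstance(value, dict) else value

        def __getitem__(self, key):
            # Dictionary-style lookups still work as a fallback.
            return self._data[key]

    post = APIModel({"id": "1", "user": {"username": "voidfiles"}})
    post.user.username   # dot notation
    post["id"]           # dict notation still works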

Our first internal API consumer was Alpha, but it wasn’t comprehensive; Alpha doesn’t do everything the API can do. As we developed the API, we all ended up building our own ways to make calls to it for testing purposes. Besides needing to use the dev, sandbox, and production versions of the API for work, I built a lot of little test projects.

In my case, I built a script that modified a requests session so that calls to the API looked like this: r.get('/posts'). This is perfect for someone who has a good understanding of the API and all its paths, but not so much for a newcomer. I needed a way to go from a class method to a path plus additional configuration information. You can see the descendant of that code in the API handler.
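
The original script isn’t shown here, but the trick looks roughly like this: subclass requests.Session so it prefixes the API host and the auth header, letting you write short relative paths. The base URL and token below are placeholders, not taken from the original code.

    import requests

    class APISession(requests.Session):
        """Sketch of a session that knows the API host and auth token."""

        def __init__(self, base_url, token):
            super(APISession, self).__init__()
            self.base_url = base_url.rstrip("/")
            self.headers["Authorization"] = "Bearer %s" % token

        def request(self, method, url, **kwargs):
            # Expand relative paths against the API host.
            if url.startswith("/"):
                url = self.base_url + url
            return super(APISession, self).request(method, url, **kwargs)

    r = APISession("https://alpha-api.app.net/stream/0", "ACCESS_TOKEN")
    # r.get("/posts/stream")  # instead of spelling out the full URL every time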

The Build

After many months of frustration that I couldn’t have the best features of our internal API client in an easily installable Python package, I decided to pull it all together into one package and release it.

My goal was to build a Python package in the most modern way possible, with tests and a nice-to-use API.

Seeing as I use requests, I heavily copied Kenneth Reitz’s style of Python package. His guide to releasing Python projects has also been incredibly helpful.

As for Python APIs, I have always enjoyed using tweepy when I need to interact with the Twitter API, so I was heavily influenced by how the tweepy API worked. I liked that you could get a User object and then call methods on that User object. Another great idea in tweepy was the cursor object. Even though I consume the API all the time, I would rather not think about pagination when I don’t have to. So, you can see that ADNpy also has a cursor.
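
The general shape of a cursor, sketched here from memory rather than copied from ADNpy, is a generator that keeps requesting pages until the API says there are no more, so callers never touch the pagination parameters themselves. The fetch_page callable and api.get_my_stream are hypothetical names.

    def cursor(fetch_page, **params):
        """fetch_page(**params) should return (items, meta), where meta carries
        App.net-style pagination hints like 'more' and 'min_id'."""
        while True:
            items, meta = fetch_page(**params)
            for item in items:
                yield item
            if not meta.get("more"):
                break
            params["before_id"] = meta["min_id"]  # ask for the next older page

    # for post in cursor(api.get_my_stream, count=100):
    #     print(post.text)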

I also took note of how tweepy builds up its API internally. This lets you encapsulate all the different configuration options required to make an API request. It also made possible the auto-generation of docstrings, so there are at least some API docs without needing to hand-write each one. You can see the docs here.
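
The pattern, roughly and not verbatim from ADNpy’s internals, is a small factory that captures the path, HTTP method, and docstring for each endpoint, so every API method is declared in one line and documented automatically. The names below are made up for illustration.

    def bind_api_method(name, path, method="GET", doc=""):
        def api_method(self, *args, **params):
            url = path % args if args else path
            return self.request(method, url, params=params)
        api_method.__name__ = name
        api_method.__doc__ = doc or "%s %s" % (method, path)
        return api_method

    class API(object):
        def request(self, method, url, params=None):
            raise NotImplementedError  # the real client would call the HTTP layer here

        get_post = bind_api_method("get_post", "/posts/%s", doc="Retrieve a single post.")
        delete_post = bind_api_method("delete_post", "/posts/%s", method="DELETE")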

Finally came testing. I wanted something I could run over and over again without accidentally tripping the API rate limits. The best option would be something that could replay API calls, and happily that exists. As I made new API requests, they would be recorded to a JSON file and replayed from that file in the future. Using this tool, I could run the whole test suite without making actual API calls.
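
The post doesn’t name the tool, so purely as an illustration of the record/replay idea, here is how it looks with vcrpy: the first run records real HTTP traffic to a cassette file, and later runs replay it without touching the network. The test name and fixture path are invented.

    import vcr

    my_vcr = vcr.VCR(serializer="json", cassette_library_dir="tests/fixtures")

    @my_vcr.use_cassette("get_post.json")
    def test_get_post():
        post = api.get_post(1)  # hits the real API only on the first run
        assert post.user.username == "voidfiles"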

With this all in place, I was able to do some TDD for the API. I’m not really a zealot when it comes to any paradigm, but in this case TDD really helped.

The Release

Once I had the library and the tests, it was time to release, but I still needed a few things. I wanted some docs with all the main API methods documented and a “quickstart” guide. I also needed to host the docs. Again borrowing ideas from requests, I used Sphinx for the docs and hosted them on Read the Docs.

Finally, I wanted some kind of continuous integration. You may have seen GitHub projects with badges saying the tests pass. Those come from an incredible open source resource called Travis CI. It will do continuous integration for any public GitHub project, for free, and it’s integrated into GitHub using webhooks.

There is a nice flow to all of these tools. I write code and test locally. I update the docs as needed. Then I commit and push to GitHub. GitHub then pings Travis and Read the Docs, which in turn run the tests and rebuild the docs, so I don’t have to do a thing.

From time to time, I just need to push a new version to PyPI.

The Future

The test package I was using for replay no longer works. I’ll need to figure out how to fix that; until then, running the tests often is problematic. I am hoping someone will fix it, but it may have to be something I do.

I am still not satisfied with how the docs have turned out, but they’re better than nothing. At the least, I didn’t want people to have to dig through the code all the time to figure out how to do things, and I think I have achieved that. But the docs could be a lot more helpful.

In the beginning, I am trying not to have too much of a schedule. I will continue to push point releases until I feel we have a very strong v1, and then there might be a more stringent release schedule.

At the end of the day, this is just one part of a larger goal: to make using the App.net API incredibly easy. Libraries like this one are only part of that effort, but I have found that building these kinds of projects exposes the holes in our existing efforts to support developers. So, releasing new open source projects that help developers is something I will continue to do. It’s only going to make App.net better.

Feedbin.me Goes Open Source

Feedbin.me has released the code that runs the service as open source; check it out on GitHub. It joins NewsBlur as the second large for-profit RSS reader to release its code in this manner. I’m not aware of any other commercial services that open source all of their code.

This seems like a strong move that only an independent developer can make when faced with well-funded competitors. As a one- or two-person team, you can’t fight head-to-head with a Feedly or a Digg without doing something a little guerrilla. Open sourcing the code makes a ton of sense in this case.

Open Question: What’s the impact of Reader’s shutdown on traffic?

Let’s say there are only 1 million Google Reader users. In three days, that’s a million people who won’t be clicking on American Apparel ads, which could directly affect the bottom line of websites that are entirely funded by ads, like blogs.

Is there any way to know by how much, though? What do you think the leading indicators might be?

I have enabled comments for this post so let me know if you have any answers.

On a side note, take a look at this Alexa graph of blogger.com. A site like that has to be impacted by a Google Reader shutdown, right?

Blogger.com Alexa Traffic Graph http://www.alexa.com/siteinfo/blogger.com

The Rumproarious eBook

I have been threatening to do this for a while, and now I have. I created a mini-ebook about the future of feed readers. I used many of the blog posts that I wrote over the last few weeks as the raw material for it, but I tried to craft a solid narrative for the ebook. Check it out:

Feeding Our Reading Habits

Also available in PDF and for the Kindle.

Network Thinking In TV

The Sopranos is a story about a man that happens to be set in the world of organized crime. The Wire is a story about a setting, Baltimore, that happens to include stories about some people. They just so happen to be there, so we learn a little about their lives, but the real story is that of the city. This distinction is what makes The Sopranos one of the biggest shows of the last 20 years, and it makes The Wire a show with a small but vocal crowd who think it may be the best TV show ever made. Oddly, this comparison also illustrates why feed readers aren’t used by as many people as they could be.

These two shows are both critically acclaimed, but The Sopranos has objectively won more honors. I don’t think this was because The Sopranos was better, though. I think it comes down to context: the network of stories in a long, drawn-out series and how they are connected in their respective worlds.

In The Sopranos, the stories are all closely connected through Tony. Everything could be understood through his perspective. On the other hand, the narrative of The Wire is solely connected through the city.

The way The Wire is written can create a problem for a casual viewer. First, you need to buy into the premise. If slow-paced stories set in a rundown city aren’t your cup of tea, it might take a lot of time before you start to see that each thread plays a part in a larger story about Baltimore.

The second problem is that a viewer needs to keep a running context of everything that has happened. The history of the show is sometimes the only way some stories mean anything.

The Sopranos is not a simple show, or any less deserving of its accolades, but it is easier to watch. I have read more than one blog post opining about why The Wire didn’t get a single Emmy. I think the simplest reason might be that it was just a difficult show to watch, and therefore fewer people watched it, but what made it difficult also made it so rich and pleasing to those who did watch it.

Now, here is the pivot. Comparing The Sopranos to The Wire is like comparing a straightforward news diet of the daily paper, a few focused news sites, and possibly one or two blogs to the diet of a feed reader. For discussion purposes, let’s call the former a focused news diet and the latter a diffuse news diet. The focused news diet is to The Sopranos as the diffuse news diet is to The Wire.

The focused news diet, like The Sopranos, has a strong central thread. At each website you visit, there are a couple of editors who really put their stamp on things. They make sure the site is dialed in basically the same way, day after day. If you read the same blogs every day, even single-author blogs, you begin to understand how they tick. There isn’t much that surprises you or is orthogonal to the focused news diet. Also, like The Sopranos, the focused news diet is easy to consume. You don’t have to try very hard, and it becomes a habit. Even if you wanted to consume more news, you couldn’t really, because the daily ritual of opening all those webpages would become cumbersome if you tried to add any more sites. Just like The Sopranos, you come away with a strong point of view from a few select sources.

Now compare that to the diffuse diet, the feed-reading diet. The feed-reading diet could easily include over 100 different feeds that are a mixture of professional, pro-am, and amateur sources alike. It could go further still and include feeds from aggregators that collect sources from all over the place. In this diet, there is no way any one editor will overpower another. You end up being your own editor. You must build a context for yourself. Like The Wire, this becomes the reward: understanding how stories relate to one another, and understanding the underlying allegiances each site has to the others. By building that context yourself, each story means more and gets placed in a larger web of interest.

While The Sopranos will live on as a great show, it feels as if The Wire is beginning to ripen. The Wire will be a show that people discover slowly for a long time, and it’s possible more people will end up enjoying it long after it aired. David Simon said as much recently in an interview in San Francisco: “I have a knack for making shows that people watch only after they have been on TV.”

Likewise, I think feed readers are an idea whose time is coming. We haven’t seen the best days of feed readers, and if we aren’t careful, we might not. But if there is a saving grace, it’s that people who use feed readers use them heavily and don’t want to lose them.

What Would a Facebook Reader Mean?

Last week, multiple signs started to appear suggesting that Facebook might be prepping a news reader. While there has been discussion of the possibility, few, if any, have tried to figure out what it would mean in general. Admittedly, there are few details, but it’s not that hard to extrapolate from past actions. You need to look at this from a couple of different perspectives, though. Why does this make sense for Facebook, for publishers, or for the consumer? And finally, could it work?

First off, I think it makes sense for Facebook. If the newly public company had a list of priorities, growth and profits would have to be at the top. With the Street breathing down their neck, they have entered new markets and cut out existing partners wherever there is even a little money to be made. One way to increase revenue is to increase page views, and thus ad views. So, the more they can get you to consume information on Facebook, the better off they are.

It’s an even better idea when we look at mobile. The second birth of newsreaders is all due to mobile consumption. Reeder, Mr. Reader, and Sunstroke all brought a second wind to a decaying product and extended its reach. If we throw in Flipboard, Google Currents, and other visual news readers, the potential sherlocking gets even bigger. Think: what if Facebook had dedicated readers for the iPhone and iPad, maybe Android? A built-in social graph. The ability to cross-promote the hell out of these apps. From Facebook’s perspective, it keeps looking better.

This is where publishers come in. People already consume their content through the news feed via Facebook Pages, but this probably isn’t the best model for news content. Facebook might choose to create a new feed that consists only of news sources a user has chosen to subscribe to. They might even eliminate the black-box algorithm on this feed and let users browse all the news they have subscribed to. They might even let users drill down to individual sources.

This makes sense for publishers for a couple of reasons. First, many news publishers already have RSS feeds of some kind, so it would be a very easy technical transition. Second, Facebook Pages are an okay way of doing engagement on Facebook, but I am sure publishers would rather have a direct line to their readers, and this might loosen that path. Finally, anyone who is paying attention to their subscriber base on Google Reader will realize that they are about to lose a giant slice of readers. If Facebook is actually proposing something close to a Google Reader replacement, this might be an opportunity to salvage those readers.

Let’s look at an example like Deadspin. They have 115k likes on their Facebook page, which gives them a chance at reaching those subscribers every time they publish something to that page. Now look at how many people subscribe to Deadspin on Google Reader: 221k.

Deadspin on Google Reader

Not only do they have almost twice as many readers on Google Reader, each story there is probably read more often than it is on Facebook.

Of course, anything Facebook does would have to include a bunch of big-brand, upscale publishers, but I think it’s important to realize that even us little guys are looking at losing a giant portion of readers. Even Marshall Kirkpatrick had this to say about a Facebook news reader:

“Sure would be great for all the blog publishers of the world to gain access to even more readers on the FB platform!” — Kirkpatrick

Rumproarious RSS Readership

If smaller publishers are anything like myself, we are talking about losing 99% of our readers1. So, the whole spectrum of publishers could benefit from something like a Facebook news reader that bootstraps off a closing Google Reader.

So, there are clear reasons why Facebook and publishers would love this. What about consumers?

I have no idea. Facebook has shown that they can make good apps (Messenger) and that they can get people to try new apps (Poke), even if they can’t always get people to keep using them. There is no lack of news reading apps, but, sure, I do think that if this is as broad-reaching as app + web + feeds, users could like it.

Sounds good, right? Well…

Look, whatever Facebook is, it’s not a platform. This is 100% publisher beware. Letting Facebook hold the keys to your audience doesn’t seem like a great idea, but publishers might not have an option at this point. Consumers might get a nice product, but is this just another countdown to an eventual shutdown?

Whatever it is, we will know more on the 20th.

  1. Ever since I wrote about this when Google announced the closure of Google Reader, non-Reader feed readers have only increased by 1 or 2%.