Location, Location, Location! Now I get it.

I’ll admit that for the longest time I didn’t get the mobile world’s fascination with location. It seemed like one of those things that mobile developers did to push ads at me while I was in a grocery store or to alert people I vaguely knew to my presence in a museum. Most implementations left me feeling underwhelmed. OK, so my phone knows where I’m at. Then what?

I’m coming around on location-based tech now that I’ve been working with a bit of it for a side project I’ve got going. The light bulb came on while I was writing a little web app that can tell me where I am and give me some basic info about that place. It turns out that once you peel off the veneer of constant ad generation, using location in web apps (and, by extension, mobile apps) is fascinating from the developer’s point of view. Knowing where someone is provides a hook for offering up a lot of useful data that isn’t about selling things or letting near-strangers know where you are.

And it isn’t that hard to do.

A good place to start is with the Google Maps JavaScript API. The developer site provides everything you need to start adding interesting location-based features to your apps. I tend to use jQuery when I have to deal with JavaScript, and there is an excellent demo page of jQuery Mobile integrations with the Google Maps API that has many useful examples.

I’ve put together a little example page for you to try. You’ll need to give it access to your browser location data, and then you’ll get some basic location information. I find it interesting that, in testing, the most accurate location comes from mobile devices. The location data returned by laptop and desktop browsers is a lot less accurate, seemingly giving more weight to your IP address than to other factors.
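
If you want to see the core of how this works, it all hangs off a single browser call, navigator.geolocation.getCurrentPosition. Here’s a minimal sketch in plain JavaScript; the accuracy field it returns is the giveaway for why phones do so much better than desktops:

    // Ask the browser for a one-shot position fix. The user has to grant
    // the page access to location data before this succeeds.
    navigator.geolocation.getCurrentPosition(
      function (position) {
        // coords.accuracy is the radius of uncertainty in meters. GPS on a
        // phone gets this down to a few meters; IP-based lookups on a
        // desktop can be off by kilometers.
        console.log('Latitude:  ' + position.coords.latitude);
        console.log('Longitude: ' + position.coords.longitude);
        console.log('Accuracy:  ' + position.coords.accuracy + ' meters');
      },
      function (error) {
        console.log('Location request failed: ' + error.message);
      },
      { enableHighAccuracy: true, timeout: 10000 }
    );

From there, feeding the coordinates into the Maps API’s geocoder is what turns a latitude/longitude pair into basic info about the place you’re standing.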

A Text Analysis API To Take For A Spin

AYLIEN Text API is a package consisting of eight different Natural Language Processing, Information Retrieval and Machine Learning APIs that will help developers extract meaning and insight from documents.

There are currently 8 endpoints available:

  • Article Extraction: Extracts the main body of an article, including embedded media such as images & videos, from a URL and removes all the surrounding clutter.
  • Article Summarization: Summarizes an article into a few key sentences.
  • Classification: Classifies a piece of text according to the IPTC NewsCodes standard into more than 500 categories.
  • Entity Extraction: Extracts named entities (people, organizations, products and locations) and values (URLs, emails, telephone numbers, currency amounts and percentages) mentioned in a body of text.
  • Concept Extraction: Extracts named entities mentioned in a document, disambiguates and cross-links them to DBPedia and Linked Data entities, along with their semantic types (including DBPedia and schema.org types).
  • Language Detection: Detects the main language a document is written in and returns it in ISO 639-1 format, from among 76 different languages.
  • Sentiment Analysis: Detects sentiment of a document in terms of polarity (positive or negative) and subjectivity (subjective or objective).
  • Hashtag Suggestion: Automatically suggests hashtags for better discoverability of content on Social Media.

via Text Analysis API Documentation | AYLIEN.

This might be interesting here when used in conjunction with something like the Free Law Reporter, though my initial testing turned up uneven results. The API did good work with a copyright case, spotting key phrases and generating a good summary. It didn’t handle Brown v. Board of Education as well, missing key concepts and generating a useless summary. It seems to work better at extracting short, newsy articles from cluttered web pages than at analyzing lengthy texts.
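
Trying it yourself is straightforward. Here’s a sketch of a summarization request using jQuery; the endpoint URL, header names, and response shape below are my reading of the AYLIEN documentation at the time, so treat them as assumptions and check the current docs before relying on them:

    // Sketch of a call to the summarization endpoint. The endpoint URL,
    // headers, and result.sentences field are assumptions from the AYLIEN
    // docs -- verify against the current documentation.
    $.ajax({
      url: 'https://api.aylien.com/api/v1/summarize',
      data: { url: 'http://example.com/some-article' }, // hypothetical article URL
      headers: {
        'X-AYLIEN-TextAPI-Application-ID': 'YOUR_APP_ID',
        'X-AYLIEN-TextAPI-Application-Key': 'YOUR_APP_KEY'
      }
    }).done(function (result) {
      // The summary comes back as an array of key sentences.
      console.log(result.sentences.join(' '));
    });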

Tauberer On Creating A Good API

Let’s take the common case where you have a relatively static, large dataset that you want to provide read-only access to. Here are 19 common attributes of good APIs for this situation. Thanks to Alan deLevie, Ben Balter, Eric Mill, Ed Summers, Joe Wicentowski, and Dave Caraway for some of these ideas.

via What makes a good API? | Joshua Tauberer’s Blog.

This lengthy article provides an interesting set of points that anyone creating an API for a data set or service should at least consider. I think it’s worth listing the points here, but be sure to go read the article to get all of the details. Then keep these things in mind when you are creating your own API.

  • Granular access
  • Deep filtering
  • Typed values
  • Normalize tables, then denormalize
  • Be RESTful, and more
  • Multiple output formats
  • Nice errors
  • Turn intents into URLs
  • Documentation
  • Client libraries
  • Versioning
  • High performance
  • High availability
  • Know your users
  • Know your committed users more
  • Never require registration
  • Interactive documentation
  • Developer community
  • Create virtuous cycles

I think it is also interesting to consider these points when you are developing applications that consume data or services through an API. If the API you are using is deficient on any of these points, consider contacting the API’s developer to see about making the API better.
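
To make a couple of the points concrete, here’s a rough sketch of what “nice errors” and “multiple output formats” might look like in an Express route handler. Express is my choice for illustration, not something the article prescribes, and the records data and helpers are hypothetical:

    var express = require('express');
    var app = express();

    // Hypothetical in-memory data and helpers, just for the sketch.
    var records = { '1': { id: '1', name: 'example' } };
    function findRecord(id) { return records[id]; }
    function toCsv(r) { return 'id,name\n' + r.id + ',' + r.name + '\n'; }

    app.get('/api/v1/records/:id', function (req, res) {
      var record = findRecord(req.params.id);
      if (!record) {
        // A nice error: the right status code plus a machine-readable
        // error code and a human-readable message.
        return res.status(404).json({
          error: 'not_found',
          message: 'No record with id ' + req.params.id
        });
      }
      // Multiple output formats from the same URL, chosen by the
      // request's Accept header.
      res.format({
        'application/json': function () { res.json(record); },
        'text/csv': function () { res.send(toCsv(record)); }
      });
    });

    app.listen(3000);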

How Twitter Runs And Runs And Runs

Everybody has this idea that Twitter is easy. With a little architectural hand waving we have a scalable Twitter, just that simple. Well, it’s not that simple as Raffi Krikorian, VP of Engineering at Twitter, describes in his superb and very detailed presentation on Timelines at Scale.
If you want to know how Twitter works – then start here. It happened gradually so you may have missed it, but Twitter has grown up. It started as a struggling three-tierish Ruby on Rails website to become a beautifully service-driven core that we actually go to now to see if other services are down. Quite a change.

via High Scalability – The Architecture Twitter Uses to Deal with 150M Active Users, 300K QPS, a 22 MB/S Firehose, and Send Tweets in Under 5 Seconds.

Read the article for a good summary of how Twitter runs. As noted, it isn’t all that easy anymore. The entire 38-minute talk is worth listening to, especially for anyone with an interest in designing the next generation of web apps.

One of the key points in the talk is that Twitter isn’t really a web site; it’s an API with a web application built on top. The work is in getting that API to run as fast and as effectively as possible. The tech used to accomplish this is interesting because it isn’t just a bunch of database tables, and it is a preview of the future of the interactive web.
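
The technique at the heart of the talk, as I understand it, is write-time fan-out: rather than computing a timeline when someone asks for it, Twitter pushes each new tweet into a precomputed in-memory timeline for every follower at write time. Here’s a toy JavaScript sketch of the idea; the real system uses Redis clusters and a pile of custom infrastructure, so treat this as an illustration of the concept only:

    // Toy sketch of write-time fan-out. Reads become cheap list lookups
    // because the work is done when the tweet is written.
    var timelines = {}; // userId -> array of tweet ids (Redis in real life)
    var followers = {}; // userId -> array of follower ids

    function postTweet(authorId, tweetId) {
      // Push the new tweet onto every follower's precomputed timeline.
      (followers[authorId] || []).forEach(function (followerId) {
        timelines[followerId] = timelines[followerId] || [];
        timelines[followerId].unshift(tweetId);
      });
    }

    function homeTimeline(userId) {
      // Reading a timeline is now just returning the precomputed list.
      return timelines[userId] || [];
    }

    followers['alice'] = ['bob', 'carol'];
    postTweet('alice', 't1');
    console.log(homeTimeline('bob')); // ['t1']

The trade-off, of course, is that one tweet from a user with millions of followers turns into millions of list writes, which is exactly the kind of problem the talk digs into.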

The HuffingtonPost Provides Open Source API For Public Polls

The initial release is big. It includes more than 215,000 responses to questions from more than 13,000 polls, which the HuffPost Pollster team has organized by subject and geography into more than 200 charts. Per their announcement, “the data feeds operate in real time, so shortly after we add a new poll to our database, it’ll appear in the HuffPost Pollster API’s responses and calculations.”

Adding to the coolness is that the effort relies heavily on open source tools. The HuffPost Pollster team is publishing the data as an HTTP-based application programming interface, or API, with JSON and XML responses. They are releasing the data under a Creative Commons license.

via The HuffingtonPost releases Pollster, open source API for public polls | opensource.com.

This lets developers get access to a large body of polling data from over 13,000 polls. The API provides JSON and XML responses to queries sent over HTTP, allowing developers to parse and display the information in their applications. This represents a major open data resource in the political field.
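
Here’s a quick sketch of what querying it might look like with jQuery. The endpoint path, the topic parameter, and the fields on the response are assumptions from my reading of the Pollster docs, so verify them against the official documentation before building on them:

    // Pull the list of charts as JSON and log a quick summary of each.
    // Endpoint, parameter, and response fields are assumptions -- check
    // the Pollster API docs.
    $.getJSON(
      'http://elections.huffingtonpost.com/pollster/api/charts.json',
      { topic: 'obama-job-approval' }, // hypothetical topic filter
      function (charts) {
        charts.forEach(function (chart) {
          console.log(chart.title + ' (' + chart.poll_count + ' polls)');
        });
      }
    );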