Shared posts

16 Jul 14:41

Alerting on twitter feeds - now that RSS output is dead - IFTTT & Google script, Zapier & Mention

by Aaron Tay
On July 1, 2013, Google Reader was retired. This was high-profile news that was covered heavily online.

This wasn't the only blow to RSS usage. A lesser blow was struck when Twitter announced it was permanently retiring Twitter API v1.0, which allowed Atom and RSS feed output. The current Twitter API 1.1 only returns JSON and requires authentication to access. This took effect on June 12.

For most people, this did not make a difference. But for me it was a blow, because I was pairing RSS output from Twitter feeds with IFTTT to filter and alert me only when a certain keyword appeared in the feed.




To backtrack a little, I've written many times on this blog about techniques for proactively scanning for tweets about your library.

There are three ways to figure out whether a tweet is about your library even though the person tweeting does not @mention your library.

1. If the tweet contains the keyword (e.g. NUS Library)

2. If the tweet contains a keyword (e.g. Library) and is within, say, 1 km of your library

3. If the tweet contains a keyword (e.g. Library) and is from people you can identify as your users (a rough sketch of all three checks follows below)
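Purely as an illustration of those three checks (and not part of any of the tools discussed below), here is a minimal Python sketch. The field names follow Twitter's JSON output (text, user, coordinates), but the helper itself, the example coordinates and the list of known users are all assumptions made up for illustration.

import math

# Hypothetical helper: decide whether a tweet is "about my library" using the
# three checks above. Assumes a tweet dict shaped like Twitter's JSON output.
LIBRARY_LAT, LIBRARY_LON = 1.2966, 103.7764     # example coordinates (assumed)
KNOWN_USERS = {"student_a", "student_b"}        # accounts you know belong to your users

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_about_my_library(tweet):
    text = tweet["text"].lower()
    if "nus library" in text:                                   # check 1: specific keyword
        return True
    if "library" in text:
        coords = tweet.get("coordinates")                       # GeoJSON: [longitude, latitude]
        if coords:
            lon, lat = coords["coordinates"]
            if distance_km(lat, lon, LIBRARY_LAT, LIBRARY_LON) <= 1.0:
                return True                                     # check 2: keyword + within 1 km
        if tweet["user"]["screen_name"].lower() in KNOWN_USERS:
            return True                                         # check 3: keyword + known user
    return False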

In general, my current technique involves pulling the results out of Twitter as RSS and feeding them into IFTTT, which then alerts you when a matching item appears.

In fact, before the retirement of Twitter API v1.0, IFTTT could even pull Twitter search results directly without RSS, but this is no longer an option: Twitter can no longer be used as a trigger in IFTTT, only as an action.

You might wonder why I use IFTTT when TweetDeck is capable of tracking such items, including location alerts.

IFTTT is pretty handy because 

1. It can filter and alert on a specific keyword only - e.g. you could put in an RSS feed of a group of Twitter users who are presumably your users, but get alerts only if the word "library" is mentioned.

2. It provides a host of alert options, from email to SMS and more.

But now that Twitter does not provide results in RSS, what can be done?

1. Use a third-party service to provide Twitter output in RSS

Digital Inspiration has posted a tip on how to use Google Script to set up RSS feeds from Twitter.

"What we really need is some sort of a parsing program sitting between Twitter and our RSS Reader. The parser would fetch updates from Twitter at regular intervals and convert them from JSON to RSS which we can then subscribe in our favorite RSS Reader."

Follow the instructions, then convert the existing RSS feed URL to the new URL.

If you have no idea how to grab Twitter results in RSS in the first place, see this post for the syntax.

Using that, you just need to replace the portion before the q=xxxx with the new URL from the Google Script. 

For example, if you were previously using

http://search.twitter.com/search.rss?q=nuslib%20OR%20nuslibrary%20OR%20nuslibraries

You just need to replace the base portion (the part before q=) with the new Google Script URL given to you via email once you have set it up, e.g.

https://script.google.com/macros/s//exec?action=search&q=nuslib%20OR%20nuslibrary%20OR%20nuslibraries
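In other words, only the base of the URL changes; the q= query string is carried over unchanged. A tiny illustrative snippet (the script id below is a placeholder, since each person gets their own in the setup email):

from urllib.parse import quote

query = "nuslib OR nuslibrary OR nuslibraries"
old_feed = "http://search.twitter.com/search.rss?q=" + quote(query)       # retired v1.0 endpoint
new_feed = ("https://script.google.com/macros/s/YOUR_SCRIPT_ID/exec"      # placeholder script id
            "?action=search&q=" + quote(query))
print(new_feed)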


Below is one IFTTT Recipe I set up that will SMS me if certain keywords are tweeted.




This works like a charm, but for some reason I couldn't get it to work with location alerts, though I wonder if it is due to the length of the RSS string.

IFTTT itself polls every 15 minutes, and the Google Script only pulls from Twitter periodically, so the delays add up; if you want close to real-time alerts, this isn't ideal.


2. Use Zapier - an IFTTT alternative

Gary Green, who is a bit of an IFTTT expert, blogged about IFTTT alternatives, and Zapier was among those mentioned.

It's very similar to IFTTT, but it seems to be a lot more powerful, particularly for Twitter, and a bit more customizable.

The first few steps are similar to IFTTT: you select a trigger and an action. Here I select Twitter as the trigger and emailing myself via Gmail as the action.



Select the specific accounts



The Twitter options look promising because, unlike IFTTT, you can pull Twitter search results in directly without RSS.


The basic options allow location searches, so you could pick a latitude and longitude and a radius around that point within which tweets will trigger alerts.

But it gets really interesting if you click on Add custom filters



I soon realised the basic options just scratch the surface: Zapier is apparently capable of filtering on pretty much every piece of metadata available from a tweet, and it is a very long and complicated list.

So, for example, you could set up alerts on favourited tweets, on whether they are retweeted and how many times, and much more.

The downside is that you pretty much need to be an expert on the Twitter API to know what each field means.
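To give a sense of what those filter fields map back to, below is a trimmed, invented example of the metadata a single tweet carries in API v1.1; Zapier's custom filters are essentially conditions on fields like these.

# Trimmed, invented example of one tweet's metadata in Twitter API v1.1.
tweet = {
    "text": "Great study spots at the library today",
    "retweet_count": 3,
    "favorited": False,
    "created_at": "Tue Jul 16 02:10:00 +0000 2013",
    "user": {
        "screen_name": "student_a",
        "followers_count": 152,
        "friends_count": 310,
        "location": "Singapore",
        "created_at": "Sat Jan 09 11:00:00 +0000 2010",   # the "User Created At" field
    },
    "coordinates": {"type": "Point", "coordinates": [103.7764, 1.2966]},   # [longitude, latitude]
}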

Setting up the action can also be quite customized.



Unlike IFTTT, which has defaults, Zapier expects you to set your own; some are pretty obvious, but others are not.


The text in orange is actually dynamic, based on the metadata from tweets. It takes some experimenting to know what you want, but the "live preview" helps by showing what the data will look like. Below, for example, is what the "User Created At" field will show.



I found this part of Zapier a bit buggy. Occasionally it won't show any data (because there is no real tweet to draw on), which is normal, but sometimes fields I know are available for selection just won't appear; if I reload the pull-down several times, the option eventually shows up.

But in general it works. Here's an example.




Compared to the RSS-to-IFTTT method, Zapier sends out the alert faster, because there is only a 15-minute delay on the free version; while IFTTT boasts the same 15-minute polling time, there is an additional delay from the Google Script that pulls in the Twitter results.

The free version of Zapier is also limited: you can set up only 5 such tasks and receive a maximum of 100 alerts per month, while IFTTT has no such limits.


3. Use Mention

For something different, try the Mention service. It covers not just Twitter but also blogs, Facebook and pretty much everything else. It does not allow location-based alerts, and is limited to 250 alerts per month for free. I use the iOS app mainly, but there are also desktop and Chrome versions.





Conclusion

This is a somewhat geeky post, though I have been using such techniques since 2010 and have found them invaluable in keeping on top of news I am interested in. 
11 Apr 17:37

This Must be the Band

by Dana
David Byrne is coming to Ann Arbor in July, to perform with St. Vincent. However, for Talking Heads devotees, that show might seem too far off, too costly or too tangential to Byrne's New Wave roots. Tonight, Live Music Alliance is presenting a Talking Heads Tribute. Come down to the Blind Pig at nine to speak in tongues. You can buy tickets here, for ten dollars. And to get you into the mood, The Talking Heads:
08 Apr 20:03

Twitter throws a bone: Increased hits and metadata in Twitter Search API 1.1

by Martin Hawksey

Twitter has recently frustrated a number of developers and mashup artists by moving to tighter restrictions on its latest API. Top of the list for many are the requirement that all Twitter Search API requests be authenticated (you can’t just grab and run; a request has to be made via a Twitter account), the removal of XML/Atom feeds and reduced rate limits. There are some gains which don’t appear to be widely written about, so I’ll share them here.

#1 Get the last 18,000 tweets instead of 1,500

Reading over the discussion/notes for the latest release of NodeXL, I spotted that

you now specify how many tweets you want to get from Twitter, up to a maximum of 18,000 tweets

Previously, in the old API, the hard limit was 1,500 tweets from the last 7 days. This meant that if you requested a very popular search term you’d only get the last 1,500 tweets, making any tweets made earlier in the day inaccessible. In the new API there is still the ‘last 7 days’ limit, but you can page back a lot further. Because the API limits you to 100 tweets per call and 180 calls per hour, this means you could potentially get 18,000 tweets in one hit. If you cache the maximum tweet id and wait an hour for the rate limit to refresh, you could theoretically get even more (I’ve removed the 1.5k limit in TAGSv5.0, but haven’t fully tested how much of the 18k you can get before being hit by script timeouts).
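As a rough sketch of that paging pattern (this is not TAGS itself, which is a Google Apps Script): keep requesting pages of up to 100 results, each time passing max_id set to one less than the lowest tweet id seen so far, until the results or the rate limit run out. The OAuth credentials below are placeholders.

import requests
from requests_oauthlib import OAuth1            # pip install requests requests-oauthlib

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")
SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"

def fetch_search(query, max_calls=180):
    """Page backwards through v1.1 search results, 100 tweets per call."""
    tweets, max_id = [], None
    for _ in range(max_calls):
        params = {"q": query, "count": 100}
        if max_id is not None:
            params["max_id"] = max_id
        resp = requests.get(SEARCH_URL, params=params, auth=auth)
        if resp.status_code == 429:             # rate limited -- stop here (or sleep and resume later)
            break
        batch = resp.json().get("statuses", [])
        if not batch:
            break                               # reached the end of the ~7-day search window
        tweets.extend(batch)
        max_id = min(t["id"] for t in batch) - 1    # cache the lowest id and page further back
    return tweets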

#2 Increased metadata with a tweet

Below is an illustration of the data returned in a single search result comparing the old and new search API.

Old and new Search API responses

If you look at the old data and the new data, the main addition is a lot more profile data. A lot of this isn’t of huge interest (unless you wanted to do a colour analysis of profile colours), but there is some useful stuff. For example, here I have profile information for both the original tweeter and the retweeter, as well as friend/follower counts, location and more (I’ve already shown how you can combine this data with Google Analytics for comparative analysis).
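As a small illustration, assuming a result shaped like the v1.1 JSON, this is the sort of extra profile data you can now pull straight out of a single search result (the retweeted_status block is only present when the result is a retweet):

def profile_summary(tweet):
    """Collect the extra profile fields the new search API bundles with each result."""
    user = tweet["user"]
    summary = {
        "tweeter": user["screen_name"],
        "followers": user["followers_count"],
        "friends": user["friends_count"],
        "location": user.get("location"),
    }
    original = tweet.get("retweeted_status")        # only present for retweets
    if original:
        summary["original_tweeter"] = original["user"]["screen_name"]
        summary["original_followers"] = original["user"]["followers_count"]
    return summary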

Whilst I’m sure this won’t appease the hardcore Twitter devs/3rd parties, for hackademics like myself grabbing extra tweets and richer data has its benefits.

08 Apr 18:19

Announcing the NYPL Digital Collections API

by Doug Reside, Digital Curator of Performing Arts, Library for the Performing Arts
Bethanycron

This.

The New York Public Library is pleased to announce the release of its Digital Collections API (application programming interface). This tool allows software developers both in and outside of the library to write programs that search our digital collections, process the descriptions of each object, and find links to the relevant pages on the NYPL Digital Gallery. We are very excited to see what the brilliant developers who use our digital library will create. In the following post, Digital Curator for the Performing Arts, Doug Reside, reflects on the importance of APIs in our age of digital information.


It is now April, when, as both Chaucer and T.S. Eliot observed, small roots shoot up from the ground signaling new beginnings. Twenty years ago this month, the European Organization for Nuclear Research (known commonly by its French-based acronym, CERN) decided to make the technology that powers the World Wide Web free for anyone to use — a move towards openness that led, in just two decades, to an explosion of innovation and unprecedented access to information.

Ten years later, in April of 2003, a government-sponsored project to map the entirety of the human genome was declared complete. This monumental accomplishment was the result of a worldwide collaboration among researchers who contributed their data to a common pool. Although some private companies attempted to patent their own contributions, President Bill Clinton declared in 2000 that the project would "continue its longstanding practice of making all of its sequencing data available to public and privately funded researchers worldwide at no cost." The potential, yet unimagined, uses for this data were felt to be too important to be stalled by limiting innovation to a few companies. The small green shoots of innovations in medicine and biotechnology are even now beginning to emerge from the seeds of this decision.

In theory, the free and open standards of the web should allow data sources like the human genome project to be easily combined with others and enable new discoveries, but in the early days of the Internet many important data sources remained isolated from each other. What T.S. Eliot wrote in The Wasteland nearly a century before is an apt description of the situation:

What are the roots that clutch, what branches grow Out of this stony rubbish? Son of man, You cannot say, or guess, for you know only A heap of broken images[...]

To make sense of these scattered pockets of data, some programmers designed APIs (application programming interfaces) to make their information more usable. An API is a set of commands that computer programmers expose to the world to allow other programmers to perform an action on their systems (often to retrieve data). Programmers use APIs to take scattered data sets (a heap of broken images?) and combine them together to create new knowledge. For instance, if you've ever seen a webpage that mapped events (such as job openings or real estate) on a Google Map, the programmers probably used the Google Maps API.

Today, in the spirit of the seedlings of openness that sprouted in past Aprils, I am very pleased to announce the first release of the New York Public Library Digital Collections API. This API, built by developers in our IT Group, allows computers to search our digital library and get back information about the objects along with links to the relevant Digital Gallery page. Of course, as a human, you can already do that using the Digital Gallery itself, but you can only perform one search at a time. If you wanted to make a chart of, say, the most commonly occurring words in the titles of the Mid-Manhattan Picture Collection, it would take a while. Now that the API makes this data available to computer programs, though, it wouldn't take a great deal of coding to generate such a chart (I'll leave that as a challenge to you hackers out there... post your solutions in the comments).
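The announcement doesn't spell out the API's endpoints, so the URL, auth header and response field names in the sketch below are placeholders rather than the real NYPL API; it is only meant to show how little code the suggested title-word chart needs once search results come back as structured data.

import collections
import re
import requests

API_SEARCH = "https://api.example.org/nypl/search"       # placeholder -- see the real API docs
HEADERS = {"Authorization": "Token YOUR_API_TOKEN"}       # placeholder auth header

def top_title_words(query, n=20):
    """Count the most common words in the titles of matching digital objects."""
    resp = requests.get(API_SEARCH, params={"q": query}, headers=HEADERS)
    resp.raise_for_status()
    counts = collections.Counter()
    for item in resp.json().get("results", []):           # placeholder field name
        counts.update(re.findall(r"[a-z']+", item.get("title", "").lower()))
    return counts.most_common(n)

print(top_title_words("Mid-Manhattan Picture Collection"))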

26 Mar 13:28

Reporting from the Bunker of Dark Elven Magic

by mariekeguy

I’m guessing that Marissa Mayer, CEO of Yahoo, had some idea of the wrath that she would incur by last week’s controversial decision to insist that remote workers move back in to the office. Headlines like Back To the Stone Age? New Yahoo CEO Marissa Mayer Bans Working From Home and Mommy Bloggers Are Tearing Apart Marissa Mayer might mean she now requires a cup of Horlicks before bed time – but then she was brought in to make tough decisions.

Apparently a lot of the remote workers Yahoo had on their books were no longer productive and, as Business Insider explains, “a lot of people hid. There were all these employees [working remotely] and nobody knew they were still at Yahoo.” It sounds like Yahoo were doing a lousy job of managing their workforce, both in the office and out. They clearly need to scale down and make people redundant, so this “is a layoff that’s not a layoff”.

I hope ultimately the Yahoo situation won’t put people off trying to become remote workers, or deter employers from employing remote workers. As I’ve said many times on this blog (and elsewhere), it doesn’t work for everyone. The decision needs to be one made by employer and employee together. Some people just can’t get themselves motivated without a little encouragement (i.e. they need to be sitting in an open-plan office where management can keep a beady eye on them). Also, remote workers need real support to fulfill their potential; they need to be kept in the communication loop, and that requires effort. Remote working wasn’t working at Yahoo because people messed up, not because remote working doesn’t work.

Out of all the posts and articles I’ve read about the Yahoo situation, Tim Sniffen’s An Open Letter to Yahoo! CEO, Marissa Mayer is by far my favourite. Tim, who claims to have been a Yahoo junior server administrator for eleven years, explains to Ms. Mayer that “you do not want me in your office.”

The reasons why are clear…

[Screenshot: excerpt from the letter]

Tim ends with some PSs…

[Screenshot: the letter's postscripts]

He sounds like the sort of employee I really wouldn’t want to lose if I was Marissa Mayer. Humour may be the best tool they have in fighting their current financial crisis.


Filed under: work/life Tagged: yahoo