Shared posts

16 Oct 19:27

Release Notes for Safari Technology Preview 94

Safari Technology Preview Release 94 is now available for download for macOS Mojave and macOS Catalina. If you already have Safari Technology Preview installed, you can update in the Software Update pane of System Preferences on macOS.

This release covers WebKit revisions 250329-250947.

CSS Shadow Parts

Web Animations

  • Fixed removing an element to only cancel its declarative animations (r250335)

Storage Access API

  • Changed document.hasStorageAccess() to return true when the cookie policy allows access and false otherwise, for third parties not blocked by ITP (r250431, r250589)

WebRTC

  • Changed to allow suspending RTCPeerConnection when not connected (r250726)

Media

  • Updated MediaDevices to require a secure context (r250551)

JavaScript

  • Changed toExponential, toFixed, and toPrecision to allow arguments up to 100 (r250389)

CSS Grid

  • Preserved auto repeat() in getComputedStyle() for non-grids (r250715)

Web API

  • Accepted two values in the overflow shorthand (r250849)
  • Allowed using WebGL 2 when USE_ANGLE=1 (r250740)
  • Changed the default statusText of Response to an empty string (r250787)
  • Changed CSS ellipse() to accept 0 or 2 <shape-radius> (r250653)
  • Changed Service Worker Fetch events to time out (r250852)
  • Corrected clip-path <geometry-box> mapping (r250778)
  • Changed Fetch API no-CORS check to take into account same-origin (r250515)
  • Changed radio button groups to be scoped by shadow boundaries (r250708)
  • Fixed a newly inserted element to get assigned to a named slot if slot assignments had already happened (r250709)
  • Fixed AbortSignal to always emit the abort signal (r250727)
  • Fixed JSON.parse to correctly handle array proxies (r250860)
  • Made table’s clientWidth and clientHeight include its border sizes (r250553)
  • Updated attachShadow to support attaching a shadow root to a main element (r250770)
  • Updated Fetch data URL HEAD request to result in empty response body (r250822)
  • Updated radial gradients to reject negative radii (r250730)
  • Updated ImageBitmap to be serializable (r250721)

Web Inspector

  • Elements
    • Fixed issue where properties were always shown as invalid if they didn’t match the selected node (r250633)
  • Resources
    • Fixed issue where newlines were being unexpectedly added inside template string expressions (r250544)
    • Include local resource overrides in the Open Resource dialog (r250407)
  • Debugger
    • Prevent blackboxing of scripts that haven’t finished loading or failed to load (r250813)
  • Canvas
    • Made it more obvious that the cards in the Overview are clickable (r250859)
    • Show “No Preview Available” instead of an empty preview for WebGPU devices (r250858)
    • Support editing of WebGPU render pipelines that use the same shader module for vertex and fragment (r250874)
    • Fixed issue where clicking on the Overview path component didn’t work (r250855)
    • Dark Mode: Minor dark mode style fixes (r250533, r250854)
  • Settings
    • Enable the image transparency grid by default and create a checkbox for it (r250814)

WebDriver

  • Fixed an issue that prevented sudo safaridriver --enable from working correctly

Back-forward Cache

  • Allowed pages served over HTTPS with Cache-Control: no-store header to enter the back-forward cache (r250437)
  • Allowed pages using EventSource to enter the back-forward cache (r250761)
  • Allowed pages using FontFaceSet to enter the back-forward cache (r250693)
  • Allowed pages using IDBIndex to enter the back-forward cache (r250754)
  • Added basic back-forward cache support for RTCPeerConnection (r250379)
  • Changed IDBTransaction and IDBObjectStore to not prevent a page from entering the back-forward cache (r250531)
  • Fixed pages that frequently fail to enter the back-forward cache due to pending loads (r250414)
  • Fixed pages using WebGLRenderingContext to enter the back-forward cache (r250464)
  • Fixed pages with Web Workers to enter the back-forward cache (r250527)
  • Fixed pages using PendingImageBitmap to enter the back-forward cache (r250782)
  • Fixed ServiceWorkerContainer to never prevent a page from entering the back-forward cache (r250758)
  • Fixed XMLHttpRequest sometimes preventing pages from entering the back-forward cache (r250678)
  • Fixed IDBRequest to not prevent a page from entering the back-forward cache (r250425)
  • Fixed provisional and scheduled loads in subframes to not prevent a page from entering the back-forward cache (r250686)
  • Fixed RTCDataChannel to not prevent entering back-forward cache except if in an open state (r250573)
  • Made fixes to allow youtube.com to enter the back-forward cache on macOS (r250935)
  • Improved Service Worker support for back-forward cache (r250378)

IndexedDB

  • Added size estimate for key path when estimating task size (r250666)
  • Fixed wrapping CryptoKeys for IndexedDB during serialization (r250811)
  • Included size of index records in size estimate of put/add task (r250936)
  • Updated size to actual disk usage only when estimated increase is bigger than the space available (r250937)

21 Mar 20:37

★ The New iPad Mini

by John Gruber

I’ve been a fan of the iPad Mini form factor ever since the first one. The only thing I didn’t like about the original Mini was its non-retina display. (The iPad 3 went retina earlier in 2012, and the original iPad Mini debuted alongside the iPad 4 in October 2012.) The conclusion of my review then:

If the Mini had a retina display, I’d switch from the iPad 3 in a heartbeat. As it stands, I’m going to switch anyway. Going non-retina is a particularly bitter pill for me, but I like the iPad Mini’s size and weight so much that I’m going to swallow it.

That original Mini didn’t have a retina display because that model served two purposes: it was smaller and it was the cheapest (or, in Apple’s parlance, “most affordable”) iPad in the lineup. The original iPad Mini also saved on cost by including a then-year-old A5 chip; the iPad 4 had the then-brand-new A6X chip.

This week’s new 5th generation iPad Mini doesn’t make any technical compromises. It has the same A12 CPU as the iPhone XR and XS (with 3 GB of RAM on the system-on-a-chip, like the XR, not 4 GB, like the XS models — but those XS models need extra RAM for their 3× retina displays). The new Mini supports Apple Pencil, has a laminated display (which puts the pixels closer to the surface of the glass), and very thankfully supports True Tone.

The new Mini is exactly like the new iPad Air, just smaller — and the new iPad Air is in almost every way the replacement for the 10.5-inch iPad Pro, which until now was still hanging around in the iPad lineup. As I wrote Monday, Apple “could have just called them both ‘iPad Air’ and had one be mini-sized and one regular-sized, similar to how the two sizes of iPad Pro have the same product name”. It’s my understanding that this naming scheme was actually considered, and ultimately rejected simply because everyone would call the 7.9-inch model the “Mini” anyway.

I’ve been testing the new iPad Mini since Monday afternoon and I am deeply enamored. Is it as good as today’s iPad Pros? No — see below. But it costs so much less than an iPad Pro. I think of the iPad Pros as the iPad Nexts, and these new iPad Air and iPad Mini models as the iPad Nows. A 64 GB 11-inch iPad Pro costs $800, the 64 GB new 10.5-inch Air costs $500, and the Mini is just $400. You even save on cellular models compared to the Pro — it costs $150 to add cellular to an iPad Pro, but only $130 to an iPad Air or Mini.

Technology-wise, the iPad Mini is missing obvious things that make the iPad Pros so much more expensive: no edge-to-edge display, no inductive (and magnetic) charging port for the superior Apple Pencil 2, no Face ID, no tap-to-wake. I own and use an 11-inch iPad Pro, and it’s been a bit hard to adjust to losing those features. But people who already own a new 2018 iPad Pro aren’t in the market for a new iPad. Again, it’s iPad Now vs. iPad Next — it just so happens that I’m already used to iPad Next.

Basically, it really comes down to the most obvious attribute: size. The iPad Mini hits a sweet spot: it’s way bigger than any phone and way smaller than any laptop. It’s the physical manifestation of what Steve Jobs in 2010 said the iPad set out to be: something between a phone and a laptop. He was speaking conceptually but the iPad Mini takes it literally. If you want to use your iPad as a laptop replacement, the iPad Mini is probably too small, and it definitely doesn’t fit as well with physical keyboards. There’s a reason why Apple doesn’t make a Smart Keyboard Cover for the Mini. The iPad Mini is meant to be in your hand. But if you use your iPad as something in addition to your laptop, it’s a marvelous size, and no competitor has a tablet even close in terms of performance or quality. And the addition of Apple Pencil support works perfectly with its hand-held size.

A lot of the complaints we in the commentariat have lodged against iOS as a tablet OS are washed away when using an iPad Mini. You can split-screen multitask etc., but who cares if it’s a kludge? With a 7.9-inch display you’re almost always going to be using one app at a time, and that feels right on this device. Really, in a lot of ways, the iPad Mini feels like the one true iPad, and the others are all just blown-up siblings that don’t quite know how to take advantage of their larger displays.

Look, I really like my 11-inch iPad Pro and I’m not going to replace it with a new iPad Mini. But damn, it’s a surprisingly close call, simply because I like this size so much.

Here are some cons. The old Pencil 1 feels greasy in hand because it’s glossy, not matte, and the silly caps and charging story are so inferior. Also, the Pencil 1 rolls around annoyingly. ProMotion (which the Pro models have and the new Air and Mini don’t) is nice, but not essential.1 The 11-inch iPad Pro has way better speakers. Tap-to-wake combined with Face ID is so much better than Touch ID.

But here’s a really big pro in the iPad Mini’s column that I didn’t fully anticipate until diving in with it this week: it’s so much better for thumb-typing. Honestly, I hate typing on the on-screen keyboard on my iPad Pro. I hate it. I really do. If I have to do it I’ll put it in landscape and set it down on a table or counter and try to touch type with all my fingers. But holding the iPad Pro in portrait, I literally can’t type with my thumbs. When I try, everything comes out garbled. I can’t reach all the keys, and inexplicably, the iPad Pro keyboards no longer support splitting them into two smaller more reachable halves. I don’t understand that decision at all. Whereas thumb-typing on the iPad Mini is a joy. I type better with the on-screen keyboard on the iPad Mini than I do on any other iPad because it is perfectly sized for thumbs, and my thumbs have been trained by my iPhone usage. Why in the world does the small iPad Mini support split keyboards and the much bigger 11- and 12.9-inch iPad Pros don’t? I don’t even need the split keyboard to reach all the keys with my thumbs on the Mini, but the Mini supports a split on-screen keyboard and the iPad Pros don’t.

Once again, I’ll refer back to my review of the original iPad Mini from 2012:

Typing is interesting. In portrait, I actually find it easier to type on the Mini than a full-size iPad. All thumbs, with less distance to travel between keys, it feels more like typing on an iPhone. In landscape, though, typing is decidedly worse. The keyboard in landscape is only a tad wider than a full-size iPad keyboard in portrait. That’s too small to use all eight of my fingers, so I wind up using a four-finger hunt-and-peck style with my index and middle fingers.

This is even more pronounced now, at least between iPad Mini and iPad Pro (as opposed to iPad Mini and iPad Air) because iPad Pro — inexplicably, as I said — does not support split keyboards, even though they’re bigger devices. I honestly don’t know how anyone is supposed to type on an iPad Pro while holding it in their hands. It’s crazy.

Basically, the iPad Mini knows exactly what it is and the iPad Pros do not — the iPad Pros are lost between the iOS world of conceptual simplicity and the complex world of competing with desktop OSes.

The iPad Mini puts the “pad” in iPad. If you want a device that is bigger than a phone, but smaller and more holdable than a laptop-screen-sized thing for reading and just walking around with, the iPad Mini is it. It’s in no way a laptop replacement and doesn’t aspire to be. It just is what it is, and what it is is great.


  1. ProMotion is Apple’s technology that adaptively updates the display at 120 Hz instead of 60 Hz. The old 10.5-inch iPad Pro had it, the new iPad Air and Mini don’t. But even the iPhone XS and XS Max don’t have ProMotion. ↩︎

29 Jan 04:41

A faster, more stable Chrome on iOS

by Chrome Blog
Out-of-process rendering was one of Chrome’s earliest innovations, and we’ve always wanted to bring its benefits to our iOS users. Unfortunately UIWebView, the component used to render web pages on iOS, is in-process, so that’s never been possible before. The introduction of WKWebView in iOS 8 gave us that opportunity, though migrating to the new framework brought significant challenges. In Chrome 48 we’ve made the switch from UIWebView to WKWebView, leading to dramatic improvements in stability, speed, responsiveness, and web compatibility.

The biggest change is in stability: with WKWebView’s out-of-process rendering, when the web view crashes or runs out of memory, it won’t bring down all of Chrome with it. As a result, Chrome crashes 70% less with WKWebView. Even when counting the “Aw, Snap!” page shown when the renderer crashes, there’s still a big improvement.

Outside of stability, WKWebView brings many other benefits. Web compatibility is improved with support for features like IndexedDB, bringing the HTML5test score for Chrome on iOS from 391 up to 409. Switching to background tabs will cause pages to reload 25% less often. JavaScript speed on benchmarks such as Octane is an order of magnitude faster, and scrolling is smoother and more responsive.

The Chrome team is committed to improving stability and performance. We hope you enjoy these changes, and we are working hard to further improve your browsing experience on iOS.

Posted by Stuart Morgan, Software Engineer and Migratory WebView Watcher

10 Oct 20:48

Ada Lovelace Day is coming up!

by Sydney Padua


Hello Folks! Popping my head above the parapet to remind you that it’s Ada Lovelace Day on Tuesday! It’s a big festival of blogging, tweeting, think-piecing, and talking about inspirational women in science, technology, engineering, and mathematics, and you can see all the events here. I will be at Ada Lovelace Day Live in person on Tuesday at Conway Hall in London, signing books.

I am on TELEVISION!! and online– in the UK only alas– you have 7 days left to gaze upon my visage in the excellent BBC documentary Calculating Ada.

Some upcoming events in London: I’ll be giving talks and signing books at the Science Museum Late on October 28th, should be an epic night (with drinks!). I’m also talking at CodeMesh about the very very alternative programming technology of the Analytical Engine.

As the inciting incident of this malarkey, it’s always a big day around this blog, in preparation for which I have animated this fine gif for your use and enjoyment. Here you go, in a variety of sizes:

[ada200th gif in a variety of sizes: ada200th, ada200thsmall, ada200thicon, ada200thiconbig]

And finally, on a personal note– having spent the last year hacking my way through impenetrable jungles battling leopards, snakes, and bears at a vengeance of a pace, I’m trying something new this year: I’m taking up the Academic mantle and am now Senior Lecturer in Animation at Bucks University. As I am meant to spend half my time lecturing and half of it THINKING GREAT THOUGHTS (not generally a priority in Visual Effects) this should allow me time to return to drawing comics AT LONG LAST.

Sydney

12 Dec 21:59

There will be no Jim & jakten på Guldankan today. I felt like...



There will be no Jim & jakten på Guldankan today. I felt like posting something completely different. Why? I don’t know. This is my blog. I do as I please.

10 Nov 20:48

Chapter 13, Part 12: Drop it

by Eric

ONE:
SABRE is rising from the couch, PISTOL still leveled at PAYNE.

PAYNE’S nice-guy act is evaporating, the smile gone.

1. SABRE: Drop it, sir, I shall not warn you again.

TWO:
PAYNE drops his HAT and PISTOL.

2. PAYNE: I have four men at my back, Lady Sabre.
3. PAYNE: You cannot take us all.

THREE:
SABRE smiles. Hans said very much the same thing not terribly long ago.

4. SABRE: Take you? I shouldn’t know what to feed you, let alone where to keep you all.
5. SABRE: As for your men…

FOUR:
Outside the cabin, CALLOW, BURLEY, BARCLAY, and BEAUFORT (geez… lot of ‘B’ names, guys. Next time we do a poll to name characters, mix it up some more, okay?) are all exchanging looks.

6. SABRE/inside: …it’s your life they dice with, Mister Payne.

17 Nov 09:33

Apple Retail Stores to Integrate iBeacon Systems to Assist with Sales and Services

by Richard Padilla
Earlier this year, during the keynote at its annual Worldwide Developers Conference, Apple mentioned iBeacon microlocation APIs as a new part of its SDK, which are designed to access location data through the Bluetooth Low Energy profile on iOS devices. Now, the company is preparing to integrate iBeacon systems into its retail locations, which will initially help with customer sales and later be implemented to assist with in-store services such as workshops and Genius Bar appointments, reports 9To5Mac.

According to the report, the iBeacons in Apple's stores will work in conjunction with a future update to the Apple Store app, which will be able to give detailed information on a product when a user walks near an item. While Apple currently uses interactive iPad displays for many of its first-party products in-store, the updated Apple Store app would help users get information about the many products on shelves.
Apple is said to have begun stocking up on iBeacon transmitters, and in the next few days the company will begin installing these sensors in many Apple Stores across the United States. The transmitters will be placed on the tables that house Apple products as well as on the store shelves holding accessories. The technology will serve as a way to improve the Apple shopping experience and, in turn, boost product sales.
Furthermore, Apple is also testing its new iBeacon-based retail system to better provide services in its stores, such as notifying customers about upcoming workshops, locating customers for Genius Bar appointments, and letting customers know when a repaired product is ready for pickup. This deeper integration of services will reportedly come after the initial rollout of the updated Apple Store app and iBeacons, and would allow for greater accuracy in locating customers within stores than the current app is capable of.

The company is also looking to integrate an indoor mapping feature into a future version of Maps for iOS, which would help users navigate through buildings and stores and could also be combined with iBeacon technology to provide richer information within an area. A report in September stated that Apple was working on tapping into the M7 motion coprocessor to add mapping enhancements in future software updates, and it is possible that indoor mapping and iBeacon technology could work in conjunction with the motion-sensing chip for better mapping information overall.

Earlier this year, Apple was said to be collaborating with Major League Baseball on using the iBeacon APIs to enhance its MLB.com At the Ballpark app, creating interactive experiences for fans at stadiums. 9To5Mac also notes that Apple is rumored to be testing a program that would allow iOS developers to easily integrate the iBeacon API into third-party apps. While iOS developers can already implement iBeacons in existing apps, Apple has yet to provide a straightforward development program for the API.


20 Sep 15:59

Logitech and ClamCase Teasing First Two 'MFi' Game Controllers

by Jordan Golson
Two controller makers are teasing MFi "Made for iPhone" game controllers following the public release of iOS 7. The new OS includes special APIs for third-party hardware game controllers, turning the iPhone and iPad into gaming systems on par with other handheld consoles.

ClamCase has published a trailer for its GameCase iPhone controller, which connects via Bluetooth, includes its own battery, and supports all iOS 7-compatible iOS devices.


At the same time, Logitech is teasing its new hardware controller on its Facebook page. The position of the hands strongly suggests it is the leaked iPhone enclosure controller that surfaced from Logitech back in June.

Games designed specifically for iOS 7 will be able to tie into Apple's Game Controller framework, allowing for seamless connectivity to authorized third-party MFi devices.


15 Jul 19:06

'Almost Bezel-Free' Redesign for iPad Mini Coming, but Still No Retina Until Early 2014?

by Eric Slivka
In the wake of continuing reports suggesting that Apple's Retina iPad mini may not be ready to launch until early next year, Digitimes has now added its thoughts on the matter, claiming that Apple will be releasing a slightly redesigned non-Retina iPad mini later this year before launching the Retina iPad mini early next year.

According to the report, the Retina iPad mini will see an "almost bezel-free" design, presumably referring to the sides of the device, which are already fairly narrow. The report is somewhat confusing about just what aspects of the redesign will appear when, also mentioning a lighter and thinner design for the new non-Retina model later this year.
Apple is reportedly aiming to use Retina panel technology equipped with 2,048 by 1,536 resolution in the next generation 7.9-inch iPad mini. Apple is also said to be revising the design of the chassis to give the next-generation iPad mini an almost bezel-free look.

While the new iPad mini may not be available during the year-end shopping season, Apple reportedly may first release a slightly updated version of the current iPad mini in the second half of 2013, which is expected to be lighter, thinner and equipped with improved specifications, the sources said.
Digitimes' report is very similar to claims from NPD DisplaySearch analysts, who have flip-flopped several times but now point to a thinner non-Retina iPad mini arriving later this year and a Retina iPad mini following in early 2014.

Today's report also reiterates claims that Apple's fifth-generation iPad is on its way with a thinner and lighter design inspired by the iPad mini. Apple's supply chain is reportedly beginning small-scale production on the new iPad this month, ramping up through October.


09 Apr 06:37

An HTTP reverse proxy for realtime

Pushpin makes it easy to create HTTP long-polling and streaming services using any web stack as the backend. It’s compatible with any framework, whether Django, Rails, ASP, or even PHP. Pushpin works as a reverse proxy, sitting in front of your server application and managing all of the open client connections.

[Diagram: pushpin-diagram2]

Communication between Pushpin and the backend server is done using conventional short-lived HTTP requests and responses. There is also a ZeroMQ interface for advanced users.

The approach is powerful for several reasons:

  • The application logic can be written in the most natural way, using existing web frameworks.
  • Scaling is easy and also natural. If your bottleneck is the number of recipients you can push realtime updates to, then add more Pushpin instances.
  • It’s highly versatile. You define the HTTP exchanges between the client and server. This makes it ideal for building APIs.

How it works

Like any reverse proxy, Pushpin relays HTTP requests and responses between clients and a backend server. Unless and until the backend invokes any of Pushpin’s special realtime features, this proxying is purely a pass-through. The magic happens when the backend server decides to respond to a request with special instructions. For example, if the backend server wants to long-poll a request, it can respond to the request with instructions saying that the connection should be held open and bound to a channel. Pushpin will act on these instructions rather than forwarding them down to the requesting client. Later on, when the backend wants to respond to a request being held open, it makes a publish call to Pushpin’s local REST API containing the HTTP response data to be delivered.

Below is a sequence diagram showing the network interactions:

[Sequence diagram: pushpin-diagram3]

As you can see, the backend web application can either respond to an HTTP request normally, or it can respond with holding instructions and send data down the connection at a later time. Either way, the backend never maintains long-lived connections on its own. Instead, it is Pushpin’s job to maintain long-lived connections to clients.

The interfacing protocol between Pushpin and the backend server is called “GRIP”. You can read more about GRIP here.
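
Since GRIP is just JSON over HTTP, a hold instruction is easy to picture. The sketch below shows roughly what the body of a backend’s hold response might contain, based on my reading of the GRIP spec; treat the exact field names as illustrative rather than authoritative.

import json

# Hypothetical sketch of a GRIP hold instruction. The backend returns this as
# the body of its HTTP response to Pushpin, along with the header
# Content-Type: application/grip-instruct.
hold_instruction = {
    "hold": {
        "mode": "response",                 # long-poll; "stream" keeps the connection open
        "channels": [{"name": "counter"}],  # release when something is published here
    },
    "response": {
        # What Pushpin sends the client if nothing is published before the hold times out
        "headers": {"Content-Type": "application/json"},
        "body": "{}\n",
    },
}

print(json.dumps(hold_instruction, indent=2))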

An example

Let’s say you want to build an “incrementing counter” service that supports live updates. You could design a REST API as follows:

  • Single integer counter exists at resource /counter/value/.
  • POST /counter/value/ to increment and return the counter value (the value after incrementing).
  • GET /counter/value/ to retrieve the current counter value. Optionally, pass parameter last=N to specify the last value known by the client. If the server recognizes this value as the current value, then long-poll until the value changes.

Before we discuss how to implement this API with Pushpin, let’s go over the counter API design in more detail so it’s clear what we are trying to accomplish.

The POST action is straightforward. It’s the GET action that’s more complex, because it needs to long-poll or not, depending on the state of things. Suppose the current counter value is 120. Below, different GET requests are shown with the expected server behavior.

Client requests counter value, without specifying last known value:

GET /counter/value/ HTTP/1.1

Server immediately responds:

HTTP/1.1 200 OK
Content-Type: application/json

120

Client requests counter value, specifying a last known value that is not the current value:

GET /counter/value/?last=119 HTTP/1.1

Server immediately responds:

HTTP/1.1 200 OK
Content-Type: application/json

120

Client requests counter value, specifying last known value that is the current value:

GET /counter/value/?last=120 HTTP/1.1

The server will now wait (long-poll) before responding. The server will either respond with the next value eventually:

HTTP/1.1 200 OK
Content-Type: application/json

121

Or, the server will time out the request, because the counter has not changed within some timeout window. In this case, we’ll say the server should respond with an empty JSON object:

HTTP/1.1 200 OK
Content-Type: application/json

{}

At this point we haven’t even gotten to the Pushpin part. We’re just designing and describing a counter API, and there is nothing necessarily Pushpin-specific about the above design. You might come to this same design regardless of how you actually planned to implement it. This helps showcase Pushpin’s versatility in being able to drive any API. In fact, if a counter service already existed with this API, it could be migrated to Pushpin and clients wouldn’t even notice the switch.
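
To see the design from the consumer’s side, here is a minimal client loop one might run against such a service. It is a sketch using the Python requests library; the base URL is an assumption (Pushpin listens on port 7999 by default, but your front end may differ).

import requests

BASE = "http://localhost:7999"  # assumed Pushpin front-end address; adjust to your setup

last = None
while True:
    params = {"last": last} if last is not None else {}
    # When ?last= matches the current value, the server long-polls instead of
    # answering immediately, so allow a generous client-side timeout.
    resp = requests.get(BASE + "/counter/value/", params=params, timeout=90)
    body = resp.json()
    if body == {}:
        continue  # the long-poll timed out with no change; ask again
    last = body   # the current counter value (an integer)
    print("counter is now", last)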

Normally, implementing any kind of custom long-polling interface would require using an event-driven framework such as Node.js, Twisted, Tornado, etc. With Pushpin, however, one can implement this interfacing using any web framework, even those that are not event-driven. Below we’ll go over how one might implement the counter API using Django.

First, here’s the model code, which creates a database table with two columns, name (string) and value (integer):

from django.db import models
from django.db.models import F


class Counter(models.Model):
    name = models.CharField(max_length=32)
    value = models.IntegerField(default=0)

    @classmethod
    def inc(cls, name):
        # Atomic, database-level increment; avoids read-modify-write races
        cls.objects.filter(name=name).update(value=F('value') + 1)

Just a basic model with an increment method. Our service will use a counter called “main”. Now for the view, where things get interesting:

import json

import gripcontrol as grip
from django.http import HttpResponse, HttpResponseNotAllowed

from .models import Counter

pub = grip.GripPubControl("http://localhost:5561")

def value(request):
    if request.method == "GET":
        c = Counter.objects.get(name="main")
        last = request.GET.get("last")
        if last is None or int(last) < c.value:
            # Client is behind (or sent no last value): answer immediately
            return HttpResponse(json.dumps(c.value) + "\n")
        else:
            # Client is up to date: instruct Pushpin to hold the request open
            # on the "counter" channel, with "{}" as the timeout response
            headers = { "Content-Type": "application/json" }
            timeout_response = grip.Response(headers=headers, body="{}\n")
            return HttpResponse(
                grip.create_hold_response("counter", timeout_response),
                content_type="application/grip-instruct")
    elif request.method == "POST":
        Counter.inc("main") # DB-level atomic increment
        c = Counter.objects.get(name="main")
        # Publish the new value to any requests held on the "counter" channel
        pub.publish_http_response_async("counter", str(c.value) + "\n")
        return HttpResponse(json.dumps(c.value) + "\n")
    else:
        return HttpResponseNotAllowed(["GET", "POST"])

Here we’re using the Python gripcontrol library to interface with Pushpin. It’s not necessary to use a special library to speak GRIP (it’s just JSON over HTTP), but the library is a nice convenience. We’ll go over the key lines:

pub = grip.GripPubControl("http://localhost:5561")

The above line sets up the library to point at Pushpin’s local REST API. No remote accesses are performed on this line, but whenever we attempt to interact with Pushpin later on in the code, calls will be made against this base URI.

            headers = { "Content-Type": "application/json" }
            timeout_response = grip.Response(headers=headers, body="{}\n")
            return HttpResponse(
                grip.create_hold_response("counter", timeout_response),
                content_type="application/grip-instruct")

The above code generates a hold instruction, sent as an HTTP response to a proxied request. Essentially this tells Pushpin to hold the HTTP request (to the client) open until we publish data on a channel named “counter”. If enough time passes without a publish occurring, then Pushpin should time out the connection by responding to the client with an empty JSON object. Once we respond with these instructions, the HTTP request between Pushpin and the Django application is finished, even though the HTTP request between Pushpin and the client remains open.

        pub.publish_http_response_async("counter", str(c.value) + "\n")

The above call publishes an “HTTP response” to Pushpin, with the body of the response set to the value of the counter. This payload is published on the “counter” channel, causing Pushpin to deliver it to any requests that are currently open and bound to this channel.
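
That helper is a thin convenience over Pushpin’s REST API. For the curious, a raw equivalent might look like the sketch below, again using the requests library; the /publish/ endpoint path and message shape reflect my understanding of the Pushpin docs, so verify them against your version before relying on them.

import requests

def publish_counter_value(value):
    # Deliver an HTTP response body to every request currently held open on
    # the "counter" channel. Port 5561 is Pushpin's default publish port.
    message = {
        "items": [
            {
                "channel": "counter",
                "formats": {"http-response": {"body": str(value) + "\n"}},
            }
        ]
    }
    requests.post("http://localhost:5561/publish/", json=message, timeout=5)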

That’s all there is to it!

Realtime is no longer special

The great part about being able to use existing web frameworks is that you don’t need separate codebases for realtime and non-realtime. It’s not uncommon for projects to implement the non-realtime parts of their API using a traditional web framework, and the realtime parts in a more customized way using a specialized server. Pushpin eliminates the need for multiple worlds here. Instead, your entire API, realtime or not, can be implemented using the same framework (e.g. entirely in Django). Any HTTP resource can be made to stream or long-poll on a whim. All facilities of your traditional web framework, such as authentication or debugging, will work within a realtime context.

Ideal for everyone

Finally, lest Pushpin be misunderstood solely as a way to shoehorn realtime capabilities onto non-event-driven web frameworks, it’s worth emphasizing that the proxying approach makes a lot of sense even if your backend is Node.js. The decoupling of application logic from connection management will make your overall application much easier to manage and maintain. Additionally, introducing proxying layers is the inevitable endgame for high scale data delivery (just look at the topology of a CDN).

Pushpin is open source and available on GitHub. For more information about the motivation and thought process behind Pushpin, see this article. And if you find yourself wishing there was a cloud service that worked like Pushpin, there is.