Shared posts

19 Jul 17:00

Back in the blogosphere!

by somervillebikes

It’s been close to four years since my last blog post! But I’m back, and I plan to keep posting regular updates. A lot has happened with Velo Lumino over the past few years. We’ve expanded. We introduced a seat-tube variant of our beloved fender taillight, and added several lighting- and fender-related widgets. Many of our products are on their second generation, incorporating small improvements based on real-world exposure. We’re proud that our products have held up so well. Our taillights and stem switch have been put to the test in grueling endurance rides such as Paris-Brest-Paris, and in numerous gnarly gravel races.

In all honesty, the issues we’ve had have been few and far between, and only on the very earliest production units we sold. For example, the very first batch of taillights we shipped had lenses that could pop off. For those units, we quickly changed the adhesive used to affix the lens, and the problem hasn’t occurred since. And with the first batch of TMAT stem switches, we had just one example where the Loctite that supplements the press-fit bond between the cap and spindle failed, causing the cap to rotate past its defined travel stops. We quickly improved the design of the cap-stem interface to include a lock pin. Since these and other early improvements, our components have become virtually bomb-proof. The track record has been excellent, and we couldn’t be more proud of how well our components are holding up after several years.

As a side note, I’m still using the very first prototype TMAT stem switch on my daily commuter, a Bike Friday Haul-A-Day cargo bike. That’s right, our first prototype, a beta version preceding the first production version. It’s still perfectly functional. Here’s a picture of it from 4-1/2 years ago:

[photo of the prototype TMAT stem switch]

I custom-painted the cap cover to match the black stem, and today the paint is mostly peeled off, but the switch itself works as it did on day one. Oh, and this bike lives outside year-round. Without cover. In Boston. Rain, sleet, snow, deep freeze, thaw, sun, repeat. Not to mention Boston’s pothole-riddled, bone-shaking streets that stress not only my bones and nerves but all of the bike’s components as well.

And until last year, I had been using the first 3D-printed prototype fender taillight, the very design that was chosen for the production CNC-milled aluminum taillight:

[photo of the 3D-printed prototype fender taillight]

Alas, four years of UV exposure took its toll on the 3D-printed plastic and it cracked. The electronics were just fine, and I even saved them for some future use, but I eventually replaced that taillight with a B&M Secula. I felt the production AT fender taillight was too nice for this utilitarian bike.

Even beyond the standard 3-year warranty we provide on all our components, we continue to stand by our products. If your Velo Lumino component fails outside of warranty, or is damaged by a crash, we offer a very cost-effective upgrade program where we can repair and/or upgrade your unit to the current version for a nominal fee (fee varies depending on the component, email us for details if you find yourself in need of an upgrade, or even if your unit is fine but you just want to have the most current build standard). We don’t believe in throw-away components. We build them to last, we build them to be upgradable, and we stand by them.

We’ve also finally upgraded our circa-2015 website and changed hosting companies, so we have a new look with integrated shopping cart functionality. We’re finally able to take international orders without having to create PayPal invoices.

Moving forward, we’re going to focus many of the blog posts on techniques, how-tos, and demonstrations. These are the things that we get asked about constantly, so we figured this would be a great way to show people. Of course, we’ll also post about new products, and stuff in general.

02 Mar 01:54

How to Use Fastpages (2)

by Stephen Downes
This is part two of a series. If you haven't read part one, you should do so now. Click here to read part one.

OK then, welcome back. Let's begin by reviewing what we've done in part one, because it's not obvious.

First, we created a repository on GitHub. We did this by 'cloning' the original Fastpages repository. Then we updated the repository by 'merging' an automatically-generated 'pull request'. My version of the repository is located here, based on the name I gave it in part one: https://github.com/Downes/fastpages

You should make sure you can find your GitHub repository. 

Second, the repository automatically generated a website for us using GitHub pages. The address is located at the bottom of your GitHub repository. My version of the website is located here, based on the name I gave the repository: https://Downes.github.io/fastpages/

It looks like this:

 Your new website has been preloaded with an index page and two posts. It also has an 'About Me', 'Search' and 'Tags' page.

Find Your Pages


Let's find these pages. Go to the GitHub repository, as illustrated below. I've highlighted a file called index.md - this is the index page for your website, which we just saw above.

The .md extension stands for 'Markdown'. It's a way to design web pages that is easier to use than HTML. You can see the other pages in your repository as well, for example, about.md and search.html. You can edit a page by clicking on the name of the page.

Edit the Page Text

Click on the page title to open the display window:

Then click on the edit button (I've highlighted it) to enter the editing window.

This is the editing window. You can see the page text in the big box at the top. I've highlighted three major areas:
- comments and processing instructions. These won't appear on the page, but will tell GitHub how to process the page.
- actual page text, in Markdown
- embedded content, from another directory

To edit the page text:
  1. Change the text as desired in the editing window (just the text circled in red, ignore the rest for now). As soon as you change some text you'll see the green 'Commit changes' button light up.
  2. (Optional) Describe your changes in the boxes above the green button. This is to tell you and other people why you made the change or what you changed.
  3. Click on the green 'Commit changes' button.
This will accept the new text and take you back to the display window where you'll see the changes you've made appear on the page.

Note: if you go back to your index page on the website (for me, the page I see when I go to https://Downes.github.io/fastpages/ ) then you will not see the change right away. It takes a minute or two for the change you made to your repository to propagate to the website. Wait a few minutes, then go back to your page, and reload it, and you will see your changes.

Markdown

I don't want to write a Markdown tutorial here. What's important to remember is that GitHub pages uses a type of Markdown called 'Kramdown'. There is a guide to using Kramdown here: https://kramdown.gettalong.org/quickref.html 

I encourage you to practice various elements of Kramdown on your index page until you're comfortable with it.

The Page Generation Process

We need to talk a bit about what happens when your website pages are generated from your repository. Recall this diagram from the previous post of this series:


Here it is again, with a bit of elaboration. Your repository is on the left. It feeds through a program called Jekyll running in GitHub Actions, and the output is your website on the right. All of this happens automatically (but it takes a few minutes, which is why you have to wait).

If you look at the GitHub Actions webpage you can see that there are many choices. What Fastpages does for us is select and configure GitHub Actions to run Jekyll.

This gives us more options in our Markdown pages. We can use the 'comments and processing instructions' section of the page to give Jekyll some instructions when it processes our page.

Processing Instructions

We have now reached the beginning of the instructions on the Fastpages website, here, in the section titled 'Front-Matter related options'. We will go directly to the section for Markdown:

---
title: "My Title"
summary: "Awesome summary"
toc: false
branch: master
badges: true
comments: true
image: images/some_folder/your_image.png
categories: [fastpages, jupyter]
metadata_key1: metadata_value1
metadata_key2: metadata_value2
---

These instructions go into the top of your Markdown file. These instructions are used by Jekyll to format the page. There's a detailed description of how to use them on the Jekyll website (sorry about the super-ugly formatting).

As the Fastpages documentation notes, the processing instructions are written in YAML (originally short for "Yet Another Markup Language", now officially "YAML Ain't Markup Language" - there's a full description here). The main thing to keep in mind is that your title and summary should be inside quotation marks. If you want to use a quotation mark in the title itself, you need to escape it with a \ character, like this: "Here is a title with a \"quotation\" mark in it."

Here is what the instructions do:
  • title and summary - create a page title and page summary (these won't show up on the page but will be defined in the page header and used by search engines)
  • toc - this creates a table of contents for the page based on headings used in the page (which are defined using Markdown).
  • branch and badges - don't do anything in markdown, so just delete them 
  • comments - enables comments using a program called Utterances - this is a whole separate topic so we won't cover it here.
  • categories and metadata - creates additional page information that can be processed by Jekyll; for example, the 'categories' can be used to create tags for the pages.
 That's it for now. We'll talk about creating a blog using Fastpages in the next section.

02 Mar 01:53

Back to the Fundamentals of the Web

by Reverend

The Reclaim Hosting team is gearing up for the OER20 Conference happening in London on April 1st and 2nd (Corona Virus allowing). As part of our proposed sponsorship of the conference this year we wanted to run an in-depth, hands-on workshop for beginner and intermediate internauts on various elements of how the Web works within a LAMP server environment. The curriculum was imagined in partnership with Coventry University’s extraordinary learning technologist Lauren Heywood (the brains behind the brilliant Coventry Learn).

The day-long workshop will offer participants a look at how DNS works, the basics of the FTP protocol, various uses and abuses of the command line, file management, editing basic HTML and PHP code, as well as dissecting how database-driven applications work. All attendees will be provided their own sandbox server environment to work within, and each part of the workshop will provide an overview of the technology followed by a hands-on activity that will put theory into action!

The event will be held at Coventry University on March 31st, the day before OER20 kicks off. Please keep in mind Coventry is an hour train ride from London, so plan accordingly before registering. Speaking of which, there is a £75 registration fee that will go directly towards sponsoring ALT—the good folks that run this amazing conference. So, not only could you learn a thing or two about how the web works, but you will also be supporting the good folks that run OER.*

So, if this sounds interesting to you, do us all a favor and register for the workshop directly.


*What’s more, if no one registers then Reclaim might actually have to give them money! 

02 Mar 01:53

Love In The Time Of Coronavirus

by Kaori Shoji

The Japanese have never been known for being warm, affectionate or touchy-feely but now it seems like everyone has wedged a hefty distance between themselves and other human beings. On TV, commentators are comparing COVID-19 to the AIDS scare of 30 years ago in the way it discourages people from physical contact, much less the exchange of body fluids. What a bummer. From casual hugs to love hotel trysts, direct contact just isn’t happening anymore and it’s taking a toll on our emotional well-being. Which is why you need to get to the theater this weekend to see Hatsukoi (First Love), if only as a reminder that even in this time of virus infestation, love can thrive – in a manner of speaking. 

Hatsukoi is filmmaker Takashi Miike’s latest, and the cute title is a foil for the utterly sinister events that unfold on screen. This stands to reason – Takashi Miike has built his own cinematic kingdom on the foundations of gore and violence for the last 35 years. Why would he stop now? At the press conference given at the FCCJ earlier this week, Miike joked that he came up with the title in the hopes that people will be lured to theaters, thinking Hatsukoi is a “genuine love story.” If so, they are in for a rude awakening. Hatsukoi is less about the 59-year-old Miike mellowing in his advancing years than Miike confirming he still has what it takes to go full throttle on his triple fortes of murder, mayhem and decapitation.

Having said that, Hatsukoi shows Miike in an uncharacteristically romantic mood, even occasionally favoring the love story factor over the blood-spewing brutality thing. As a result, Hatsukoi is much more palatable than Ichi the Killer: Miike’s 2001 landmark project that put his name on the Hollywood map. Both works share significant similarities – they’re set in Kabukicho, Shinjuku, where a yakuza turf war is raging. Both feature double-crossing yakuza going at each other’s throats. And in both movies, the lead role is a sad underdog who never had a break in his life. In Hatsukoi, this is Leo Katsuragi (Masataka Kubota), a boxer whose day job is a busboy at a Chinese diner. Leo had been abandoned by his parents at birth and since then, he’s been licking the bottom of a very rotten barrel. That’s about to change, however, when he meets a girl on the streets of Kabukicho – hence, the titular first love.

Interestingly, Hatsukoi’s present-day Kabukicho is a different town from when Ichi had stomped its streets. The yakuza have gone corporate and their street cred is way down, which means they must look for ways to co-exist with the Chinese gangster groups that have infiltrated Shinjuku. But old-school clan boss Gondo (Seiyo Uchino) would rather just go to war and kill them all. This doesn’t sit well with Gondo’s young underling Kase (Shota Sometani), who is weary of the clan’s outdated notions of yakuzahood. He’s looking for a fast exit, but not before he lines his pockets with the clan’s meth supply. Crooked cop Otomo (Nao Otomo) is looking for a cut of Kase’s profits, plus a free sex session with the clan’s whore Monica (Sakurako Konishi), forced to turn tricks to pay back her father’s debt. Monica is the focal point of the story as well as the eye of the clan-war shitstorm, and in the process she inspires Leo to dream of a future with at least a semblance of personal happiness. They fall for each other, and you can see they’re very careful not to muck it up with anything sexual just yet – it’s the first time either of them has ever been in love.

The big surprise, coming from Miike, is that Hatsukoi is also about female empowerment. Leo and the other males in the cast may be compelling, but they never get off the rails of Japanese machismo and as such are very predictable. The women characters, on the other hand, are definitely not the familiar cut-out victims from a typical yakuza movie (take your pick between willing sex kitten or giving mother martyr). First, there’s Becky as Juli, a hard-as-nails yakuza moll who supervises Monica. When her boyfriend Yasu (Takahiro Miura) – the clan’s accountant – turns up dead, Juli morphs into a raging, screaming avenger with a tremendous blood thirst. She won’t stop until she hunts down Yasu’s murderer, even if it means her own violent death. And Monica, who starts out as the stereotypical victim – sexually abused by her father, then sold into slavery to repay his debts – matures into a person with her own agenda under Leo’s tutelage. In the end, she even gives the ole patriarchy a good, hard kick in the teeth.

The flesh-tearing, head-rolling bloodscapades of Takashi Miike are alive and well. But dig a little under the surface and you’ll see that maybe the filmmaker has changed, just as his beloved Kabukicho has altered beyond recognition. Far from the nonsensical blood-drenched antics of Ichi, it’s now a town where two young people can meet, fall in love and hold hands even as the world bleeds and falls apart around them. Under the current circumstances of virus angst, this particular love story is probably as good as it gets.

02 Mar 01:53

HTML: The Inaccessible Parts

I’ve always abided by the idea that “HTML is accessible by default and then we come along and mess it up.” In a lot of places this is very true, and by just using a suitable HTML element instead of a generic div or span we can have a big accessibility impact.

But that’s not always the case. There are some cases where even using plain ol’ HTML causes accessibility problems. I get frustrated and want to quit web development whenever I read about these types of issues. Because if browsers can’t get this right, what hope is there for the rest of us? I’m trying to do the best I can and use the platform, but it seems like there are a dozen “gotchas” lurking in the shadows.

I’m going to start rounding up those HTML shortfalls in this here post as a little living document that I can personally reference every time I write some HTML.


<input type="number">
Gov.UK finds Number Inputs aren’t inclusive. (2020)

<input type="date">
Graham Armfield finds Date Inputs not ready for use. (2019)

<input type="search">
Adrian Roselli points out Search Inputs aren’t as useful as originally thought. (2019)

<select multiple>
Sarah Higley tests with actual users and finds Select Multiple has a 25.3% success rate. (2019)

<progress>
Scott O’Hara finds numerous errors with the Progress element. (2018)

<meter>
Scott O’Hara finds more numerous errors with the rare Meter element. (2018)

<dialog>
Scott O’Hara declares Dialog not ready for production. (2019)

<details><summary>
Adrian Roselli feels Details/Summary are only good in limited contexts (e.g. Details doesn’t work as an Accordion, which is what I would expect). (2019)

<video>
Scott Vinkle goes with a third-party player after seeing that the native HTML Video Player is a very inconsistent experience for screen readers. (2019)

<div onclick>
Technically this is JavaScript, but the screen reader JAWS announces “Clickable” when the element or one of its ancestors has a click event handler. This is a bummer for trying to make tap areas bigger. (2018)

<div aria-label>
The Paciello Group explains how aria-label, aria-labelledby, and aria-describedby only work on certain elements… and not <div> elements. It’s not very intuitive to me that aria-label would only work sometimes, and it seems like something linters like axe should catch. (2017)

<a href><div>Block Links</div></a>
Adrian Roselli finds Block Links in a Card UI have usability issues. (2020)

aria-controls
The aria-controls attribute is a great way to establish a relationship between two elements and is in tons of tutorials… only one problem… Heydon Pickering points out aria-controls doesn’t do anything. (2016)

role="tablist"
After some user testing, Jeff Smith discovered the best way to make accessible tabs is to remove role="tablist", role="tab", role="tabpanel" from their tabs. FWIW, these findings were contested in a 3,900 word blog post by Léonie Watson. (2016)


Your mileage may vary, test with actual users. I’ll do my best to update this as the situation evolves glacially over the next 20 years.

Edit 2/28/2020:

  • Added the year to each link to help reflect the potential “staleness” of the information.
  • Added a note to test with actual users.
  • Added a link to Léonie Watson’s rebuttal under the tablist discussion.
02 Mar 01:52

On Coronavirus, we need the correct amount of fear

by Josh Bernoff

Fear is useful. Panic is not. COVID-19, popularly known as the Coronavirus, is coming. If we’re honest, we should recognize a few facts. At least 83,000 people are infected. Most are in China, but there are also outbreaks in South Korea, Iran, Japan, and Italy. There are cases in America, too. More than 2,800 people … Continued

The post On Coronavirus, we need the correct amount of fear appeared first on without bullshit.

02 Mar 01:52

A New Year & A New Adventure

by Adam Nash

Some personal news to share today.  After a great tour of duty at Dropbox, I’ve decided to take the leap into something new.

Having a January birthday has always added a little weight to my New Year’s resolutions, and as it turns out, 2020 was a big one for me. 45 might not be the biggest milestone birthday, but combined with the weight of a new decade, it had me thinking deeply over the holidays. Fortunately, I was able to spend a good deal of time with friends & family, and by New Year’s Eve I felt comfortable with a simple, but important, decision.

2020 will be different. This will be the year that I go off on my own.

The hardest part of this process was telling my team at Dropbox. I feel so very fortunate to have had the opportunity to lead such an amazing group of professionals. And as proud as I am of what we accomplished in 2018 & 2019, I’m even more excited about what this team will deliver for their customers in 2020 & beyond. 

For me, I’ll be spending the next few months preparing for the long road ahead founding a company. As one of the growing number of “operator-angels,” I’ll continue to advise and support the talented teams at the companies where I’ve invested over the past 8 years. Primarily, though, I’ll be spending time on a couple of specific fintech ideas that I think have the potential to be great companies. 

I have spent over 20 years learning to build & design great products and great companies, but somehow never my own. As an angel investor, I’ve now helped fund and advise over fifty amazing founding teams, and have had a front row seat to their struggles and successes. It’s time to take the plunge.

And who knows? I hear 45 is the best time to start.

02 Mar 01:52

You Don’t Need a Face Mask for Coronavirus

by Christina Colizza

The coronavirus is coming. Federal officials confirmed this week that SARS-CoV-2’s stateside spread is no longer a question of “if” but of “when.” Take a deep breath: There are a few things you can do to prepare. First, wash your hands often. Second, if you haven’t already, get a flu shot. Third, to prepare for a scenario in which a worst-case outbreak occurs, refresh your supply of food, medicines, and essential household supplies (which is also a smart way to prepare for any potential emergency).

02 Mar 01:52

25 Years of Ed Tech

Martin Weller, AU Press, Feb 28, 2020

Martin Weller has written this book - a freely accessible eBook (224-page PDF) that looks at the last 25 years of educational technology. I've read through a few chapters (it may take you a couple of hours to finish it) and it's relatively light and accessible. Obviously my experience and his over that span of time both overlap and have their differences of emphasis. Still, I was a bit surprised to see how little attention was devoted to RSS, how major technologies like Usenet and listserv are not mentioned, and how major players like UMUC and ADL aren't mentioned (though at least SCORM was mentioned, though EML is not, nor is XML). Some of the dates also seem off; blogging, for example, begins in 1998, not 2003 (Blogger itself was launched in 1999). So you should read this book as one person's perspective, rather than as anything like a comprehensive overview of the field.

02 Mar 01:48

Are fastpages Really an EASY Way to Publish a Blog From Jupyter Notebooks?

Tony Hirst, OUseful Info, Feb 28, 2020

The answer is, of course, "No it is not," though having said that, it's easier than most such applications (but the bar is so low this isn't really saying much). My experience, having run through dozens and dozens of this sort of thing, is that the authors typically assume that their readers have exactly the same background and experience that they do. Which, of course, I never do, and neither do most readers. As Tony Hirst says, if you already have a background in GitHub Pages and Jekyll, this saves you a few steps. But if you don't have that background, then it's not easy. Anyhow, it still seemed to be worth exploring, and my guide now has two parts: part one, part two.

02 Mar 01:46

Multithreaded Python: slithering through an I/O bottleneck

by hello@victoria.dev (Victoria Drake)

I recently developed a project that I called Hydra: a multithreaded link checker written in Python. Unlike many Python site crawlers I found while researching, Hydra uses only standard libraries, with no external dependencies like BeautifulSoup. It’s intended to be run as part of a CI/CD process, so part of its success depended on being fast.

Using multiple threads in Python is a bit of a bitey subject (not sorry) in that the Python interpreter doesn’t actually let multiple threads execute at the same time. Python’s Global Interpreter Lock, or GIL, prevents multiple threads from executing Python bytecodes at once. Each thread that wants to execute must first wait for the GIL to be released by the currently executing thread. The GIL is pretty much the microphone in a low-budget conference panel, except where no one gets to shout.

This has the advantage of preventing race conditions. It does, however, lack the performance advantages afforded by running multiple tasks in parallel. (If you’d like a refresher on concurrency, parallelism, and multithreading, see Concurrency, parallelism, and the many threads of Santa Claus.) While I prefer Go for its convenient first-class primitives that support concurrency (see Goroutines), this project’s recipients were more comfortable with Python. I took it as an opportunity to test and explore!
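
To make that point concrete, here's a minimal sketch of the GIL's effect (this is not code from Hydra; the busy_work function and the numbers are made up for illustration). Two threads doing pure-Python CPU work finish in roughly the same wall-clock time as running the same work back to back, because only one thread can execute bytecode at a time:

import time
from threading import Thread

def busy_work(n):
    # Pure-Python CPU work; the GIL is held while this runs
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 5_000_000

# Sequential: two calls, one after the other
start = time.perf_counter()
busy_work(N)
busy_work(N)
print(f"sequential: {time.perf_counter() - start:.2f}s")

# Threaded: two threads, but the GIL serializes their bytecode execution,
# so the elapsed time is about the same (or slightly worse)
start = time.perf_counter()
threads = [Thread(target=busy_work, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"threaded:   {time.perf_counter() - start:.2f}s")

Threads only pay off when they spend most of their time waiting on something outside the interpreter, such as network I/O, which is exactly Hydra's situation.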

Simultaneously performing multiple tasks in Python isn’t impossible; it just takes a little extra work. For Hydra, the main advantage is in overcoming the input/output (I/O) bottleneck.

In order to get web pages to check, Hydra needs to go out to the Internet and fetch them. When compared to tasks that are performed by the CPU alone, going out over the network is comparatively slower. How slow?

Here are approximate timings for tasks performed on a typical PC:

Task                                            Time
CPU: execute typical instruction                1/1,000,000,000 sec = 1 nanosec
CPU: fetch from L1 cache memory                 0.5 nanosec
CPU: branch misprediction                       5 nanosec
CPU: fetch from L2 cache memory                 7 nanosec
RAM: mutex lock/unlock                          25 nanosec
RAM: fetch from main memory                     100 nanosec
Network: send 2K bytes over 1Gbps network       20,000 nanosec
RAM: read 1MB sequentially from memory          250,000 nanosec
Disk: fetch from new disk location (seek)       8,000,000 nanosec (8ms)
Disk: read 1MB sequentially from disk           20,000,000 nanosec (20ms)
Network: send packet US to Europe and back      150,000,000 nanosec (150ms)

Peter Norvig first published these numbers some years ago in Teach Yourself Programming in Ten Years. Since computers and their components change year over year, the exact numbers shown above aren’t the point. What these numbers help to illustrate is the difference, in orders of magnitude, between operations.

Compare the difference between fetching from main memory and sending a simple packet over the Internet. While both these operations occur in less than the blink of an eye (literally) from a human perspective, you can see that sending a simple packet over the Internet is over a million times slower than fetching from RAM (150,000,000 nanoseconds versus 100 nanoseconds, a factor of 1.5 million). It’s a difference that, in a single-thread program, can quickly accumulate to form troublesome bottlenecks.

In Hydra, the task of parsing response data and assembling results into a report is relatively fast, since it all happens on the CPU. The slowest portion of the program’s execution, by over six orders of magnitude, is network latency. Not only does Hydra need to fetch packets, but whole web pages! One way of improving Hydra’s performance is to find a way for the page fetching tasks to execute without blocking the main thread.

Python has a couple options for doing tasks in parallel: multiple processes, or multiple threads. These methods allow you to circumvent the GIL and speed up execution in a couple different ways.

Multiple processes

To execute parallel tasks using multiple processes, you can use Python’s ProcessPoolExecutor. A concrete subclass of Executor from the concurrent.futures module, ProcessPoolExecutor uses a pool of processes spawned with the multiprocessing module to avoid the GIL.

This option uses worker subprocesses, and the maximum number of workers defaults to the number of processors on the machine. The multiprocessing module allows you to maximally parallelize function execution across processes, which can really speed up compute-bound (or CPU-bound) tasks.
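
As a rough sketch of that pattern (not code from Hydra; the count_primes function and the limits below are invented for illustration), a CPU-bound job can be fanned out across worker processes like this:

from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    # CPU-bound work: naive prime counting up to `limit`
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [50_000, 60_000, 70_000, 80_000]
    # Each call runs in its own worker process with its own interpreter
    # (and its own GIL), so the work runs in parallel on a multi-core machine
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(count_primes, limits))
    print(results)

The if __name__ == "__main__": guard matters here because the multiprocessing machinery may re-import the module in each worker process.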

Since the main bottleneck for Hydra is I/O and not the processing to be done by the CPU, I’m better served by using multiple threads.

Multiple threads

Fittingly named, Python’s ThreadPoolExecutor uses a pool of threads to execute asynchronous tasks. Also a subclass of Executor, it uses a defined number of maximum worker threads (at least five by default, according to the formula min(32, os.cpu_count() + 4)) and reuses idle threads before starting new ones, making it pretty efficient.

Here is a snippet of Hydra with comments showing how Hydra uses ThreadPoolExecutor to achieve parallel multithreaded bliss:

# Imports the snippet relies on
from queue import Queue, Empty
from concurrent import futures

# Create the Checker class
class Checker:
    # Queue of links to be checked
    TO_PROCESS = Queue()
    # Maximum workers to run
    THREADS = 100
    # Maximum seconds to wait for HTTP response
    TIMEOUT = 60

    def __init__(self, url):
        ...
        # Create the thread pool
        self.pool = futures.ThreadPoolExecutor(max_workers=self.THREADS)

    def run(self):
        # Run until the TO_PROCESS queue is empty
        while True:
            try:
                target_url = self.TO_PROCESS.get(block=True, timeout=2)
                # If we haven't already checked this link
                if target_url["url"] not in self.visited:
                    # Mark it as visited
                    self.visited.add(target_url["url"])
                    # Submit the link to the pool
                    job = self.pool.submit(self.load_url, target_url, self.TIMEOUT)
                    job.add_done_callback(self.handle_future)
            except Empty:
                return
            except Exception as e:
                print(e)

You can view the full code in Hydra’s GitHub repository.

Single thread to multithread

If you’d like to see the full effect, I compared the run times for checking my website between a prototype single-thread program and the multiheaded, multithreaded Hydra.

time python3 slow-link-check.py https://victoria.dev

real    17m34.084s
user    11m40.761s
sys     0m5.436s


time python3 hydra.py https://victoria.dev

real    0m15.729s
user    0m11.071s
sys     0m2.526s

The single-thread program, which blocks on I/O, ran in about seventeen minutes. When I first ran the multithreaded version, it finished in 1m13.358s - after some profiling and tuning, it took a little under sixteen seconds. Again, the exact times don’t mean all that much; they’ll vary depending on factors such as the size of the site being crawled, your network speed, and your program’s balance between the overhead of thread management and the benefits of parallelism.

The more important thing, and the result I’ll take any day, is a program that runs some orders of magnitude faster.

02 Mar 01:41

Goodbye Musette Café (for now?)

by Michael Kalus

Today Musette Café had its last day here on Burrard Street. It was a road cycling inspired café, though inspired doesn’t quite do it justice. It was a form of community hub for a lot of cyclists.

Origins

I came across the original Musette by chance. It was tucked away in a back alley not far from where the current one is (was) located.

It acted as an unofficial community hub of sorts for the cycling community in this part of town, and probably further out. People started and finished their road cycling trips and for all its modern touches, it had a wonderful old fashioned feel to it.

The interior is bicycle-themed: jerseys, bibs, some old bicycles on the wall. But more than that, they had bike locks you could borrow, large amounts of bike parking, and if you needed a cartridge to inflate your tire, they had you covered.

The end (for now?)

So it was with a bit of a shock that I learned earlier this month that they were closing. I wasn’t here daily, but it was one of the coffee shops I really did enjoy, especially in the summer with the small patio they had, even though Burrard Street with its car traffic will make sure you never think you’re sitting in a café somewhere in a small Belgian town.

But for now they are gone. Their billboard outside says they will be back, but no more details as of yet.

So why did they close up shop here? The café has been open for around five years at this location, and I think their lease is now up and the landlord probably wanted more rent than was feasible. This is a real shame. It’s not the only place that has closed for the same reason, and I do not think it will be the last.

So, raising my last Americano (for now) and hoping they will be back in the saddle soon.

As I write this, they are already in the process of dismantling the decor, a sad day for sure.

02 Mar 01:41

Migrating WordPress without Access to Database or Core Files

by Reverend

One of the things I have been doing a lot of at Reclaim Hosting these days while the US sleeps is migrating accounts. We have a steady flow of fine folks moving to Reclaim, and one of the things we do is help them bring their content and domains along smoothly.

The other morning I ran into a situation wherein I couldn’t access either the database or core files of a WordPress site I was migrating so I went looking for a solution.* And while I was considering the Duplicator plugin, I was a bit wary given the recent exploit. Tim and Lauren pointed me to the All-in-One WP Migration plugin which was exactly what I needed.

The free version allows you to move both the database and files of a site up to 515 MB. This worked for one of the two sites I was migrating, but the second was 518 MB, so I needed a premium add-on plugin to finish the job. The add-on plugin allows for unlimited-size downloads, and also allows you to upload the backup file the plugin creates via FTP so you can restore it directly from the plugin interface. I preferred this method because uploading the file via the web often runs into PHP upload size errors and timeout issues that draw out the process unnecessarily, whereas the unlimited add-on plugin simplifies the process for big sites, saving me a fair amount of time.

The All-in-One WP Migration plugin is a good plugin to have in your WordPress tool belt if you want a clean migration that takes all database settings, plugins, and themes and seamlessly migrates them to another WordPress site hosted elsewhere. I usually default to doing this with a MySQL export and an archive of core files, but if you find yourself without access to either (or not sure how that process works manually), this plugin is an excellent option. I have not tried it on a WPMS site yet, but I think that would be the next logical step, given that getting core files and a database export for WPMS is a far more involved process than for a standalone WordPress site. Anyone have experience on that front? Either way, it might be a good weekend experiment as I prepare for the #reclaimWP March madness.


*Although truth be told I could–but that is another story.

02 Mar 01:41

lazy loading images

As you might have noticed, the main page of this blog is not paginated. The text goes all the way back to the beginning. Fortunately I don't write very fast so I can keep filling it up and it's not even especially big for a web page today.

But the iframes and images can get big.

So I just put the lazysizes script, by Alexander Farkas, on here. So far so good.

Easier than setting up pagination. Please let me know if you run into any problems.

Bonus links

Preparing for Coronavirus to Strike the U.S.

A lot of words on words.

The EU will tell Britain to give back the ancient Parthenon Marbles, taken from Greece over 200 years ago, if it wants a post-Brexit trade deal

Board-game design is thriving

Playing D&D Can Save Your Life

Intellectual Dark Matter

This Guy Is Breeding City Pigeons for Affordable Food

02 Mar 01:41

“Einfach einsteigen und mitfahren!” (“Just get on and ride along!”)

by Andrea

Deutsche Welle: Luxembourg makes public transport free.

“Luxembourg has become the first country in the world to provide public transport for free. The small EU hub aims to boost tram, train and bus usage and rid itself of traffic jams blamed on commuters using private cars.”

Deutsche Welle: Free rides on buses and trains in Luxembourg starting now. “Luxembourg has become the first country in the world to make almost all of its public transport free: as of now, you generally no longer need a ticket for buses, trains, and trams in the small Grand Duchy.”

“Free transport is part of a larger plan for a transport transition in Luxembourg. In parallel, bus and rail lines are being massively expanded. On rail alone, the country is investing a good four billion euros between 2018 and 2027. Free public transport will cost the Luxembourg state an extra 41 million euros per year. Many cross-border commuters from France, Belgium, and Germany, like the majority of the small Grand Duchy’s residents, drive to work; traffic jams at the border and in the center of the capital are the order of the day.

Free transport was actually supposed to start on Sunday (March 1). Because of the celebrations on Saturday, the government decided a few days ago to move the start up by a day. “Interest worldwide is huge,” Minister Bausch sums up.”

02 Mar 01:39

No more spy pixels

(This was sent to my newsletter on March 1st, 2020.) Hello reader, I don’t know if you’ve opened or read this newsletter. Honestly, I have no idea. I won’t ever know this again either. I’ve turned ...
02 Mar 01:38

My 25 Years of Ed Tech

by Stephen Downes
Martin Weller has released his book 25 Years of Ed Tech today. It's a nice read; you are encouraged to check it out. But I have to confess, on having looked at the table of contents, I thought that it captured my career pretty well. Of course, that was not Weller's objective. But this is my lived history. So I thought I'd quickly summarize those 25 years from my perspective. Headings in bold are Weller's; subheadings in italics are mine.

By way of a preface, I want to say that I get that Weller wasn't attempting a comprehensive history of ed tech. As he says, it's more akin to selecting a top 100 than it is a full documentation. And I also get that the assignment of years is a bit arbitrary (but I also think it's telling, as we'll see below). At the same time, he's going to use his history of ed tech to say a lot of things about the discipline and technologists in general. As the title of his introduction suggests, he wants to say that they suffer from historical amnesia. It's a popular criticism; I've heard it many times. But it's not true.

More accurately, what it is true of is the people he reads. But I don't think he's reading the right people. His history of ed tech isn't a history of ed tech, it's a history of academics writing about ed tech. And that's a very different thing. The people I know - and credit - who have been doing the hard work of developing an entire infrastructure from the ground up have known since day one that they're in it for the long game. And it's not that we don't know the history of universities and distance education and learning theories, it's that so much of it and so many of them are wrong. That, though, is a different article.

My point with this article is that, in writing about the last 25 years of technology, Weller is writing about my history. This has been my work; I have been a learning technologist (and more, I hope) for that entire time, and for pretty much all of it I have a pretty good feeling that I knew - that we knew - what we were doing. And I'm proud of this work, that of myself and my colleagues around the world. And far from being a failure, we have made the world a better place, brought learning opportunities to hundreds of millions of people, and touched in the most human way possible the lives of countless individuals.


Want to edit this post? There's a version on Google Docs that anyone can edit. Click here.


1994 – Bulletin Board Systems
Also: Video, Mainframe conferencing, Telephone, Compugraphics, MUD
 
Weller's list actually starts a little late for me. My very first experience with ed tech was with Texas Instruments when, in 1981, I was sent to Texas for training. In addition to the usual hands-on instructor-led learning, I discovered the training room, which was stocked with training videos. I taught myself something called MVS-JES3 and also took a communications course called 'On the Way Up'. On the mainframe, I also found this fun game called 'Adventure'.

My first experience of online learning was in 1986 when the students in John A. Baker's philosophy of mind class used a mainframe discussion board to argue course topics back and forth - I still have the complete archives from those discussions. A year later I would write my Master's Thesis on the mainframe.

My other major ed tech experience in those years was the telephone. From 1987-1994 I was a tutor for Athabasca University and would spend hours on the phone helping students from across Canada. In the latter years I would also drive to remote communities and teach there, and we experimented with a system called 'compugraphics' to deliver a course using telephone and images sent directly from computer to computer (this was my first actual course development experience).

In the early 1990s I was also working on a Bulletin Board Service called 'Athabaska BBS', an unofficial (hence the K) adjunct to my work as a tutor with Athabasca University. I ran it off my 286 desktop using a Maximus BBS system. I was defeated, ultimately, by long distance. That, plus 99 percent of my students didn't have computers. I notice Weller cites Alan Levine's early use of BBS technology (it's unfortunate that this is the only bit of Levine's massive contribution he cites, and that it's nothing more than a comment on Weller's own blog).

In 1994 I was teaching at Grande Prairie Regional College. For me, 1994 was about working on the internet proper, using Multi User Dungeon (MUD) technology, which we rebranded as Multi User Domain (and then Multi User Academic Domain). My colleagues and I built one called the Painted Porch MAUD, and we used this at Athabasca, Assiniboine Community College, and Cariboo College in B.C. Diversity University started out as an educational MUD; Walden University was also an early adopter, and there were a whole bunch more. One initiative worth noting is MediaMOO, built to support Seymour Papert's constructionism (there's a whole thread of ed tech there; there's no mention of Seymour Papert in Weller's history of ed tech, astonishingly) and involving people like Amy Bruckman and Mitchel Resnick.


1995 – The Web
Also: Plato, the Home Page
 
In 1995 I started at Assiniboine Community College. I built the college's first website (and also one for myself, for the city of Brandon, and for my cat). And I started teaching courses, both for staff and evening students, teaching them about the fundamentals of the internet.

I remember a long and interesting conversation with a representative from Plato, who was trying to sell us on their course system. I liked Plato a lot, and asked him when they expected to put it on the internet. Of course there were no such plans, and so I went one way and Plato went another.

I also needed to prove that the web could be used for online learning, so I took a guide I had created for my Grande Prairie students, Stephen's Guide to the Logical Fallacies, and published it on my website, openly licensed, free for all to use (the license was based on similar open licensing I had seen in the MUD community). Over the years, it has been my most enduring and popular resource. Within a year or so we had created our first web-based course, 'Introduction to Instruction', with professor Shirley Chapman.

If 1995 was anything, it was the year of the home page. I don't know whether GeoCities was ever used for educational purposes. Launched in 1994, it was widely known (and maligned) by 1995. But it was brilliantly engineered; I learned tons by digging through the source, learning how all the Javascript worked, and adapting it for my own purposes. Like so many things, it was acquired by Yahoo and destroyed.


1996 – Computer-Mediated Communication
 Also: Usenet, IRC, Mailing Lists, Bulletin Boards, ICQ

Like Weller, we at Assiniboine experimented with FirstClass around this time - we had a partnership with a school in Winkler/Morden (Manitoba) and worked back and forth with them. It was 'graphical' only in the sense that Windows 3.1 was graphical - you used icons instead of commands on the command line. The problem with FirstClass was that it required a lot of teacher preparation and intervention, and after a year, they were burned out. So I started focusing a lot more on content than on interaction.

While at Athabasca University (between 1991-1994) we had used the CoSy conferencing application, but I hated it. Hated it. We also had access to UseNet but I was never a UseNet person. I used to say at the time that there are two types of internet users: those who like dynamic content, such as games and email, and those who like static content, like Usenet and conferencing. If I were really a dynamic content person I would have used IRC, but that was for real geeks.

By 1996 I was a regular user of email mailing lists, including DEOS-L and NAWeb. A lot of my early articles were in reality long posts to these lists (that was back in the days when long posts were still acceptable, and nobody had considered a 128 character limit). I also took a massive open email course (MOEC?) called something like 'Welcome to the Internet' (I don't have the reference, but it's somewhere in my online corpus). People like Mauri Collins and Rik Hall were instrumental in establishing some of the first email exchanges connecting learning technologists worldwide, and in so doing, learning some core lessons (like, for example, how to moderate online forums).

A lot of people were using some of the earliest online bulletin boards by this time. The best were the boards that would automatically send you emails when a post was created. I participated a couple of years running in online conferences hosted by the University of Maryland University College (UMUC), which was running experiments in the genre. My contributions  included this effort in learning design.

Finally, around this time I also started using instant messaging, and to be specific, an application called ICQ. ICQ was eventually run into the ground by America Online, and people migrated to things like MSN Messenger and AOL. It's easy to forget the importance of the early instant messaging services, because they played such a significant role in developing a lot of the thinking that would eventually result in social networks and more.


1997 – Constructivism
Also: Connectionism

We had our 'behaviourism-for-or-against' moment back in 1986, and at the same time were studying Bas van Fraassen's 1982 book The Scientific Image, in which he set out the agenda for 'constructive empiricism'. But by 1997 I had long since moved on past constructivism (and never took it seriously as a learning theory; people like Piaget and Vygotsky were wrong, and more importantly, known to be wrong); in 1987 I wrote my Master's Thesis on Models and Modality in which I took an anti-constructivist stance.

While constructivism was big in education, the new philosophy of connectionism was catching the eye of computer scientists and philosophers. On my own I developed something called a 'logic of modification' based on similarity theory, which resembled the nascent theory of 'connectionism'. I saw a talk by Francisco Varela at the U of A hospital and in 1990 a group of us travelled to Vancouver for the famous Connectionism conference at SFU. By 1993 I had written The Network Phenomenon, which is essentially a post-constructivist post-cognitivist theory of knowledge.

Weller makes an interesting point: "It also marked the first time many educators engaged with educational theory. This may sound surprising, but lecturers rarely undertook any formal education training at the time; it was usually a progression from researching a PhD into lecturing, with little consideration on models of teaching." (p.32) I recall that I did participate in such training, in part with Athabasca, but more explicitly with Grande Prairie Regional College (GPRC), where we learned about learning styles and Bloom's taxonomy (Weller doesn't mention either of these anywhere). In the Manitoba system, which I joined in 1995, the Certificate in Adult Education was required for all instructors. So maybe it was true in Weller's world, but it wasn't true in my world.


1998 – Wikis

Also: CMSs, RSS

I didn't bother with Wikis. Oh, sure, I looked at them, I liked Wikipedia, but for the most part, the idea of everybody writing a single article didn't appeal to me.

Instead, for me, 1998 was the year of RSS and content syndication. I did a lot of work with RSS, combining it with content management systems (CMS). I wrote my own CMS; however, a number of bulletin board services had evolved into CMSs and were being used for courses and such around the internet (places like UMUC were still in the lead).

Around this time I realized that bulletin boards were not going to last forever, and so I gathered the posts I had written to mailing lists and boards around the internet, put them in my own CMS, and started my own RSS feed. I soon started collecting 'links' as well as 'posts' for my website. This was the beginning of OLDaily, though it wouldn't get that name for another three years.


1999 – E-Learning

Also: LMS, LOM

The term e-learning was coined by Jay Cross, whom I would meet a couple years later in Anaheim. I didn't like the term because I thought it was a way for (static) content providers to try to represent themselves as being 'online'. E-learning thus included things like static courses on CDs and DVDs, which did not interest me at all.

Instead, I spent 1998 and 1999 building a learning management system (LMS) called OLe (for 'Online Learning Environment'). I remember meeting Murray Goldberg at a NAWeb conference, who had written WebCT (in Perl; the ultimate (C?) version would come a number of years later). We used OLe to offer some high school courses through the Brandon Adult Learning Centre and the General Business Certificate through the College.

The big news for 1998, for me at least, was the arrival of the IMS Learning Object Metadata specification, which made a lot of sense to me. I had designed OLe so that it used free-standing 'modules', and the IMS-LOM was perfect for this.


2000 – Learning Objects
Also: Portals, CoI

I wrote my paper 'Learning Objects' in 2000 - it was the product of a presentation I did on IMS-LOM a few months earlier, building on a lot of the ideas people like Dan Rehak had already developed. What made my paper different, I think, was that I had already worked out a number of the practical applications (and problems) with OLe, and so I was in a good position to present the full case from principle to product.

By 2000 I had been hired by the University of Alberta. By now, online communities were big - Cliff Figallo had written his book of that title, Wenger had written about Communities of Practice (CoP), and Hagel and Armstrong had written Net Gain about the use of subject-specific portals. At the U of A I worked with Terry Anderson; Randy Garrison was Dean at that time, and Walter Archer was also there; they were fresh from developing the Community of Inquiry (CoI) model. All this was perfect for the whole combination of CMSs and RSS syndication and newsletters, and we spent the year building MuniMall.


2001 – E-learning Standards
Also: Javascript, Email newsletters

In 2001 my term at the University of Alberta ended and I traveled to Melbourne, Australia, for work on ed tech projects with Tim van Gelder (the same Tim that spoke at the Connectionism conference in 1990, though this is purely a coincidence - he had seen my Logical Fallacies page and hired me because of that). I experimented with Javascript-based comment forms embedded in web pages. And I began experimenting with email newsletters, eventually figuring out how to use my content management system to automatically create the newsletter. Thus, OLDaily was born.


2002 – The Learning Management System
Also: Open Archives, Repositories, Resource Networks

It's odd that Weller put e-learning standards chronologically before the LMS because in fact the provenance is the reverse: first came the LMS, and then, much later, came e-learning standards.

For me, 2002 was the year of learning object repository networks. This work was pioneered by the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), which supported a variety of metadata formats, including Dublin Core. At the same time, MIT came out with DSpace, which was their version of OAI-PMH. We built on this type of work in the EduSource national repository network project, work that included CanCore, spearheaded by Norm Friesen. I remember Griff Richards proposing a peer-to-peer metadata exchange system, and my criticism that at 27 megabytes it was far too large a download.


2003 – Blogs
Also: Podcasting, DRM

As mentioned above, blogs started much earlier than 2003. I had been posting my articles since 1998 and my email newsletter since 2001. But blogging was indeed catching on in educational circles around 2003 and I would write my article on educational blogging in 2004. I remember the black "I'm blogging this" t-shirts. Also, blogrolls and blogwheels. I built a blog 'referrer system' in 2003 that at its peak had 800,000 hits a day (it's the sort of thing that, had things worked slightly differently, would have made me a billionaire).

Podcasting was invented in 2003 by Dave Winer (the father of RSS) and Adam Curry (Daily Source Code). It entered a milieu where people were figuring out how to format content streams, so while in education we were seeing things like Simple Sequencing and Rob Koper's Educational Modeling Language (EML), on the wider web we were seeing things like Synchronized Multimedia Integration Language (SMIL). That's what I used for my own Ed Radio of 2003, which extracted MP3 links from RSS files and inserted them into SMIL, which I then exported into RealMedia and an online service called WebJay (later acquired and killed by Yahoo). There were some significant early educational podcasts, for example Dave Cormier and Jeff Lebow's Ed Tech Talk, as well as Ed Tech Posse by Alec Couros, Rick Schwier and Dean Shareski.

The other big thing that was happening in 2003 was digital rights management. Two standards were getting off the ground: the commercially inspired XrML, owned by ContentGuard, and the Open Digital Rights Language (ODRL). We did work in ODRL and I created a 'distributed DRM' system using the language. There was a lot of debate around that time about openness and licensing, and a lot of complaints about the intrusive DRM systems being employed by commercial content providers, all of which overlapped into the field of ed tech.


2004 – Open Educational Resources
Also: The Semantic Web, Social Networks

David Wiley had been running his Open Content Project since 1998. MERLOT was started in 1997 and opened up in 2000. Creative Commons was founded in 2001. UNESCO formally defined 'Open Educational Resources' in 2002. There was a lot of debate about whether education should support federated repositories, like ADL's CORDRA, which would be closed, or would adopt more open repositories along the model of OAI.

Also in 2004, social networks burst onto the scene with the launch of MySpace and LinkedIn (2003), thefacebook (exclusively for the Hah Vad community) and Flickr (2004). These were arguably an outgrowth of CMSs that became blogging platforms, and were presaged by sites like LiveJournal (1999) and Deviant Art (2000).

Meanwhile, the development of the web, XML and standards all coalesced into the Semantic Web. The Resource Description Framework (RDF) had been around for a while; I wrote about it in relation to RSS and learning objects. It was gradually joined by ontologies (OWL) and a query language (SPARQL), and would form the foundation for web services properly so-called (SOAP and WSDL). If all this sounds like alphabet soup, well, that's how it felt in 2004. To me, a 'semantic social network' was a natural, and I wrote about that. I also combined the idea of distributed DRM with all this to propose something I called Resource Profiles (which we now think of as resource graphs).


2005 – Video
Also: OpenID, E-Learning 2.0

YouTube launched in 2005, which explains its inclusion here. But it was more about what YouTube represented than it was about videos. It was not just a place you could upload videos (though it was certainly that) but it was also very much a social network. This is why we see Weller quoting people like Henry Jenkins suggesting that we should begin to see learning online as creative and fundamentally participatory. And that's why we also see things like the Khan Academy and the flipped classroom (temporarily branded the 'Fisch flip' after Karl Fisch). Like so many other people, I had a Flip video camera (which was affordable) long before I had a smart phone (which was not).

But 2005 wasn't really about video for me. It was about authentication and identification. This made sense in the context of DRM, and also in the context of social networks. Meanwhile, LMSs, repositories and resource networks had never gone away, and so the question of how to identify people became urgent. FOAF was developed to help people identify themselves (and their nearest airports) on their websites, just like an alternative RSS feed. I developed my own system, mIDM, and five days later, OpenID was released to the world. Eventually this open approach to authentication would give way to 'Login with Google' and the like, though the dream never disappeared entirely. The academic community, meanwhile, adopted a federated approach, closed to outsiders, which resulted in protocols like Shibboleth (which saw a major release in 2005) and eduRoam (which was just beginning to expand outside Europe in 2005).
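One small detail of FOAF is worth spelling out, because it shows how identity and privacy were being balanced even then: rather than publishing your email address, a FOAF file could carry foaf:mbox_sha1sum, the SHA-1 hash of your mailto: URI, which identifies you consistently across sites without exposing the address itself. A sketch, with a made-up address:

```python
# Sketch: computing a FOAF-style mbox_sha1sum identifier.
# The address is made up; FOAF hashes the full "mailto:" URI with SHA-1.
import hashlib

mbox = "mailto:someone@example.org"  # hypothetical address
mbox_sha1sum = hashlib.sha1(mbox.encode("utf-8")).hexdigest()

# A stable identifier that doesn't expose the address itself.
print(mbox_sha1sum)
```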

All of this was part and parcel of what was called Web 2.0, which was popularized by the O'Reilly Web 2.0 Conference in late 2004. All of this came together for me, and I wrote a paper called E-Learning 2.0. Before the MOOC, and after Learning Objects, this is what I was best known for.

 
2006 – Web 2.0
Also: Skype

Weller writes quite a negative chapter on Web 2.0, though he does get some of the essential elements right. First of all is the idea that web 2.0 is a bottom-up concept. Second is the emergence of the idea of tagging (or, more formally, the folksonomy) rather than metadata. Yes, some web 2.0 sites like del.icio.us and Technorati are gone, but the core ideas of web 2.0 live on to this day, as do some of the key underlying technologies, like OAuth, for distributed identity management, and representational state transfer (REST), which eventually replaced the top-heavy model of web services. Weller says "everyone (including myself) is now rather embarrassed by the enthusiasm they felt for Web 2.0." I don't understand that. Web 2.0 left a huge legacy and we would be much poorer without it.

Skype is a good example of this. I began working with Skype in earnest in 2006, using it not only for its intended purpose of voice chat (then video chat) with family and friends over long distances, but also with the world's first educational Skypecast, which would be a forerunner of the webcasts we have today. This first Skypecast was broadcast from Bloemfontein, South Africa, where I gave a talk to a theatre audience and a global audience measured in single digits. Skype eventually discontinued its Skypecast feature, but you could see the future through that crack in the door.


2007 – Second Life and Virtual Worlds
Also: Twitter, OLPC

Virtual worlds date back to the Adventure mainframe game, and then MUDs, both mentioned above. In those days, there were two Holy Grails: first, transportation between MUDs, which was partially achieved with the InterMUD protocol, which at least allowed people in different MUDs to communicate with each other; and second, graphical MUDs, which would await the development of video games like DOOM and 3D modelling software like VRML. By 2007 a lot of work had been done in educational simulations using VRML, especially in military learning, but I wasn't involved in any of it.

Into this environment came Second Life, which was distinct in two ways: it was commercialized and heavily marketed, leading institutions and organizations to buy and build 'islands', and second, there was nothing to do, which led to it being mostly empty. Weller documents this pretty well, including the use of Second Life to give lectures, and the Second Life-Moodle integration called Sloodle.

Another thing that got started (at least for me) in 2007 was Twitter. I wasn't exactly an enthusiastic user, but it clearly had potential. It's worth noting that Twitter inherited a number of web 2.0 ideas, including especially following and tagging, which gave it an instant graph-like structure characteristic of the genre. I was never a fan of the idea that my communications with individuals could be read by everyone and I didn't like the idea of publicly posting a select list of people that I follow (it felt too much like high school cliques to me; I never had a blogroll either).

Weller also doesn't mention Nicholas Negroponte's One Laptop Per Child (OLPC), which is a surprising and serious omission in a book about Ed Tech. It was an ambitious plan that had some good ideas - first of all, the idea of an affordable personal computer for everyone, with a full suite of educational applications, with mesh networking for regions without internet, and alternative power sources. OLPC was widely viewed as a failure, partially because of cost, partially because the product wasn't very good, and partially because Negroponte's model required governments to invest in millions of the units. But it resulted in the widespread deployment of affordable tablets, influenced the development of open operating systems like Android, and has a legacy today of products like Raspberry Pi and Arduino. I've done far too little work in this area, but I get it.



2008 – E-Portfolios
Also: Edupunk, Networks, MOOC

It's hard to imagine writing a history of e-Portfolios without mentioning Helen Barrett, but Weller has done that. From her original pioneering work in 2001 and for more than a decade on, Barrett was at the core of the development and popularization of e-Portfolios. They are now standard parts of LMSs, widely used in educational institutions (as a quick Google search will show), and invaluable to students for job applications, especially in creative fields. My own work with e-Portfolios included work on using them to establish a digital identity. Yes, there were issues, including the institutional ownership of e-Portfolio websites, as Weller notes. But the more significant issue was in the use of e-Portfolios for assessment as opposed to e-Portfolios for growth and development.

Edupunk was Jim Groom's idea. He drew on the punk ethos of 'do it yourself' and of making things do what they weren't intended to do. It's what you get, I think, if you take Web 2.0 and remove the corporations.

The other big thing for me in 2008 was networks. For me the start of all this was my 2004 'networks' edition of OLDaily. This was an outgrowth of the Future of Learning in a Networked World project run by Leigh Blackall in New Zealand, and I followed it up by elaborating on the idea of 'Groups vs Networks', which attracted some attention from Terry Anderson and Jon Dron. It was also an outgrowth of my longstanding interest in connectionism, which meshed perfectly with George Siemens's description of connectivism, and by the fall of 2008 we had embarked on our first MOOC.


2009 – Twitter and Social Media
Also: PLE

If there's a trend in Weller's book, it's that he's generally about four years behind the date something actually happened. And here again with social networks. I think this represents the time it takes for something to be developed in the field, and for it to be recognized in academia. But of course that's just a theory. Anyhow, in 2009 social media had been around for five years, Twitter for three. In passing, let me note that here Weller mentions Twitter's use of the hashtag (and reading this, it strikes me that maybe he does not know that the folksonomy and the hashtag are the same thing, as they are treated completely separately). 

There's a good point to be made about the benefit of social media, and especially the way it democratizes access to distribution and influence, and the harm of social media, where it serves as a platform for harassment, bullying and fake news. Social networks are also the locus of context collapse, discussed by people like danah boyd and Michael Wesch, where there's no distinction between talking to one person and talking to everyone.

In 2009 we were talking about things like the Orange and Rose revolutions (and the next year, the Arab Spring), thinking how good it all was, and didn't imagine it being used against us. But we should have, had sufficient attention been paid to things like the harassment of Kathy Sierra. And it was only later we saw how sites like Reddit and 4chan inspired things like GamerGate and, a few years later, profoundly influenced politics.

My focus in 2009 was the personal learning environment. This concept was already a couple of years old; I participated in a conference with people like Scott Wilson in Manchester and developed a project at NRC called PLEARN. It was an attempt to take what we already knew about things like Web 3.0, social networks and e-portfolios and move them from institutional ownership (or corporate ownership) and into the hands of individuals.



2010 – Connectivism
Also: digital literacy, critical literacies

George wrote his connectivism paper in 2004 (and I made sure it got published in IJITDL), and my own Introduction to Connective Knowledge followed shortly in 2005. And of course our Connectivism and Connective Knowledge MOOC was in 2008. So I don't really know why it shows up under '2010' in Weller's book, save for the theory I mentioned above. That would explain why he credits Kop's (2011) version of my four principles of aggregation, remix, repurpose and feed forward.  After discussing Dave Cormier's rhizomatic learning (which I think was developed independently of, and not following, Deleuze and Guattari) and his own theory, Weller writes, "it feels that the sense of experimentation and exploration that connectivism represented has dried up." I don't agree.

My focus in 2010 was on knowledge. I had taken the observations of people like Bell, Mackness and Kop seriously and was working through what it was that people needed to be successful in a MOOC, and what it was even for a person 'to know'. I eventually settled on the idea of 'knowledge as recognition', but my focus in 2010 was the critical literacies, a set of ideas that I had developed in a couple of talks in late 2009 and expanded into a full concept in 2010. My thinking was this: in traditional learning, students need to be literate, so we teach them literacies first thing. In an online world, there's a distinct literacy, often called digital literacy, sometimes called twenty-first century skills. What underlies these? Kop and I would eventually teach a Critical Literacies MOOC later in 2010.

There's a whole pile of work in digital literacies that was being done by others around this time, especially in relation to education. See for example Doug Belshaw's work. But the term appears exactly once in Weller's book.


2011 – Personal Learning Environments
Also: performance support, LTI

In 2011 I was still working on the PLEARN projects and beginning to ramp up for what would be the Learning and Performance Support Systems project. The idea of the PLE had now been around for some six years with no real progress being made. The PLE encompasses a bunch of hard problems. These are only hinted at in Scott Wilson's famous diagram (and the many other diagrams that followed). Weller mentions Wilson's Plex, which was a PLE (though to me it looked like a file manager) and Dave Tosh and Ben Werdmuller's ELGG, which was not (it was a social network service).

Weller also references the term PLN, or 'personal learning network', which refers to the network of connections a person makes using social network services. To me it was nothing but a branding exercise; people taking the idea of the PLE, applying it to something that already existed, and giving it a new name.

He finishes the section saying "personalized learning remains one of the dreams of ed tech, with learners enjoying a personalized curriculum, based on analytics." This has nothing to do with the PLE. It's a common confusion, though. A lot of people see personalization as the core of performance support, for example (and that's how LPSS became an ungainly attempt to merge those two concepts, and how so many people thought we were building a recommendation engine, but that's another story). I spent most of 2011 trying to explain this difference.

I'm also going to use 2011 as the year for learning tools interoperability (LTI), even though the specification came out in 2010, because LTI is a natural complement to what we were trying to do with PLEs. We did some preliminary work with LTI, and in the PLEARN application we built, we used LTI to help learners (not designers or instructors) define what applications they would use (PLEARN was shut down prematurely by NRC management, much to my regret).


2012 – Massive Open Online Courses
Also: learning experience design

Laura Pappano in the New York Times declared 2012 'the year of the MOOC' so I guess I can see why Weller chose 2012, though I remember it more as the end of the Mayan calendar. And looking back, I can see I did a lot of talking about MOOCs in 2012, as the phenomenon swept me up along with all the publicity.

But I also started talking about the learning experience around that time. This was the big difference between the PLE as I saw it and the content-based learning-path based systems emerging both in the commercial MOOCs and in the nascent learning analytics movement (which I was watching but not a part of at all). In my view, what we're designing when we design learning is an environment that is open-ended and characterized by its affordances. Today of course learning experience design means something totally different, but that's what it meant to me then.

More recently, people have been talking about the death of MOOCs. But there are more MOOC providers than ever, all the MOOC companies managed to somehow find success, and (to my mind) most importantly of all, MOOCs cemented the idea of open access as mainstream. As Weller says, "there is no real driver for educators to focus on open access above other resources. But when universities started creating MOOCs, this placed pressure on people to use open access resources, because the open learners probably wouldn’t have library access privileges." I think focusing on the few commercial approaches (as Weller does) is a mistake; far more interesting are the many more open MOOC initiatives around the world in less commercially-minded nations.


2013 – Open Textbooks
Also: ownership, cooperatives

In 2013 I was still talking about MOOCs. The interest was intense, and that's what people wanted me to talk about. I talked about how they needed open educational resources, about how they were needed to create learning communities, and how they supported open-ended informal learning and performance support. We were just designing the LPSS program, trying to meet the government's objective of corporatizing NRC, and I was thinking of issues of community and ownership. My main project at the time was Moncton Free Press, an Occupy-inspired journalism cooperative.

Needless to say, I wasn't interested in open textbooks at all. That's not to sell them short; as Weller notes, initiatives like OpenStax and BCcampus (where people like Paul Stacey and David Porter should get a lot of credit) have saved students millions of dollars. But they strike me as old technology, and a problem that was solved years before by things like Project Gutenberg and even (to a degree) Internet Archive. Weller claims (p.181) that open textbooks are an outcome of learning objects and OER, but that is hardly the case.


2014 – Learning Analytics
Also: cooperation, recognition


After MOOCs and connectivism, when I made the turn toward personal learning, George Siemens made the turn toward learning analytics. His turn was more successful than mine, at least in the short term. Weller quite properly credits George with a lot of this early work (though for some reason credits Sclater, Peasgood, and Mullan (2016) for the definition instead of Siemens and Long (2011)). Another case where the Weller version is several years too late and credits the wrong people. Anyhow, I document many uses of learning analytics here.

Given my interest in neural networks I should have been more interested in learning analytics, I suppose, but I wasn't, and for several reasons. First, learning analytics was (and is) about creating personalized learning paths, based on analyses of students' performance. Second, most learning analytics of the time was (and is) based on clustering and regression, which aren't really network properties. Weller has a couple of cases that show this sort of failure: the indifferent success of Course Signals, and the 'six myths' about students at the Open University.
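To make the 'clustering and regression' point concrete, here is the kind of analysis most learning analytics of the day amounted to: group students on a couple of engagement measures and flag the low-engagement cluster. The numbers are invented, and this is a sketch of the technique, not of any particular product.

```python
# Sketch: the kind of clustering typical of early learning analytics.
# The engagement numbers are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [logins per week, assignments submitted]
students = np.array([
    [12, 9], [10, 8], [11, 10],   # highly engaged
    [3, 2], [2, 1], [4, 3],       # at risk
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(students)
print(labels)  # e.g. two groups of three -- no 'network' structure involved
```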

Weller also notes, "The downsides to learning analytics are that they can reduce students to data and  that ownership of this data becomes a commodity in itself." They don't "reduce" students to data, but they do extract data from students. And this is something that had been noted long before; it is the dichotomy inherent in Michael Wesch's (2007) "The machine is us/ing us" and the 2010 statement that "If you are not paying for it, you're not the customer; you're the product being sold."

Weller also talks about ethics in learning analytics, quoting Slade and Prinsloo, who did indeed write an interesting paper (2013). But it should be noted that by 2013 a ton of other work had been done in this area, which I've been digging into elsewhere.

In 2014 I was still deeply immersed in LPSS, and still talking a lot about PLEs and MOOCs. Subtopics that were of interest were the idea of cooperation (as opposed to collaboration), and automated student assessment. I didn't actually build anything related to these (I was trying to get LPSS interested in them) but I wrote a number of articles on both topics.



2015 – Digital Badges
Also: competencies

As Weller notes, the beginning of digital badges is probably best placed in 2011 with the launch of Mozilla's backpack and badge infrastructure project. It was a neat idea (and if you look at the mechanics you can see that it parallels almost exactly the distributed DRM and authentication approaches of earlier years, with a three-part relationship established between an issuer, a recipient, and a badge hosting service). What made it tricky for people to adopt was that the badge data was 'cooked' into the digital badge image, which was taking the tech a step too far. And of course the idea of the badge itself is a direct descendant from the original idea of smaller chunks of learning found in concepts like learning objects.
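The mechanics are easier to see with a concrete (and purely illustrative) example. A badge assertion is a small piece of JSON naming the recipient, pointing to a badge class hosted by the issuer, and saying how the claim can be verified; early tooling then 'baked' this data into the PNG of the badge image. The structure below only loosely follows the Open Badges shape, and every URL and value in it is invented.

```python
# Purely illustrative badge assertion, loosely following the Open Badges shape.
# All URLs and values are invented; consult the actual specification for details.
import json

assertion = {
    "recipient": {
        "type": "email",
        "identity": "learner@example.org",  # often hashed in practice
        "hashed": False,
    },
    "badge": "https://issuer.example.org/badges/connectivism-101.json",  # badge class
    "verify": {
        "type": "hosted",
        "url": "https://issuer.example.org/assertions/42.json",
    },
    "issuedOn": "2015-06-01",
}

print(json.dumps(assertion, indent=2))
```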

I wrote nothing about competencies during that period, but it was the first bit of LPSS that was really developed (though it never did see a wide release). There was quite a bit of activity in the field with numerous efforts to develop various competency standards. They were a natural adjunct to badges, and mentioned by Weller briefly in that context. What competency definitions added, crucially, was to define what would count as evidence for an assertion. So you could see the chain of connections from performance support to competencies to assessment to recognition, and from assessment to learning resources (via learning object metadata). I remember John Baker calling all this "magic rubrics" back in 2008 or so, and it made total sense to me.

The problem with badges and competencies wasn't "the credibility issue," as Weller notes (if Harvard started giving out badges the credibility issue would disappear overnight). It was in the idea of defining what constituted performance, competence, and evidence. I remember arguing at the time that assessment was based on recognition, not specific performance outcomes. But whether or not I am right about this, the actual practice of defining competencies is difficult, and for many, poses an intractable problem.


2016 – The Return of Artificial Intelligence
Also: deep learning, fake news, personal learning

Weller discusses 2016 as 'the return of artificial intelligence' but most of the chapter is an extended argument explaining 'why AI will not work'. Most of the discussion is focused on 'expert systems', a rule-based approach widely considered to have limited applications because of the difficulty in defining the rules and the inherent complexity of the problem space, as Weller (in so many words) notes. But the list of 'reasons why AI will fail' offered by Selwyn (2018) are just so much nonsense.

By 2016 the entire learning analytics community (and the entire AI community in general) was abuzz over the potential of 'deep learning'. This is an approach based in neural networks, and hence, the connectionism of old. What makes the networks 'deep' is the use of multiple 'hidden' layers. All this was described by people like Rumelhart and McClelland in the 1990s, but it took this long to develop a solid set of applications. While much of the 'learning analytics' touted in previous years was based in rule-based approaches, by 2016 the bulk of the work was being done with neural networks, and these were beginning to work quite well. It's true that none of these amount to 'general AI', which would emulate full human capabilities. But this isn't a serious issue at the moment.
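For readers who haven't seen it spelled out, 'deep' just means the input passes through more than one hidden layer of weights and nonlinearities on its way to the output. A toy forward pass with random weights, which is a sketch of the structure rather than a training algorithm:

```python
# Toy forward pass through a network with two hidden layers ("deep" = multiple hidden layers).
# Weights are random; no training is shown here.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer with a ReLU nonlinearity."""
    w = rng.normal(size=(x.shape[0], n_out))
    return np.maximum(0, x @ w)

x = rng.normal(size=4)             # input features
h1 = layer(x, 8)                   # first hidden layer
h2 = layer(h1, 8)                  # second hidden layer
output = h2 @ rng.normal(size=8)   # scalar output

print(output)
```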

The big issue in 2016 was that AI was beginning to work and that it was immediately being put to less honourable purposes. The emergence of fake news wasn't so much the result of social networks - these had been around now for a decade - but the use of bots and analytics to craft messages, identify recipients, and spread propaganda. Complicit in these efforts were data-gathering technologies to feed the algorithms with the input they needed to learn and recommendation algorithms that would gradually lead people to more and more extreme content.

My response to this is personal learning. This is a response on several levels. First, a personal learning network is much more distributed than social networks, and thus much less amenable to take-over by bot networks. Second, personal learning networks allow people to select the people and messages they want to receive, thus preventing unwarranted influence by algorithms. And third, personal learning helps each individual develop a personal immunity to fake news, rather than relying on experts. The jury is still out on whether personal learning will come to the fore. It's not going to be supported by institutions of experts like schools and universities.


2017 – Blockchain
Also: xAPI, Caliper, cloud, virtualization 

Blockchain is an interesting technology that will have future applications, though it is far from prime time. Weller cites Tapscott and Tapscott (2016) to define it (yet another secondary source) though it originated in 2008 with Satoshi Nakamoto's white paper. There's a really good overview in Ethereum. To really understand blockchain, though, is to recognize it as a combination of two underlying technologies: consensus (that is, the theory of distributed networks), and encryption. Weller, citing Grech and Camilleri (2017), identifies four areas of impact; in fact, there are a lot more, as I have documented here in my notes for a (forthcoming?) book The Blockchain Papers.
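The 'consensus plus encryption' point is easiest to see in the data structure itself: each block carries the hash of the previous block, so tampering with any block invalidates every block after it, and consensus is then the network agreeing on which chain of valid blocks to extend. A minimal sketch of the chaining part (consensus is omitted):

```python
# Minimal hash-chain sketch: each block commits to the hash of the previous block.
# Consensus (deciding which chain the network accepts) is not shown.
import hashlib
import json

def make_block(data, prev_hash):
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("genesis", prev_hash="0" * 64)
block1 = make_block("badge issued to learner A", prev_hash=genesis["hash"])
block2 = make_block("badge issued to learner B", prev_hash=block1["hash"])

# Changing block1's data would change its hash, breaking block2's prev_hash link.
print(block2["prev_hash"] == block1["hash"])  # True
```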

As many commentators have noted, blockchain is a natural for badges. As Weller notes, Mozilla passed off the badge infrastructure to Badgr. Other companies, like Credly, were also in the mix. And most significantly, IMS took over the formal badge specification in 2017. This reflected a community-wide interest in assessment standards. In 2017 IMS also released its competencies standard, in 2018 it would release Caliper, and then the Competencies and Academic Standards Exchange (CASE) standard. Weller doesn't mention xAPI or Caliper anywhere, though these are crucial pieces of the developing ed tech infrastructure.
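xAPI deserves a sentence of explanation here: it records learning activity as simple 'actor, verb, object' statements sent to a learning record store, which is what makes it such an important piece of plumbing. The example below follows that general shape, but the identifiers are invented, so check the specification itself before relying on exact field names.

```python
# Illustrative xAPI-style statement ("actor - verb - object").
# Identifiers are invented; the real specification defines the exact field names.
import json

statement = {
    "actor": {"mbox": "mailto:learner@example.org", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {"id": "https://courses.example.org/e-learning-3-0/module-1"},
}

print(json.dumps(statement, indent=2))
```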

In 2017 I spent a lot of time trying to learn about the cloud and virtualization, something that culminated with my (less than successful) gRSShopper in a Box workshop. By 2017 cloud technology was becoming huge, with a whole infrastructure developing under it. I covered a lot of this in my presentation late that year in Tunisia. The use of cloud technologies for learning probably first emerged in a big way with the Stanford MOOCs, which used AWS for scaling. Ed tech infrastructure companies have been building on that ever since. In 2017 I found myself playing catch-up, but also, we were reaching the point where individuals could begin doing a lot with tools like VirtualBox, Vagrant, Docker and more. Weller's book mentions none of this, though it's probably the most significant development in ed tech in the last decade.


2018 – Ed Tech’s Dystopian Turn
Also: open practices, fediverse, distributed learning technology, domain of one's own


I'm not really sure 2018 was more dystopian than any of the other years, but Weller picks this year to make the point. To be sure, it is the year many of the more dystopian aspects of learning analytics began to appear in the literature, and Weller highlights several. Much of this chapter consists of lists of examples. I've compiled a comprehensive and categorized summation of these issues here. But in any case, there's a long history of ed tech dystopianism. We have things like David Noble's Digital Diploma Mills. Kearsley. The whole history of the education deform movement. And for every one of the 100 ed tech debacles described in Audrey Watters's seminal article, there was a chorus of critics. I get that things go wrong. But I don't know of any industry in which they haven't. To me, ed tech is doing a lot better than a more traditional model of schooling that involved hazing and bullying, corporal punishment, suicide, child sex abuse, and worse. I mean, we can pick our criticisms, can't we?

The issue that really emerged in 2018, to my observation, was open practices (or practises) (sometimes also called open pedagogy). There are various aspects: learning by doing, working and learning openly, communities of knowledge. All of these have roots in the earlier technologies we've talked about, from blogs and social networks to e-Portfolios to MOOCs (where we mentioned it explicitly), to performance support, and the definition first emerged in 2010; see, e.g., Ehlers and Conole (also here). Also (especially to people like David Wiley, who defined it back in 2013 in terms of what could be done with openly licensed resources), things like open source and OER. You also see it in things like OERu's 'Logic Model', originally developed in 2017 by Jeff Taylor. The issue came to a head with the two versions, summarized by Cronin (2017) and Wiley and Hilton (2018), each establishing their respective ground.

For me, 2018 was the year of E-Learning 3.0, more properly called 'distributed learning technology'. It's essentially what you get if you combine the cloud with the core ideas of the blockchain (consensus and cryptography). There's a lot of serious thinking happening about things like federated social networks (including ActivityPub, decentralized IDs, Mastodon, WebMentions, microblogs, and much more) as well as cloud-based learning networks, data-driven learning resources, and more. I covered a lot of it in my 2018 E-Learning 3.0 course. I would also mention Jim Groom's Domain of One's Own as an important initiative in this space, reinforcing once again the idea that learning networks should be centred around people, not institutions or corporations.

2019 – VR
Also: Advanced Decentralized Learning Ecosystem, Duty of Care

If Martin Weller had included a chapter for 2019, I expect it would have been VR, based on what I expect to see in the literature in 2022 and 2023. VR has recently coalesced around two popular engines, Unreal and Unity. My own colleagues have been doing development of learning simulations in Unity. We're also seeing the emergence of much more affordable VR headsets, such as the Oculus Rift and HoloLens (of course, 'affordable' here is still relative; I can't afford one personally yet). As always, there's a long history of work happening in the background, and I remember Donald Clark being big on the technology a number of years ago. The other thing that's making it more mainstream is the emergence of GPUs and distributed processing. Because of them I can play games like No Man's Sky on my laptop. Ah, bliss.

For me, 2019 was all about the Advanced Decentralized Learning Ecosystem (ADLE?) (my answer, I guess, to EDUCAUSE's NGDLE (which probably belongs in the 2015 section)). I ran a number of experiments, publishing results related to the electron infrastructure, connectivism, badges and blockchain, the future of OERs, and related topics.

I also started my monolithic Ethics, Analytics and the Duty of Care. As a colleague recently commented, everybody is getting on the ethics bandwagon these days (and I should be working on that, not this article). The other side of it is the analysis from the perspective of feminist 'Care' ethics. My first exposure to this was at a seminar hosted by the department of public affairs at Carleton (I can't find the reference, but I wrote about it). It's where I heard the name Carol Gilligan for the first time.


Conclusions: Reclaiming Ed Tech
Also: historical amnesia, academic isolationism

If there's an overall theme to Weller's work, it's this: "in ed tech, the tech part of the phrase 'walks taller.'" And this, he says, has been largely unsuccessful. "Some of the innocence and optimism invested in new technologies is no longer a valid stance." This is because education technologists fail to serve educational objectives, he suggests. "The technologies that are most widely adopted and deeply embedded in higher education institutions tend to correlate closely with core university functions, which are broadly categorized as content, delivery, and recognition."

But it's hard to reconcile his criticism that the ed tech sector suffers from historical amnesia with the treatment of ed tech in this book. There's just so much of the rich history of ed tech that doesn't show up in these pages. Even in my own treatment, I can think of dozens of things I should have mentioned - things varying from Flash to XSL to Webheads in Action to the Flat World Network, informal learning, and so much more. Like Weller, I didn't include mobile learning, game-based learning, and learning design, and they deserve a place. I don't know how you can say that, in essence, nothing has changed over twenty-five years, when from a broader perspective it becomes apparent that everything has changed. Forget Christensen's disruption theory, which never had anything to do with education technology. Look at real people: how they learn, where they learn, what they learn. Nothing is the same. And most of the change has been for the better.

One example of this is academic publication itself. If you're relying on journal articles to understand ed tech, then you're years behind, you're missing most of the story, and you're crediting the wrong people for innovation. Weller notes at one point,

In examining the different subcommunities that have evolved under the broad heading of "open education", Weller, Jordan, DeVries, and Rolfe (2018), using a citation analysis method, discovered eight distinct communities. The published papers in these areas rarely cross over and reference work from other communities, which is symptomatic of the year-zero mentality.

Right. That's how the academic community is approaching the subject. Closed citation networks, in-groups, self-centered societies, no regard for primary literature. Weller's book doesn't mention Jay Cross once. Can you believe it? But the practitioners on the ground have been engaged in the same enterprise from year one, working the same themes and the same ideas, developing the ideas and infrastructure that really is rewriting how we learn. The really big things are already evident: open access education, learning communities, informal learning. The rest of it - the really hard work of building an entire infrastructure from scratch - is still in progress.

After reading this book, I want to say to Weller the same thing I say to so many baseball umpires and referees: "Hey, open your eyes, you're missing a really good game out there."

02 Mar 01:35

Surface Pro X :: Battery Life

by Volker Weber

SurfaceProXbattery.jpg

I have yet to run down the battery during a single day. I cannot tell you in hours how long it lasts, but it lasts longer than I sit in front of it. Which is actually quite impressive. It means I don't need to charge it during the day. This all depends mostly on the screen brightness and the workload though. I like to sit in dimly lit rooms when I work and my workloads are rather light.

02 Mar 01:33

How to enable privacy settings like tracker blocking in Chrome, Firefox, Edge and more

by Jonathan Lamont
Chrome, Firefox, Edge and Safari icons

Some of you may not think about your web browser all that much. These days, however, it could be worth considering why you use what you use — and how you could benefit if you switch.

One of the central features that you should consider is privacy. Most browsers offer at least some level of privacy protection, but just what they offer and how you enable it can be a bit confusing. So, we’ve compiled a list of the privacy functions offered by some of the most popular web browsers out there, how you can turn them on, and an idea of the differences between each browser.

Keep in mind that this is not a comprehensive list. There are a lot of browsers out there, and most have multiple versions for mobile and desktop, as well as betas and test builds. On top of that, many browsers support extensions that go above and beyond what’s included in-browser. As such, don’t take this as the be-all and end-all guide for web browsers. The list will focus on the tools included in some of the most popular browsers for desktop — and most on the list have excellent mobile variants with similar privacy features on offer. That said, let’s dig in.

Google Chrome

Google Chrome icon

We’ll kick things off with a look at Google Chrome, arguably the most popular browser available at the moment and likely the one you’re currently using to read this. It’s also one of the more limited when it comes to dealing with privacy issues like trackers, but it is getting better.

Chrome version 80 introduced the beginnings of a system that will gradually make cookies more secure. The system allows all first-party cookies (used to store your preferences for a specific site) but adds limitations for third-party cookies (which can follow you around the web and store your online activity). You can read all the details here, but in short, third-party cookies will be required to include specific ‘same-site’ settings to ensure they’re accessed securely. Further, Google plans to phase out third-party cookies entirely and replace them with a new technology it’s developing in about two years.
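Concretely, those 'same-site' settings are attributes on the Set-Cookie header: a cookie meant to be sent in cross-site (third-party) contexts now has to declare SameSite=None and be marked Secure, otherwise Chrome treats it as same-site only and withholds it from third-party requests. A small sketch using Python's standard library; the cookie name and value are arbitrary placeholders.

```python
# Sketch: a cookie that Chrome 80+ will still send in third-party contexts.
# The cookie name and value are arbitrary placeholders.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["tracker_id"] = "abc123"
cookie["tracker_id"]["samesite"] = "None"  # explicitly allow cross-site use
cookie["tracker_id"]["secure"] = True      # SameSite=None also requires HTTPS

# The header a server would emit, something like:
#   Set-Cookie: tracker_id=abc123; Secure; SameSite=None
print(cookie["tracker_id"].output())
```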

However, Chrome also has a built-in cookie blocker. Go to Settings (click the three dots in the upper-right corner) > Advanced > Privacy and security > Site settings > Cookies and site data. From there, you can toggle the option to ‘Block third-party cookies,’ or toggle off ‘Allow sites to save and read cookie data’ to block all cookies. When Chrome blocks cookies, it displays a cookie icon on the right side of the address bar. Clicking the icon lets you see what cookies were blocked and choose to allow or block individual cookies.

Fingerprinting and ad-blocking

At the moment, Chrome does not offer any protection against fingerprinting — the practice of using certain browser and web data to identify people across the web — and only offers limited built-in ad-blocking. Google plans to implement fingerprint protection later this year. As for ad-blocking, by default Chrome blocks all ads on sites that break rules set by the Coalition for Better Ads. However, that’s set to change as well — you can learn more about that here.

Mozilla Firefox

Mozilla Firefox icon

Mozilla’s Firefox browser has been around for some time and it’s honestly one of the best out there, especially from a privacy standpoint. Mozilla has made privacy one of its main goals with Firefox, and it seems most of the browser’s recent feature releases add to privacy protections in some way.

To start, Firefox began offering ‘Enhanced Tracking Protection’ as a feature in October 2018. It’s a customizable feature that blocks trackers on the ‘Disconnect list’ as well as cross-site tracking cookies by default. Also, it allows first-party cookies, but you can adjust that setting if you choose. The easiest way to check if Enhanced Tracking Protection is running is to look at the address bar in Firefox. There should be a shield icon on the left side; if it’s purple, Firefox is blocking trackers. If the icon is grey, there aren’t any trackers to block and if the icon is crossed out, then protection is disabled for that website.

Clicking the shield lets users quickly toggle protection on or off, as well as see which cross-site cookies Firefox is blocking. Clicking the ‘Manage Protection Settings’ button — or clicking the three-line menu button, then Options > Privacy & Security — will let you customize the settings.

Enhanced Tracking Protection’s settings let you pick between the ‘Standard’ experience, ‘Strict’ or ‘Custom.’ Strict blocks more than the Standard setting, but Firefox warns it may break some sites. One of the best examples of this is embedded content, such as tweets. Those rely on, at least to some extent, cookies. In my experience, using the Strict setting stops those from working properly. Custom gives users control over what gets blocked.

Fingerprinting and ad-blocking

Mozilla is currently building out a fingerprint protection feature that would warn users when a site tries to extract data that could potentially be used to create a fingerprint. Currently, Firefox warns that using the feature could “degrade your web experience.” However, you can still turn it on if you want by typing ‘about:config’ in the address bar and searching for ‘privacy.resistFingerprinting.’ If it’s marked ‘true,’ then the feature is on. If not, click the toggle button to enable it. You can learn more here.

As for ad-blocking, Firefox does not block ads natively, but Mozilla does provide a list of recommended ad-blocker add-ons to use in Firefox.

Microsoft Edge

Microsoft Edge icon

Arguably the new kid on the block, Microsoft recently remade its Edge browser using Chromium as the base — the same open-source foundations that power Google Chrome. While there are a lot of similarities between the two browsers, one thing that Edge does exceptionally well is privacy.

Like Firefox, Edge offers built-in tracking protection, which comes in three flavours: ‘Balanced,’ the default option, ‘Basic’ and ‘Strict.’ Balanced blocks some third-party trackers along with any trackers designated as ‘malicious.’ It also takes into account the sites you visit and the organizations that own those sites. It lowers tracking protection for organizations you interact with regularly. Basic only blocks the ‘malicious’ trackers while Strict blocks most third-party trackers.

Users can edit these settings by clicking the three-dot menu button in the top right of the browser, then Settings > Privacy and Services. Further, while on this page users can review blocked trackers by clicking ‘Blocked trackers,’ which shows a list of all the trackers Edge has blocked. There’s an ‘Exceptions’ option as well where you can specify sites that Edge won’t subject to tracking protection.

Alternatively, users can click the lock icon in the address bar when on a website to adjust per-site tracking settings and see how many trackers Edge blocked on a given site.

Finally, Microsoft says its tracking modes — especially Strict — will also help protect users against fingerprinting. The Edge browser does not block ads natively, but since it’s based on Chromium, users can add extensions from the Chrome Web Store as well as from the Microsoft Store, giving users a wealth of options when it comes to ad-blocking extensions.

Safari

Safari icon

Apple’s Safari browser is the default option on macOS and while you can change it if you choose, there are plenty of good reasons to stick with Safari. Privacy is one of them.

Safari offers a feature called ‘Intelligent Tracking Protection,’ which uses machine learning to determine which sites can track you across the web. Then, Safari blocks and deletes third-party trackers from sites you haven’t visited in 30 days. On top of that, Safari allows cookies from sites you visit frequently to keep working in a third-party context for 24 hours. Afterwards, the browser sections them off so they can still store your login info and help you sign into sites, but can’t track you. Again, if you don’t visit the website the cookie is from within 30 days, Safari deletes it.

Users can head to Safari > Preferences > Privacy to disable Intelligent Tracking Protection (it’s on by default) or tell Safari to block all cookies.

As for fingerprinting, Safari offers a few protections. One protection is that Safari presents a simplified system configuration to trackers, which ultimately makes it more difficult to determine one device from another. Apple also says Safari doesn’t add custom tracking headers or other unique identifiers to web requests.

Further, Safari doesn’t natively block ads but does support extensions so users can add a third-party ad-blocker.

Source: The Verge


01 Mar 17:21

The Art of Escapist Cooking: A Survival Story, with Intensely Good Flavors

A fascinating food book. Most of the best food writing has pursued what Adam Gopnik calls the “mystical microcosmic” — “sad thoughts on the love that got away or the plate that time forgot.” Mystical microcosmic writers — M. F. K. Fisher, Julia Child, Anthony Bourdain, Michael Ruhlman — implicitly argue that they are like us, that we would enjoy what they enjoyed, that thoughtful eating can improve your life. Julia went to a fish place in Normandy and found a future husband and beurre blanc, and much of her best writing implicitly concerns the pursuit and care of each.

Mandy Lee’s book comes from a different place. In 2012, Lee was deeply depressed and living in a city she hated. Lee was born in Taiwan, grew up in Vancouver, went to grad school in (and loved) New York. Now, she was in Beijing, and everything in Beijing was awful: so awful that she could seldom get out of bed. She became an obsessive cook because focusing on elaborate and time-consuming recipes (and on elaborate and lovely photography of the prep) meant she could spend hours — days — locked in her home. Her cooking is not fun or easy or fast: her cooking is very angry, and she knows it.

Lee is always cooking for herself. (She’s cooking for her husband too, but he’s even more in shadow here than were M.F.K. Fisher’s lovers.) There’s no patron, no restaurant, no one to please but herself, and Lee is not easy to please. Her tastes are unusual, and for this she offers no explanation or apology. Reading between the lines, she likes savory and bitter breakfasts on the Chinese model, but she also really likes cheese. A few of her recipes reinvent what Minnesotans call a Juicy Lucy — hamburgers infused with tons of cheese — but hers represent a systematic study of how much cheese is possible in a burger and also feature green chili aioli, poached eggs, spicy pork or lamb patties, and sweet potato buns.

Most of the recipes concern spectacular and complex interplay of contrasting flavors and textures — finding ways to combine hot and sweet, crisp and unctuous and sour in each bite. There’s a lot of prep and plenty of challenging ingredients. In my first foray into cooking one of these, I struck out on one ingredient not only at Whole Foods but also at Super 88, a big Asian store that has two separate freezer cases of frozen buns, a whole aisle of fish sauce, and family-size packages of beef penis.

The book has a chapter on elaborate home-cooked dog food.

This is not, in other words, a replacement for The Joy Of Cooking. But it’s got some very fine (and hilarious) writing, some nifty food ideas, and a nice insight into what cooking means to many of us.

01 Mar 17:21

Loons

A while ago, I joined and did some chores for an online New England group combatting anti-semitism.

Alas: the loons have descended. They’re eager to denounce Linda Sarsour. They’re eager to denounce Bernie Sanders. They’re eager to denounce Islam. They denounce the illuminati, praise Trump, praise Russia, and muddy the waters.

In part, this is the old story of two Jews, three opinions. But I expect that in good part it’s a planned campaign from our old friends in St. Petersburg. Send a few trolls to stir things up. Get them to spread some Trump talking points: it might help. Slip in a few blood libels and Illuminati: that’s always fun. Start promoting “Islam is Evil!”: maybe we can get the Jews and the Moslems to exterminate each other, bringing on the End Times or giving Russia a nice Mediterranean port.

And if someone tries to object, shout, “Censorship!” And shout, “Incivility!”

01 Mar 17:21

Programming Healthily

by Eugene Wallingford

... or at least differently. From Inside Google's Efforts to Engineer Its Food for Healthiness:

So, instead of merely changing the food, Bakker changed the foodscape, ensuring that nearly every option at Google is healthy -- or at least healthyish -- and that the options that weren't stayed out of sight and out of mind.

This is how I've been thinking lately about teaching functional programming to my students, who have experience in procedural Python and object-oriented Java. As with deciding what we eat, how we think about problems and programs is mostly unconscious, a mixture of habit and culture. It is also something intimate and individual, perhaps especially for relative beginners who have only recently begun to know a language well enough to solve problems with some facility. But even for us old hands, the languages and mental tools we use to write programs can become personal. That makes change, and growth, hard.

In my last few offerings of our programming languages course, in which students learn Racket and functional style, I've been trying to change the programming landscape my students see in small ways whenever I can. Here are a few things I've done:

  • I've shrunk the set of primitive functions that we use in the first few weeks. A larger set of tools offers more power and reach, but it also can distract. I'm hoping that students can more quickly master the small set of tools than the larger set and thus begin to feel more accomplished sooner.

  • We work for the first few weeks on problems with less need for the tools we've moved out of sight, such as local variables and counter variables. If a program doesn't really benefit from a local variable, students will be less likely to reach for one. Instead, perhaps, they will reach for a function that can help them get closer to their goal.

  • In a similar vein, I've tried to make explicit connections between specific triggers ("We're processing a list of data, so we'll need a loop.") and new responses ("map can help us here."), as in the sketch after this list. We then follow up these connections with immediate and frequent practice.
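Here's that trigger-and-response pair spelled out in the Python the students already know (a made-up example, not from the course materials): the old habit reaches for a loop and an accumulator, the new habit reaches for map.

```python
# Made-up example of the trigger ("processing a list of data") and the two responses.
scores = [88, 92, 75, 60]

# Old habit: an explicit loop with an accumulator variable.
curved_loop = []
for s in scores:
    curved_loop.append(s + 5)

# New habit: reach for map (or a comprehension) instead of a loop.
curved_map = list(map(lambda s: s + 5, scores))

print(curved_loop == curved_map)  # True
```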

By keeping functional tools closer at hand, I'm trying to make it easier for these tools to become the new habit. I've also noticed how the way I speak about problems and problem solving can subtly shape how students approach problems, so I'm trying to change a few of my own habits, too.

It's hard for me to do all these things, but it's harder still when I'm not thinking about them. This feels like progress.

So far students seem to be responding well, but it will be a while before I feel confident that these changes in the course are really helping students. I don't want to displace procedural or object-oriented thinking from their minds, but rather to give them a new tool set they can bring out when they need or want to.

01 Mar 17:20

"And, finally, do the strudel part..."

by peter@rukavina.net (Peter Rukavina)

My grandmother Nettie’s signature dessert was her apple strudel. It was served at all family occasions. Not only was it very good, but it was made from phyllo pastry that she made herself, from scratch, a miracle of patience and technique.

As I was going through Catherine’s things a couple of weeks ago, I came across a laminated copy of Nana’s strudel recipe stuck between a couple of cookbooks on the bookshelf. I set it aside, with thoughts that I might make it myself someday; opportunity presented itself this weekend when we were invited to the multicultural potluck lunch tomorrow at St. Paul’s Anglican Church. We will be the Croatian contingent.

Setting out to actually make the strudel this evening, I realized that Nana’s recipe was very heavy on the phyllo making and very light on the strudel making. This makes sense: when you’re making your own phyllo, the strudel part, time- and complexity-wise, is insignificant.

I wasn’t up to making phyllo from scratch, however, and so I was left to follow her scant instructions at the very end for the strudel, and to wing it from there. I had the benefit of having watched her make it many times, but that was over 30 years ago.

I’m rather proud of the result. It’s demonstrably apple strudel. I could have used more phyllo, and kept the apples away from the edges, but the result is pleasantly tart, with a hint of cinnamon and walnuts, and while nowhere near as good as Nana’s, it’s a credible homage.

If you are a parishioner at St. Paul’s, there will be a dozen pieces up for grabs tomorrow morning.

01 Mar 17:19

In Solidarity with Wet’suwet’en… reconciliation and climate justice?

by Stephen Rees

The following is a Newsletter I just received from the Be The Change Earth Alliance. Comments, pingbacks and trackbacks have been disabled for this post. If you want to do any of those things please use the links below. I regret that the way WordPress now handles a simple cut and paste command wrecks the HTML of the original. Despite the cranky formatting that results I trust this is still readable.

SPECIAL EDITION NEWSLETTER: Our newsletter this month is dedicated to supporting Wet’suwet’en land defenders and protesters, in solidarity with Indigenous rights, title and climate justice. Climate justice is at the heart of our work at Be the Change.

The concept of climate justice highlights how environmental and social justices issues intersect and are often inextricably linked. One such intersection, deeply rooted in the essence of Canada’s Nationhood, relates to reconciliation and the long history of colonial institutions promoting, protecting and expanding large scale resource extraction projects on traditional Indigenous territory. It’s important to recognize that these land intensive projects are consistently upheld by violence toward Indigenous peoples and direct ignorance of Indigenous rights and title to unceded, unsurrendered land sought out by our government and fossil fuel giants for profit.

 Climate justice and reconciliation go hand in hand.

Recently, our Provincial and Federal governments and RCMP forces have been criticized for a string of injunctions and invasions of Wet’suwet’en people and protesters, which are alleged to have broken Wet’suwet’en, Canadian and International Law. Indigenous protesters have been blocking Coastal Gaslink from accessing their territory to construct the single largest private investment in Canadian history: a 6.6 billion dollar fracked gas pipeline that would extend 670 kilometres from Dawson Creek, B.C. to the coastal town of Kitimat, where LNG Canada’s processing plant would be located.


“Each clan within the Wet’suwet’en Nation has full jurisdiction under their law to control access to their territory. Under ‘Anuc niwh’it’en (Wet’suwet’en law) all five clans of the Wet’suwet’en have unanimously opposed all pipeline proposals and have not provided free, prior, and informed consent to Coastal Gaslink/TransCanada to do work on Wet’suwet’en lands.” (Unist’ot’en)

Coastal Gaslink has also yet to receive approval from the province’s Environmental Assessment Office they require to begin work. Outside Wet’suwet’en law, the hereditary chiefs’ land claim is backed by a 1997 Supreme Court of Canada decision.

Free, prior and informed consent is a human rights requirement under the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP), which, ironically, BC recently became the first province to enshrine in law.

Local RCMP have been condemned for a number of actions at the blockade sites.

Over the last month and a half, following an injunction to remove Wet’suwet’en people from their own unceded territory, police forces have been documented arresting Indigenous Matriarchs in ceremony, dismantling healing structures (including a ceremony for Missing and Murdered Indigenous Women), sawing through a sign reading “Reconciliation”, and were exposed to have been prepared to use lethal force on Indigenous protesters and to have violated freedom of the press.

Dozens of peaceful land defenders have been arrested locally. The violence in Unist’ot’en sparked national protest, with railway lines and shipping routes being blocked in Vancouver, Delta, Hazelton (BC), Toronto, Belleville (Ont.), Montreal, Winnipeg and Edmonton, which have shut down rail transport across the country. There have also been a number of rallies and occupations of government offices in BC and other provinces, all in solidarity with Wet’suwet’en.

If you are a teacher, you may have seen that the BCTF has also released a letter of support. Protest seems to be working: the RCMP have begun to leave the area as Canadian officials feel the economic pressure of halted railway networks and seek talks with hereditary chiefs. The Canadian government has also postponed bringing an UNDRIP motion to the table in response to the crisis. Source: https://www.ubyssey.ca/news/Indigenous-student-groups-to-fundraise-for-legal-fund/

Reconciliation is not a destination, it is a road that we walk.

It means listening openly, learning and owning the responsibility we have to mend the past and build Nation-to-Nation relationships moving forward, even when it feels inconvenient. It means showing up with resources to offer and acting in solidarity against injustices toward Indigenous peoples. Wet’suwet’en also offers us the opportunity to see and feel climate injustice, as we watch another Indigenous community fight for their inherent rights and title, while fighting to protect land and climate for all of us, as our Canadian governments attempt to push through another fossil fuel megaproject. Climate justice means changing this story.

We invite you to use this as a learning moment on a continued legacy of violent oppression of Indigenous Peoples and on the importance of respecting the varying perspectives and beliefs of First Nations who refuse to align with Canadian colonial interests that are not true to their people. Let’s also remember the reason Indigenous rights are being violated: to protect and uphold the production of fossil fuels at a time when we have only 10 years to rapidly cut our global emissions in half.

Indigenous Peoples are leading the environmental movement. Together, we can collectively step into this space and hold it, own it, and change it.

What can you do? There are a number of resources you may use to teach others or take action, including this toolkit produced by the Unist’ot’en resistance (visit for more info and actions). Some actions requested by the Unist’ot’en resistance are:

DONATE/FUNDRAISE
  • Donate to the Gidimt’en Access Point
  • Donate to the Unist’ot’en Legal Fund
  • Hold a fundraiser to help the Unist’ot’en with prohibitive legal costs in a system designed to favour industry. Follow the Solidarity Fundraiser Protocols.

EDUCATE
  • Host a film screening of the new documentary, Invasion.
  • Create a lesson on reconciliation and its connection to climate justice for students. (The BCTF and the BC Curriculum now have resources for teaching Indigenous education.)
  • Sign up for the Unist’ot’en Camp Newsletter.
  • Share posts on social media, talk to your community, and keep eyes on the Unist’ot’en and Wet’suwet’en!

BUILD SOLIDARITY
  • Answer the Callout for Solidarity Actions in your region!
  • Sign the Pledge to support the Unist’ot’en.

Source: https://raventrust.com/2020/01/07/act-now-in-solidarity-with-wetsuweten-tell-coastal-gaslink-to-uphold-indigenous-rights/
Be The Change Earth Alliance
http://www.bethechangeearthalliance.org/ · 949 W 49th Ave, Vancouver, BC V5Z 2T1, Canada

01 Mar 17:16

Going #Faceless

by Doc Searls

Facial recognition by machines is out of control. Meaning our control. As individuals, and as a society.

Thanks to ubiquitous surveillance systems, including the ones in our own phones, we can no longer assume we are anonymous in public places or private in private ones.

This became especially clear a few weeks ago when Kashmir Hill (@kashhill) reported in the New York Times that a company called Clearview.ai “invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.”

If your face has ever appeared anywhere online, it’s a safe bet that you are not faceless to any of these systems. Clearview, Kashmir says, has “a database of more than three billion images” from “Facebook, YouTube, Venmo and millions of other websites” and “goes far beyond anything ever constructed by the United States government or Silicon Valley giants.”

Among law enforcement communities, only New Jersey’s has started to back off on using Clearview.

Worse, Clearview is just one company. Laws also take years to catch up with developments in facial recognition, or to get ahead of them, if they ever can. And let’s face it: government interests are highly conflicted here. Law enforcement and intelligence agencies’ need to know all they can is at extreme odds with our need, as human beings, to assume we enjoy at least some freedom from being known by God-knows-what, everywhere we go.

Personal privacy is the heart of civilized life, and beats strongest in democratic societies. It’s not up for “debate” between companies and governments, or political factions. Loss of privacy is a problem that affects each of us, and calls for action by each of us as well.

A generation ago, when the Internet was still new to us, four guys (one of whom was me) nailed a document called The Cluetrain Manifesto to a door on the Web. It said,

we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.

Since then their grasp has exceeded our reach. And with facial recognition they have gone too far.

Enough.

Now it’s time for our reach to exceed their grasp.

Now it’s time, finally, to make them deal with it.

I see a few ways, so far. I’m sure y’all will think of other and better ones. The Internet is good for that.

First is to use an image like the one above (preferably with a better design) as your avatar, favicon, or other facial expression. (Like I just did for @dsearls on Twitter.) Here’s a favicon we can all use until a better one comes along:

Second, sign the Stop facial recognition by surveillance systems petition I just put up at that link. Two hashtags:

  • #GOOMF, for Get Out Of My Face
  • #Faceless

Third is to stop blaming and complaining. That’s too easy, tends to go nowhere and wastes energy. Instead,

Fourth, develop useful and constructive ideas toward what we can do—each of us, alone and together—to secure, protect and signal our privacy needs and intentions in the world, in ways others can recognize and respect. We have those in the natural world. We don’t yet in the digital one. So let’s invent them.

Fifth is to develop the policies we need to stop the spread of privacy-violating technologies and practices, and to foster development of technologies that enlarge our agency in the digital world—and not just to address the wrongs being committed against us. (Which is all most privacy laws actually do.)

 

 

01 Mar 17:12

Twitter Favorites: [BobKronbauer] It's @Planta. Bless Planta. https://t.co/ufwmYJmqPG

Bob Kronbauer @BobKronbauer
It's @Planta. Bless Planta. pic.twitter.com/ufwmYJmqPG
01 Mar 17:10

Twitter Favorites: [Planta] I forgot and the Facebook time machine had to remind me that this was from 20 years ago this past month. Allan Foth… https://t.co/rxRu86ePqK

Joseph Planta @Planta
I forgot and the Facebook time machine had to remind me that this was from 20 years ago this past month. Allan Foth… twitter.com/i/web/status/1…
01 Mar 17:09

Week Notes 20#09

by Ton Zijlstra

A busy week with very light blogging as a result. This week I

  • Having returned from a week in the snow, I used Monday to get back into things, did some admin and had a conversation with colleague Emily about the EU study we’re doing on High Value Datasets. Just before I left I submitted a first report and we prepared for the upcoming discussion of it with the client.
  • Caught up with colleague Sara about the work we’re doing for a province, and made plans for the coming few weeks
  • Had a client meeting for the EU study, in which I participated remotely and presented our current findings on meteorological data, and on earth observation and environmental data.
  • Discussed the same study and the Dutch position on it with the Dutch Ministry for the Interior’s responsible civil servant.
  • Was interviewed by our intern Shivani on the way The Green Land operates, and the experience, skills and knowledge I bring to the mix.
  • Checked and improved our household’s preparedness w.r.t. a potential further spread of Covid-19. Ensured I have enough of my prescription meds, and that we have enough food and household supplies for a month. It’s less a precaution against the virus itself, although self-quarantine is now easily done if needs be, and more resilience against sudden runs on certain items, or pharmacies being overwhelmed by things other than my regular meds. The Italian outbreak saw supermarket shelves emptied immediately, and the 10 current cases in the Netherlands have already led to hand soap being in short supply in shops around the country.
  • Installed a new microwave in the kitchen with E. The old one broke down a day before we left for the French Alps last week, and E ordered a replacement upon our return.
  • Had a management team meeting/breakfast with my business partners
  • Had our monthly all hands meeting with the company, followed by a group dinner.
  • Met up for a few beers with two members of my old fraternity. We all live in the same town, but we had never gotten around to catching up until now. The next date is set for about 8 weeks from now.
  • Y last Sunday couldn’t believe you could see space from the ground, as we were walking home in the dark. Especially not that a bright star was actually a planet. But planets are in space! And space is far away, you can’t see that from here! As a consequence we drove to Eise Eisinga’s Planetarium, the oldest still-working mechanical planetarium in the world, and discussed planets, the sun and the moon. An added bonus is that this 18th-century planetarium is in Friesland, giving us the opportunity to acquire some local treats not available elsewhere in the country.

This week in …… 1879*

Dutch architect Jan Frederik Staal was born. He designed several iconic buildings in Amsterdam, one of which is the 12-story house, the second highrise of its kind in the Netherlands and the first in Amsterdam, with spacious 6-room apartments. It was placed at the head of the 1920s-built ‘Rivers’ neighbourhood in Amsterdam, designed by Berlage. The building came with fast elevators, a trash chute, and window hinges placed so that the windows, when opened, could be cleaned from the inside (see the open windows at top left in the image below).

12 verdiepingenhuis (12-storey house). Photo by JPMM, license CC BY-NC-ND

In the 3D map image below you can see how the building is positioned in the neighbourhood.

(* I show an openly licensed image with each Week Notes posting, to showcase more open cultural material. See here why, and how I choose the images for 2020.)



This is an RSS-only posting for regular readers. Not secret, just unlisted. Comments / webmention / pingback all ok.
Read more about RSS Club
01 Mar 17:06

How The Filipino Community Is Fighting Medical Invisibility - HuffPost

01 Mar 17:02

iOS 13.4 beta hints Apple could be working on Wi-Fi ‘OS Recovery’ feature

by Patrick O'Rourke

Recovery over Wi-Fi could be coming to iOS, according to new code uncovered by 9to5Mac in the iOS 13.4 developer beta.

While restoring an iPhone or iPad from a Mac or PC used to make sense, over the last few years, both Apple’s smartphone and tablet have grown more independent from the company’s desktop ecosystem. For example, I can’t even remember the last time I plugged my iPhone into my MacBook Pro — there’s just really no reason to do it anymore.

iOS 13.4’s third beta reportedly includes references to a feature called ‘OS Recovery.’ 9to5Mac speculates that this could be a new way to restore the iPhone, iPad — and possibly even other Apple devices like the Apple Watch and HomePod — directly over-the-air via Wi-Fi.

Further, it might also be possible to connect an iPhone or an iPad to other Apple devices, similar to Apple’s Migration Tool, to recover a device.

The feature doesn’t actually work yet, so it’s possible Apple could remove it before iOS 13.4’s public release.

Apple’s Mac devices have featured internet-based recovery options for years. I’ve needed to use this recovery option a few times when formatting a Mac after accidentally deleting the partition that includes the operating system.

As someone who often flashes both developer and public iOS/iPadOS beta software on my devices, I hope this feature makes its way into the final release of iOS 13.4.

Source: 9to5Mac 

The post iOS 13.4 beta hints Apple could be working on Wi-Fi ‘OS Recovery’ feature appeared first on MobileSyrup.