Because Busybot and Slack look so much alike and are so tightly connected, I avoid the cognitive costs of switching.
This post is based on a theory and a book outline I’ve been chipping away at since 2010. Since I’m probably going to be too busy to ever finish the full thing, I figured I would massively truncate and post it here so that it’s finally out in the world in some form. In the six years I’ve been thinking about this subject, it’s only become clearer with the advent of the on-demand economy, 3D printing etc. Please excuse the length!
In Silicon Valley we’ve used the term “Unbundling” to describe the phenomenon of mobile apps breaking apart into multiple separate apps, each providing more focused, single-purpose features. Think of the Facebook app being separated into Facebook + Messenger.
I believe this Unbundling phenomenon is happening almost universally across all aspects of life. It’s a meta-trend that has been happening for decades (or more) and will continue for decades to come. It’s a common process affecting many of the things happening in the world today. In fact most of the major disruptions we see (loss of traditional jobs, failing record companies, terrorism, divorce rates, the rise of fringe/underdog political candidates, etc.) are all, in at least some way, connected to this fundamental transition.
I’ve devoted some time over the past month to learning how to create distribution packages for Ubuntu, using the Debian packaging system. This has been a longstanding interest for me since Dane Springmeyer and Robert Coup created a Personal Package Archive (PPA) to easily and quickly distribute reliable versions of Mapnik. It’s become more important lately as programming languages have developed their own mutually-incompatible packaging systems like npm, Ruby Gems, or PyPI, while developer interest has veered toward container-style technology such as Vagrant or Docker. My immediate interest comes from an OpenAddresses question from Waldo Jaquith: what would a packaged OA geocoder look like? I’ve watched numerous software projects create Vagrantfile or Dockerfile scripts before, only to let those fall out of date and eventually become a source of unanswerable help requests. OS-level packages offer a stable, fully-enclosed download with no external dependencies on third-party hosting services.
What does it take to prepare a package?
The staff at Launchpad, particularly Colin Watson, have been helpful in answering my questions as I moved from getting a tiny “hello world” package onto a PPA to wrapping up Mapzen’s worldwide geocoder software, Pelias.
I started with a relatively-simple tutorial from Ask Ubuntu, where a community member steps you through each part of wrapping a complete Debian package from a single shell script source. This mostly worked, but there are a few tricks along the way. The various required files are often shown inside a directory called “DEBIAN”, but I’ve found that it needs to be lower-case “debian” in order to work with the various preparation scripts. The control file, debian/control, is the most important one, and has a set of required fields arranged in stanzas that must conform to a particular pattern. My first Launchpad question addressed a series of mistakes I was making.
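To make the stanza pattern concrete, here is a sketch of a minimal debian/control for a hello-world-style package; the package name, maintainer, and description are illustrative, not taken from the tutorial:

```
Source: hellodeb
Section: misc
Priority: optional
Maintainer: Jane Doe <jane@example.com>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.7

Package: hellodeb
Architecture: all
Depends: ${misc:Depends}
Description: example hello-world package
 A minimal package used to learn the Debian packaging process.
```

Note the two stanzas separated by a blank line: one for the source package, one for each binary package it produces, with the extended description indented by a single space.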
The file debian/changelog was the second challenge. It needs to conform to an exact syntax, and it’s easiest to have the utility dch (available in the devscripts package) do it for you. You need to provide a meaningful version number, release target, notes, and signature for this file to work. The release target is usually a Debian or Ubuntu codename, in this case “xenial” for Ubuntu’s 16.04 Xenial Xerus release. The version number is also tricky; it’s generally assumed that a package maintainer is downstream from the original developer, so the version number will be a combination of upstream version and downstream release target, in my case Pelias 2.2.0 + “ubuntu” + “xenial”.
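As a sketch of how those pieces combine, the following shows a version string built from the upstream version plus the downstream target; the exact revision suffix ("ubuntu1~xenial1") is a common convention, not something prescribed, and the dch invocation is shown as a comment only:

```shell
# Combine the upstream Pelias version with the downstream "ubuntu" revision
# and "xenial" release target, as described above.
UPSTREAM="2.2.0"
VERSION="${UPSTREAM}ubuntu1~xenial1"
echo "$VERSION"
# A dch invocation using it might look like (illustrative, not run here):
#   dch --create --package pelias --newversion "$VERSION" --distribution xenial
```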
A debian/rules file is also required, but it seems to be sufficient to use a short default file that calls out to the Debian helper script dh.
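That short default file is the canonical debhelper pass-through, which hands every build target over to dh (note that the recipe line must be indented with a tab):

```make
#!/usr/bin/make -f
%:
	dh $@
```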
I have not been able to determine how to test my debian-directory package locally, but I have found that the emails sent from Launchpad after using dput to post versions of my packages can be helpful when debugging. I tested with a simple package called “hellodeb”; here is a complete listing of each attempt I made to publish this package in my “hello” PPA as I learned the process.
My second Launchpad question concerned the contents of the package: why wasn’t anything being installed? The various Debian helper scripts try to do a lot of work for you, and as a newcomer it’s sometimes hard to guess where it’s being helpful, and where it’s subtly chastising you for doing the wrong thing. For example, after I determined that including an “install” make target in the default project Makefile that wrote files to $DESTDIR was the way to create output, it turned out that my attempt to install under /usr/local was being thwarted by dh_usrlocal, a script which enforces the Linux filesystem standard convention that only users should write files to /usr/local, never OS-level packages. In the end, while it’s possible to simply list everything in debian/install, it seems better to do that work in a more central and easy-to-find Makefile.
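A sketch of what such an install target might look like, with hypothetical file names, is below; debhelper runs "make install" with DESTDIR pointing at the package staging area, and installing under /usr rather than /usr/local stays clear of dh_usrlocal:

```make
PREFIX ?= /usr

install:
	mkdir -p $(DESTDIR)$(PREFIX)/bin
	install -m 0755 hello.sh $(DESTDIR)$(PREFIX)/bin/hello
```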
Finally, I learned through trial-and-error that the Launchpad build system prevents network access. Since Pelias is written in Node, it is necessary to send the complete code along with all dependencies under node_modules to the build system. This ensures that builds are more predictable and reliable, and circumvents many of the SNAFU situations that can result from dynamic build systems.
Rolling a new release is a four step process:
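Based on the tools mentioned above, one plausible sketch of that cycle follows; the PPA name and flags are assumptions rather than the exact steps, and the commands are listed rather than executed here:

```shell
# Sketch of one release cycle for the hellodeb example package.
step1="dch -i"                                    # add a new debian/changelog entry
step2="debuild -S -sa"                            # build and sign the source package
step3="dput ppa:example/hello hellodeb_source.changes"  # upload to the Launchpad PPA
step4="check the build result emails from Launchpad"    # confirm it built
printf '%s\n' "$step1" "$step2" "$step3" "$step4"
```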
"Mistakes are not all created equal," writes the author, "and they are not always desirable. In addition, learning from mistakes is not all automatic. In order to learn from them the most we need to reflect on our errors and extract lessons from them." Eduardo Briceño makes this point clear by identifying four types of mistakes, two of which can be seen as beneficial, and two of which really should be avoided.
The Maker movement began as a free-form exercise. "Typically, 'Making' involves attempting to solve a particular problem, creating a physical or digital artifact, and sharing that product with a larger audience. Often, such work is guided by the notion that process is more important than results." But as it began to be applied more in schools, it began to evolve. Diversity and inclusiveness became more important, and questions began to be asked about what was learned. This article is a good overview of some of the recent research. And it's interesting to compare the similarities between the evolution of MOOCs and the evolution of making.
This could have been much more appropriately titled, but the content of the piece is spot on. Specifically:
Open means rights
Open means access
Open means use
Open means transparent
Open means participatory
Open means enabling openness
Open means philosophically aligned with open principles
TL;DR: BlueGriffon is now officially recommended as the HTML editor for the French Administration in its effort to rely on and promote Free Software!
I am very happy to report that BlueGriffon, my cross-platform Wysiwyg Web editor, is officially recommended by the Socle Interministériel de Logiciels Libres for 2016! You will find the official list of recommended software here (PDF document).
Gap between ecosystem and hardware to increase this year.
Smartphone and ecosystem
Although not necessarily the best way of publishing data, data tables in PDF documents can often be extracted quite easily, particularly if the tables are regular and the cell contents reasonably spaced.
For example, official timing sheets for F1 races are published by the FIA as event and timing information in a set of PDF documents containing tabulated timing data:
In the past, I’ve written a variety of hand-crafted scrapers to extract data from the timing sheets, but the regular way in which the data is presented in the documents means that they are quite amenable to scraping using a PDF table extractor such as Tabula. Tabula exists both as a server application, accessed via a web browser, and as a service using the tabula-extractor Java application.
I don’t recall how I came across it, but the tabulizer R package provides a wrapper for tabula-extractor (bundled within the package) that lets you access the service via its command line calls. (One dependency you do need to take care of is having Java installed; adding Java into an RStudio docker container would be one way of taking care of this.)
Running the default extractor command on the above PDF pulls out the data of the inner table:
library(tabulizer)
extract_tables('Best Sector Times.pdf')
Where the data is spread across multiple pages, you get a data frame per page.
Note that the headings for the distinct tables are omitted. Tabula’s “table guesser” identifies the body of the table, but not the spanning column headers.
The default settings are such that tabula will try to scrape data from every page in the document.
Individual pages, or sets of pages, can be selected using the pages parameter.
Specific areas for scraping can also be given using the area parameter:
extract_tables('Lap Analysis.pdf', pages=8, guess=F, area=list(c(178, 10, 230, 500)))
The area parameter takes co-ordinates in the form top, left, bottom, right, the same form as those produced by the tabula app's debug output. (An earlier release of the package appeared to expect top, left, width, height, but this has now been fixed.)
You can find the necessary co-ordinates using the tabula app: if you select an area and preview the data, the selected co-ordinates are viewable in the browser developer tools console area, and can be passed straight through to the tabulizer area parameter.
Using a combination of “guess” to find the dominant table, and specified areas, we can extract the data we need from the PDF and combine it to provide a structured and clearly labeled dataframe.
On my to do list: add this data source recipe to the Wrangling F1 Data With R book…
The Upshot highlights research from the Stanford Center for Education Policy Analysis that looks into the relationship between a child’s parents’ socioeconomic status and their educational attainment. Researchers focused on test scores per school district in the United States.
Children in the school districts with the highest concentrations of poverty score an average of more than four grade levels below children in the richest districts.
Even more sobering, the analysis shows that the largest gaps between white children and their minority classmates emerge in some of the wealthiest communities, such as Berkeley, Calif.; Chapel Hill, N.C.; and Evanston, Ill. (Reliable estimates were not available for Asian-Americans.)
Be sure to scroll down and browse the chart that shows points for race within the same school districts. Color represents race, and connecting lines between dots show the magnitude of the differences between white, Hispanic, and black students.
If you’re interested in the data itself, you can download it from the Stanford Education Data Archive.
See also the education spending map from NPR, which suddenly takes on a new dimension.
In a recent post we talked about how introducing outside-in experiences could improve the Drupal site-building experience by letting you immediately edit simple configuration without leaving the page. In a follow-up blog post, we provided concrete examples of how we can apply outside-in to Drupal.
The feedback was overwhelmingly positive. However, there were also some really important questions raised. The most common concern was the idea that the mockups ignored "context".
When we showed how to place a block "outside-in", we placed it on a single page. However, in Drupal a block can also be made visible for specific pages, types, roles, languages, or any number of other contexts. The flexibility this provides is one place where Drupal shines.
For the sake of simplicity and focus we intentionally did not address how to handle context in outside-in in the last post. However, incorporating context into "outside-in" thinking is fundamentally important for at least two reasons:
To begin to incorporate context into outside-in, Kevin Oleary, with input from yoroy, Bojhan, Angie Byron, Gábor Hojtsy and others, has iterated on the block placement examples that we presented in the last post, to incorporate some ideas for how we can make context outside-in. We're excited to share our ideas and we'd love your feedback so we can keep iterating.
To solve the problem, we recommend introducing 3 new concepts:
Most sites have some concept of a "section" or "type" of page that may or may not equate to a content type. A commerce store, for example, may have a "kids" section with several product types that share navigation or other blocks. Page groups adapt to this by creating reusable "bundles" of content consisting either of a certain type (e.g. all research reports), of manually curated lists of pages (e.g. a group that includes /home, /contact us, and /about us), or a combination of the two (similar to the Context module, though Context never provided an in-place UI).
User groups would combine multiple user contexts like role, language, location, etc. Example user groups could be "Authenticated users logged in from the United States", or "Anonymous users that signed up to our newsletter". The goal is to combine the massive number of potential contexts into understandable "bundles" that can be used for context and impersonation.
As mentioned earlier, a challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator. Impersonation allows site builders to switch between different user groups, previewing the page as that type of user.
Let's take a look at how we use these 3 ingredients in an example. For the purpose of this blog post, we want to focus on two use cases:
Things can get more complex but these two use cases are a good starting point and realistic examples of what people do with Drupal.
Let's assume the user is a content editor, and the user groups "Anonymous" and "Subscriber" as well as the page groups "Subscriber pages" and "Public pages" have already been created for her by a site builder. Her first task is to place the "Access reports" block and make it visible only for anonymous users.
Our editor's next task is to place the "Download reports" block and make it visible only for subscribers. To do that she is going to want to view the page as a subscriber. Here it's important that this interactions happens smoothly, and with animation, so that changes that occur on the page are not missed.
Once our editor has finished step one and two she will want to go back and make sure that step two did not undo or complicate what was done in step one, for example by making the "Download report" block visible for Anonymous users or vice versa. This is where impersonation comes in.
The idea of combining a number of contexts into a single object is not new; both the Context and Panels modules do this. What is new here is that when you bring this to the front end with impersonation, you can make a change that has broad impact while seeing it exactly as your user will.
Illustration by Ian Sullivan
In our latest contribution, councillor Joe Mihevc (Ward 21, St. Paul's West) declares his support for bike lanes and the minimum grid. He also hints at Bike Share news, touts infrastructure expansion and discusses collaboration with city staff to build successful cycling projects in the city.
What are the top priorities for bike infrastructure in your ward this year?
Bike Share. The city is expanding the stations this year and there will be a few arriving for Ward 21. An announcement will be made by the city in June, so I will need to leave it at that for now, but expanding the program up the escarpment is a discussion in which I have engaged city staff over the past few years. The potential the expansion has to increase the number of people choosing to make trips by bike is exciting.
Another piece of infrastructure that I am happy about: our Ward 21 cyclists' request for a lane along Winona Avenue has made it into the Cycling Division's 10 Year Plan, which will go to the May 16th public works and infrastructure committee (PWIC) meeting and then to city council for approval.
Would you encourage PWIC, the executive committee and council to support the minimum grid?
Absolutely. I am among councillors who signed on to Cycle Toronto's minimum grid goal. I am working to assist in the effort to have the city meet those bike lane targets. I personally met with city cycling staff to discuss the successful motion by council colleagues at the March PWIC meeting, requesting staff include $20 million and $25 million annual options for a budget increase to make the minimum grid happen. That report is part of the 10 Year Plan being considered at the May 16th PWIC meeting - and I will certainly defend the minimum grid all the way to City Council.
Related on the dandyBLOG
The Ontario government has been ahead of the curve in trying to adapt to the probable revolution looming in car transportation, with initiatives allowing self-driving vehicles to be tested in the province, and a $325 million Green Investment Fund working to reduce the effects of climate change through meaningful, environmentally-friendly initiatives.
This week, the Ontario Ministry of Transportation announced that a $20 million grant program has been created from the Green Investment Fund to build 500 electric vehicle charging stations across Ontario next year. The Electric Vehicle Chargers Ontario program will provide 27 public- and private-sector groups with funds to create a network of charging stations in places where they’re needed most.
Transportation is the single largest source of greenhouse gas emissions in Ontario, with cars accounting for more atmospheric pollution than iron, steel, cement and chemical industries combined. Currently, there are 6,400 electric vehicles on Ontario’s roads, but with the industry slowly moving toward hybrid and electric models, that number will be growing quickly in the next few years.
The province has set a goal of reducing greenhouse gas emissions to just 20 percent of 1990 levels by 2050. Making recharging stations more accessible across the province will make owning an electric vehicle, as opposed to a gas-powered one, that much easier.
Wireless competition in Canada may have just been reduced.
BCE, parent company to Bell’s wireless business, has entered into an agreement to purchase Manitoba-based MTS (Manitoba Telecom Services Inc.) for $3.9 billion.
The transaction is expected to close in late 2016 or early 2017, pending regulatory approvals. As for the terms, BCE has agreed to pay MTS shareholders $40.00 per share, reportedly a staggering 23.2 percent above the current price, and “values MTS at approximately 10.1 times 2016 estimated EBITDA.” The total transaction value is about $3.9 billion: BCE will purchase all shares for $3.1 billion and assume approximately $800 million of debt. Both BCE and MTS shareholders and Boards of Directors have agreed to the terms of the deal.
Bell has also agreed to open a western Canadian headquarters in Manitoba, which will employ 6,900 people (MTS currently has 2,700 employees). To show its commitment to the province, Bell will also build out its network in the region, investing $1 billion over the next five years; specifically, it will make its Gigabit Fibe Internet available within 12 months of the transaction closing, expand its LTE network, and release Fibe TV.
MTS currently has 565,000 subscribers, and the terms also state that Bell will divest a third “of MTS dealer locations in Manitoba to Telus,” representing approximately 140,000 subscribers. Unfortunately, there is no indication of the dollar amount Bell will receive for selling the MTS subscribers to Telus.
When the transaction closes, Bell and MTS have agreed to sell their services under the “Bell MTS” brand name.
Finally, fine print in the release reveals that if the “arrangement agreement is terminated in certain circumstances, including if MTS enters into a definitive agreement with respect to a superior proposal, BCE is entitled to a break-fee payment of $120 million.”
George Cope, president and CEO of BCE, stated, “BCE looks forward to being part of Manitoba’s strong growth prospects, building on the tremendous MTS legacy of technological innovation, customer service and competitive success by delivering the best broadband, wireless, internet and TV services to the people of Manitoba in communities large and small. As the headquarters for the Western operations of BCE, Bell MTS will focus on delivering the benefits of new broadband communications infrastructure, ongoing technology development and enhanced community investment to Manitobans everywhere.”
“This transaction recognizes the intrinsic value of MTS and will deliver immediate and meaningful value to MTS shareholders, while offering strong benefits to MTS customers and employees, and to the Province of Manitoba,” said Jay Forbes, President & CEO, MTS. “We are proud of our history and what we have achieved as an independent company. We believe the proposed transaction we are announcing today with BCE will allow MTS to build on our successful past and achieve even more in the future.”
We’ve received some complaints from our readers about the Bluetooth devices we recommend acting up, working intermittently, or otherwise failing, especially when multiple devices are involved. The problems include failing to pair, audio hiccups, and recurring dropped connections. The situation usually involves a few Bluetooth devices—say, a phone, a smartwatch, and a car stereo—trying to get along. Sometimes the conflict is between a phone, a fitness tracker, and Bluetooth headphones. Occasionally, the issue is simply a keyboard that’s confused about different iPads. And though we can’t account for all of these three-way troubles, we have some ideas about what’s going on.
Fascinating study by Justin O'Beirne on how Google Maps changed from 2010 to 2016 – fewer cities, more roads, and not a lot of balance between them on a map at the same zoom level.
Unfortunately, these "optimizations" only served to exacerbate the longstanding imbalances already in the maps. As is often the case with cartography: less isn't more. Less is just less. And that's certainly the case here.
As O'Beirne also notes, the changes were likely made to provide a more pleasant viewing experience on mobile devices.
I understand his point of view – the included examples really make a solid case – but I can also see why Google may consider the average user (looking up points of interest nearby, starting navigation on their phone) and think that most users don't want that kind of cartographic detail anymore.
It'd be interesting to see the same comparisons between Apple and Google, as well as between old Apple Maps and Apple Maps today.
→ Source: justinobeirne.com
Change and development along the rapid-transit lines as part of “Skywalking through Burnaby” on Sunday:
A big hole at the Brentwood Town Centre redevelopment. Is this the biggest single complex in the municipality’s history?
At the Commercial-Broadway station, the new east platform for westbound trains is visible:
Already the station handles more passengers in a day than YVR airport. With the opening of the Evergreen line later, the crush would be unmanageable without a platform expansion.
This week, the Pants project announced a 1.0 release of the open source Pants Build System. The 1.0 release of Pants indicates that the tool is ready for more widespread adoption.
Square is a proud contributor to Pants. Developers at Square have been using and contributing to Pants since 2014 to develop our Java services. When we first joined the project, we found a tool that required lots of customization and insider knowledge to install and operate. Today the tool has a streamlined installation process, extensive documentation, a clean extensible design, a vibrant community and a history of stable weekly releases.
With Pants we get:
We build and release services from HEAD of master. Our Java codebase is housed almost entirely in a single repo consisting of over 900 projects and 45,000 source files. In this style of development, we prefer keeping all code at HEAD consistent using global refactoring and rigorous automated testing instead of maintaining strict API backwards compatibility and long deprecation cycles within the codebase.
We also have a strong commitment to using open source libraries. We extensively rely on artifacts published through the Maven Central Repository. When we upgrade a library, we can do so in one place and update the code dependencies for all services.
With such a large codebase, it becomes impractical to load the entire repo into the IDE. At Square, we primarily use IntelliJ IDEA to develop code. We use Pants to configure and launch it. Probably the most valuable feature for day to day development is simply having the ability to quickly bring up any portion of the repo in IntelliJ. With a single command, Pants configures IntelliJ to edit a module and all of its dependencies defined in the repo.
Making it easy to configure any project in the IDE means that developers can easily contribute to any project in the codebase. Being able to easily switch between branches encourages developers to collaborate. Now it is convenient to check out each other’s work locally when performing code reviews. It is easier to confine change requests to small, manageable chunks and switch between them while waiting on code reviews to complete.
We came to the Pants project looking for a tool to help solve problems in our build environment. Previously, we used Apache Maven to compile and package binaries. Maven is a powerful and popular tool with a modular design that makes it easy to extend with excellent support from third party tools and libraries. We had a significant investment in Maven, including many custom plugins for running code generation and supporting a distributed cache for artifacts in our continuous integration (CI) build system.
Using Maven with our “build everything from HEAD” policy strains the Maven model. Maven is designed to support editing a few modules at a time while relying on binary artifacts for most dependencies. To support building the entire repo from HEAD, we set every Maven module in the repo to a SNAPSHOT version.
Using Maven in this way works, but has drawbacks. Running a recursive compile of all dependent modules incurs a lot of overhead. We had wrapper scripts to help us try to be productive in this environment, say to run just code generation or only a subset of tests. Still, developers would get into trouble in some situations, often having to deal with inconsistencies between stale binary artifacts and the source on disk. For example, after using mvn install, pulling in new changes from the repo or switching back to an older branch could leave them compiling against stale code. When developers routinely question the integrity of their development environment, they waste a lot of time cleaning and rebuilding the codebase.
Our first priority was to allow developers to quickly configure their workspace in the IDE. Next, we migrated to using Pants as the tool to test and deploy artifacts in our CI builder. As of this writing, we have replaced all of our use of Maven in this repo using Pants, including:
Replacing all of our uses of Maven was not easy. We were able to do this by generating the Pants configuration using the Maven pom.xml files as a source of truth. During an interim phase we supported both tools. Through collaboration with the Pants open source community, we were able to modify Pants through hundreds of open source contributions.
Pants comes out of the box ready to edit, compile, test, and package code. Beyond that, we were able to leverage Pants' extensible, module-based system. A current favorite is a small custom task to deploy directly to staging environments over our internal deployment system. Along the way, other custom modules run custom code generators, gather meta information about our build process for our security infrastructure, and package the output of annotation processors into yaml files for our deployment system. Today we have about two dozen internal Pants plugins that do all those things, plus additional tools to audit our codebase, integrate with our CI system, and customize our IDE support.
At Square, Pants has helped us realize the promised benefits of monorepo style development at scale. Making sure the development process is consistent and reliable increases our developer velocity. Being able to quickly and reliably edit, compile, and test allows developers to concentrate on reviewing and writing code, not struggling with configuring tools. We believe that Pants is ready for more widespread adoption and encourage you to give it a try.
My survey of 547 business writers found one big pain point: managing editorial feedback. Only half of business writers get the feedback they need, and only one in three feels that their process for managing feedback works well. Business writers have little love for feedback processes. In this table, the first data column summarizes all the responses; the other columns …
An item from Ian, found in Vancity Buzz.
Greg Mitchell asks the question:
… let’s rethink the narrative. We know Vancouver is an expensive city in which to live – it seems to be all Vancouverites think about currently (yes, I’m guilty too). And we are obviously limited in terms of our land on which we can develop (mountains, ocean – enough said). So our only option is to densify – but the question is HOW to densify.
He provides, in detail, an alternative:
Those details here.
With the smartphone industry seemingly hitting a plateau when it comes to innovation, and perhaps more importantly excitement, internet of things (IoT) gadgets have become one of the fastest-expanding and most interesting areas of the tech industry.
In Samsung’s case, however, a report from University of Michigan computer science researchers indicates the South Korean smartphone manufacturer’s IoT SmartThings platform suffers from security issues that could potentially allow malicious apps to operate smart locks, change access codes and set off Wi-Fi-enabled smoke detectors, as well as carry out a variety of other attacks on Samsung’s smart device line.
According to the research, the source of the security issues is a malicious SmartThings app, downloaded directly from Samsung’s SmartThings store, with access to more permissions than necessary. The problem also stems from apps being granted permissions they don’t actually require. For example, a smart lock only needs the ability to lock remotely, but SmartThings’ API links this command with a variety of others.
After installation, SmartThings apps also request additional permissions, allowing them to be linked to different apps installed on the smartphone, a move the researchers say isn’t necessary because it gives the app more access than is required.
Researchers demonstrated their discovery through an app that monitors the battery life of a variety of Samsung SmartThings products. After being installed and granted permissions on the smartphone, the malicious but normal-looking app not only monitors battery life, but also has the ability to manipulate the lock’s functionality. It does this by automatically sending an SMS to the app’s developer each time the user reprograms the smart lock’s pin code.
A second demonstration showed off an exploit allowing the user to program their own pin code through an app that locks and unlocks doors via a browser. The research revealed that of the 499 apps included in the study, 42 percent have more privileges than necessary, giving malicious developers ample opportunity to create exploits.
The University of Michigan researchers behind the discovery say they have reached out to Samsung’s SmartThings team with their findings.
While these exploits do require user interaction, many people swiftly move through the permission section of installing an app without actually realizing what they’re giving the software access to. Researchers say that of 22 SmartThings users they surveyed, 91 percent said they would allow a battery monitoring app to check their smart lock and give the app whatever permissions it requested. However, only 14 percent said they would allow the battery app to send door access codes to a remote server.
In an email to The Verge, a SmartThings representative said the following about the study:
“The potential vulnerabilities disclosed in the report are primarily dependent on two scenarios – the installation of a malicious SmartApp or the failure of third party developers to follow SmartThings guidelines on how to keep their code secure. Following this report, we have updated our documentation to provide even better security guidance to developers.”
“Smart home devices and their associated programming platforms will continue to proliferate and will remain attractive to consumers because they provide powerful functionality. However, the findings in this paper suggest that caution is warranted as well – on the part of early adopters, and on the part of framework designers. The risks are significant, and they are unlikely to be easily addressed via simple security patches.”
A master-planned community called Concord Brentwood is the latest development from Concord Pacific Developments Inc., renowned for its skyline-defining communities on Vancouver’s False Creek and Toronto’s lakefront.
Concord Brentwood will create a bustling community according to Concord Pacific senior vice president Matt Meehan. “Our next project in Burnaby, Concord Brentwood, will see 26 acres in the Brentwood neighbourhood transform into a beautiful and diverse mixed-use park-side community that completes the exciting revitalization of the Brentwood Town Centre neighbourhood.” …
Designed by award-winning architect James K.M. Cheng of Vancouver, Concord Brentwood will consist of 10 towers, most between 40 and 45 storeys tall. Tower 1 of Phase 1 will consist of 426 units on 45 storeys.
I don’t know if this is a rendering of the massing for the proposal or of the final product. But if the latter, the architecture looks pretty blah. I still have no explanation for why there is such a reluctance in this region to use colour, why the palette seems so constrained – off-white or gray, beige and green glass.
Via Tanya Elias/eliast05, a query regarding tools for harvesting historical tweets. I haven’t been keeping track of Twitter-related tools over the last few years, so my first thought is often “could Martin Hawksey’s TAGSexplorer do it?”
But I’ve also had the twecoll Python/command line package on my ‘to play with’ list for a while, so I thought I’d give it a spin. Note that the code requires Python to be installed (which it will be, by default, on a Mac).
On the command line, something like the following should be enough to get you up and running if you’re on a Mac (run the commands in a Terminal, available from the Utilities folder in the Applications folder). If wget is not available, download the twecoll file to the twitterstuff directory, and save it as twecoll (no suffix).
#Change directory to your home directory
$ cd
#Create a new directory - twitterstuff - in your home directory
$ mkdir twitterstuff
#Change directory into that directory
$ cd twitterstuff
#Fetch the twecoll code
$ wget https://raw.githubusercontent.com/jdevoo/twecoll/master/twecoll
--2016-05-02 14:51:23-- https://raw.githubusercontent.com/jdevoo/twecoll/master/twecoll
Resolving raw.githubusercontent.com... 220.127.116.11
Connecting to raw.githubusercontent.com|18.104.22.168|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 31445 (31K) [text/plain]
Saving to: 'twecoll'

twecoll 100%[=========================================>] 30.71K --.-KB/s in 0.05s

2016-05-02 14:51:24 (564 KB/s) - 'twecoll' saved [31445/31445]

#If you don't have wget installed, download the file from:
#https://raw.githubusercontent.com/jdevoo/twecoll/master/twecoll
#and save it in the twitterstuff directory as twecoll (no suffix)
#Show the directory listing to make sure the file is there
$ ls
twecoll
#Change the permissions on the file to 'user - executable'
$ chmod u+x twecoll
#Run the command file - the ./ reads as: 'in the current directory'
$ ./twecoll tweets -q "#lak16"
Running the code the first time prompts you for some Twitter API credentials (follow the guidance on the twecoll homepage), but this only needs doing once.
Testing the app, it works – tweets are saved as a text file in the current directory with an appropriate filename and suffix .twt – BUT the search doesn’t go back very far in time. (Is the Twitter search API crippled then…?)
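As a quick sanity check on what twecoll collected, a few lines of Python will count the tweets in the output file. I’m assuming one tweet per line of the .twt file here (inferred from it being a plain text file), and the sample lines in the demo are made up, so adjust to whatever your file actually contains:

```python
import os
import tempfile

# Count the tweets in a twecoll output file, assuming one tweet per
# non-empty line (the .twt format is inferred, not documented here).
def count_tweets(path):
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

# Demo with a made-up sample file standing in for e.g. lak16.twt
with tempfile.NamedTemporaryFile("w", suffix=".twt", delete=False,
                                 encoding="utf-8") as f:
    f.write("123\t2016-05-02\tFirst sample tweet #lak16\n")
    f.write("124\t2016-05-02\tSecond sample tweet #lak16\n")
    sample = f.name

print(count_tweets(sample))  # 2
os.remove(sample)
```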
Looking around for an alternative, I found the GetOldTweets Python script, which again can be run from the command line; download the zip file from Github, move it into the twitterstuff directory, and unzip it. On the command line (if you’re still in the twitterstuff directory), run:

$ ls

to check the name of the unzipped folder (something like GetOldTweets-python-master) and then cd into it:

$ cd GetOldTweets-python-master

to move into the unzipped folder.
Note that I found I had to install pyquery to get the script to run; on the command line, run: easy_install pyquery.
This script does not require credentials – instead it scrapes the Twitter web search. Data limits for the search can be set explicitly.
python Exporter.py --querysearch '#lak15' --since 2015-03-10 --until 2015-09-12 --maxtweets 500
Tweets are saved into the file output_got.csv and are semicolon delimited.
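Since the output is a semicolon-delimited CSV with a header row, Python’s csv module reads it straight into dicts. The column names in the demo below are invented, so check the first line of your own output_got.csv:

```python
import csv
import io

# Read a GetOldTweets-style semicolon-delimited CSV into a list of
# dicts, keyed by whatever the header row contains.
def load_tweets(path="output_got.csv"):
    with open(path, encoding="utf-8", newline="") as f:
        return list(csv.DictReader(f, delimiter=";"))

# Demo on an in-memory sample (invented columns - check your own header)
sample = io.StringIO("username;date;text\nexample;2015-03-10;hello #lak15\n")
rows = list(csv.DictReader(sample, delimiter=";"))
print(rows[0]["text"])  # hello #lak15
```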
A couple of things I noticed with this script: it’s slow (because it “scrolls” through page after page of Twitter search results, each of which contains only a few tweets) and it occasionally seems to hang (I’m not sure if it gets stuck in an infinite loop; on a couple of occasions I used ctrl-z to break out of it). The script doesn’t currently save results as it goes along, so if it does hang you end up with nothing; reduce the --maxtweets value and try again. On occasion, running under the default Mac Python 2.7, I also hit encoding issues in tweets that broke the output, so again the file couldn’t be written.
Both packages run from the command line, or can be scripted from a Python programme (though I didn’t try that). If the GetOldTweets-python package can be tightened up a bit (eg in respect of UTF-8/tweet encoding issues, which are often a bugbear in Python 2.7), it looks like it could be a handy little tool. And for collecting stuff via the API (which requires authentication), rather than by scraping web results from advanced search queries, twecoll looks as if it could be quite handy too.
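Just to sketch what scripting GetOldTweets from Python might look like – I haven’t tried it either, and I haven’t checked whether the package exposes a clean importable API – this simply wraps the Exporter.py command line shown above with subprocess, and assumes Exporter.py sits in the current directory:

```python
import subprocess

# Build the Exporter.py command line used earlier in the post.
def build_command(query, since, until, maxtweets=500):
    return ["python", "Exporter.py",
            "--querysearch", query,
            "--since", since,
            "--until", until,
            "--maxtweets", str(maxtweets)]

# Run the exporter; raises CalledProcessError on a non-zero exit code.
# Output lands in output_got.csv in the current directory, as before.
def export_tweets(query, since, until, maxtweets=500):
    subprocess.check_call(build_command(query, since, until, maxtweets))
```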