Shared posts

01 Feb 14:01

Dynamic Rendering with Rendertron

by Google Webmaster Central
Many frontend frameworks rely on JavaScript to show content. This can mean Google might take some time to index your content or update the indexed content.
A workaround we discussed at Google I/O this year is dynamic rendering. There are many ways to implement this. This blog post shows an example implementation of dynamic rendering using Rendertron, which is an open source solution based on headless Chromium.

Which sites should consider dynamic rendering?

Not all search engines or social media bots visiting your website can run JavaScript. Googlebot, for example, might take time to run your JavaScript and has some limitations.
Dynamic rendering is useful for content that changes often and needs JavaScript to display. Your site's user experience (especially the time to first meaningful paint) may also benefit from hybrid rendering (for example, Angular Universal).

How does dynamic rendering work?

Dynamic rendering means switching between client-side rendered and pre-rendered content for specific user agents.
You will need a renderer to execute the JavaScript and produce static HTML. Rendertron is an open source project that uses headless Chromium to render. Single Page Apps often load data in the background or defer work to render their content. Rendertron has mechanisms to determine when a website has completed rendering. It waits until all network requests have finished and there is no outstanding work.
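Conceptually, the switch is simple; here is a minimal sketch of the idea (not Rendertron's actual code, and the bot list is a made-up example):

// Serve pre-rendered HTML to known bots, the normal client-side app to everyone else.
const KNOWN_BOTS = /bingbot|linkedinbot|twitterbot/i; // example entries only

function selectRenderingPath(userAgent) {
  return KNOWN_BOTS.test(userAgent || '') ? 'pre-rendered' : 'client-side';
}

console.log(selectRenderingPath('Twitterbot/1.0'));          // 'pre-rendered'
console.log(selectRenderingPath('Mozilla/5.0 (Macintosh)')); // 'client-side'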

This post covers:
  1. Taking a look at a sample web app
  2. Setting up a small express.js server to serve the web app
  3. Installing and configuring Rendertron as a middleware for dynamic rendering

The sample web app

The “kitten corner” web app uses JavaScript to load a variety of cat images from an API and displays them in a grid.
Cute cat images in a grid and a button to show more - this web app truly has it all!
Here is the JavaScript:


  const apiUrl = 'https://api.thecatapi.com/v1/images/search?limit=50';

  const tpl = document.querySelector('template').content;
  const container = document.querySelector('ul');

  function init () {
    fetch(apiUrl)
    .then(response => response.json())
    .then(cats => {
      container.innerHTML = '';
      cats
        .map(cat => {
          const li = document.importNode(tpl, true);
          li.querySelector('img').src = cat.url;
          return li;
        }).forEach(li => container.appendChild(li));
    })
  }

  init();

  document.querySelector('button').addEventListener('click', init);

The web app uses modern JavaScript (ES6), which isn't supported in Googlebot yet. We can use the mobile-friendly test to check if Googlebot can see the content:
The mobile-friendly test shows that the page is mobile-friendly, but the screenshot is missing all the cats! The headline and button appear but none of the cat pictures are there.
While this problem is simple to fix, it's a good exercise to learn how to set up dynamic rendering. Dynamic rendering will allow Googlebot to see the cat pictures without changes to the web app code.

Set up the server

To serve the web application, let's use express.js, a Node.js library for building web servers.
The server code looks like this (find the full project source code here):

const express = require('express');

const app = express();

const DIST_FOLDER = process.cwd() + '/docs';
const PORT = process.env.PORT || 8080;

// Serve static assets (images, css, etc.)
app.get('*.*', express.static(DIST_FOLDER));

// Point all other URLs to index.html for our single page app
app.get('*', (req, res) => {
  res.sendFile(DIST_FOLDER + '/index.html');
});

// Start Express Server
app.listen(PORT, () => {
  console.log(`Node Express server listening on http://localhost:${PORT} from ${DIST_FOLDER}`);
});

You can try the live example here - you should see a bunch of cat pictures if you are using a modern browser. To run the project from your own computer, you need Node.js installed to run the following commands:

npm install --save express rendertron-middleware
node server.js

Then point your browser to http://localhost:8080. Now it's time to set up dynamic rendering.

Deploy a Rendertron instance

Rendertron runs a server that takes a URL and returns static HTML for the URL by using headless Chromium. We'll follow the recommendation from the Rendertron project and use Google Cloud Platform.
The form to create a new Google Cloud Platform project.
Please note that while you can get started with the free usage tier, using this setup in production may incur costs according to the Google Cloud Platform pricing.

  1. Create a new project in the Google Cloud console. Take note of the “Project ID” below the input field.
  2. Clone the Rendertron repository from GitHub with:
    git clone https://github.com/GoogleChrome/rendertron.git 
    cd rendertron 
  3. Run the following commands to install dependencies and build Rendertron on your computer:
    npm install && npm run build
  4. Enable Rendertron’s cache by creating a new file called config.json in the rendertron directory with the following content:
    { "datastoreCache": true }
  5. Run the following command from the rendertron directory. Substitute YOUR_PROJECT_ID with your project ID from step 1.
    gcloud app deploy app.yaml --project YOUR_PROJECT_ID

  6. Select a region of your choice and confirm the deployment. Wait for it to finish.

  7. Enter the URL YOUR_PROJECT_ID.appspot.com (substitute YOUR_PROJECT_ID with your actual project ID from step 1) in your browser. You should see Rendertron's interface with an input field and a few buttons.
Rendertron’s UI after deploying to Google Cloud Platform
When you see the Rendertron web interface, you have successfully deployed your own Rendertron instance. Take note of your project’s URL (YOUR_PROJECT_ID.appspot.com) as you will need it in the next part of the process.

Add Rendertron to the server

The web server is using express.js and Rendertron has an express.js middleware. Run the following command in the directory of the server.js file:

npm install --save rendertron-middleware

This command installs the rendertron-middleware from npm so we can add it to the server:

const express = require('express');

const app = express();
const rendertron = require('rendertron-middleware');

Configure the bot list

Rendertron uses the user-agent HTTP header to determine if a request comes from a bot or a user’s browser. It has a well-maintained list of bot user agents to compare with. By default this list does not include Googlebot, because Googlebot can execute JavaScript. To make Rendertron render Googlebot requests as well, add Googlebot to the list of user agents:

const BOTS = rendertron.botUserAgents.concat('googlebot');

const BOT_UA_PATTERN = new RegExp(BOTS.join('|'), 'i');

Rendertron compares the user-agent header against this regular expression later.
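To see the pattern in action, here is a quick check with hypothetical user-agent strings (illustration only, not part of the setup):

// The 'i' flag makes matching case-insensitive, so 'Googlebot' in a real
// user-agent string matches the lower-case 'googlebot' entry in the list.
console.log(BOT_UA_PATTERN.test(
  'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'
)); // true

console.log(BOT_UA_PATTERN.test(
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/537.36'
)); // false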

Add the middleware

To send bot requests to the Rendertron instance, we need to add the middleware to our express.js server. The middleware checks the requesting user agent and forwards requests from known bots to the Rendertron instance. Add the following code to server.js and don’t forget to substitute “YOUR_PROJECT_ID” with your Google Cloud Platform project ID:

app.use(rendertron.makeMiddleware({
  proxyUrl: 'https://YOUR_PROJECT_ID.appspot.com/render',
  userAgentPattern: BOT_UA_PATTERN
}));

Bots requesting the sample website receive the static HTML from Rendertron, so the bots don’t need to run JavaScript to display the content.
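Putting the pieces together, the finished server.js looks roughly like this (a sketch assembled from the snippets above; note that the middleware is registered before the routes, so bot requests are intercepted first):

const express = require('express');
const rendertron = require('rendertron-middleware');

const app = express();

const DIST_FOLDER = process.cwd() + '/docs';
const PORT = process.env.PORT || 8080;

const BOTS = rendertron.botUserAgents.concat('googlebot');
const BOT_UA_PATTERN = new RegExp(BOTS.join('|'), 'i');

// Bot traffic is proxied to the Rendertron instance; everything else falls through.
app.use(rendertron.makeMiddleware({
  proxyUrl: 'https://YOUR_PROJECT_ID.appspot.com/render',
  userAgentPattern: BOT_UA_PATTERN
}));

// Serve static assets (images, css, etc.)
app.get('*.*', express.static(DIST_FOLDER));

// Point all other URLs to index.html for our single page app
app.get('*', (req, res) => {
  res.sendFile(DIST_FOLDER + '/index.html');
});

app.listen(PORT, () => {
  console.log(`Node Express server listening on http://localhost:${PORT}`);
});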

Testing our setup

To test if the Rendertron setup was successful, run the mobile-friendly test again.
Unlike the first test, the cat pictures are visible. In the HTML tab, we can see all the HTML that the JavaScript generated, and that Rendertron has removed the need for JavaScript to display the content.
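You can also spot-check the setup locally without the mobile-friendly test. Here is a minimal sketch using Node's built-in http module, assuming the server above is running on localhost:8080 (the user-agent string is just an example that matches BOT_UA_PATTERN):

const http = require('http');

http.get({
  host: 'localhost',
  port: 8080,
  path: '/',
  headers: { 'User-Agent': 'Googlebot' } // triggers the Rendertron middleware
}, res => {
  let body = '';
  res.on('data', chunk => { body += chunk; });
  res.on('end', () => {
    // With dynamic rendering in place, the <img> tags should already be in
    // the HTML instead of being added later by client-side JavaScript.
    console.log(body.includes('<img') ? 'pre-rendered' : 'not pre-rendered');
  });
});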

Conclusion

You created a dynamic rendering setup without making any changes to the web app. With these changes, you can serve a static HTML version of the web app to crawlers.

Posted by Martin Splitt, Open Web Unicorn
31 Jan 14:29

CodeSOD: A Date with a Consultant

by Remy Porter

Management went out and hired a Highly Paid Consultant to build some custom extensions for ServiceNow, a SaaS tool for IT operations management. The HPC did their work, turned over the product, and vanished the instant the last check cleared. Matt was the blessed developer who was tasked with dealing with any bugs or feature requests.

Everything was fine for a few days, until the thirtieth of January. One of the end users attempted to set the “Due Date” for a hardware order to 03-FEB-2019. This failed, because "Start date cannot be in the past".

Now, at the time of this writing, February 3rd, 2019 is not yet in the past, so Matt dug in to investigate.

function onChange(control, oldValue, newValue, isLoading) {
	if (isLoading /*|| newValue == ''*/) {
		return;
	}
	
	//Type appropriate comment here, and begin script below
	var currentDate = new Date();
	var selectedDate = new Date(g_form.getValue('date_for'));
	
	if (currentDate.getUTCDate() > selectedDate.getUTCDate() || currentDate.getUTCMonth() > selectedDate.getUTCMonth() || currentDate.getUTCFullYear() > selectedDate.getUTCFullYear()) {
		g_form.addErrorMessage("Start date cannot be in the past.");
		g_form.clearValue('date_for');
	}
}

The validation rule at the end pretty much sums it up. They check each part of the date to see if a date is in the future. So 05-JUN-2019 obviously comes well before 09-JAN-2019. 21-JAN-2020 is well before 01-JUL-2019. Of course, 01-JUL-2019 is also before 21-JAN-2020, because this type of date comparison isn't actually orderable at all.

How could this code be wrong? It’s just common sense, obviously.

Speaking of common sense, Matt replaced that check with if (currentDate > selectedDate) and got ready for the next landmine left by the HPC.
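For the curious, the one-liner works because Date objects coerce to their epoch-millisecond values under relational comparison, so > gives a correct chronological ordering without any per-component juggling. A quick illustration (my own example, not from the original code):

var jan30 = new Date('2019-01-30');
var feb03 = new Date('2019-02-03');

// The HPC's per-component logic: 30 > 3 on day-of-month alone, so
// February 3rd is wrongly flagged as being in the past.
console.log(jan30.getUTCDate() > feb03.getUTCDate()); // true (wrong conclusion)

// Whole-date comparison: Dates compare by epoch milliseconds.
console.log(jan30 > feb03); // false, February 3rd is not in the past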

31 Jan 13:37

Heroes in Crisis


These confessionals with superheroes and villains in this excerpt from DC Comics Heroes in Crisis are a great read...

[Twelve excerpt pages from Heroes in Crisis]

Source: Heroes in Crisis

(via: IRONsoldier915)

30 Jan 23:37

Facebook Blocks Ad Transparency Tools

by John Gruber

ProPublica:

A number of organizations, including ProPublica, have developed tools to let the public see exactly how Facebook users are being targeted by advertisers.

Now, Facebook has quietly made changes to its site that stop those efforts.

ProPublica, Mozilla and Who Targets Me have all noticed their tools stopped working this month after Facebook inserted code in its website that blocks them.

“This is very concerning,” said Sen. Mark Warner, D-Va., who has co-sponsored the Honest Ads Act, which would require transparency on Facebook ads. “Investigative groups like ProPublica need access to this information in order to track and report on the opaque and frequently deceptive world of online advertising.”

Shocker.

30 Jan 22:30

It’s So Cold Out! Where’s the Global Warming?!

by Jason Kottke

In what is now an annual tradition, when the temperatures in some part of the US plunge below zero degrees on the Fahrenheit scale, some nitwit Republican climate change-denier live-tweets from the back pocket of industry something like “It’s so cold out where’s the global warming when we need it???? #OwnTheLibs”. This time around, it was our very own Shitwhistle-in-Chief who tweeted merrily about the current polar vortex bearing down on the Midwest:

In the beautiful Midwest, windchill temperatures are reaching minus 60 degrees, the coldest ever recorded. In coming days, expected to get even colder. People can’t last outside even for minutes. What the hell is going on with Global Waming? Please come back fast, we need you!

Some time ago, Randall Munroe addressed what severe cold in the US has to do with climate change on XKCD: it used to be colder a lot more often but we don’t really remember it.

XKCD Cold Weather Global Warming

When I was a kid growing up in Wisconsin, I recall experiencing overnight low temperatures in the -30°F to -40°F range several times and vividly remember being stranded in my house for a week in 1996 when the all-time record low for the state (-55°F) was established in nearby Couderay.

Munroe’s observation isn’t even the whole story. Jennifer Francis, senior scientist at the Woods Hole Research Center, writes that the polar vortex bringing cold air into the Midwest is connected to the rapidly warming Arctic.

Because of rapid Arctic warming, the north/south temperature difference has diminished. This reduces pressure differences between the Arctic and mid-latitudes, weakening jet stream winds. And just as slow-moving rivers typically take a winding route, a slower-flowing jet stream tends to meander.

Large north/south undulations in the jet stream generate wave energy in the atmosphere. If they are wavy and persistent enough, the energy can travel upward and disrupt the stratospheric polar vortex. Sometimes this upper vortex becomes so distorted that it splits into two or more swirling eddies.

These “daughter” vortices tend to wander southward, bringing their very cold air with them and leaving behind a warmer-than-normal Arctic.

(via @mkonnikova)

Tags: Donald Trump   global warming   Jennifer Francis   Randall Munroe   science   weather
30 Jan 17:33

How to publish Android apps to the Google Play Store with GitLab and fastlane

by Jason Lenny

When we heard about fastlane, an app automation tool for delivering iOS and Android builds, we wanted to give it a spin to see if a combination of GitLab and fastlane could take our mobile build and deployment automation to the next level. You can see an actual production deployment of the Gitter Android app that uses what we'll be implementing in this blog post. Suffice to say, the results were fantastic, and we've become big believers that the combination of GitLab and fastlane is a truly game-changing way to enable CI/CD for your mobile applications. With GitLab and fastlane we're getting, with minimal effort:

  • Source control, project home, issue tracking, and everything else that comes with GitLab.
  • Content and images (metadata) for Google Play Store listing managed in source control.
  • Automatic signing, version numbers, and changelog.
  • Automatic publishing to internal distribution channel in Google Play Store.
  • Manual promotion through alpha, beta, and production channels.
  • Containerized build environment, available in GitLab's container registry.

If you'd like to jump ahead and see the finished product, you can take a look at the already-completed Gitter for Android .gitlab-ci.yml, build.gradle, Dockerfile, and fastlane configuration.

Configuring fastlane

We'll begin by setting up fastlane in our project, making a couple of key changes to our Gradle configuration, and then wrapping everything up in a GitLab pipeline.

fastlane has pretty good documentation to get you started, and if you run into platform-specific trouble it's the first place to check, but to get under way you really just need to complete a few straightforward steps.

Initializing your project

First up, you need to get fastlane installed locally and initialize your project. We're using the Ruby fastlane gem, so you'll need Ruby on your system for this to work. You can read about other install options in the fastlane documentation. Add the gem to your Gemfile:

source "https://rubygems.org"

gem "fastlane"

Once your Gemfile is updated, you can run bundle update to update/generate your Gemfile.lock. From this point you can run fastlane by typing bundle exec fastlane. Later, you'll see that in CI we use bundle install ... to ensure the command runs within the context of our project environment.

Now that we have fastlane ready to run, we just need to initialize our repo with our configuration. Run bundle exec fastlane init from within your project directory, answer a few questions, and fastlane will create a new ./fastlane directory containing its configuration.

Setting up supply

supply is a feature built into fastlane which will help you manage screenshots, descriptions, and other localized metadata/assets for publishing to the Google Play Store.

Please refer to these detailed instructions for collecting the credentials necessary to run supply.

Once you've set this up, simply run bundle exec fastlane supply init and all your current metadata will be downloaded from your store listing and saved in fastlane/metadata/android. From this point you're able to manage all of your store content as-code; when we publish a new version to the store later, the versions of content checked into your source repo will be used to populate the entry.

Appfile

The ./fastlane/Appfile is pretty straightforward and contains the basic configuration you chose when you initialized your project. Later we'll see how to inject the json_key_file in your CI pipeline at runtime.

./fastlane/Appfile

json_key_file("~/google_play_api_key.json") # Path to the json secret file - Follow https://docs.fastlane.tools/actions/supply/#setup to get one
package_name("im.gitter.gitter") # e.g. com.krausefx.app

Fastfile

The ./fastlane/Fastfile is more interesting, and contains the first changes we made for Gitter compared to the default file created when you run bundle exec fastlane init.

The first section contains our definitions for how we want to run builds and tests. As you can see, this is pretty straightforward and builds right on top of the Gradle tasks you already have set up.

./fastlane/Fastfile

default_platform(:android)

platform :android do

  desc "Builds the debug code"
  lane :buildDebug do
    gradle(task: "assembleDebug")
  end

  desc "Builds the release code"
  lane :buildRelease do
    gradle(task: "assembleRelease")
  end

  desc "Runs all the tests"
  lane :test do
    gradle(task: "test")
  end

...

Creating Gradle tasks that publish or promote builds can be complicated and error prone, but fastlane makes this much easier with pre-built commands (called actions) that handle these complex tasks for you.

In our example, we've set up a workflow where a new build can be published to the internal track and then optionally promoted through alpha, beta, and ultimately production. We initially had a new build for each track but it's safer to have the same/known build go through the whole process.

...

  desc "Submit a new Internal Build to Play Store"
  lane :internal do
    upload_to_play_store(track: 'internal', apk: 'app/build/outputs/apk/release/app-release.apk')
  end

  desc "Promote Internal to Alpha"
  lane :promote_internal_to_alpha do
    upload_to_play_store(track: 'internal', track_promote_to: 'alpha')
  end

  desc "Promote Alpha to Beta"
  lane :promote_alpha_to_beta do
    upload_to_play_store(track: 'alpha', track_promote_to: 'beta')
  end

  desc "Promote Beta to Production"
  lane :promote_beta_to_production do
    upload_to_play_store(track: 'beta', track_promote_to: 'production')
  end
end

An important note is that we've only scratched the surface of the kinds of actions that fastlane can automate. You can read more about available actions here, and it's even possible to create your own.

Gradle configuration

We also made a couple of key changes to our basic Gradle configuration to make publishing easier. Nothing major here, but it does help us make things run a little more smoothly.

Secret properties

The first changed section gathers the secret variables to be used for signing. These are either loaded via configuration file, or gathered from environment variables in the case of CI.

app/build.gradle

// Try reading secrets from file
def secretsPropertiesFile = rootProject.file("secrets.properties")
def secretProperties = new Properties()

if (secretsPropertiesFile.exists()) {
    secretProperties.load(new FileInputStream(secretsPropertiesFile))
}
// Otherwise read from environment variables, this happens in CI
else {
    secretProperties.setProperty("oauth_client_id", "\"${System.getenv('oauth_client_id')}\"")
    secretProperties.setProperty("oauth_client_secret", "\"${System.getenv('oauth_client_secret')}\"")
    secretProperties.setProperty("oauth_redirect_uri", "\"${System.getenv('oauth_redirect_uri')}\"")
    secretProperties.setProperty("google_project_id", "\"${System.getenv('google_project_id') ?: "null"}\"")
    secretProperties.setProperty("signing_keystore_password", "${System.getenv('signing_keystore_password')}")
    secretProperties.setProperty("signing_key_password", "${System.getenv('signing_key_password')}")
    secretProperties.setProperty("signing_key_alias", "${System.getenv('signing_key_alias')}")
}

Automatic versioning

We also set up automatic versioning using the environment variables VERSION_CODE and VERSION_SHA, which we will set up later in CI (locally they will just be null, which is fine). Because each versionCode you submit to the Google Play Store needs to be higher than the last, this makes version management simple to deal with.

app/build.gradle

android {
    defaultConfig {
        applicationId "im.gitter.gitter"
        minSdkVersion 19
        targetSdkVersion 26
        versionCode Integer.valueOf(System.env.VERSION_CODE ?: 0)
        // Manually bump the semver version part of the string as necessary
        versionName "3.2.0-${System.env.VERSION_SHA}"

Signing configuration

Finally, we inject the signing configuration which will automatically be used by Gradle to sign the release build. Depending on your configuration, you may already be doing this. We only worry about signing in the release build that would potentially be published to the Google Play Store.

When using App Signing by Google Play, you will use two keys: the app signing key and the upload key. You keep the upload key and use it to sign your app for upload to the Google Play Store.

https://developer.android.com/studio/publish/app-signing#google-play-app-signing

IMPORTANT: Google will not re-sign any of your existing or new APKs that are signed with the app signing key. This enables you to start testing your app bundle in the internal test, alpha, or beta tracks while you continue to release your existing APK in production without Google Play changing it.

https://play.google.com/apps/publish/?account=xxx#KeyManagementPlace:p=im.gitter.gitter&appid=xxx

app/build.gradle

    signingConfigs {
        release {
            // You need to specify either an absolute path or include the
            // keystore file in the same directory as the build.gradle file.
            storeFile file("../android-signing-keystore.jks")
            storePassword "${secretProperties['signing_keystore_password']}"
            keyAlias "${secretProperties['signing_key_alias']}"
            keyPassword "${secretProperties['signing_key_password']}"
        }
    }
    buildTypes {
        release {
            minifyEnabled false
            testCoverageEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
            signingConfig signingConfigs.release
        }
    }
}

Setting up the Docker build environment

We are building a Docker image to be used as a repeatable, consistent build environment which will speed things up because it will already have the dependencies downloaded and installed. We're just fetching a few prerequisites, installing the Android SDK, and then grabbing fastlane.

Dockerfile

FROM openjdk:8-jdk

# Matches the value in `app/build.gradle`
ENV ANDROID_COMPILE_SDK "26"
# Matches the value in `app/build.gradle`
ENV ANDROID_BUILD_TOOLS "28.0.3"
# Version from https://developer.android.com/studio/releases/sdk-tools
ENV ANDROID_SDK_TOOLS "24.4.1"

ENV ANDROID_HOME /android-sdk-linux
ENV PATH="${PATH}:/android-sdk-linux/platform-tools/"

# install OS packages
RUN apt-get --quiet update --yes
RUN apt-get --quiet install --yes wget tar unzip lib32stdc++6 lib32z1 build-essential ruby ruby-dev
# We use this for xxd hex->binary
RUN apt-get --quiet install --yes vim-common
# install Android SDK
RUN wget --quiet --output-document=android-sdk.tgz https://dl.google.com/android/android-sdk_r${ANDROID_SDK_TOOLS}-linux.tgz
RUN tar --extract --gzip --file=android-sdk.tgz
RUN echo y | android-sdk-linux/tools/android --silent update sdk --no-ui --all --filter android-${ANDROID_COMPILE_SDK}
RUN echo y | android-sdk-linux/tools/android --silent update sdk --no-ui --all --filter platform-tools
RUN echo y | android-sdk-linux/tools/android --silent update sdk --no-ui --all --filter build-tools-${ANDROID_BUILD_TOOLS}
RUN echo y | android-sdk-linux/tools/android --silent update sdk --no-ui --all --filter extra-android-m2repository
RUN echo y | android-sdk-linux/tools/android --silent update sdk --no-ui --all --filter extra-google-google_play_services
RUN echo y | android-sdk-linux/tools/android --silent update sdk --no-ui --all --filter extra-google-m2repository
# install Fastlane
COPY Gemfile.lock .
COPY Gemfile .
RUN gem install bundler
RUN bundle install

Setting up GitLab

With our build environment ready, let's set up our .gitlab-ci.yml to tie it all together in a CI/CD pipeline.

Stages

The first thing we do is define the stages that we're going to use. We'll set up our build environment, do our debug and release builds, run our tests, deploy to internal, and then promote through alpha, beta, and production. You can see that, apart from environment, these map to the lanes we set up in our Fastfile.

stages:
  - environment
  - build
  - test
  - internal
  - alpha
  - beta
  - production

Build environment update

Next up, we're going to update our build environment, if needed. If you're not familiar with .gitlab-ci.yml, it may look like there's a lot going on here, but we'll take it one step at a time. The very first thing we do is set up a .updateContainerJob YAML template which captures shared configuration for other jobs to reuse. In this case, it will be used by the subsequent updateContainer and ensureContainer jobs.

.updateContainerJob template

In this case, since we're dealing with Docker in Docker (dind), we are running some scripts which log into the local GitLab container registry, fetch the latest image to be used as a layer cache reference, build a new image, and finally push the new version to the registry.

.updateContainerJob:
  image: docker:stable
  stage: environment
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

updateContainer job

The first job that inherits .updateContainerJob, updateContainer, only runs if the Dockerfile was updated and will run through the template steps described above.

updateContainer:
  extends: .updateContainerJob
  only:
    changes:
      - Dockerfile

ensureContainer job

Because the first pipeline on a branch can fail, the only: changes: Dockerfile syntax won't trigger for a subsequent pipeline after you fix things. This can leave your branch without a Docker image to use. So the ensureContainer job will look for an existing image and only build one if it doesn't exist. The one downside to this is that both of these jobs will run at the same time if it is a new branch.

Ideally, we could just use $CI_REGISTRY_IMAGE:master as a fallback when $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG isn't found but there isn't any syntax for this.

ensureContainer:
  extends: .updateContainerJob
  allow_failure: true
  before_script:
    - "mkdir -p ~/.docker && echo '{\"experimental\": \"enabled\"}' > ~/.docker/config.json"
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    # Skip update container `script` if the container already exists
    # via https://gitlab.com/gitlab-org/gitlab-ce/issues/26866#note_97609397 -> https://stackoverflow.com/a/52077071/796832
    - docker manifest inspect $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG > /dev/null && exit || true

Build and test

With our build environment ready, we're ready to build our debug and release targets. Similar to above, we use a template to set up repeated steps within our build jobs, avoiding duplication. Within this section, the first thing we do is set the image to the build environment container image we built in the previous step.

.build_job template

.build_job:
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  stage: build

...

Next up is a step that's specific to Gitter, but if you share assets between an iOS and an Android build you might consider doing something similar. What we're doing here is grabbing the latest mobile artifacts built by the web application pipeline and placing them in the appropriate location.

  before_script:
    - wget --output-document=artifacts.zip --quiet "https://gitlab.com/gitlab-org/gitter/webapp/-/jobs/artifacts/master/download?job=mobile-asset-build"
    - unzip artifacts.zip
    - mkdir -p app/src/main/assets/www
    - mv output/android/www/* app/src/main/assets/www/

Next, we use a project-level variable containing a binary (hex) dump of our signing keystore file and convert it back to a binary file. This allows us to inject the file into the build at runtime instead of checking it into source control, a potential security concern. To get the hex value for the signing_jks_file_hex variable, we dump the keystore with the binary-to-hex command xxd -p gitter-android-app.jks.

    # We store this binary file in a variable as hex with this command, `xxd -p gitter-android-app.jks`
    # Then we convert the hex back to a binary file
    - echo "$signing_jks_file_hex" | xxd -r -p - > android-signing-keystore.jks

Here we're setting the version at runtime – these environment variables will be used by the Gradle build as implemented above. Because $CI_PIPELINE_IID increments on each pipeline, we can guarantee our versionCode is always higher than the last and be able to publish to the Google Play Store.

    # We add 100 to get this high enough above current versionCodes that are published
    - "export VERSION_CODE=$((100 + $CI_PIPELINE_IID)) && echo $VERSION_CODE"
    - "export VERSION_SHA=`echo ${CI_COMMIT_SHORT_SHA}` && echo $VERSION_SHA"

Next, we automatically generate a changelog to include by copying whatever you have in CURRENT_VERSION.txt to the current <versionCode>.txt. You can update CURRENT_VERSION.txt as necessary. I won't dive into the details of the MR creation script here since it's somewhat specific to Gitter, but if you're interested in how something like this might work, check out the create-changlog-mr.sh script.

    # Make the changelog
    - cp ./fastlane/metadata/android/en-GB/changelogs/CURRENT_VERSION.txt "./fastlane/metadata/android/en-GB/changelogs/$VERSION_CODE.txt"
    # We allow the remote push and MR creation to fail because the other job could create it
    # and it's not strictly necessary (we just need the file locally for the CI build)
    - ./ci-scripts/create-changlog-mr.sh || true
    # Because we allow the MR creation to fail, just make sure we are back in the right repo state
    - git checkout "$CI_COMMIT_SHA"

Just a couple of final items: first, whenever a build job is done, we remove the jks file to be sure it doesn't get saved to artifacts; second, we set up the artifacts path where the output of the build (the .apk) will be saved.

  after_script:
    - rm android-signing-keystore.jks || true
  artifacts:
    paths:
    - app/build/outputs

buildDebug and buildRelease jobs

Most of the complexity here was set up in the template, so as you can see our buildDebug and buildRelease job definitions are very clear. Both just call the appropriate fastlane task (which, if you remember, then calls the appropriate Gradle task). The buildRelease output is associated with the production environment so we can define an extra production-scoped set of project-level variables which are different from our testing variables.

Since we set up code signing in the Gradle config (build.gradle) earlier, we can be confident here that our release builds are appropriately signed and ready for publishing.

buildDebug:
  extends: .build_job
  script:
    - bundle exec fastlane buildDebug

buildRelease:
  extends: .build_job
  script:
    - bundle exec fastlane buildRelease
  environment:
    name: production

Testing is really just another instance of the same thing, but instead of calling one of the build lanes we call the test lane. Note that we are using a dependency from the buildDebug job to ensure we don't need to rebuild anything.

testDebug:
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  stage: test
  dependencies:
    - buildDebug
  script:
    - bundle exec fastlane test

Publish

Now that our code is being built, we're ready to publish to the Google Play Store. We only publish to the internal testing track and promote this same build to the rest of the tracks.

This is achieved through the fastlane integration, using a pre-built action to handle the job. In this case we are using a dependency on the buildRelease job, and creating a local copy of the Google API JSON key file (again stored in a project-level variable instead of checked into source control). We have this job (and all subsequent jobs) set to run only on manual action, so we have full human control/intervention from this point forward. If you prefer to continuously deliver to your internal track, simply remove the when: manual entry.

If you're like me, this may seem too easy to actually work. But with everything we've configured in GitLab and fastlane to this point, it really is this simple!

publishInternal:
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  stage: internal
  dependencies:
    - buildRelease
  when: manual
  before_script:
    - echo $google_play_service_account_api_key_json > ~/google_play_api_key.json
  after_script:
    - rm ~/google_play_api_key.json
  script:
    - bundle exec fastlane internal

Promote

As indicated earlier, promotion through alpha, beta, and production are all manual jobs. If internal testing is good, it can be promoted one step forward in sequence all the way through to production using these manual jobs.

If you've followed along to this point, there's really nothing new here, which highlights the power of GitLab with fastlane. We have a .promote_job template which creates the local Google API JSON key file, and the promote jobs themselves are basically identical.

.promote_job:
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  when: manual
  dependencies: []
  only:
    - master
  before_script:
    - echo $google_play_service_account_api_key_json > ~/google_play_api_key.json
  after_script:
    - rm ~/google_play_api_key.json

promoteAlpha:
  extends: .promote_job
  stage: alpha
  script:
    - bundle exec fastlane promote_internal_to_alpha

promoteBeta:
  extends: .promote_job
  stage: beta
  script:
    - bundle exec fastlane promote_alpha_to_beta

promoteProduction:
  extends: .promote_job
  stage: production
  script:
    - bundle exec fastlane promote_beta_to_production

Note that we only allow promotion from the master branch, instead of from any branch. This ensures that the production build uses the separate set of production environment variables, which only happens for the buildRelease job. We also have these variables set as protected, so we can enforce that they are only used on the protected master branch.

Variables

The last step is to make sure you set up the project-level variables we used throughout the configuration above:

  • google_play_service_account_api_key_json: see https://docs.fastlane.tools/getting-started/android/setup/#collect-your-google-credentials
  • oauth_client_id
  • oauth_client_id, protected, production environment
  • oauth_client_secret
  • oauth_client_secret, protected, production environment
  • oauth_redirect_uri
  • oauth_redirect_uri, protected, production environment
  • signing_jks_file_hex: xxd -p gitter-android-app.jks
  • signing_key_alias
  • signing_key_password
  • signing_keystore_password

If you are using the same create-changlog-mr.sh script as us, you will also need a variable holding a GitLab API token so the script can push the changelog commit and open the merge request.

Project variables for Gitter for Android

What's next

Using this configuration we've got Gitter for Android building, signing, deploying to our internal track, and publishing to production as frequently as we like. Next up will be to do the same for iOS, so watch this space for our next post!

Photo by Patrick Tomasso on Unsplash

30 Jan 17:32

#1844 – Questions

by Chris

#1844 – Questions

30 Jan 17:32

TechCrunch: Facebook Pays Teenagers to Install VPN That Spies on Them

by John Gruber

Josh Constine, reporting for TechCrunch:

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.

Unless I’m missing something, running this through their enterprise developer certificate is a flagrant violation of Apple’s policies. Apple shut down Facebook’s Onavo VPN in August for collecting this exact type of data. Doing it outside the App Store doesn’t make it any better. As Constine points out:

However, Facebook’s claim that it doesn’t violate Apple’s Enterprise Certificate policy is directly contradicted by the terms of that policy. Those include that developers “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing”. The policy also states that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers” unless under direct supervision of employees or on company premises.

Security expert Will Strafach, quoted by TechCrunch:

“This hands Facebook continuous access to the most sensitive data about you, and most users are unable to reasonably consent. There is no good way to articulate just how much power is handed to Facebook when you do this.”

What apps you’re using, all of your network data, your location — Facebook takes all of it with this app. (Strafach is tweeting up a storm tonight on this story.)

Genuinely interested to see how Apple responds to this. To my eyes, this action constitutes Facebook declaring war on Apple’s iOS privacy protections. I don’t think it would be out of line for Apple to revoke Facebook’s developer certificate, maybe even pull their apps from the App Store. No regular developer would get away with this. Facebook is betting that their apps are too popular, that they can do what they want and Apple has to sit back and take it. I keep saying Facebook is a criminal enterprise, and I’m not exaggerating. Sometimes a bully needs to be punched in the face, not just told to knock it off.

28 Jan 21:16

Savage Wendy's Comic


OOOOOOH BURN! KainanH drew this funny Wendy's vs. McDonald's fast food mascot comic and said: "Wendy is most definitely a smug anime girl."

Savage Wendys Comic

Artist: KainanH

28 Jan 20:29

Patron Saints Of Rock Prayer Candles

by elssah12

Patron Saints of Rock Prayer Candles – Complete your altar to Rock and Roll with one of these digitally illustrated, parody art prayer candles. Inspired by some of the greatest figures in music.

The post Patron Saints Of Rock Prayer Candles appeared first on Shut Up And Take My Money.

28 Jan 20:27

Cookie Monster Thoughts


LOL! Cookie Monster's deep thoughts are amusing af...

Cookie Monster Thoughts

(via: HarryTTL)

28 Jan 20:27

Segway Roller Skates

by Erin Carstens
28 Jan 02:30

Streamline and shorten error remediation with Sentry’s new GitLab integration

by Eva Sasson

Sentry is great.

Sentry is open source error tracking that gives visibility across your entire stack and provides the details you need to fix bugs ASAP. Because the only thing better than visibility and details is more visibility and details, Sentry improved their GitLab integration by adding release and commit tracking as well as suspect commits.

Streamline your workflow with issue management and creation

When you receive an alert about an error, the last thing you want to do is jump around 20 different tools trying to find out exactly what happened and where. Developers with both Sentry and GitLab in their application lifecycle benefit from GitLab issue management and creation directly in the Sentry UI, alleviating some of the hassle of back-and-forth tool toggling.

GitLab account in Sentry

Of course, less tool jumping results in a more streamlined triaging process and shortened time to issue resolution – something that benefits the whole team.

Creating GitLab issue

Have a GitLab issue that wasn’t created in Sentry? No problem. Existing issues are also easily linked.

Import GitLab issue

Find and fix bugs faster with release and commit tracking

Why stop at streamlining the triaging process, when we can also make issue resolution more efficient? Sentry’s GitLab integration now utilizes GitLab commits to find and fix bugs faster.

With the newly added release and commit tracking, an enhanced release overview page uncovers new and resolved issues, files changed, and authors. Developers can also resolve issues via commit messages or merge requests, see suggested assignees for issues, and receive detailed deploy emails.
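For context, release tracking hinges on tagging the events your app sends with a release identifier, which Sentry can then line up with GitLab commits. A minimal sketch with Sentry's browser SDK (the DSN and release string below are placeholders, and the exact setup may differ for your platform):

import * as Sentry from '@sentry/browser';

Sentry.init({
  dsn: 'https://examplePublicKey@sentry.example.com/0', // placeholder DSN
  release: 'my-project-name@2.3.12' // typically set from CI, e.g. a git tag or SHA
});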

Want a big flashing arrow that points to an error’s root cause? Sentry’s suspect commits feature exposes the commit that likely introduced an error as well as the developer who wrote the broken code.

Suspect commits feature

Keep in mind that this feature is available for Sentry users on “Teams” plans and above.

Check out Sentry’s GitLab integration documentation to get started.

What’s next?

Again, why stop there, when we can do even more? GitLab is currently working to bring Sentry into the GitLab interface. Soon, GitLab and Sentry users will see their Sentry errors listed in their GitLab projects. Read the documentation on the integration here.

About the guest author

Eva Sasson is a Product Marketer at Sentry.io, an open source error-tracking tool that gives developers the contextual information they need to resolve issues quickly, and integrates with the other development tools across the stack.

25 Jan 13:25

After Party.

I'm pretty sure I was yelling the entire time.
25 Jan 13:25

#1842 – Kettle

by Chris

English people

#1842 – Kettle

22 Jan 16:57

Wavering.

Bamboozled by the third dimension once again.
22 Jan 16:57

A Delightfully Fourth-Wall-Breaking “Nancy” Comic from Olivia Jaimes

by Jason Kottke

Comics fans and the internet at large have been enchanted by the new author of the classic Nancy comic, Olivia Jaimes. This comic from Sunday shows why:

Nancy Recursive

If you check out the thread for the comic on Twitter, there are several instances of comics that mess with time and space like this, but the final panel by Jaimes is particularly strong. I definitely Laughed Out Loud.

Tags: comics   Olivia Jaimes   time travel
22 Jan 15:51

CodeSOD: Why Is This Here?

by Remy Porter

Oma was tracking down a bug where the application complained about the wrong parameters being passed to an API. As she traced through the Java code, she spotted a construct like this:

Long s = foo.getStatusCode();
if (s != null) {
    //do stuff
} else {
    //raise an error
}

Now, this wasn't the block which was throwing the error Oma was looking for, but she noticed a lot of if (s != null) type lines in the code. It reeked of copy/paste coding. Knowing what was about to happen, Oma checked the implementation of getStatusCode.

protected Long getStatusCode() throws ProductCustomException {
    Long statusCode = null;
    try {
        statusCode = Long.valueOf(1); //Why is this here?
    } catch (Exception ex) {
        throw new ProductCustomException(ProductMessages.GENERIC_PRODUCT_MESSAGES, ex);
    }
    return statusCode;
}

//Why is this here? is part of the original code. It's also the first question which popped into my head, followed by "why am I here" and "what am I doing with my life?"

For bonus points, what's not immediately clear here is that the indenting is provided via a mix of spaces and tabs.

18 Jan 20:13

Mandarin.

by languagehat

Sarah Zhang uses the recent appearance of a mandarin duck in Central Park as a springboard to share an interesting bit of etymology:

Yes, true, mandarin ducks are native to China, where Mandarin is the official language. But the word mandarin has a more roundabout origin. It does not come from Mandarin Chinese, which refers to itself as putonghua (or “common speech”) and China, the country, as zhongguo (or “Middle Kingdom”). It doesn’t come from any other variant of Chinese, either. Its origins are Portuguese.

This one word encapsulates an entire colonial history. In the 16th century, Portuguese explorers were among the first Europeans to reach China. Traders and missionaries followed, settling into Macau on land leased from China’s Ming dynasty rulers. The Portuguese called the Ming officials they met mandarim, which comes from menteri in Malay and, before that, mantrī in Sanskrit, both of which mean “minister” or “counselor.” It makes sense that Portuguese would borrow from Malay; they were simultaneously colonizing Malacca on the Malay peninsula. […]

Over time, the Portuguese coinage of “mandarin” took on other meanings. The Ming dynasty officials wore yellow robes, which may be why “mandarin” came to mean a type of citrus. “Mandarin” also lent its names to colorful animals native to Asia but new to Europeans, like wasps and snakes and, of course, ducks. And the language the Chinese officials spoke became “Mandarin,” which is how the English name for the language more than 1 billion people in China speak still comes from Portuguese.

(For more on the history of Mandarin Chinese itself, see the very interesting comment by Bathrobe in this LH thread.) Thanks, Trevor!

18 Jan 20:13

Meet Samson the Ladle


Meet the adorable and busy little kitchen gadget, Samson the Ladle (if you want to get your own the Nessie Ladles are available here)...

[Photos of Samson the Ladle]

Source: Samson the Ladle

Get Your Own Nessie Ladle Here!

18 Jan 20:13

Menendez Brothers Found Courtside on 1990 Basketball Card

by Aaron Cohen

About 30 years ago, the Menendez brothers of Beverly Hills murdered their parents, collected a hefty life insurance policy, and then went on an eight-month spending spree. The brothers bought cars, watches, opulent vacations, restaurants (what?!), and… courtside tickets to see the Knicks play. Incidentally, a photo of Mark Jackson from that game was used on his 1990 basketball card, and you'll never guess who was in the background.

Mark Jackson 1990 Basketball Card

The guy who found it, Stephen Zerance, isn’t an NBA fan but a fan of true-crime. He’d read in court documents the brothers had bought the tickets and went looking for proof. When archival photo and video searches were fruitless, he thought about basketball cards. After looking on eBay, Zerance found his match and announced it this past August, 29 years after the murders. It’s some sort of real-life Time Travelers in Historic Photos bananas coincidence.

As an aside, I learned while writing this post the Menendez brothers weren’t initially considered suspects and got caught after one of the brothers admitted the murders to his psychologist, who told his mistress (the psychologist’s, not the brother’s), who told the cops. Eventually, the affair between the mistress and the psychologist ended, perhaps on account of the stress related to being an ancillary part of a high profile murder case, and likely badly as evidenced by the fact the mistress attended the Menendez trial as a witness for the defense with the intention of impugning the character of the psychologist. What a ride.

Tags: basketball   Mark Jackson   Menendez Brothers   photography
16 Jan 21:34

Comic for 2019.01.15

16 Jan 14:31

Meet the Black Market Dropgangs

by Jason Kottke

Ok, this is fascinating. In “dropgangs, or the future of darknet markets”, Jonathan Logan shares how vendors on the darknet have evolved in recent years. Instead of relying on markets like Silk Road to connect with customers and the post office to deliver, vendors have brought customer communications in-house and utilize public dead drop locations for delivery, just like espionage organizations.

To prevent the problems of customer binding, and losing business when darknet markets go down, merchants have begun to leave the specialized and centralized platforms and instead ventured to use widely accessible technology to build their own communications and operational back-ends.

Instead of using websites on the darknet, merchants are now operating invite-only channels on widely available mobile messaging systems like Telegram. This allows the merchant to control the reach of their communication better and be less vulnerable to system take-downs. To further stabilize the connection between merchant and customer, repeat customers are given unique messaging contacts that are independent of shared channels and thus even less likely to be found and taken down. Channels are often operated by automated bots that allow customers to inquire about offers and initiate the purchase, often even allowing a fully bot-driven experience without human intervention on the merchant’s side.

The use of messaging platforms provides a much better user experience to the customers, who can now reach their suppliers with mobile applications they are used to already. It also means that a larger part of the communication isn’t routed through the Tor or I2P networks anymore but each side - merchant and customer - employ their own protection technology, often using widely spread VPNs.

The other major change is the use of “dead drops” instead of the postal system which has proven vulnerable to tracking and interception. Now, goods are hidden in publicly accessible places like parks and the location is given to the customer on purchase. The customer then goes to the location and picks up the goods. This means that delivery becomes asynchronous for the merchant, he can hide a lot of product in different locations for future, not yet known, purchases. For the client the time to delivery is significantly shorter than waiting for a letter or parcel shipped by traditional means - he has the product in his hands in a matter of hours instead of days. Furthermore this method does not require for the customer to give any personally identifiable information to the merchant, which in turn doesn’t have to safeguard it anymore. Less data means less risk for everyone.

Logan expects this type of thing to become more widespread in the near future and it will be difficult to know what effect it will have on society. Maybe one of those effects is that being a corner hopper (like in The Wire) will be more widely available to young people (emphasis mine):

More people will find their livelihoods in taking part in these distribution networks, since required skills and risks are low, while a steady income for the industrious can be expected. Instead of delivering papers, teenagers will service dead drops.

(via @pomeranian99)

Tags: crime   drugs   Jonathan Logan
16 Jan 14:30

The Best Products to Dig Tunnels & Scale Walls

by Erin Carstens
14 Jan 18:08

Sunshine Considered Harmful? Perhaps Not.

by Jason Kottke

For Outside magazine, Rowan Jacobsen talks to scientists whose research suggests that the current guidelines for protecting human skin from exposure to the sun are backwards. Despite the skin cancer risk, we should be getting more sun, not less.

When I spoke with Weller, I made the mistake of characterizing this notion as counterintuitive. “It’s entirely intuitive,” he responded. “Homo sapiens have been around for 200,000 years. Until the industrial revolution, we lived outside. How did we get through the Neolithic Era without sunscreen? Actually, perfectly well. What’s counterintuitive is that dermatologists run around saying, ‘Don’t go outside, you might die.’”

When you spend much of your day treating patients with terrible melanomas, it’s natural to focus on preventing them, but you need to keep the big picture in mind. Orthopedic surgeons, after all, don’t advise their patients to avoid exercise in order to reduce the risk of knee injuries.

Meanwhile, that big picture just keeps getting more interesting. Vitamin D now looks like the tip of the solar iceberg. Sunlight triggers the release of a number of other important compounds in the body, not only nitric oxide but also serotonin and endorphins. It reduces the risk of prostate, breast, colorectal, and pancreatic cancers. It improves circadian rhythms. It reduces inflammation and dampens autoimmune responses. It improves virtually every mental condition you can think of. And it’s free.

These seem like benefits everyone should be able to take advantage of. But not all people process sunlight the same way. And the current U.S. sun-exposure guidelines were written for the whitest people on earth.

Exposure and sunscreen recommendations for people with dark skin may be particularly misleading.

People of color rarely get melanoma. The rate is 26 per 100,000 in Caucasians, 5 per 100,000 in Hispanics, and 1 per 100,000 in African Americans. On the rare occasion when African Americans do get melanoma, it’s particularly lethal — but it’s mostly a kind that occurs on the palms, soles, or under the nails and is not caused by sun exposure.

At the same time, African Americans suffer high rates of diabetes, heart disease, stroke, internal cancers, and other diseases that seem to improve in the presence of sunlight, of which they may well not be getting enough. Because of their genetically higher levels of melanin, they require more sun exposure to produce compounds like vitamin D, and they are less able to store that vitamin for darker days. They have much to gain from the sun and little to fear.

Tags: medicine   Rowan Jacobsen   Sun
14 Jan 16:54

Minimalism

by Cale

Recently I've been watching shows on real estate
I want to buy something because rent is a waste
So Netflix gives me ideas that could saturate
My design instinct to architect taste.
Minimalism as a lifestyle is attractive to me
Probably because I reject material things
But smaller spaces should not turn into pleas
Of urgent shrieks for spatial bling.
Smaller is fine and so is less stuff
But we ought not to evangelize so
Because eventually we'll cut to the rough
No space at all and 5' ceiling low.
But with urban population growing like mad
And the world aware of mortgage and debt
Maybe it's ok to dismember as a fad
Maybe it's ok to adapt to the set!
So let the progress rein
Bring on the surgical solution
We can live with horrendous of pain
For the sake of humanity's pollution!


He is the best at minimalism, no one can deny, when all he is is a head, just in a jar our minimal guy.


The post Minimalism appeared first on Things in Squares.

14 Jan 15:57

Pure Hell All White 1000 Piece Jigsaw Puzzle

by elssah12


Pure Hell All White 1000 Piece Jigsaw Puzzle – Good luck finishing this puzzle; there's a good chance it will drive you insane within the first 20 minutes.


The post Pure Hell All White 1000 Piece Jigsaw Puzzle appeared first on Shut Up And Take My Money.

11 Jan 13:56

Meet and Greet.

11 Jan 13:54

The Flag of the Popular Vote

by Jason Kottke

Flag Of The Popular Vote

Toph Tucker has designed an algorithmic version of the US flag called the Flag of the Popular Vote, where the size of the stars and stripes are proportional to the current populations of the original 13 colonies (stripes) and current 50 states (stars). There’s also an animated version with tiny new stars appearing when new states are admitted into the union and the stars & stripes shift in size as populations grow. This New Aesthetic flag reminds me a bit of Rem Koolhaas’ proposed EU flag.

Tags: design   Toph Tucker   USA
11 Jan 13:54

Frosty the Snowmonster


Zack Frost built this terrifying "snowman" monster this winter and we are loving it in all its horrific glory... 😂

Frosty the Snowmonster

And it's starting to melt...

Frosty the Snowmonster

Source: Zack Frost

(via: Geeks are Sexy)
