A few years ago, Ian started at one of the many investment banks based out of London. This particular bank was quite proud of how they integrated “the latest technology” into all their processes, “favoring the bleeding edge,” and “always focusing on Agile methods and cross-functional collaboration.”
That last bit is why every software developer was on a tech support rotation. Every two weeks, they’d have to spend a day sitting with the end users, watching them work. Ostensibly, by seeing how the software was actually used, the developers would have a better sense of the users’ needs. In practice, they mostly showed people how to delete emails or recover files from the recycling bin.
Unfortunately, these end users also directly or indirectly controlled the bank’s budgeting process, so keeping them happy was a big part of ensuring continued employment. Not just service, but service with a smile, or else.
Ian’s problem customer was Jacob. Jacob had been with the bank at least thirty years, and still longed for the days of lunchtime brandy and casual sexual harassment. He did not like computers. He did not like the people who serviced his computer. He did not like it when a web page displayed incorrectly, and he especially did not like it when you explained that you couldn’t edit the web page you didn’t own, and couldn’t tell Microsoft to change Internet Explorer to work with that particular website.
“I understand you smart technical kids are just a cost of doing business,” Jacob would often say, “but your budget is out of control. Something must be done!”
Various IT projects proceeded apace. Jacob continued to try and cut their budget. And then the Windows 7 rollout happened.
This was a massive effort. They had been on Windows XP. A variety of intranet and proprietary applications didn’t work on Windows 7, and needed to be upgraded. Even with those upgrades, everyone knew that there would be more problems. These big changes never came without unexpected side effects.
The day Jacob got Windows 7 imaged onto his computer also happened to be the day Ian was on helldesk duty. Ian got a frantic email:
My screen is broken! Everything is wrong! COME TO MY DESK RIGHT NOW, YOUNG MAN
Ian had already prepared, and went right ahead and changed Jacob’s desktop settings so that they mimicked Windows XP as closely as possible.
“That’s all fine and good,” Jacob said, “but it’s still broken.”
Ian looked at the computer. Nothing was broken. “What… what exactly is the problem?”
“Internet Explorer is broken!”
Ian double clicked the IE icon. The browser launched just fine, and pulled up the company home page.
“No! Close that window, and look at the desktop!”
Ian did so, waiting for Jacob to explain the problem. Jacob waited for Ian to see the problem. They both sat there, waiting, no one willing to move until the other had gone.
Jacob broke first. “The icon is wrong!”
Ah, yes, the big-blue-E of Windows XP had been replaced by the big-blue-E of Windows 7.
“This is unacceptable!” Jacob said.
Ian had already been here for most of the morning, so a few more minutes made no difference. He fired up image search, grabbed the first image, an XP-era IE icon, and set that as the icon on the desktop.
Jacob squinted. “Nope. No, I don't like that. It’s too smooth.”
Of course. Ian had grabbed the first image, which was much higher resolution than the original icon file. “I… see. Give me a minute.”
Ian went back to his desk, resized the image, threw it on a network share, went back to Jacob’s desk, and changed the icon.
“There we are,” Jacob said. “At least someone on your team knows how to support their users. It’s not just about making changes willy-nilly, you know. Good work!”
That was the first and only honest compliment Jacob ever gave Ian. Two years later, Ian moved on to a new job, leaving Jacob with his old IE icon, sitting at the same desk he’d occupied since before the Internet was even a “thing”.
The copay came out of my HSA account so I didn't pay anything out of pocket. Other than the anxiety and the extreme pain, I'm doing pretty well! Two procedures down and only two or three more to go!
Sass just launched a major new feature you might recognize from other languages: a module system. This is a big step forward for @import, one of the most-used Sass features. While the current @import rule allows you to pull in third-party packages, and split your Sass into manageable "partials," it has a few limitations:
@import is also a CSS feature, and the differences can be confusing
If you @import the same file multiple times, it can slow down compilation, cause override conflicts, and generate duplicate output.
Everything is in the global namespace, including third-party packages – so my color() function might override your existing color() function, or vice versa.
When you use a function like color(), it’s impossible to know exactly where it was defined. Which @import does it come from?
Sass package authors (like me) have tried to work around the namespace issues by manually prefixing our variables and functions — but Sass modules are a much more powerful solution. In brief, @import is being replaced with more explicit @use and @forward rules. Over the next few years Sass @import will be deprecated, and then removed. You can still use CSS imports, but they won’t be compiled by Sass. Don’t worry, there’s a migration tool to help you upgrade!
Import files with @use
@use 'buttons';
The new @use is similar to @import, but has some notable differences:
The file is only imported once, no matter how many times you @use it in a project.
Variables, mixins, and functions (what Sass calls "members") that start with an underscore (_) or hyphen (-) are considered private, and not imported.
Members from the used file (buttons.scss in this case) are only made available locally, but not passed along to future imports.
Similarly, @extend will only apply up the chain: extending selectors in imported files, but not extending files that import this one.
All imported members are namespaced by default.
When we @use a file, Sass automatically generates a namespace based on the file name:
@use 'buttons'; // creates a `buttons` namespace
@use 'forms'; // creates a `forms` namespace
We now have access to members from both buttons.scss and forms.scss — but that access is not transferred between the imports: forms.scss still has no access to the variables defined in buttons.scss. Because the imported features are namespaced, we have to use a new period-divided syntax to access them:
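For example, assuming buttons.scss defines a $color variable and forms.scss defines $input-border (the same members used in the examples below):
// access members with <namespace>.<member>
$btn-color: buttons.$color;
$form-border: forms.$input-border;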
We can change or remove the default namespace by adding as <name> to the import:
@use 'buttons' as *; // the star removes any namespace
@use 'forms' as f;
$btn-color: $color; // buttons.$color without a namespace
$form-border: f.$input-border; // forms.$input-border with a custom namespace
Using as * adds a module to the root namespace, so no prefix is required, but those members are still locally scoped to the current document.
Import built-in Sass modules
Internal Sass features have also moved into the module system, so we have complete control over the global namespace. There are several built-in modules — math, color, string, list, map, selector, and meta — which have to be imported explicitly in a file before they are used:
@use 'sass:math';
$half: math.percentage(1/2);
Sass modules can also be imported to the global namespace:
@use 'sass:math' as *;
$half: percentage(1/2);
Internal functions that already had prefixed names, like map-get or str-index, can be used without duplicating that prefix:
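A quick sketch of what that looks like (the map and string values here are only for illustration):
@use 'sass:map';
@use 'sass:string';
$settings: ('width': 4rem);
$width: map.get($settings, 'width'); // rather than map.map-get()
$where: string.index('hello world', 'w'); // rather than string.str-index()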
You can find a full list of built-in modules, functions, and name changes in the Sass module specification.
New and changed core features
As a side benefit, this means that Sass can safely add new internal mixins and functions without causing name conflicts. The most exciting example in this release is a sass:meta mixin called load-css(). This works similarly to @use, but it only returns generated CSS output, and it can be used dynamically anywhere in our code:
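A minimal sketch, using the same theme/dark module that’s configured below (the data-theme selector is only for illustration):
@use 'sass:meta';
[data-theme='dark'] {
  @include meta.load-css('theme/dark');
}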
The first argument is a module URL (like @use) but it can be dynamically changed by variables, and even include interpolation, like theme-#{$name}. The second (optional) argument accepts a map of configuration values:
// Configure the $base-color variable in 'theme/dark' before loading
@include meta.load-css(
'theme/dark',
$with: ('base-color': rebeccapurple)
);
The $with argument accepts configuration keys and values for any variable in the loaded module, if it is both:
A global variable that doesn’t start with _ or - (now used to signify privacy)
Marked as a !default value, to be configured
// theme/_dark.scss
$base-color: black !default; // available for configuration
$_private: true !default; // not available because private
$config: false; // not available because not marked as a !default
Note that the 'base-color' key will set the $base-color variable.
There are two more sass:meta functions that are new: module-variables() and module-functions(). Each returns a map of member names and values from an already-imported module. These accept a single argument matching the module namespace:
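For example, with the buttons module from earlier (the returned maps shown here are only a sketch):
@use 'sass:meta';
@use 'buttons';
$button-vars: meta.module-variables('buttons'); // e.g. ('color': blue, 'style': 'flat')
$button-fns: meta.module-functions('buttons');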
Several other sass:meta functions — global-variable-exists(), function-exists(), mixin-exists(), and get-function() — will get additional $module arguments, allowing us to inspect each namespace explicitly.
Adjusting and scaling colors
The sass:color module also has some interesting caveats, as we try to move away from some legacy issues. Many of the legacy shortcuts, like lighten() or adjust-hue(), are deprecated for now in favor of explicit color.adjust() and color.scale() functions:
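Roughly, the old shortcuts translate like this (a sketch of the idea):
@use 'sass:color';
// lighten(red, 20%) becomes...
$light-red: color.adjust(red, $lightness: 20%);
// adjust-hue(red, 180deg) becomes...
$spun: color.adjust(red, $hue: 180deg);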
Some of those old functions (like adjust-hue) are redundant and unnecessary. Others — like lighten, darken, saturate, and so on — need to be re-built with better internal logic. The original functions were based on adjust(), which uses linear math: adding 20% to the current lightness of red in our example above. In most cases, we actually want to scale() the lightness by a percentage, relative to the current value:
// 20% of the distance to white, rather than current-lightness + 20
$light-red: color.scale(red, $lightness: 20%);
Once fully deprecated and removed, these shortcut functions will eventually re-appear in sass:color with new behavior based on color.scale() rather than color.adjust(). This is happening in stages to avoid sudden backwards-breaking changes. In the meantime, I recommend manually checking your code to see where color.scale() might work better for you.
Configure imported libraries
Third-party or re-usable libraries will often come with default global configuration variables for you to override. We used to do that with variables before an import:
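Something like this (using the same $color and $style defaults that get configured below):
// the old way: override the library defaults, then import
$color: red;
$style: 'flat';
@import 'buttons';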
Since used modules no longer have access to local variables, we need a new way to set those defaults. We can do that by adding a configuration map to @use:
@use 'buttons' with (
$color: red,
$style: 'flat',
);
This is similar to the $with argument in load-css(), but rather than using variable-names as keys, we use the variable itself, starting with $.
I love how explicit this makes configuration, but there’s one rule that has tripped me up several times: a module can only be configured once, the first time it is used. Import order has always been important for Sass, even with @import, but those issues always failed silently. Now we get an explicit error, which is both good and sometimes surprising. Make sure to @use and configure libraries first thing in any "entrypoint" file (the central document that imports all partials), so that those configurations compile before other @use of the libraries.
It’s (currently) impossible to "chain" configurations together while keeping them editable, but you can wrap a configured module along with extensions, and pass that along as a new module.
Pass along files with @forward
We don’t always need to use a file, and access its members. Sometimes we just want to pass it along to future imports. Let’s say we have multiple form-related partials, and we want to import all of them together as one namespace. We can do that with @forward:
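Assuming the same input and buttons partials used later in this post, the collection’s index file might look like this:
// forms/_index.scss
@forward 'input';
@forward 'buttons';
// ...and any other form-related partials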
Members of the forwarded files are not available in the current document and no namespace is created, but those variables, functions, and mixins will be available when another file wants to @use or @forward the entire collection. If the forwarded partials contain actual CSS, that will also be passed along without generating output until the package is used. At that point it will all be treated as a single module with a single namespace:
// styles.scss
@use 'forms'; // imports all of the forwarded members in the `forms` namespace
(Note: if you ask Sass to import a directory, it will look for a file named index or _index.)
By default, all public members will forward with a module. But we can be more selective by adding show or hide clauses, and naming specific members to include or exclude:
// forward only the 'input' border() mixin, and $border-color variable
@forward 'input' show border, $border-color;
// forward all 'buttons' members *except* the gradient() function
@forward 'buttons' hide gradient;
Note: when functions and mixins share a name, they are shown and hidden together.
In order to clarify source, or avoid naming conflicts between forwarded modules, we can use as to prefix members of a partial as we forward:
// forms/_index.scss
// @forward "<url>" as <prefix>-*;
// assume both modules include a background() mixin
@forward 'input' as input-*;
@forward 'buttons' as btn-*;
// style.scss
@use 'forms';
@include forms.input-background();
@include forms.btn-background();
And, if we need, we can always @use and @forward the same module by adding both rules:
@forward 'forms';
@use 'forms';
That’s particularly useful if you want to wrap a library with configuration or any additional tools, before passing it along to your other files. It can even help simplify import paths:
// _tools.scss
// only use the library once, with configuration
@use 'accoutrement/sass/tools' with (
$font-path: '../fonts/',
);
// forward the configured library with this partial
@forward 'accoutrement/sass/tools';
// add any extensions here...
// _anywhere-else.scss
// import the wrapped-and-extended library, already configured
@use 'tools';
Both @use and @forward must be declared at the root of the document (not nested), and at the start of the file. Only @charset and simple variable definitions can appear before the import commands.
Moving to modules
In order to test the new syntax, I built a new open source Sass library (Cascading Color Systems) and a new website for my band — both still under construction. I wanted to understand modules as both a library and website author. Let’s start with the "end user" experience of writing site styles with the module syntax…
Maintaining and writing styles
Using modules on the website was a pleasure. The new syntax encourages a code architecture that I already use. All my global configuration and tool imports live in a single directory (I call it config), with an index file that forwards everything I need:
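The index itself is just a stack of forwards; a rough sketch (these particular partial names are only illustrative):
// config/_index.scss
@forward 'tools';
@forward 'fonts';
@forward 'scale';
@forward 'colors';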
This even works with my existing Sass libraries, like Accoutrement and Herman, that still use the old @import syntax. Since the @import rule will not be replaced everywhere overnight, Sass has built in a transition period. Modules are available now, but @import will not be deprecated for another year or two — and only removed from the language a year after that. In the meantime, the two systems will work together in either direction:
If we @import a file that contains the new @use/@forward syntax, only the public members are imported, without namespace.
If we @use or @forward a file that contains legacy @import syntax, we get access to all the nested imports as a single namespace.
That means you can start using the new module syntax right away, without waiting for a new release of your favorite libraries, and I can take some time to update all my libraries!
Migration tool
Upgrading shouldn’t take long if we use the Migration Tool built by Jennifer Thakar. It can be installed with Node, Chocolatey, or Homebrew:
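Pick whichever package manager you already use; as far as I know, the published package names are:
npm install -g sass-migrator
choco install sass-migrator
brew install sass/sass/migrator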
This is not a single-use tool for migrating to modules. Now that Sass is back in active development (see below), the migration tool will also get regular updates to help migrate each new feature. It’s a good idea to install this globally, and keep it around for future use.
The migrator can be run from the command line, and will hopefully be added to third-party applications like CodeKit and Scout as well. Point it at a single Sass file, like style.scss, and tell it what migration(s) to apply. At this point there’s only one migration, called module:
By default, the migrator will only update a single file, but in most cases we’ll want to update the main file and all its dependencies: any partials that are imported, forwarded, or used. We can do that by mentioning each file individually, or by adding the --migrate-deps flag:
sass-migrator --migrate-deps module style.scss
For a test-run, we can add --dry-run --verbose (or -nv for short), and see the results without changing any files. There are a number of other options that we can use to customize the migration — even one specifically for helping library authors remove old manual namespaces — but I won’t cover all of them here. The migration tool is fully documented on the Sass website.
Updating published libraries
I ran into a few issues on the library side, specifically trying to make user-configurations available across multiple files, and working around the missing chained-configurations. The ordering errors can be difficult to debug, but the results are worth the effort, and I think we’ll see some additional patches coming soon. I still have to experiment with the migration tool on complex packages, and possibly write a follow-up post for library authors.
The important thing to know right now is that Sass has us covered during the transition period. Not only can imports and modules work together, but we can create "import-only" files to provide a better experience for legacy users still @importing our libraries. In most cases, this will be an alternative version of the main package file, and you’ll want them side-by-side: <name>.scss for module users, and <name>.import.scss for legacy users. Any time a user calls @import <name>, it will load the .import version of the file:
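In the simplest case, the import-only file just forwards the module version of the library:
// _forms.import.scss
@forward 'forms';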
This is particularly useful for adding prefixes for non-module users:
// _forms.import.scss
// Forward the main module, while adding a prefix
@forward "forms" as forms-*;
Upgrading Sass
You may remember that Sass had a feature-freeze a few years back, to get various implementations (LibSass, Node Sass, Dart Sass) all caught up, and eventually retired the original Ruby implementation. That freeze ended last year, with several new features and active discussions and development on GitHub – but not much fanfare. If you missed those releases, you can get caught up on the Sass Blog:
Dart Sass is now the canonical implementation, and will generally be the first to implement new features. If you want the latest, I recommend making the switch. You can install Dart Sass with Node, Chocolatey, or Homebrew. It also works great with existing gulp-sass build steps.
Much like CSS (since CSS3), there is no longer a single unified version-number for new releases. All Sass implementations are working from the same specification, but each one has a unique release schedule and numbering, reflected with support information in the beautiful new documentation designed by Jina.
Sass Modules are available as of October 1st, 2019 in Dart Sass 1.23.0.
This is me looking at the HTML <dialog> element for the first time. I've been aware of it for a while, but haven't taken it for a spin yet. It has some pretty cool and compelling features. I can't decide for you if you should use it in production on your sites, but I'd think it's starting to be possible.
It's not just a semantic element, it has APIs and special CSS.
We'll get to that stuff in a moment, but it's notable because it makes the browser support stuff significant.
When we first got HTML5 elements like <article>, it pretty much didn't matter if the browser supported it or not because nothing was worse-off in those scenarios if you used it. You could make it block-level and it was just like a meaningless div you would have used anyway.
That said, I wouldn't just use <dialog> as a "more semantic <div> replacement." It's got too much functionality for that.
Let's do the browser support thing.
As I write:
Chrome's got it (37+), so Edge is about to get it.
Firefox has the User-Agent (UA) styles in place (69+), but the functionality is behind a dom.dialog_element.enabled flag. Even with the flag, it doesn't look like we get the CSS stuff yet.
It's certainly more compelling to use features with better support than this, but I'd say it's close and it might just cross the line if you're the polyfilling type anyway.
Like any UA styles, you'll almost surely override them with your own fancy dialog styles — shadows and typography and whatever else matches your site's style.
There is a JavaScript API for opening and closing them.
Say you have a reference to the element named dialog:
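Something like this (assuming a single <dialog> on the page):
// grab the element, then open and close it
const dialog = document.querySelector('dialog');
dialog.show();  // opens the dialog
dialog.close(); // closes it again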
You should probably use this more explicit command though:
dialog.showModal();
That's what makes the backdrop work (and we'll get to that soon). I'm not sure I quite grok it, but the spec talks about a "pending dialog stack" and this API will open the modal pending that stack. Here's a modal that can open a second stacking modal:
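A rough sketch of the idea (the IDs and markup are just for illustration):
<button onclick="document.querySelector('#outer').showModal()">Open the first dialog</button>
<dialog id="outer">
  <button onclick="document.querySelector('#inner').showModal()">Open a second, stacked dialog</button>
</dialog>
<dialog id="inner">
  <p>I'm stacked on top.</p>
</dialog>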
Notice that if you programmatically open the dialog, you get a backdrop cover.
This has always been one of the more finicky things about building your own dialogs. A common UI pattern is to darken the background behind the dialog to focus attention on the dialog.
We get that for free with <dialog>, assuming you open it via JavaScript. You control the look of it with the ::backdrop pseudo-element. Instead of the low-opacity black default, let's do red with stripes:
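A rough sketch (tweak the stripe widths and colors to taste):
dialog::backdrop {
  background: repeating-linear-gradient(
    45deg,
    rgba(255, 0, 0, 0.4),
    rgba(255, 0, 0, 0.4) 20px,
    rgba(255, 0, 0, 0.7) 20px,
    rgba(255, 0, 0, 0.7) 40px
  );
}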
I don't know much about this stuff, but I can fire up VoiceOver on my Mac and see that the dialog comes into focus when I trigger the button that opens the modal.
Rob Dodson said: "modals are actually the boss battle at the end of web accessibility." Kinda nice that the native browser version helps with a lot of that. You even automatically get the Escape key closing functionality, which is great. There's no click outside to close, though. Perhaps someday pending user feedback.
Kris Cheng, reporting for the Hong Kong Free Press:
The Republic of China flag emoji has disappeared from Apple
iPhone’s keyboard for Hong Kong and Macau users. The change
happened for users who updated their phones to the latest
operating system.
Updating iPhones to iOS 13.1.1 or above caused the flag emoji to
disappear from the emoji keyboard. The flag, commonly used by
users to denote Taiwan, can still be displayed by typing “Taiwan”
in English, and choosing the flag in prediction candidates.
This is either a bug on Apple’s part, or kowtowing to China.
This guy was whining about the casting of Halle Bailey as Ariel in Disney's upcoming live-action remake of The Little Mermaid and he accidently came up with a wonderful idea for an Indian Cinderella which is discussed in this thread...
WASHINGTON, D.C.—As the end of Daylight Saving Time approaches, President Trump has declared that instead of turning the clocks back one hour, Americans will be turning them back to January 20, 2017, granting him an entire redo of his first term in office.
I think the people that get upset when someone refers to a pet as their child are just jealous because it’s not socially acceptable to let your human children eat the food you dropped on the kitchen floor.
NEW YORK, NY—Saturday Night Live has experienced some falling ratings recently, so producers needed something new to freshen things up. The idea they landed on? Making fun of President Trump.
I can’t tell you what’s going to happen to his blockbuster complaint about the president’s behavior, but I can tell you that the whistle-blower’s college writing instructor would be very proud of him.
As a writing instructor myself for 20 years, I look at the complaint and see a model of clear writing that offers important lessons for aspiring writers. Here are a few.
I thought the same thing reading the letter and its appendix — it’s a model of clarity and concision.
The PHP programming language is bizarre and, if nothing else, worthy of
anthropological study. The only consistent property of PHP is how badly
it’s designed, yet it somehow remains widely popular. There’s
a social dynamic at play here that science has yet to unlock.
I don’t say this because I hate PHP. There’s no reason for that: I don’t
write programs in PHP, never had to use it, and don’t expect to ever
need it. Despite this, I just can’t look away from PHP in the same way I
can’t look away from a car accident.
I recently came across a link to the PHP manual, and morbid curiosity
caused me to look through it. It’s fun to pick an arbitrary section
of the manual and see how many crazy design choices I can spot, or at
least see what sort of strange terminology the manual has invented to
describe a common concept. This time around, one such section was on
anonymous functions, including closures. It was even worse than
I expected.
In some circumstances, closures can be a litmus test. Closure semantics
are not complex, but they’re subtle and a little tricky until you
get the hang of them. If you’re interviewing a candidate, toss in a question
or two about closures. Either they’re familiar and get it right away, or
they’re unfamiliar and get nothing right. The latter is when it’s most
informative. PHP itself falls clearly into the latter. Not only that,
the example of a “closure” in the manual demonstrates a “closure”
closing over a global variable!
I’d been told for years that PHP has closures, and I took that claim at
face value. In fact, PHP has had “closures” since 5.3.0, released in
June 2009, so I’m over a decade late in investigating it. However, as
far as I can tell, nobody’s ever pointed out that PHP “closures” are, in
fact, not actually closures.
Anonymous functions and closures
Before getting into why they’re not closures, let’s go over how it
works, starting with a plain old anonymous function. PHP does have
anonymous functions — the easy part.
function foo() {
return function() {
return 1;
};
}
The function foo returns a function that returns 1. In PHP 7 you can
call the returned function immediately like so:
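That is, calling foo() and then immediately calling the function it returns:
$r = foo()(); // $r = 1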
In a well-designed language, you’d expect that this could also be a
closure. That is, it closes over local variables, and the function may
continue to access those variables later. For example:
function bar($n) {
return function() {
return $n;
};
}
bar(1)(); // error: Undefined variable: n
This fails because you must explicitly tell PHP what variables you
intend to access inside the anonymous function with use:
function bar($n) {
return function() use ($n) {
return $n;
};
}
bar(1)(); // 1
If this actually closed over $n, this would be a legitimate closure.
Having to tell the language exactly which variables are being closed
over would be pretty dumb, but it still meets the definition of a
closure.
But here’s the catch: It’s not actually closing over any variables. The
names listed in use are actually extra, hidden parameters bound to the
current value of those variables. In other words, this is nothing more
than partial function evaluation.
function bar($n) {
$f = function() use ($n) {
return $n;
};
$n++; // never used!
return $f;
}
$r = bar(1)(); // $r = 1
If $n were actually closed over, the increment would be visible and the
call would return 2. What PHP provides is really just partial application,
which JavaScript can express with bind():
function bar(n) {
let f = function(m) {
return m;
};
return f.bind(null, n);
}
This is actually more powerful than PHP’s “closures” since any arbitrary
expression can be used for the bound argument. In PHP it’s limited to a
couple of specific forms. If JavaScript didn’t have proper closures, and
instead we all had to rely on bind(), nobody would claim that
JavaScript had closures. It shouldn’t be different for PHP.
References
PHP does have references, and binding a reference to an anonymous
function is kinda, sorta like a closure. But that’s still just partial
function evaluation, where the argument is a reference.
Here’s how to tell these reference captures aren’t actually closures:
They work equally well for global variables as local variables. So it’s
still not closing over a lexical environment, just binding a reference
to a parameter.
$counter = 0;
function bar($n) {
global $counter;
$f = function() use (&$n, &$counter) {
$counter++;
return $n;
};
$n++; // now has an effect
return $f;
}
$r = bar(1)(); // $r = 2, $counter = 1
In the example above, there’s no difference between $n, a local
variable, and $counter, a global variable. It wouldn’t make sense for
a closure to close over a global variable.
Emacs Lisp partial function application
Emacs Lisp famously didn’t get lexical scope, and therefore closures,
until fairly recently. It was — and still is by default — a
dynamic scope oddball. However, it’s long had an apply-partially
function for partial function application. It returns a closure-like
object, and did so when the language didn’t have proper closures. So it
can be used to create a “closure” just like PHP:
(defun bar (n)
(apply-partially (lambda (m) m) n))
This works regardless of lexical or dynamic scope, precisely because this
construct isn’t really a closure, just like PHP’s isn’t a closure. In
PHP, its partial function evaluation is built directly into the language
with special use syntax.
Monkey see, monkey do
Why does the shell command language use sigils? Because it’s built atop
interactive command line usage, where bare words are taken literally and
variables are the exception. Why does Perl use sigils? Because it was
originally designed as an alternative to shell scripts, so it mimicked
that syntax. Why does PHP use sigils? Because Perl did.
The situation with closures follows that pattern, and it comes up all
over PHP. Its designers see a feature in another language, but don’t
really understand its purpose or semantics. So when they attempt to add
that feature to PHP, they get it disastrously wrong.
Nearly 15 years ago, the nofollow attribute was introduced as a means to help fight comment spam. It also quickly became one of Google’s recommended methods for flagging advertising-related or sponsored links. The web has evolved since nofollow was introduced in 2005 and it’s time for nofollow to evolve as well. Today, we’re announcing two new link attributes that provide webmasters with additional ways to identify to Google Search the nature of particular links. These, along with nofollow, are summarized below:
rel="sponsored": Use the sponsored attribute to identify links on your site that were created as part of advertisements, sponsorships or other compensation agreements.
rel="ugc": UGC stands for User Generated Content, and the ugc attribute value is recommended for links within user generated content, such as comments and forum posts.
rel="nofollow": Use this attribute for cases where you want to link to a page but don’t want to imply any type of endorsement, including passing along ranking credit to another page.
When nofollow was introduced, Google would not count any link marked this way as a signal to use within our search algorithms. This has now changed. All the link attributes -- sponsored, UGC and nofollow -- are treated as hints about which links to consider or exclude within Search. We’ll use these hints -- along with other signals -- as a way to better understand how to appropriately analyze and use links within our systems. Why not completely ignore such links, as had been the case with nofollow? Links contain valuable information that can help us improve search, such as how the words within links describe content they point at. Looking at all the links we encounter can also help us better understand unnatural linking patterns. By shifting to a hint model, we no longer lose this important information, while still allowing site owners to indicate that some links shouldn’t be given the weight of a first-party endorsement. We know these new attributes will generate questions, so here’s a FAQ that we hope covers most of those.
Do I need to change my existing nofollows? No. If you use nofollow now as a way to block sponsored links, or to signify that you don’t vouch for a page you link to, that will continue to be supported. There’s absolutely no need to change any nofollow links that you already have.
Can I use more than one rel value on a link? Yes, you can use more than one rel value on a link. For example, rel="ugc sponsored" is a perfectly valid attribute which hints that the link came from user-generated content and is sponsored. It’s also valid to use nofollow with the new attributes -- such as rel="nofollow ugc" -- if you wish to be backwards-compatible with services that don’t support the new attributes.
If I use nofollow for ads or sponsored links, do I need to change those? No. You can keep using nofollow as a method for flagging such links to avoid possible link scheme penalties. You don't need to change any existing markup. If you have systems that append this to new links, they can continue to do so. However, we recommend switching over to rel=”sponsored” if or when it is convenient.
Do I still need to flag ad or sponsored links? Yes. If you want to avoid a possible link scheme action, use rel=“sponsored” or rel=“nofollow” to flag these links. We prefer the use of “sponsored,” but either is fine and will be treated the same, for this purpose.
What happens if I use the wrong attribute on a link? There’s no wrong attribute except in the case of sponsored links. If you flag a UGC link or a non-ad link as “sponsored,” we’ll see that hint but the impact -- if any at all -- would be at most that we might not count the link as a credit for another page. In this regard, it’s no different than the status quo of many UGC and non-ad links already marked as nofollow. It is an issue going the opposite way. Any link that is clearly an ad or sponsored should use “sponsored” or “nofollow,” as described above. Using “sponsored” is preferred, but “nofollow” is acceptable.
Why should I bother using any of these new attributes? Using the new attributes allows us to better process links for analysis of the web. That can include your own content, if people who link to you make use of these attributes.
Won’t changing to a “hint” approach encourage link spam in comments and UGC content? Many sites that allow third-parties to contribute to content already deter link spam in a variety of ways, including moderation tools that can be integrated into many blogging platforms and human review. The link attributes of “ugc” and “nofollow” will continue to be a further deterrent. In most cases, the move to a hint model won’t change the nature of how we treat such links. We’ll generally treat them as we did with nofollow before and not consider them for ranking purposes. We will still continue to carefully assess how to use links within Search, just as we always have and as we’ve had to do for situations where no attributions were provided.
When do these attributes and changes go into effect? All the link attributes, sponsored, ugc and nofollow, now work today as hints for us to incorporate for ranking purposes. For crawling and indexing purposes, nofollow will become a hint as of March 1, 2020. Those depending on nofollow solely to block a page from being indexed (which was never recommended) should use one of the much more robust mechanisms listed on our Learn how to block URLs from Google help page.
As a mom of three, I take a lot of photos. This past weekend alone I took 280 photos and videos—and any parent can empathize with trying to get all kids to look at the camera, let alone smile, at the same time. With this many photos from everyday life, my Google Photos library is full of moments—many worth remembering—but sifting through all of these photos can be hard. To address this, we came up with a few new ways for you to get more out of Google Photos and relive the moments that matter.
A stroll down memory lane, right from the app
Certain points in the year make me extra nostalgic—birthdays, trips and holidays most of all—so I pull out my phone to look at old photos. You lose the warm and fuzzy nostalgic feeling when you have to scroll through hundreds of duplicate photos, so we’re putting your memories front and center in Google Photos.
Starting today, you’ll see photos and videos from previous years at the top of your gallery in a new feature we’re calling Memories. While you might recognize this stories format from social media, these memories are your personal media, privately presented to you so you can sit back and enjoy some of your best moments.
We’re using machine learning to curate what appears in Memories, so you don’t have to parse through many duplicate shots, and you can instead reflect on the best ones, where the photos have good quality and all the kids are smiling. We understand that you might not want to revisit all of your memories, so you’ll be able to hide certain people or time periods, and you have the option to turn this feature off entirely.
Sometimes, when you’re looking back, you know exactly what photo you’re looking for and our search in Google Photos makes it easy to find specific photos. If you want to find photos of your dad’s birthday you can just search his name and “birthday” to find all the relevant shots. But what about those photos where you don’t remember the exact date or occasion? To make it easy to find photos or screenshots that contain text—like a recipe—you can now search by the text in your photos. When you feel nostalgic for home cooking you can just search “carrot cake” and find your mom’s recipe right away.
Streamlined sharing with the people who matter
One of the best parts of revisiting your memories is sharing them with the people who made those moments special. In the coming months, it’ll be even easier to send photos directly to your friends or family within the app. Those photos will now be added to an ongoing, private conversation so there’s one place to find the photos you’ve shared with each other and keep the conversation going. And as always, photos you share in Google Photos are the same quality as the photos you back up and you can easily save photos shared with you to your library.
Off of your phone and into your home
Decorating your home with printed photos serves as a daily reminder of life's meaningful moments--big and small. You can already use Google Photos to quickly find and make your memories into a photo book. Now, you can use the same time-saving magic to print individual photos.
Starting today, you can order 4x6 photo prints directly from Google Photos and pick them up same day at CVS Pharmacy or Walmart, at over 11,000 locations with print centers across the U.S. Since your photos are automatically organized and searchable in Google Photos, you can order prints in just a few easy steps.
To brighten up any room with some of your favorite memories, like your summer vacation or your daughter’s Halloween costume last year—you can now also order canvas prints from Google Photos in the U.S., and they’ll be delivered straight to your home. We’ll also give you suggestions for the best photos to print on canvas. Canvas prints start at $19.99 and come in three different sizes, 8x8, 11x14, and 16x20, so they work for all types of spaces. You can put them on a shelf, prop them up at your desk, or hang them in your living room for everyone to see.
With all of these new features, you can relive your best memories, share them with the people that matter, and get them off of your phone and into your home.
ONTARIO, CA—The stories about Chick-fil-A employees' miraculous healing powers have spread far and wide. From employees saving people from choking to raising the dead, it's unclear how much is legend and how much is fact.
That’s the title of a new book by Gretchen McCulloch, a linguist I’ve posted about a number of times (first, I think, here), and The Walrus has a lengthy excerpt that’s full of interesting stuff, for example:
Remember how you learned about swearing? It was probably from a kid around your age, maybe an older sibling, and not from an educator or authority figure. And you were probably in early adolescence: the stage when linguistic influence tends to shift from caregivers to peers. Linguistic innovation follows a similar pattern, and the linguist who first noticed it was Henrietta Cedergren. She was doing a study in Panama City, where younger people had begun pronouncing “ch” as “sh”—saying chica (girl) as shica. When she drew a graph of which ages were using the new “sh” pronunciation, Cedergren noticed that sixteen-year-olds were the most likely to use the new version—more likely than the twelve-year-olds were. So did that mean that “sh” wasn’t the trendy new linguistic innovation after all, since the youngest age group wasn’t really adopting it?
Cedergren returned to Panama a decade later to find out. The formerly un-trendy twelve-year-olds had grown up into hyperinnovative twenty-two-year-olds. They now had the new “sh” pronunciation at even higher levels than the original trendy cohort of sixteen-year-olds, now twenty-six-year-olds, who sounded the same as they had a decade earlier. What’s more, the new group of sixteen-year-olds was even further advanced, and the new twelve-year-olds still looked a bit behind. Cedergren figured out that twelve-year-olds still have some linguistic growth to do: they keep imitating and building on the linguistic habits of their slightly older, cooler peers as they go through their teens, and then plateau in their twenties.
* * *
Researchers from Georgia Tech, Columbia, and Microsoft looked at how many times a person had to see a word in order to start using it, using a group of words that was distinctively popular among Twitter users in a particular city in 2013–2014. As we’d expect, they noticed that people who follow each other on Twitter are likely to pick up words from each other. But there was an important difference in how people learned different kinds of words. People sometimes picked up words that are also found in speech—like “cookout,” “hella,” “jawn,” and “phony”—from their internet friends, but it didn’t really matter how many times they saw them.
For rising words that are primarily written, not spoken—abbreviations like “tfti” (thanks for the information), “lls” (laughing like shit), and “ctfu” (cracking the fuck up) and phonetic spellings like “inna” (in a / in the) and “ard” (alright)—the number of times people saw them mattered a lot. Every additional exposure made someone twice as likely to start using them. The study pointed out that people encounter spoken slang both online and offline, so when we’re only measuring exposure via Twitter, we miss half or more of the exposures, and the trend looks murky. But people mostly encounter the written slang online, so pretty much all of those exposures become measurable for a Twitter study. The researchers also found that you’re more likely to start using a new word from Friendy McNetwork, who shares a lot of mutual friends with you, and less likely to pick it up from Rando McRandomFace, who doesn’t share any of your friends, even if you and Rando follow each other just like you and Friendy do.
* * *
Research in other centuries, languages, and regions continues to find that women lead linguistic change, in dozens of specific changes in specific cities and regions. Young women are also consistently on the bleeding edge of those linguistic changes that periodically sweep through media trend sections, from uptalk (the distinctive rising intonation at the end of sentences?) to the use of “like” to introduce a quotation (“And then I was like, ‘Innovation’”). The role that young women play as language disruptors is so clearly established at this point that it’s practically boring to linguists who study this topic: well-known sociolinguist William Labov estimated that women lead 90 percent of linguistic change in a paper he wrote in 1990. (I’ve attended more than a few talks at sociolinguistics conferences about a particular change in vowels or vocabulary, and it barely gets even a full sentence of explanation: “And here, as expected, we can see that the women are more advanced on this change than the men. Next slide.”) Men tend to follow a generation later: in other words, women tend to learn language from their peers; men learn it from their mothers.
She discusses gender skew, age and race, clusters (sports fans, parents, etc.), strong and weak ties (more weak ties leads to more linguistic change), a computer simulation with a network of 900 hypothetical people (“The researchers concluded that both strong and weak ties have an important role to play in linguistic change: the weak ties introduce new forms in the first place, while the strong ties spread them once they’re introduced”), and the like; it’s well worth reading the whole thing. Thanks, Kobi!
I’ve written an unfortunate amount of “useless” code in my career. In my personal experience, that’s code I write for a good reason at the time, like a user request for a feature, but it turns out nobody actually needed or wanted that feature. Or, perhaps, if I’m being naughty, it’s a feature I want to implement just for the sake of doing it, not because anybody asked for it.
The code’s useless because it never actually gets used.
Claude R found some code which got used a lot, but was useless from the moment it was coded. Scattered throughout the codebase were calls to getInstance(), as in, Task myTask = aTask.getInstance().
At first glance, Claude didn’t think much of it. At second glance, Claude worried that there was some weird case of deep indirection where aTask wasn’t actually a concrete Task object and instead was a wrapper around some factory-instantiated concrete class or something. It didn’t seem likely, but this was Java, and a lot of Java code will follow patterns like that.
So Claude took a third glance, and found some code that’s about as useful as a football bat.
public Task getInstance(){
return this;
}
To invoke getInstance you need a variable that references the object, which means you have a variable referencing the same thing as this. That is to say, this is unnecessary.
WASHINGTON, D.C.—At a Trump campaign fundraising supper Thursday night, the mood was somber. Trump had just informed everybody that he would soon be impeached but would be reinstated three days later.