Shared posts

11 May 21:20

A Personal Panopticon

by ed

Here is how you can use your Google Search History and jq to create a top 10 list of the things you’ve googled for the most.

First download your data from your Google Search History. Yeah, creepy. Then install jq. Wait for the email from Google that your archive is ready and download then unzip it. Open a terminal window in the Searches directory, and run this:

jq --raw-output '.event[].query.query_text' *.json \
  | sort | uniq -c | sort -rn | head -10

Here’s what I see for the 75,687 queries I’ve typed into google since July 2005.

309 google analytics
 130 hacker news
 116 this is my jam
  83 site:chroniclingamerica.loc.gov
  68 jquery
  54 bagit
  48 twitter api
  44 google translate
  37 wikistream
  37 opds

These are (mostly) things that I hadn’t bothered to bookmark, but visited regularly. I suspect there is something more compelling and interesting that could be done with the data. A personal panopticon perhaps.

Oh, and I’d delete the archive from your Google Drive after you’ve downloaded it. If you ever grant other apps the ability to read from your drive they could read your search history. Actually maybe this whole exercise is fraught with peril. You should just ignore it.

30 Apr 21:31

Advice from a Badass: How to make users awesome

by Mita Williams

Previously, whenever I have spoken or written about user experience and the web, I have recommended only one book: Don’t Make Me Think by Steve Krug.

Whenever I did so, I did so with a caveat: one of the largest drawbacks of Don’t Make Me Think is captured in the title itself : it is an endorsement of web design that strives to remove all cognitive friction from the process of navigating information. This philosophy serves business who are trying to sell products with a website but doesn’t sit well with who are trying to support teaching and learning.

Today I would like to announce that I hereby retire this UX book recommendation because I have found something better. Something several orders of magnitude better.

I would like to push into your hands instead a copy of Kathy Sierra’s Badass: Making users awesome. In this work, Kathy has distilled the research on learning, expertise and the human behaviors that make both of these things possible.



You can use the lessons in Badass towards web design. Like Don’t Make Me Think, Badass also recognizes there are times when cognitive resources need to be preserved, but unlike the Don’t Make Me Think, Badass Kathy Sierra advises when and where these moments in specific points should be placed in the larger context of the learner’s journey towards expertise.


You see, Badass: Making Users Awesome isn’t about making websites. It’s about making an expert Badass.


In her book, Sierra establishes why helping users become awesome can directly lead to the success of a product or service and and then builds a model with the reader to achieve this. I think it’s an exceptional book that wisely advises how to address the emotional and behavioural setbacks to learning new things without having to resort to bribery or gamification, neither of which work after the novelty wears off. The language of the book is informal but the research behind the words is formidable.

One topic that Badass covers that personally resonated was the section on the Performance Progress Path Map as a key to motivation and progress. I know that there is resistance in some quarters to the articulation of of learning outcomes by those who suspect that the exercise is a gateway to the implementation of institutional standards that will eliminate teacher autonomy, or eliminate teachers altogether. But these fears shouldn't come into play as it doesn't apply in this context and should not inhibit individuals from sharing their personal learning paths.


The reason why this topic hit so close to home was because I found learning to program particularly perilous because of the various ‘missing chapters’ of learning computing (a phrase I picked up from Selena Marie’s not unrelated Code4Lib 2015 Keynote, What Beginners Teach Us - you can find part of the script here from a related talk).


I think it’s particularly telling that some months ago, friends were circulating this picture with the caption: This is what learning to program feels like.


http://knowyourmeme.com/memes/how-to-draw-an-owl?ref=image-entry-link



There’s a real need with the FOSS moment to invest into more projects like the Drupal Ladder project, which seeks to specifically articulate how a person can start from being a beginner to become a core contributor.

Furthermore, I think there’s a real opportunity for libraries to be involved in sharing learning strategies, especially public libraries. I think the Hamilton Public Library is really on to something with their upcoming ‘Learn Something’ Festival.


Let’s not forget,

The real value of libraries is not the hardware. It has never been the hardware. Your members don’t come to the library to find books, or magazines, journals, films or musical recordings. They come to be informed, inspired, horrified, enchanted or amused. They come to hide from reality or understand its true nature. They come to find solace or excitement, companionship or solitude. They come for the software.

While the umbrella concept of User Experience has somewhat permeated into librarianship, I would argue that it has not traveled deep enough and have not made the inroads into the profession that it could. I’ve been thinking why and I’ve come up with a couple of theories why this is the case.

One theory is that many academic librarians who are involved in teaching have a strong aversion to ‘teaching the tool’. In fact, I’ve heard that the difference between ‘bibliographic instruction’ and ‘information literacy’ is that the former deals with the mechanics of searching, while ‘information literacy’ addresses higher-level concepts. While I am sympathetic to this stance (librarians are not product trainers), I also resist the ‘don’t teach the technology' mindset. The library is a technology. We can, and we have, taught higher level concepts through our tools.


As Sierra states, “Tools matter”.


But she wisely goes on to state:


“But being a master of the tool is rarely our user’s ultimate goal. Most tools (products, services) enable and support the user’s true -- and more motivating - goal.

Nobody wants to be a tripod master. We went to use tripods to make amazing videos.”


The largest challenge to the adoption of the lessons of Badass into the vernacular of librarianship is that Badass is focused squarely on practice.

“Experts are not what they know but what they do. Repeatedly.”


A statement like the above may be quickly dismissed by those in academia as the idea of practice sounds too much like the idea of tool use. (If it makes you feel better, dear colleagues, consider this restatement in the book: “Experts make superior choices (And they do it more reliably than experienced non-experts).”

Each discipline has a practice associated with it. I have previously made the case that the librarians regular activity of searching for information of others at the reference desk was the practice where our expertise was once made (the technical services equivalent would be the cataloguing of materials). 


But as our reference desk stats have plummeted (and our catalogue records copied from elsewhere), I still think the profession need to ask ourselves, where does the our expertise come from? Many of us don’t have a good answer for this, which is why I think so many librarians - academic librarians in particular - are frequently and viciously attacking the current state of library school and its curriculum, demanding rigor. To that I say, take your professional anxieties out on something else. A good educational foundation is ideal, but professional expertise is built through practice.

What the new practice of librarianship is from beyond the reference desk is still evolving. It appears that digital publishing and digitization is becoming part of this new practice. Guidance with data management and data visualizations appears to be part of our profession now too. For myself, I’m currently trying to level up my skills in citation management and its integration with the research and writing process.

That's because there has been more fundamental shift in my thinking about academic librarianship as of late that Kathy’s book has only encouraged. I would like to make the case that the most important library to our users isn’t the one that they are sitting in, but the one on their laptop. Their collection of notes, papers, images and research materials is really the only library that really matters to them. The institutional library (that they are likely only temporarily affiliated with) may feed into this library, but its contents cannot be trusted to be there for them always.


For an example, consider this: two weeks ago, I helped a faculty member with an Endnote formatting question. As I looked over her shoulder, I saw that her Endnote library on her laptop contained hundreds and hundreds of citations that had been collected and organized over the years and how this collection was completely integrated with her writing process. This was her library.

And despite not having worked in Endnote for years, I was able to help her with formatting question so she could submit her paper to a journal with its particularly creative and personal citation style. It seems that I have developed some expertise by working with a variety of citation managers over the years.

I wouldn’t call myself a Badass. Not yet. But I’m working on it.


And I’m working on helping others finding and becoming their own Badass self.


It’s been many years now, and so it bears repeating.
My professional mission as a librarian is this: Help people build their own libraries.

Because this is the business we’ve chosen

07 Apr 15:25

ER&L 2015 – Link Resolvers and Analytics: Using Analytics Tools to Identify Usage Trends and Access Problems

by Anna

Google Analytics (3rd ed)

Speaker: Amelia Mowry, Wayne State University

Setting up Google Analytics on a link resolver:

  1. Create a new account in Analytics and put the core URL in for your link resolver, which will give you the tracking ID.
  2. Add the tracking code to the header or footer in the branding portion of the link resolver.

Google Analytics was designed for business. If someone spends a lot of time on a business site it’s good, but not necessarily for library sites. Brief interactions are considered to be bounces, which is bad for business, but longer times spent on a link resolver page could be a sign of confusion or frustration rather than success.

The base URL refers to several different pages the user interacts with. Google Analytics, by default, doesn’t distinguish them. This can hide some important usage and trends.

Using custom reports, you can tease out some specific pieces of information. This is where you can filter down to specific kinds of pages within the link resolver tool.

You can create views that will allow you to see what a set of IP ranges are using, which she used to filter to the use by computers in the library and computers not in the library. IP data is not collected by default, so if you want to do this, set it up at the beginning.

To learn where users were coming from to the link resolver, she created another custom report with parameters that would include the referring URLs. She also created a custom view that included the error parameter “SS_Error”. Some were from LibGuides pages, some were from the catalog, and some were from databases.

Ask specific and relevant questions of your data. Apply filters carefully and logically. Your data is a starting point to improving your service.

Google Analytics (3rd edition) by Ledford, Tyler, and Teixeira (Wiley) is a good resource, though it is business focused.

You might also want to read:

  1. ER&L 2015 – Understanding Your Users: Using Google Analytics and Forms Speakers: Jaclyn Bedoya & Michael DeMars, CSU Fullerton There are...
  2. ER&L 2015 – Did We Forget Something? The Need to Improve Linking at the Core of the Library’s Discovery Strategy Speaker: Eddie Neuwirth, ProQuest Linking is one of the top...
  3. ER&L 2015 – Monday Short Talks: ERM topics [I missed the first talk due to a slightly longer...
07 Apr 15:08

“Streamlining access to Scholarly Resources”

by jrochkind

A new Ithaka report, Meeting Researchers Where They Start: Streamlining Access to Scholarly Resources [thanks to Robin Sinn for the pointer], makes some observations about researcher behavior that many of us probably know, but that most of our organizations haven’t succesfully responded to yet:

  • Most researchers work from off campus.
  • Most researchers do not start from library web pages, but from google, the open web, and occasionally licensed platform search pages.
  • More and more of researcher use is on smaller screens, mobile/tablet/touch.

The problem posed by the first two points is the difficulty in getting access to licensed resources. If you start from the open web, from off campus, and wind up at a paywalled licensed platform — you will not be recognized as a licensed user.  Becuase you started from the open web, you won’t be going through EZProxy. As the Ithaka report says, “The proxy is not the answer… the researcher must click through the proxy server before arriving at the licensed content resource. When a researcher arrives at a content platform in another way, as in the example above, it is therefore a dead-end.”

Shibboleth and UI problems

Theoretically, Shibboleth federated login is an answer to some of that. You get to a licensed platform from the open web, you click on a ‘login’ link, and you have the choice to login via your university (or other host organization), using your institutional login at your home organization, which can authenticate you via Shibboleth to the third party licensed platform.

The problem here that the Ithaka report notes is that these Shibboleth federated login interfaces at our  licensed content providers — are terrible.

Most of them even use the word “Shibboleth” as if our patrons have any idea what this means. As the Ithaka report notes, “This login page is a mystery to most researchers. They can be excused for wondering “what is Shibboleth?” even if their institution is part of a Shibboleth federation that is working with the vendor, which can be determined on a case by case basis by pulling down the “Choose your institution” menu.”

Ironically, this exact same issue was pointed out in the NISO “Establishing Suggested Practices Regarding Single Sign-on” (ESPReSSO) report from 2011. The ESPReSSO report goes on to not only identify the problem but suggest some specific UI practices that licensed content providers could take to improve things.

Four years later, almost none have. (One exception is JStor, which actually acted on the ESPReSSO report, and as a result actually has an intelligible federated sign-on UI, which I suspect our users manage to figure out. It would have been nice if the Ithaka report had pointed out good examples, not just bad ones. edit: I just discovered JStor is actually currently owned by Ithaka, perhaps they didn’t want to toot their own horn.).

Four years from now, will the Ithaka report have had any more impact?  What would make it so?

There is one more especially frustrating thing to me regarding Shibboleth, that isn’t about UI.  It’s that even vendors that say they support Shibboleth, support it very unreliably. Here at my place of work we’ve been very aggressive at configuring Shibboleth with any vendor that supports it. And we’ve found that Shibboleth often simply stops working at various vendors. They don’t notice until we report it — Shibboleth is not widely used, apparently.  Then maybe they’ll fix it, maybe they won’t. In another example, Proquest’s shibboleth login requires the browser to access a web page on a four-digit non-standard port, and even though we told them several years ago that a significant portion of our patrons are behind a firewall that does not allow access to such ports, they’ve been uninterested in fixing/changing it. After all, what are we going to do, cancel our license?  As the several years since we first complained about this issue show, obviously not.  Which brings us to the next issue…

Relying on Vendors

As the Ithaka report notes, library systems have been effectively disintermediated in our researchers workflows. Our researchers go directly to third-party licensed platforms. We pay for these platforms, but we have very little control of them.

If a platform does not work well on a small screen/mobile device, there’s nothing we can do but plead. If a platform’s authentication system UI is incomprehensible to our patrons, likewise.

The Ithaka report recognizes this, and basically recommends that… we get serious when we tell our vendors to improve their UI’s:

Libraries need to develop a completely different approach to acquiring and licensing digital content, platforms, and services. They simply must move beyond the false choice that sees only the solutions currently available and instead push for a vision that is right for their researchers. They cannot celebrate content over interface and experience, when interface and experience are baseline requirements for a content platform just as much as a binding is for a book. Libraries need to build entirely new acquisitions processes for content and infrastructure alike that foreground these principles.

Sure. The problem is, this is completely, entirely, incredibly unrealistic.

If we were for real to stop “celebreating content over interface and experience”, and have that effected in our acquisitions process, what would that look like?

It might look like us refusing to license something with a terrible UX, even if it’s content our faculty need electronically. Can you imagine us telling faculty that? It’s not going to fly. The faculty wants the content even if it has a bad interface. And they want their pet database even if 90% of our patrons find it incomprehensible. And we are unable to tell them “no”.

Let’s imagine a situation that should be even easier. Let’s say we’re lucky enough to be able to get the same package of content from two different vendors with two different platforms. Let’s ignore the fact that “big deal” licensing makes this almost impossible (a problem which has only gotten worse since a D-Lib article pointed it out 14 years ago). Even in this fantasy land, where we say we could get the same content from two differnet platforms — let’s say one platform costs more but has a much better UX.  In this continued time of library austerity budgets (which nobody sees ending anytime soon), could we possibly pick the more expensive one with the better UX? Will our stakeholders, funders, faculty, deans, ever let us do that? Again, we can’t say “no”.

edit: Is it any surprise, then, that our vendors find business success in not spending any resources on improving their UX?  One exception again is JStor, which really has a pretty decent and sometimes outstanding UI.  Is the fact that they are a non-profit endeavor relevant? But there are other non-profit content platform vendors which have UX’s at the bottom of the heap.

Somehow we’ve gotten ourselves in a situation where we are completely unable to do anything to give our patrons what we know they need.  Increasingly, to researchers, we are just a bank account for licensing electronic platforms. We perform the “valuable service” of being the entity you can blame for how much the vendors are charging, the entity you require to somehow keep licensing all this stuff on smaller budgets.

I don’t think the future of academic libraries is bright, and I don’t even see a way out. Any way out would take strategic leadership and risk-taking from library and university administrators… that, frankly, institutional pressures seem to make it impossible for us to ever get.

Is there anything we can do?

First, let’s make it even worse — there’s a ‘technical’ problem that the Ithaka report doesn’t even mention that makes it even worse. If the user arrives at a paywall from the open web, even if they can figure out how to authenticate, they may find that our institution does not have a license from that particular vendor, but may very well have access to the same article on another platform. And we have no good way to get them to it.

Theoretically, the OpenURL standard is meant to address exactly this “appropriate copy” problem. OpenURL has been a very succesful standard in some ways, but the ways it’s deployed simply stop working when users don’t start from library web pages, when they start from the open web, and every place they end up has no idea what institution they belong to or their appropriate institutional OpenURL link resolver.

I think the only technical path we have (until/unless we can get vendors to improve their UI’s, and I’m not holding my breath) is to intervene in the UI.  What do I mean by intervene?

The LibX toolbar is one example — a toolbar you install in your browser that adds instititutionally specific content and links to web pages, links that can help the user authenticate against a platform arrived to via the open web, even links that can scrape the citation details from a page and help the user get to another ‘appropriate copy’ with authentication.

The problem with LibX specifically is that browser toolbars seem to be a technical dead-end.  It has proven pretty challenging to get a browser toolbar to keep working accross browser versions. The LibX project seems more and more moribund — it may still be developed, but it’s documentation hasn’t kept pace, it’s unclear what it can do or how to configure it, fewer browsers are supported. And especially as our users turn more and more to mobile (as the Ithaka report notes), they more and more often are using browsers in which plugins can’t be installed.

A “bookmarklet” approach might be worth considering, for targetting a wider range of browsers with less technical investment. Bookmarklets aren’t completely closed off in mobile browsers, although they are a pain in the neck for the user to add in many.

Zotero is another interesting example.  Zotero, as well as it’s competitors including Mendeley, can succesfully scrape citation details from many licensed platform pages. We’re used to thinking of Zotero as ‘bibliographic management’, but once it’s scraped those citation details, it can also send the user to the institutionally-appropriate link resolver with those citation details — which is what can get the user to the appropriate licensed copy, in an authenticated way.  Here at my place of work we don’t officially support Zotero or Mendeley, and haven’t spent much time figuring out how to get the most out of even the bibliographic management packages we do officially support.

Perhaps we should spend more time with these, not just to support ‘bibliographic management’ needs, but as a method to get users from the open web to authenticated access to an appropriate copy.  And perhaps we should do other R&D in ‘bookmarklets’; in machine learning for citation parsing so users can just paste a citation into a box (perhaps via bookmarklet) to get authenticated access to appropriate copy; in anything else we can think of to:

Get the user from the open web to licensed copies.  To be able to provide some useful help for accessing scholarly resources to our patrons, instead of just serving as a checkbook. With some library branding, so they recognize us as doing something useful after all.


Filed under: General
26 Mar 21:40

JavaScript and Archives

by ed

Tantek Çelik has some strong words about the use of JavaScript in Web publishing, specifically regarding it’s accessibility and longevity:

… in 10 years nothing you built today that depends on JS for the content will be available, visible, or archived anywhere on the web

It is a dire warning. It sounds and feels true. I am in the middle of writing a webapp that happens to use React, so Tantek’s words are particularly sobering.

And yet, consider for a moment how Twitter make personal downloadable archives available. When you request your archive you eventually get a zip file. When you unzip it, you open an index.html file in your browser, and are provided you with a view of all the tweets you’ve ever sent.

If you take a look under the covers you’ll see it is actually a JavaScript application called Grailbird. If you have JavaScript turned on it looks something like this:

JavaScript On

If you have JavaScript turned off it looks something like this:

JavaScript Off

But remember this is a static site. There is no server side piece. Everything is happening in your browser. You can disconnect from the Internet and as long as your browser has JavaScript turned on it is fully functional. (Well the avatar URLs break, but that could be fixed). You can search across your tweets. You can drill into particular time periods. You can view your account summary. It feels pretty durable. I could stash it away on a hard drive somewhere, and come back in 10 years and (assuming there are still web browsers with a working JavaScript runtime) I could still look at it right?

So is Tantek right about JavaScript being at odds with preservation of Web content? I think he is, but I also think JavaScript can be used in the service of archiving, and that there are starting to be some options out there that make archiving JavaScript heavy websites possible.

The real problem that Tantek is talking about is when human readable content isn’t available in the HTML and is getting loaded dynamically from Web APIs using JavaScript. This started to get popular back in 2005 when Jesse James Garrett coined the term AJAX for building app-like web pages using asynchronous requests for XML, which is now mostly JSON. The scene has since exploded with all sorts of client side JavaScript frameworks for building web applications as opposed to web pages.

So if someone (e.g. Internet Archive) comes along and tries to archive a URL it will get the HTML and associated images, stylesheets and JavaScript files that are referenced in that HTML. These will get saved just fine. But when the content is played back later in (e.g. Wayback Machine) the JavaScript will run and try to talk to these external Web APIs to load content. If those APIs no longer exist, the content won’t load.

One solution to this problem is for the web archiving process to execute the JavaScript and to archive any of the dynamic content that was retrieved. This can be done using headless browsers like PhantomJS, and supposedly Google has started executing JavaScript. Like Tantek I’m dubious about how widely they execute JavaScript. I’ve had trouble getting Google to index a JavaScript heavy site that I’ve inherited at work. But even if the crawler does execute the JavaScript, user interactions can cause different content to load. So does the bot start clicking around in the application to get content to load? This is yet more work for a archiving bot to do, and could potentially result in write operations which might not be great.

Another option is to change or at least augment the current web archiving paradigm by adding curator driven web archiving to the mix. The best examples I’ve seen of this are Ilya Kreymer’s work on pywb and pywb-recorder. Ilya is a former Internet Archive engineer, and is well aware of the limitations in the most common forms of web archiving today. pywb is a new player for web archives and pywb-recorder is a new recording environment. Both work in concert to let archivists interactively select web content that needs to be archived, and then for that content to be played back. The best example of this is his demo service webrecorder.io which composes pywb and pywb-recorder so that anyone can create a web archive of a highly dynamic website, download the WARC archive file, and then reupload it for playback.

The nice thing about Ilya’s work is that it is geared at archiving this JavaScript heavy content. Rhizome and the New Museum in New York City have started working with Ilya to use pywb to archive highly dynamic Web content. I think this represents a possible bright future for archives, where curators or archivists are more part of the equation, and where Web archives are more distributed, not just at Internet Archive and some major national libraries. I think the work Genius are doing to annotate the Web, archived versions of the Web is in a similar space. It’s exciting times for Web archiving. You know, exciting if you happen to be an archivist and/or archiving things.

At any rate, getting back to Tantek’s point about JavaScript. If you are in the business of building archives on the Web definitely think twice about using client side JavaScript frameworks. If you do, make sure your site degrades so that the majority of the content is still available. You want to make it easy for Internet Archive to archive your content (lots of copies keeps stuff safe) and you want to make it easy for Google et al to index it, so people looking for your content can actually find it. Stanford University’s Web Archiving team have a super set of pages describing archivability of websites. We can’t control how other people publish on the Web, but I think as archivists we have a responsibility to think about these issues as we create archives on the Web.

26 Mar 21:21

A Bracelet DIY Using Old Comics

by costumewrangler

ComicBookUpCycleBracelet

I have a friend who creates awesome jewelry by recycling old comics, which got me thinking . . . what else can a person make with old comics? That’s how I found this tutorial on Oh! Rubbish! Blog. It’s a super easy DIY, with great pictures.  Plus, I’d bet magazines, newspapers, and old photos would probably work well with this idea too.  Just think of the possibilities!

02 Mar 22:37

A Very QWERTY Birthday

by KaraSchwee

My boyfriend’s birthday is this weekend, and I’m getting him a new smartphone(!) which has been quite the ordeal.  And apparently while I have been transfixed on trying to select and purchase the perfect gift on the sly, my shiftiness has become evident to my boyfriend, who smugly mentioned this afternoon that he “knew” I was throwing him a surprise party this weekend.  While this revelation was quite distant from reality, I acted like he’d caught me (to throw him off my smartphone scent)…but now I have to throw him a party, too.  So since everything in my life has been smartphone-centric for the past few days, I thought: Why not theme the party around….emojis?!

Fortunately, the decoration portion of my party planning was made incredibly simple when I came across these spot-on DIY Emoji Balloons from Studio DIY, which I’ll definitely be whipping up this weekend:
A Very QWERTY Birthday1
…and these hilarious Dancing Girls piñatas, also from Studio DIY:
A Very QWERTY Birthday2
Whew!  I’d better be receiving beaucoup smiley, heart, and kissy-filled texts next week for all this. :)

05 Feb 22:11

Agile Development: What is a User Story?

by Leo Stezano
Image courtesy of Paul Downey's photostream.
Image courtesy of Paul Downey’s photostream.

So far in this series, I’ve talked about the pros and cons of Agile, and reviewed the methodology’s core values. Today I want to move beyond the “what” and into more of the “how.” I’ll start by looking at user stories.

A user story is the basic unit of Agile development. User stories should be written by the business, not by the development team. They should clearly state the business value that the project is expected to create, as well as the user that will benefit. The focus should be on the problem being solved, not the software being built. This not only increases efficiency, but also provides flexibility for the development team: how they solve the problem is up to them.

There’s a generally accepted template for writing user stories: “As a [user type], I want to [specific functionality] so that [tangible benefit].” I’m not crazy about using this convention because it seems contrived to me, but it does make it easier to understand the priorities of Agile development: a feature exists to provide a benefit for a specific user or user group. If you can’t express functionality in this manner, then it is either superfluous or a technical requirement (there’s a separate document for those, which is written during and after development, not before).

A great user story should follow the INVEST model: user stories should be Independent, Negotiable, Valuable, Estimatable, Small, and Testable (you can read about this in more detail in the links provided below). The main thing to remember, though, is that we’re really just trying to create software where every component can be proven to solve a specific problem for a specific user. It all comes back to justifying programming effort in terms of the value it provides once it’s been released into the wild. Let’s look at some examples, based on developing a tool to keep track of tasks:

  • “As a task list creator, I can see all of my tasks together.” This story is too vague, and will result in developers guessing about the true purpose of this feature.
  • “As a task list creator, I can see all of my tasks together so I can download them to MS Excel.” This one is too specific. MS Excel is a technical requirement, and should not be part of the user story text. The real need is for a downloadable version of the task list; limiting it to one specific format at this point may lead to problems later on.
  • “As a task list creator, I can see all of my tasks together so I can download them.” This is better, but it still doesn’t answer the question of value. Why do I need to download the tasks? This looks ok, but reality I have created a false dependency between two separate features.
  • “As a task list creator, I can download a task list to so I can share it with project stakeholders.” Now we’re getting somewhere! The user needs to share the list with other members of the team, which is why she needs a downloadable version.

User story writing is iterative and investigative. At this point, I could argue that downloading, just like display, is an unnecessary step, and that the real feature is for some mechanism that allows all project members to see the task list, and the true need is for the team to work on the list together. That’s where the value-add is. Everything else is bells and whistles. Maybe downloading is the most efficient way to share the list, but that decision is part of the development process and should be documented in the technical requirements. Maybe there are other reasons to add a download feature; those belong on separate stories.

As a business-side stakeholder with an engineering background, my first attempts at creating user stories did not go well. I like to tinker and get my hands dirty, so trying to keep design out of my user stories proved difficult; it’s easy for me to get bogged down in technical details and focus on fixing what’s in front of me, rather than asking whether it’s even necessary in the first place. Any time design creeps into product requirements, it adds a layer of abstraction that makes it harder for a development team to understand what it is you really want. It took me a while to learn that lesson (you could argue that I’m still learning it). Besides, when a product owner gets involved in designing software, it’s hard to avoid creating an attachment to the specific design, regardless of whether it meets user needs or not. It’s best to stay out of that process altogether (no matter how much fun it may be) and maintain the focus on the user.

Writing user stories can be frustrating, especially if you’re new to Agile, but they are a great way to discover the true user needs that should drive your software development project. If you want to learn more about user stories, you can go here, here, or here. I’ll be back next month to talk about prioritization and scheduling.

What’s your experience with user stories? Do you have any tips on writing a great user story?

16 Jan 20:53

4 grants for innovation, libraries, and active learning

by admin

By Laura Devaney, eSchool News

School funding is a challenge even in the post prosperous of times, especially when it comes to ed tech–technology is always changing, and maintaining or upgrading initiatives, tools, or resources is not always free. Many educators and administrators rely on school grants to fund important projects and opportunities for students. Each month, eSchool News compiles a list of new education grant opportunities. This month’s grants address learning environments, innovation, and more. Check out these funding opportunities for teachers, students, parents, and administrators–there’s likely to be a grant that’s relevant to your needs.

http://www.eschoolnews.com/2015/01/09/january-school-grants-923/

Share on Facebook
16 Jan 20:30

6 Psychological Reasons Behind People’s Online Behavior

by Issa Mirandilla

At some point in your online life, you might have wondered: Why do trolls troll? Why does my friend have to flood my Facebook feed with by-the-minute updates about the weather? Why are forum discussions so heated?

Let’s take a closer look at these questions as psychology offers some answers.

Recommended Reading: 5 Ways “Tech Addiction” Is Changing Human Behaviour

The Internet Makes Us Less Inhibited

We know that people are more likely to “act out” – whether positively or negatively – online than in real life. The question is: Why? Psychologist John Suler thinks the answer lies in the phenomenon known as the online disinhibition effect.

In his paper, Suler postulates that the aforementioned effect happens due to 6 factors: dissociative anonymity (“They’ll never know who I really am”), invisibility (“We can’t see each other online”), asynchronicity (“I can always leave my message behind without consequence”), solipsistic introjection (“This is how I see you, in my mind”), dissociative imagination (“My online persona is different from who I am in real life”), and minimization of authority (“I can do whatever I want online”). Basically, the Internet blurs the boundaries that keep our behavior in check in real life.

So, the next time you have to deal with yet another online troll, take a deep breath, chalk it up to the “online disinhibition effect”, and either respond to the other person in a constructive manner, or just don’t feed the troll altogether.

We Share Stuff That Arouses Strong Emotions

In newsrooms, “bad news sells” is considered conventional wisdom. After all, people are hardwired to be more sensitive to the bad than the good, and are therefore more responsive to topics like terrorism and worldwide epidemics.

But if it’s true that we lean more towards negativity, how is it that stories of newcomers falling in love in NYC, gifsets of cute puppies, and articles like “The Ultimate Guide to Happiness” are as viral as – if not more viral than – bad news?

According to Jonah Berger of the University of Pennsylvania, it’s not the aroused emotion per se that makes us share, but rather the intensity of that aroused emotion. “Physiological arousal can plausibly explain transmission of news or information in a wide range of settings,” he writes. “Situations that heighten arousal should boost social transmission, regardless of whether they are positive (e.g. inaugurations) or negative (e.g. panics) in nature.”

(Over)sharing Is Intrinsically Rewarding

You probably cringe, at least once, at that friend who likes to post inane statuses like “OMG, why is the weather so hot today?”. But before you type something like “Who cares?” into your friend’s “Comments” section, consider this: It may be your friend’s way of feeling better about him/herself.

That’s the conclusion of two researchers from Harvard University, who found that self-disclosure activated brain regions associated with feelings of pleasure. By sharing opinions with others, people have the opportunity to (1) validate these opinions; (2) bond with others who share the same views; and (3) learn from those who may have opposing views.

We’re Either “Integrators” Or “Segmentors”

Not everyone is predisposed to over-sharing, though. According to this article , people either separate their personal and professional lives on social media, or they don’t. The former are known as “segmentors”, while the latter are called “integrators”.

Most people are segmentors, with good reason. Employers are known to use social media to screen candidates , and if they see even a single photo of you acting in a less-than-professional manner (e.g. getting drunk and vomiting all over your friend’s dinner table), you’re automatically weeded out of the employment pool.

On the other hand, there are people who care more about self-expression than the opinions of others. Teenagers and millennials, in particular, fit this profile, which is why these people tend to be integrators. Being an integrator can be a good or a bad thing, depending on the information shared (or, in most cases, over-shared).

We Rely on Gut Feelings, Rather than Facts, to Discern the Truth

We all like to think we’re rational beings. We laugh at stories of people who do things that are, in hindsight, stupid. But that’s in hindsight.

Actually, we’re all subject to biases that influence the way we evaluate the “truthiness” of things, as Stephen Colbert puts it .  For instance, people are more likely to believe a statement if it’s written in a “high contrast” manner (black words on white background) than a “low contrast” one (white words on an aqua blue background). That may sound ridiculous at first, until you consider how one of them is easier to read than the other. When a statement feels easier to process, it’s easier to think of that statement as the truth.

We See What We Want To See

Even if we’re presented with strong evidence against our personal beliefs, we hold on to those beliefs anyway. It’s not necessarily because we’re stupid; it’s because that’s the easiest way to respond to cognitive dissonance, or the discomfort caused by two conflicting ideas held within the same mind.

As a result, we often unconsciously twist facts to support our beliefs, rather than the other way around. This is known as confirmation bias , which – if left unchecked – can cause overly long and heated discussions in places like comments sections. Also, our tendency to assume that other people think the way we do (a.k.a. false consensus effect) complicates matters.

It’s not wrong to have opinions, per se. What’s wrong is when we insist that our opinions are superior to those of others, not because of facts, but because those are our opinions.

Conclusion

Understanding why people behave the way they do online can go a long way. It helps you get into the mindset of the vicious troll, the oversharing friend, and the people who don’t seem to have anything better to do than post kilometric discussions in forums. Best of all, it helps you understand yourself – and, by extension, other people – and figure out how to act accordingly.

No related posts.








09 Jan 20:11

5 Tools To Help Audit & Optimize Your CSS Codes

by Agus

Once your website starts to grow, so will your code. As your code expands, CSS may suddenly become hard to maintain, and you may end up overwriting one CSS rule with another. This complicates things and you will probably end up with plenty of bugs.

If this is happening to you, it’s time for you to audit your site’s CSS. Auditing your CSS will allow you to identify portions of your CSS that is not optimized. You can also reduce the stylesheet filesize by eliminating lines of code that is slowing down your site’s performance.

Recommended Reading: Why CSS Could Be The Hardest Language Of All

Here are 5 good tools to help you audit and optimize CSS.

1. Type-o-matic

Type-o-matic is a Firebug plugin to analyze fonts that are being used in a website. This plugin gives a visual report in a table, bearing font properties such as the font family, the size, weight, color, and also the number of times the font is used in the web page. Through the report table, you can easily optimize the font use, remove what is unnecessary, or combine styles that are way too similar.

Type-o-matic

2. CSS Lint

CSS Lint is a linting tool that analyzes the CSS syntax based on specific parameters that address for performance, accessibility, and compatibility of your CSS. You would be surprised with the results, expect a lot of warnings in your CSS. However, these errors will eventually help you fix the CSS syntax, and make it more efficient. Additionally, you will also be a better CSS writer.

CSS Lint

3. CSS ColorGuard

CSS ColorGuard is a relatively new tool. It’s built as a Node module and it runs across all platforms: Windows, OS X, and Linux. CSS ColorGuard is a command line tool that will notify you if you are using similar colors in your stylesheet; e.g. #f3f3f3 is pretty close to #f4f4f4, so you might want to consider merging the two. CSS ColorGuard is configurable, you can set the similarity threshold as well as set the colors you want the tool to ignore.

4. CSS Dig

CSS Dig is a Python script and works locally on your computer. CSS Dig will run a thorough examination in your CSS. It will read and combine properties e.g. all background color declarations will go underneath the background section. That way you can easily make decisions based on the report when trying to standardize your CSS syntax e.g. you may find color across styles with the following color declaration.

 color: #ccc; color: #cccccc; color: #CCC; color: #CCCCCC; 

These color declarations do the same thing. You might as well go with the #ccc or with the capital #CCC as the standard. CSS Dig can expose this redundancy for other CSS properties too, and you will be able to make your code be more consistent.

CSS Dig

5. Dust-Me

Dust-Me is an add-on for Firefox and Opera that will show unused selectors in your stylesheet. It will grab all the stylesheets and selectors that are found in your website and find which selectors you are actually using in the web page. This will be shown in a report, you can then press the Clean button and it will clean up those unused selectors and save it to a new CSS file.

You can download this tools from Firefox Addons page or the developer’s site, and if you are Opera fans you can get it from the Opera Extensions Gallery page.








09 Jan 20:10

8 Free Pattern Generators to Create Seamless Patterns

by Agus
Patterns are widely used in web design as a background. Basically, patterns can be defined as graphics used in repeated form on a field. If you find yourself facing difficulties in creating natural...

Visit hongkiat.com for full content.
09 Jan 19:48

Find In-Depth Articles on Google with a URL Trick

by Whitson Gordon

Find In-Depth Articles on Google with a URL Trick

If your Google search just isn't returning the quality content you want, this little URL trick might find more in-depth articles on the subject you're searching for.

Read more...








09 Jan 19:10

Excellent NPR Invisibilia finally hits the wires

by vaughanbell

A sublime new radio show on mind, brain and behaviour has launched today. It’s called Invisibilia and is both profound and brilliant.

It’s produced by ex-Radiolab alumni Lulu Miller and radio journalist Alix Spiegel – responsible for some of the best mind and brain material on the radio in the last decade.

The first episode is excellent and I’ve had a sneak preview of some other material for future broadcast which is equally as good.

It’s on weekly, and you can download or stream from the link below, and you can follow the show on the Twitter @nprinvisibilia.

Recommended.
 
Link to NPR Invisibilia.


17 Mar 22:06

A Monochrome Pattern of Uni-Kitty

by missy-tannenbaum
Hello! This isn't going to be a very long post, since I was doing homework all afternoon and it fried my brain a little. I did, however, want to post my lovely monochrome pattern of Uni-Kitty from The Lego Movie! I made this pattern because I thought it would be cute, so some very deep thought went into my creation of this work. I know Uni-Kitty has other fans, though, so hopefully, I'm not the only one who stitches this.

This pattern is sized near identically to the Legend of Zelda monochrome patterns, so it should end up being a little smaller than Benny if you're going to make both Lego Movie patterns that I've posted. If there are any more characters that you guys would like, let me know! These were fun to make, and I think they'll look nice stitched up. For now, though, here is the lovely cross stitch pattern of Uni-Kitty.

With that posted, I have another WIP of my cross stitch of Kyubey (from Madoka Magica), which is the only cross stitch that I've been working. It's not very exciting to look at, but I'm getting really far on this project. I might have it done within a week!
I'm done blogging now, but I still have stuff to post and stuff I'm working on, so I'll be back pretty soon! Until then, have a good start to your week!
14 Feb 20:51

CFP: Journal of Creative Library Practice

by Corey Seeman
The Journal of Creative Library Practice (JCLP) is accepting papers and manuscripts concerning library instruction.  We know that many librarians employ creative techniques to teach library instruction sessions, so if you have had success implementing a new method or idea, we would love to hear about it.  Below are some other topic ideas.  Did you:

·         Teach elementary school students how to code?
·         Demonstrate how to use a 3D printer?
·         Implement board game concepts into a session?
·         Create a new way to assess learning outcomes?
·         Teach a class using an established technique but with a twist?
·         Have students use a unique piece of software?

JCLP is an open access journal that publishes articles, opinion pieces, and peer-reviewed research upon acceptance. We encourage submissions about creative practices in all kinds of libraries. For more about the journal, see http://creativelibrarypractice.org/.

Barbara Fister
on behalf of the editors of the JCLP
14 Jan 16:17

A Place for Place

by Mita Williams
There has only been one department in the 375+ year history of Harvard that has ever been dismantled and that was the Geography Department.  Since then many other Geography Departments have been dealt a similar fate including the one at my My Own Place of Work which disappeared some years before I started my employment there. Some of its faculty remain at the university, either exiled to Sociology or Political Science or regrouped as Earth Sciences, depending on which of The Two Cultures they pledged allegiance to.

I have an undergraduate degree in Geography and Environmental Science and as such I sometimes feel that I'm part of an academic diaspora.

So after almost 20 years of librarianship I've made one of my sabbatical goals to ‘re-find my inner geographer.’ My hope is that through my readings I will be able to find and apply some of the theories and tools that geographers use in my own practice.

I think I have already found a good example to use a starting point as I try to explain in this post what sort of ground I'm hoping to explore and how it may apply to librarianship.



It came to me as I was browsing through the most recent issue of Antipode: The Radical Journal of Geography when my eyes immediately fell on an article whose topic was literally close to home. It was an article about migrant worker experiences in “South-Western Ontario”.

I had to download and scan most of the article before I could learn that what was being referred to as ‘South-Western Ontario’ was actually East of where I live. And that’s when I noticed that the official keywords associated with the article (migrant workers; agriculture; labour control; Seasonal Agricultural Workers Program) made no mention of place. And this struck me as a curious practice for a journal dedicated to *geography*.

But I know better to blame the editors of Antipode for this oversight. The journal is on the Wiley publishing platform (which they call the “Wiley Online Library”, huh) which provides a largely standardized reading experience across the disciplines. On one hand, it’s understandable that location isn't a standardized metadata field for academic articles as many papers in many disciplines aren't concerned with a particular place. On the hand, I do think that is telling that the within academia there is  much more care and effort dedicated to clarifying the location of the author rather than that of that of the subject at hand.

(I will, however, blame the editors for using the phrase ‘South-Western Ontario’ when the entire world uses ‘Southwestern Ontario” in reference to these parts. Their choice of spelling means if you search the “Wiley Online Library” for Southwestern Ontario, the article in question does not even show up.)

There is another reason why I'm concerned that the article at hand doesn't have a metadata field to express location and that is this: without a given location, the work cannot be found on a map. And that’s going to increasingly be a problem because the map is increasingly where we will live.

Let me explain what I mean by that.

You may know that Google became the pre-eminent search engine based on the strength of its PageRank algorithm which, unlike its peers at the time, created relevance rankings that takes into account the number of incoming links to that page as a means to establish authority and popularity and make it less immune to spam.

In those heady, early days of the Internet finding news and more from around the world was deliriously easy. Oddly enough one of the challenges of using the Internet back then was that it was hard to find info about the features of your small town. The Internet was wonderfully global but not very good at the local.

But now, in 2014, when I search for the word ‘library’ using Google and I receive my local library system as the first result.



This is because Google is now thought to incorporate 200 some factors in its page ranking.

And one of the most important factors is location.

In fact, I would go so far to say that, just like real estate, the three of the most important factors for search is location, location, location.

It's location because if you search for political information while in Beiing your experience using the Internet is going to be significantly different from that of Berlin because of government enforced filtering and geofencing.

It's location because if you search for Notre Dame in the United States you are probably going to get something related to football rather than a cathedral in Paris.

And it's location because so much of our of information seeking is contextual based. If I'm searching for information about a particular chemical additives while at a drug store, it’s probably because I'm about to make a consumer choice about a particular shampoo and not because I need to know that chemical's melting point.

(An aside: imagine if by the very act of entering a library space, the context of your searches were automatically returned as more scholarly. Imagine if you travelled to different spaces on a campus, your searches results would be factored automatically by the context of a particular scholarly discipline?)

While it’s difficult to imagine navigating a map of research papers, it is much easier to understand and appreciate how a geographical facet could prove useful in other interfaces. For example, if I'm looking for articles about about a whether particular social work practice conforms to a particular provincial law in Canada, then the ability to either pre-select articles from that province or filter articles to a list of results pertaining to that province could prove quite useful.

It's surprising how few of our library interfaces have this ability to limit by region. Summon doesn't. Neither does Primo. But Evergreen does and so does Blacklight.





There are other examples of using maps to discover texts. OCLC has been experimenting with placing books on a map. They were able to do so by geocoding Library of Congress Subject Heading Geographical Subdivisions that they parsed so that they can be found on a map on a desktop or nearby where you are while holding a mobile phone.





And there are many, many projects that seek to place digitized objects on a map, such as the delightful HistoryPin which allows you to find old photos of a particular place but of a different time visible only when when you look through the world through the magical lens of your computer or your mobile phone.

Less common are projects that make the actual texts (or, as we say in the profession, the full text) accessible in particular places outside of the library. One of my favourites among such projects is the work of Peter Rukavina, who has placed a PirateBox near a park bench in Charlottetown, PEI, that makes available a wide variety of texts: works of fiction (yes, about that red-headed girl), a set of city bylaws, and a complete set of community histories from the IslandLives repository.

When you think about embedding the world with hidden layers of images and text that can only be unlocked if you know their secrets, well, that sounds to me like a gateway to a whole other world of experience, namely games, and ARGs or alternate reality games in particular. Artists, museums, and historians have created alternate reality games that merge the physical exploration of place with narrative, and in doing so have created new forms of story writing and storytelling.

Personally, I think it's very important that libraries become more aware of the possibilities of in situ searching and discovery in the field, and there are many fields worth considering. Over the holiday break, I bought the Audubon Birding App, which acts as a field guide and reference work, includes a set of vocal tracks for each bird to help with identification, lets me keep my personal birding life list, and provides a means to report geocoded bird sightings to eBird, all while being half the price of a comparable print work. We, the people of print, have a tendency to dismiss and scoff at talk of the end of the print book, but I don't see any of the reference works on our shelves providing this degree of value to our readers.

In my opinion, there's not enough understanding of this potential future of works that take the context of place into account. Otherwise, why would our institutions force our users to visit a physical library in order to access a digitized copy of historical material that we may already hold in our collection, just on microfilm?

So, as you can see, there's a lot of territory for me to explore during the next 12 months, and I think I'm going to start by going madly off in all directions.

I do hope that by the end of this time I will have made a convincing argument to my peers that we have an opportunity here to do better. I hope that one day the article that started this train of thought, the one about migrant agricultural workers in South-Western Ontario, will, when and if it is included in a library-maintained institutional repository, have a filled-out location field.

And then perhaps one day, those in the future who will work those fields in South-Western Ontario can discover it where they work.

14 Jan 15:03

TARDIS Gingerbread House

by AmyLynn98

It’s not too far past the holidays for gingerbread yet, is it?

No? Alright, then check out this gingerbread TARDIS by Reddit user Fortunekitty. She went so far as to post her own tutorial for other Whovians to make their own holiday treat.


Fortunekitty’s TARDIS is made from white chocolate mixed with blue food coloring for the icing, gingerbread, and edible sugar paper for the printed signs.

Would this count at all towards the Doctor someday being a ginger? Probably not, but what a way to try!

14 Jan 14:56

Tardar Sauce the Grumpy Cat Amigurumi Doll

by AmyLynn98

Everyone’s favorite internet cat, Tardar Sauce, aka Grumpy Cat, returns to Geek Crafts! Now, she’s been crocheted into a grumpy amigurumi toy.


Created by Npantz22, who has Grumpy's crochet pattern for sale at her Etsy shop. The pattern makes a doll about 6 inches tall.

09 Dec 23:16

Educational Technology and Related Education Conferences

by admin

by Patrick R. Lowenthal, Instructional Technologist

Upcoming educational technology and related education conferences (January to June 2014 and beyond), including instructional design and technology and online learning conferences. The original list was prepared by Clayton R. Wright, November 13, 2013. I have shortened it to the conferences that interest me (due either to their content or their location).

http://patricklowenthal.com/2013/11/educational-technology-related-education-conferences/

09 Dec 22:37

30+ Tumblr Tips, Tricks, and Tools (2019)

by Hongkiat.com
Tumblr has been one of the most famous social media platforms, and its 450 million blogs and 167 billion posts (Dec 2018 stats) vouch for its popularity. You can either create your own blog on...

Visit hongkiat.com for full content.
03 Dec 21:59

Awesome Fridge of Magnets

by Starrley

I love that all these perler magnets were planned out specifically for putting them on the refrigerator.

via [imgur]

02 Dec 22:47

A Look Into: HTML5 Download Attribute

by Thoriq Firdaus

Creating a download link is usually an easy task. All we need to do is use an anchor tag <a>, and add the reference URL pointing to the file. But some file types pose a technical problem – PDF, image and text files will open in the browser instead of being downloaded when a user clicks on the relevant link(s).

In the past, forcing these files (PDF, image, text, etc.) to download required complicated setups and server-side hacks. For that reason, HTML5 introduced a new attribute called download, which is much easier to implement.
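For comparison, here is a minimal sketch of that older server-side approach, assuming a small Flask application: the file is served with a Content-Disposition: attachment header so the browser downloads it instead of rendering it. The route and file paths are made up for illustration.

from flask import Flask, send_file

app = Flask(__name__)

@app.route("/download/report")
def download_report():
    # Serve the PDF, then override the headers to force a download
    # under a friendlier file name.
    response = send_file("files/e4ptK9qd7bGT24e.pdf")
    response.headers["Content-Disposition"] = 'attachment; filename="report.pdf"'
    return response

The download attribute lets the markup alone accomplish the same thing, with no server configuration at all.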

Recommended Reading: A Look Into: HTML5 <Article> And <Section> Elements

Using Download Attribute

The download attribute does two things: it forces the file to download, and it renames the file with the name specified in the attribute.

For example, we have here a PDF and an Image file that are named randomly.

 <a href="file/e4ptK9qd7bGT24e.pdf">Download PDF</a> <a href="file/KU7Ba93M7t7ghbi.jpg">Download Image</a> 

So, without the download attribute, these two files will open in the browser.

But when we add the download attribute like so:

 <a href="file/e4ptK9qd7bGT24e.pdf" download="10 Things You Should Know About Passion.pdf">Download PDF</a> <a href="file/KU7Ba93M7t7ghbi.jpg" download="wii.jpg">Download Image</a> 

The files will be downloaded and renamed, as shown in the following screenshot.

We have created a demo page for you to see this attribute in action.

Conclusion

HTML5 has introduced some new elements and attributes that make life easier for web developers. This download attribute is indeed a very handy addition. Unfortunately, browsers have been slow to catch up: it's currently only supported in Firefox 20+, Chrome 14.0 and Opera 15.0.

02 Dec 22:45

How to Convert Photoshop Text Into SVG [Quicktip]

by Thoriq Firdaus

With the advent of high-definition screens, web designers now have to make sure that the images they use are optimized for HD. Skipping this process may render websites blurry or pixelated, leaving a not so good impression on visitors. One of the best ways to deal with HD screens is to use Vector Graphic whenever possible.

Vector graphics are scalable to any size, so they will look great on HD screens. In this post, we would like to share a quick tip on how to convert your Photoshop text into SVG. If you have, for example, a text-based logo in your design, you'll probably find this tip very useful.

Recommended Reading: Scalable Vector Graphic Series

Photoshop Stage

For this example, we will use a simple text-based logo created using the Pacifico font family (screenshot).

On the Layers tab in Photoshop, right-click on the text layer and select Convert to Shape (screenshot).

Then, save the file in Photoshop EPS format.

Illustrator Stage

Open the EPS file in Adobe Illustrator. You should see that the text is now a vector object.

In this stage, you can make a few adjustments such as removing unnecessary layers, changing the background colors, or resizing. To resize the document in Illustrator, just go to File > Document Setup and select Edit Artboards.

You can use your mouse to resize the Artboard, or specify the size more accurately by filling in the Width (W) and the Height (H).

Next, save the file in SVG format, which is the default option. And that’s it.

Conclusion

Converting the text into a Shape ensures compatibility across multiple computers: when you work remotely, different fonts installed on different systems may cause a "Font missing" error. This can also happen with different versions of Adobe Illustrator or Photoshop. Turning the text into SVG helps minimize or eliminate these compatibility issues.

26 Nov 21:49

Google scholar adds citation management

by jrochkind

Thanks to Clarke, who pointed out in a comment on a recent post here that Google Scholar now has a “saved citations” citation management feature.

I haven't done any experimenting with it; anyone have a review? What do you think: is this going to end up drawing a significant portion of our patrons' use away from other citation management alternatives (including some we pay for)?

Google Scholar Library. 

Today we’re launching Scholar Library, your personal collection of articles in Scholar. You can save articles right from the search page, organize them by topic, and use the power of Scholar’s full-text search & ranking to quickly find just the one you want – at any time and from anywhere. You decide what goes into your library and we’ll provide all the goodies that come with Scholar search results – up to date article links, citing articles, related articles, formatted citations, links to your university’s subscriptions, and more. And if you have a public Scholar profile, it’s easy to quickly set up your library with the articles you want – with a single click, you can import all the articles in your profile as well as all the articles they cite.


Filed under: General
22 Nov 20:06

Juxtaposing random Tweets with unprotected IP-based CCTV intercepts

by Cory Doctorow


Michael writes, "When working on SurveillanceSaver (a screen saver displaying random unprotected IP cameras) in 2008, I placed early Twitter messages on the surveillance cameras' images. The results ranged from hilarious to Ballardian."

Random Twitter messages on surveillance cameras’ images (Thanks, Michael!)

22 Nov 17:50

Be A Web Developer From Scratch Video Course [Deal]

by Hongkiat.com

Although there are plenty of great services online that let you build without knowing code, there is always a limit to what the service (and hence, you) can do. However, if you know code, you can step out of that box and let your creativity take you to places you have never been. But let's not get too far ahead of ourselves.

Want to learn to code? You might like what we have for you in this Deal.

Become a Web Developer from scratch with this video course which contains more than 230 video lectures and over 40 hours of actionable content. You will learn:

  • XHTML
  • CSS
  • JavaScript
  • PHP
  • XML
  • JSON
  • AJAX
  • jQuery
  • MySQL database
  • HTML5 / CSS3

The instructor for this bundle is Victor Basos, a web developer and designer with 6 years of experience who has worked with development companies in various countries.

At the end of each chapter, Basos will show you how to put what you’ve learned into practical use – in building an app from scratch! Better yet, the source code is yours to keep and can be downloaded.

Along with that you have a support team who are at hand to answer your questions within 24 hours, as well as an awesome community to bounce your ideas off. The course is accessible on the iPad, iPhone and computer and is now available at $39.99 (79% off).

21 Nov 18:56

Foxgloves - gardening gloves that don't dull your sense of touch

by Cool Tools

I have a very hard time keeping gloves on my hands when I’m gardening, my fingers seem to long to skip and go naked in the dirt. Foxgloves are the exception to the rule, in part because of their extraordinary sensitivity. You can feel the texture of the dirt, grab remarkably fine weeds for pulling, and when you’re done, the skin on your hands is not dried, dirty, or cracked, and there is no dirt under your fingernails. They protect your hands from blisters, and provide a modicum of warmth. Best of all, they’re gloves I actually wear!

That said, these are not the gloves for dealing with spiky thistles or blackberry vines. The thorns pass right through these gloves as though they aren’t even there. But for grubbing in the dirt and weeding everything that doesn’t have spikes, these gloves are excellent. -- Amy Thomson

Foxgloves Original $21

21 Nov 18:55

Because is a new, Internet-driven preposition, because grammar

by Cory Doctorow

The English language has a new preposition, driven by Internet conventions: "Because." It's not clear where this originates, but I like the theory that it's a contraction of "$SOMETHING is $MESSED_UP, because, hey, politics!"

However it originated, though, the usage of "because-noun" (and of "because-adjective" and "because-gerund") is one of those distinctly of-the-Internet, by-the-Internet movements of language. It conveys focus (linguist Gretchen McCulloch: "It means something like 'I'm so busy being totally absorbed by X that I don’t need to explain further, and you should know about this because it's a completely valid incredibly important thing to be doing'"). It conveys brevity (Carey: "It has a snappy, jocular feel, with a syntactic jolt that allows long explanations to be forgone").

But it also conveys a certain universality. When I say, for example, "The talks broke down because politics," I'm not just describing a circumstance. I'm also describing a category. I'm making grand and yet ironized claims, announcing a situation and commenting on that situation at the same time. I'm offering an explanation and rolling my eyes—and I'm able to do it with one little word. Because variety. Because Internet. Because language. 

English Has a New Preposition, Because Internet [Megan Garber/The Atlantic]

(via Making Light)

21 Nov 18:40

library vendor wars

by jrochkind

We libraries, as customers, would prefer to be able to ‘de-couple’ content and presentation. We want to be able to decide what content and services to purchase based on our users’ needs and our budgets; and separately to decide what software to use for UX and presentation — whether proprietary or open source — based on the features and quality of the software, and our budgets.

To make matters more complicated, we want to take our content and services — purchased from a variety of different vendors — and present them to our users as if they were one single ‘app’, one single environment, as if the library were one single business. This makes matters more complicated, but it also makes this ‘de-coupling’ of the UX layer from the underlying content and services even more important. Because if the content and services we purchase from various vendors are tied only to those vendors’ own custom interfaces and platforms, there’s no way to present them to users as a unified, integrated whole. (How would you feel about Amazon or Netflix if they made you use one website for Science Fiction, and a completely different website that looked and behaved completely differently for History?)

Of course, our vendors have different interests.  A vendor of content and services could decide that the more places their content and services can be used, the more valuable those content and services are — so they’d want to allow their content and services, once purchased, to be used in as wide a variety of proprietary and open source UX’s as possible. Or a vendor could decide that approach dilutes their brand, instead they should use their content and services as ‘lock in’ to try and ‘vertically integrate’ and get existing customers to buy even more products from them. You want these journals to be available in your ‘discovery’? Then you better buy our discovery platform, because that’s the only place these journals are available, and besides we’ll cut you a ‘big deal’ discount when you buy our discovery product too.

I am honestly not really sure which approach is better for the vendors. But I know which approach is better for the libraries. Library and vendor interests may not be aligned here, at least in the short- and medium-terms. In the long range view, certainly our vendors need us to survive as customers, and we need some vendors to exist to sell us things we can’t feasibly provide in-house or through consortium alone.

The attempt to ‘lock in’ by various vendors will make it impossible for us to present services in the integrated UX that is necessary for us to remain credible and valuable to our users. We’ll have vendor-purchased content and services available only in a number of separate vendor ‘silos’ or ‘walled gardens’. It’s not actually a question of purchase costs; it’s an issue of pure technical feasibility. We’ll either start limiting our purchases to one vertically integrated vendor (which every vendor would be happy with, as long as we pick them), or we’ll continue to deliver content and services as a patchwork of different pieces fitting poorly together, confusing our users and further degrading the perception of the library as a competent organization.

Here's an email sent out today from Ex Libris. I don't know of any reason I would not be allowed to share it publicly; I hope there isn't one I am not aware of.

Dear Primo Central Customers,

This is to inform you that Thomson Reuters has decided to withdraw its Web of Science content from Primo Central starting January 1, 2014. We understand this decision encompasses all the major library discovery solutions.

Thomson Reuters informed us that they are not planning a broad market communication of any sort; rather, they will communicate through their representatives on an individual customer basis. The message below is adapted from the information that Thomson Reuters is sharing with individual customers:

“Thomson Reuters has decided to focus on enabling customers and end users to use the Web of Science research discovery environment as the primary interface for authoritative search and evaluation of citation connected research. For this reason Thomson Reuters will no longer make Web of Science content available for indexing within EBSCO, Summon, or Primo Central. Thomson Reuters will, however, continue to support Web of Science accessibility via integrated federated search tools that are available in Primo or other systems.”

The impact of this decision on your end users will be limited because the vast majority of the Web of Science records are available in Primo Central via Elsevier Scopus and other resources of similar quality. The Scopus collection is now fully indexed in the Primo Central Index and is searchable by mutual customers of Scopus and Primo Central.

If you have any comments or additional questions, please feel free to contact [omitted]

Kind Regards,

Primo Central Team

 


Filed under: General