Shared posts

07 Oct 18:41

40 Maps That Explain the Internet

by James Fee

I have no idea what a map story or story map is, but I do love maps. Timothy Lee over at Vox has compiled 40 maps that show the growth of the Internet since 1969.

The internet increasingly pervades our lives, delivering information to us no matter where we are. It takes a complex system of cables, servers, towers, and other infrastructure, developed over decades, to allow us to stay in touch with our friends and family so effortlessly. Here are 40 maps that will help you better understand the internet — where it came from, how it works, and how it’s used by people around the world.

20 Oct 10:30

Women in Data Science Are Invisible. We Can Change That

by Claudia Perlich

I have to admit that I never really gave the number of women in data science much thought until recently. Maybe it was because, by some lucky accident, my NYU faculty advisor’s two other PhD students also happened to be female. And about half of my predictive modeling group peers at IBM Research were female. […]

The post Women in Data Science Are Invisible. We Can Change That appeared first on WIRED.

20 Oct 17:07

Open data for open lands

by Tim O'Reilly

President Obama’s well-publicized national open data policy (pdf) makes it clear that government data is a valuable public resource for which the government should be making efforts to maximize access and use. This policy was based on lessons from previous government open data success stories, such as weather data and GPS, which form the basis for countless commercial services that we take for granted today and that deliver enormous value to society. (You can see an impressive list of companies reliant on open government data via GovLab’s Open Data 500 project.)

Based on this open data policy, I’ve been encouraging entrepreneurs to invest their time and ingenuity to explore entrepreneurial opportunities based on government data. I’ve even invested (through O’Reilly AlphaTech Ventures) in one such start-up, Hipcamp, which provides user-friendly interfaces to making reservations at national and state parks.

A better system is sorely needed. The current reservation system, managed by Active Network / Reserve America, is clunky and almost unusable. Hipcamp changes all that, making it a breeze to reserve camping spots.

But now this is under threat. Active Network / Reserve America’s 10-year contract is up for renewal, and the Department of the Interior had promised an RFP for a new contract that conformed with the open data mandate. Ideally, that RFP would require an API so that independent companies could provide alternate interfaces, just like travel sites provide booking interfaces for air travel, hotels, and more. That explosion of consumer convenience should be happening for customers of our nation’s parks as well, don’t you think?

Unfortunately, as drafted, the RFP allows the winning contractor to determine whether this type of API is feasible — meaning the entity who benefits most from keeping all the reservations to themselves has the authority to do so. And the draft even removes data that is currently accessible — availability data — from public sharing. So, it’s a huge step backward, and completely out of step with the administration’s open data policy.

The founder of Hipcamp, Alyssa Ravasio, has written up some thoughts on how this RFP could be improved that I want to share with you. And after reading them, I’m hoping you will be moved to send comments asking the Department of the Interior to reconsider their RFP.

Comments on this draft are accepted through Wednesday, October 22, so if you agree, please email and let him know that opening up this system is important.

Here is Alyssa’s more detailed write-up.

Open data for open lands

The federal government has released a draft contract that will define how we access our public lands for the next decade. They are seeking a private contractor to build software that will provide online access to our nation’s parks, forests, monuments, campsites, cabins, and tours.

The problem + the context

As drafted, this contract places all this inventory and its associated revenue into the hands of one contractor and one website, creating a closed silo and a monopoly.

This would be a huge step backward from all of the Obama administration’s great work encouraging federal agencies to use open data to build platforms. In the words of President Obama, he wants to “make sure that we’re giving entrepreneurs the ability, if we build an effective platform, to essentially develop apps that work off this new information.”

The key to making open data accessible is an API, which allows different applications to share data. The government now offers federal agencies an API management service called API.Data.Gov. The first principle of our National Data Policy is “openness,” and the Digital Services Playbook tells agencies to “default to open.”

Examples of government platforms

The open data platform model of public-private partnerships has already seen huge successes. Public transit agencies opened up their data to popular mapping applications like Google Maps, making public transit accessible and relevant to a wider demographic.

The IRS created e-file, which has been called “the blueprint for public-private partnerships.” The e-file platform enabled and inspired services like TurboTax and TaxAct to build easy ways for citizens to file their taxes online. Market forces of competition incentivize these companies to constantly improve their products, resulting in better experiences for the taxpayers and less paperwork for the government.

The solution

Instead of consolidating data and revenue into one contractor building one website, the government should design the new system as an open data platform with a revenue incentive, giving “entrepreneurs the ability…to essentially develop apps that work off this new information.”

This would allow an ecosystem to bloom, inspiring multiple services to compete in helping people get outdoors. The benefits of getting more people outdoors include increased revenue for the government; a boost in business for the outdoor industry; and, perhaps most importantly, a larger population connecting with nature and developing a passion for ensuring its protection and preservation for future generations.

There are two core requirements missing in this draft contract that are needed to unlock this potential.

  1. API first

    An API needs to be a primary requirement of this contract, to be completed first. It should include static data (park name, location), real-time data (availability, pricing), and “write” functionalities that allow for the automated processing of transactions.

    The new website should use this API, as should the general public and licensed third parties (all with different access levels). Starting with an API as the core and using it for both internal and external purposes is a software industry best practice known as “eating your own dog food,” and has a tremendously positive impact on security, sustainability, and flexibility of the software.

    As drafted now, the contract leaves it up to the contractor to determine whether this type of API is feasible, meaning the contractor who can profit from keeping this data to themselves has the authority to do so. This is a clear conflict of interest and must be changed.

  2. Align incentives

    The winning contractor should receive an annual fee for the service and support of the system, but should then compete for the transactional revenue. If a sale occurs on the website they’ve built, they earn the commission. But if the sale occurs on a third-party site (which has successfully applied for a license to process transactions), then the third party earns the commission instead. The government receives the same amount of revenue either way.

    This is the only way to incentivize the contractor to offer an outstanding service. If they are given a monopoly, they will do the bare minimum and nothing more. By adding competition for transactional revenue, the contractor will be subject to market forces and will naturally be driven to improve their product and services.
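The two requirements above can be sketched together in code. This is a purely hypothetical illustration, not anything from the actual RFP: the class, field names, park, and commission rate are all invented, and a real system would sit behind an HTTP API rather than in-process calls.

```python
# Hypothetical sketch of the two requirements above (all names and
# numbers invented): one API for everyone, commission follows the sale.
from dataclasses import dataclass

@dataclass
class Campsite:
    park: str        # static data
    site_id: str
    available: bool  # real-time data
    price: float

class ReservationAPI:
    """One API consumed by the contractor's own website *and* licensed
    third parties ("eating your own dog food"), with read and write access."""

    COMMISSION_RATE = 0.10  # invented figure

    def __init__(self):
        self._sites = {}

    def add(self, site):
        self._sites[site.site_id] = site

    def availability(self, park):
        # Read access: real-time availability for a park
        return [s for s in self._sites.values() if s.park == park and s.available]

    def reserve(self, site_id, channel):
        # Write access: process a transaction through any licensed channel
        site = self._sites[site_id]
        if not site.available:
            raise ValueError("site already booked")
        site.available = False
        commission = site.price * self.COMMISSION_RATE
        # Requirement 2: the commission follows the sale channel,
        # while the government nets the same amount either way.
        return {"government": site.price - commission, channel: commission}

api = ReservationAPI()
api.add(Campsite("Yosemite", "A1", True, 30.0))
print(api.reserve("A1", "third_party"))  # -> {'government': 27.0, 'third_party': 3.0}
```

Whether the sale happens on the contractor's site or a competitor's, the government line is identical; only the commission recipient changes.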

The dream & its impact

Imagine a world where popular applications like Waze, Roadtrippers, AllTrails, and many more new applications that don’t even exist yet are all competing to get people outside and enjoying our parks. An entrepreneur will develop a website featuring the parks in Spanish. Another will develop a mobile app targeting the rising trend of bike camping, and yet another for motorcycle camping. A middle school student will develop an app where her friends compete to collect badges for all the parks they attend.

This may sound overly optimistic, but when you align incentives and allow open market forces to work, the “invisible hand” has a way of serving its own. For example, TurboTax is fully available in Spanish, and that’s not because they’ve been given a mandate to reach “underserved” communities; it’s because it makes sense to do so financially. Read the entire 93-page contract draft — nowhere does it mention Spanish, despite the fact that the Latino population represents 17% of our population and is expected to reach 31% by 2060. Market forces will accomplish what government cannot — developing all the different ways the public needs to be reached.

If we break open this monopoly and let an ecosystem bloom, we’ll allow the creativity and innovation of the American people to benefit our public lands. We will inspire a new generation to explore the outdoors and commit to protecting it for future generations.

How you can help

None of this will happen if we don’t first persuade the government that this system should be a platform, not a silo. Comments are being accepted for a couple more days, until October 22. Please email (the official recipient for public comments) and let him know that this issue matters to you and why. They’re on the fence about this issue, and our feedback can make all the difference in the world.

Submit your comments to no later than October 22.

Author’s disclaimer: My name is Alyssa, and I’m the founder of Hipcamp — we offer a comprehensive search engine for campgrounds across government agencies. One of the main frustrations that led to the founding of Hipcamp was that the information I needed to plan a camping trip was locked in various siloed agency websites. When I go camping, I’m looking for an ocean or mountains — I don’t want to choose between BLM and state parks. Hipcamp certainly stands to benefit from opening up this system, but no more than our many competitors (both present and future) do.
Alyssa Ravasio, founder of Hipcamp

19 Oct 17:00

This is Halloween: Danny Elfman Reading The Nightmare Before Christmas - Halloween! Halloween! Halloween!

by Jill Pantozzi

I mean, come on.

(via Nerdist)

Are you following The Mary Sue on Twitter, Facebook, Tumblr, Pinterest, & Google +?

19 Oct 19:00

The Entirety Of The Hobbit Book In LEGO In 72 Seconds - Yes, P. Jax, it is possible.

by Sam Maggs

It’s 33,000ish seconds shorter than the Peter Jackson version, and it includes (pretty much) no Tauriel. Make of that what you will. If you want to marathon the cinematic-length version, you can do so when Battle of the Five Armies comes out on December 17th.

(via THR)

Previously in Hobbitses


19 Oct 21:44

Typewriter-parts cat

by Cory Doctorow

New from Jeremy Meyer, who makes brilliant assemblage sculptures out of typewriter parts: Cat XXI, a classic Hallowe'en cat rendered skeletal and wistful through the medium of obsolete mechanical components.

16 Oct 00:53

Italian Scientists Appeal Absurd Conviction for Quake Deaths

by David Wolman

This month in Italy, three judges have a chance to undo the Kafkaesque nightmare that has ensnared some of the country’s top scientists for almost five years. So far it looks doubtful they will. In 2012, seven scientists and engineers were convicted of manslaughter for things they said and did not say in the days […]

The post Italian Scientists Appeal Absurd Conviction for Quake Deaths appeared first on WIRED.

16 Oct 14:03

T-shirt printer spits them out in seconds

by Rob Beschizza

Epson's SureColor F2000 can print a t-shirt in just a few seconds. At Comic-Con in New York, they had one on display: I emailed a picture, rested my iPhone on its transparent lid, and recorded this real-time video of it running one off in less than a minute.

17 Oct 16:35

How Do We Know How Old The Earth Is?

by Robbie Gonzalez

The folks at SciShow teamed up with Google and YouTube to bring us answers to 10 of the most commonly searched questions on the Internet. Today's question: How old is the Earth? Or, an even better question: how do we know how old the Earth is?


02 Oct 12:23

Data management fairy tales

by Matt Hall

On Tuesday I read this refreshing post in LinkedIn by Jeffrey Maskell of Westheimer Energy Consultants. It's a pretty damning assessment of the current state of data management in the petroleum industry:

The fact is that no major technology advances have been seen in the DM sector for some 20 years. The [data management] gap between acquisition and processing/interpretation is now a void and is impacting the industry across the board...

I agree with him. But I don't think he goes far enough on the subject of what we should do about it. Maskell is, I believe, advocating more effort (and more budget) developing what the data management crowd have been pushing for years. In a nutshell:

I agree that standards, process, procedures, workflows, data models are all important; I also agree that DM certification is a long term desirable outcome. 

These words make me sad. I'd go so far as to say that it's the pursuit of these mythical ideas that's brought about today's pitiful scene. If you need proof, just look around you. Go look at your shared drive. Go ask someone for a well file. Go and (a) find then (b) read your IT policies and workflow documents — they're all fairy tales.

Maskell acknowledges at least that these are not enough; he goes on:

However I believe the KEY to achieving a breakthrough is to prove positively that data management can make a difference and that the cost of good quality data management is but a very small price to pay...

No, the key to achieving a breakthrough is a change of plan. Another value of information study just adds to the misery.

Here's what I think: 'data management' is an impossible fiction. A fairy tale.

You can't manage data

I'm talking to you, big-company-data-management-person.

Data is a mess, and it's distributed across your organization (and your partners, and your government, and your data vendors), and it's full of inconsistencies, and people store local copies of everything because of your broken SharePoint permissions, and... data is a mess.

The terrible truth you're fighting is that subsurface data wants to be a mess. Subsurface geoscience is not accounting. It's multi-dimensional. It's interdependent. Some of it is analog. There are dozens, maybe hundreds of formats, many of which are proprietary. Every single thing is unquantifiably uncertain. There are dozens of units. Interpretation generates more data, often iteratively. Your organization won't fund anything to do with IT properly — "We're an oil company, not a technology company!" — but it's OK because VPs only last 2 years. Well, subsurface facts last for decades.

You can't manage data. Try something else.

The principle here is: cope don't fix.

People earnestly trying to manage data reminds me of Yahoo trying to catalog the Internet in 1995. Bizarrely, they're still doing it... for 3 more months anyway. But we all know there's only one way to find things on the web today: search. Search transcends the catalog. 

So what transcends data management? I've got my own ideas, but first I really, really want to know what you think. What's one thing we could do — or stop doing — to make things a bit better?

07 Oct 18:01

Geologists: The Cowboys of Science

by Silver Fox
A colleague of my former (deceased) husband once said, "Archaeologists are the Cowboys of Science." (I don't know if he actually said it in Capital Letters like that, but that's how it came across to me.) DH disagreed and said, "No, Geologists are the Cowboys of Science. Archaeologists don't even come close."

The first character that actually comes to my mind, for some reason, is Bret Maverick's brother Bart. He's not a geologist, but he goes off gallivanting around the countryside on his horse, somehow managing, often, to get into trouble. He reminds me of some geologists I know, who go off the same way in their 4WD pickups.

Also read:

Sherlock Holmes, Forensic Geologist
The Romance of Geology in Russia: A Tribute to Alexander Ainemer
Here's to You: Geological Heroes
Dust Hole (previous post)
16 Oct 23:16

New iPads come with special, multi-carrier “Apple SIM”

by Andrew Cunningham
The "Apple SIM" allows AT&T, T-Mobile, Sprint, or EE support without switching SIM cards.

Apple's iPad announcements today focused overwhelmingly on the iPad Air 2's thickness, its screen, and its internals, but it and the iPad Mini 3 got some other quieter upgrades too. One such upgrade is a new "Apple SIM," a nano SIM card that allows the cellular models to switch between multiple mobile carriers without changing the actual card. At launch the card supports AT&T, Sprint, T-Mobile, and UK carrier EE. Though Apple is still selling Verizon-compatible iPads, the US' biggest carrier remains conspicuously (though perhaps not surprisingly) absent from the list.

One thing Apple is emphasizing with the SIM is that it can be used to secure short-term data commitments, rather than the regular monthly charges most cellular tablets generally assume. In theory, you can jump between carriers based on the one that's offering the data you need for the price you want, and you never have to swap out the SIM card to do it. Apple is also playing up the ability to buy data from international carriers when traveling, though obviously the carrier list will need to expand before this is practical.

Though the Apple SIM is launching in the iPad Air 2 and the iPad Mini 3, we would expect it to start showing up in other Apple products eventually. Simplifying the product line instead of shipping carrier-specific versions of iPhones and iPads seems like the right move for Apple to make; let's hope carriers continue to climb on board.


17 Oct 00:03

Stanford engineers built an earthquake-resistant house. Watch the shake test.

by Matthew Williams

Engineers at Stanford have designed, built, and tested an earthquake-resistant house which has stayed upright even when subjected to three times the intensity of the 1989 Loma Prieta earthquake.

03 Oct 21:16

Blame Tech Diversity On Education Pipeline, Not Hiring Discrimination

by Soulskill
theodp writes: "The biggest reason for a lack of diversity in tech," says's Hadi Partovi in a featured Re/code story, "isn't discrimination in hiring or retention. It's the education pipeline." ( just disclosed that "we have no African Americans or Hispanics on our team of 30.") Supporting his argument, Partovi added: "In 2013, not one female student took the AP computer science exam in Mississippi." (Left unsaid is that only one male student took the exam in Mississippi.) Microsoft earlier vilified the CS education pipeline in its U.S. Talent Strategy as it sought "targeted, short-term, high-skilled immigration reforms" from lawmakers. And Facebook COO and "Lean In" author Sheryl Sandberg recently suggested the pipeline is to blame for Facebook's lack of diversity. "Girls are at 18% of computer science college majors," Sandberg told USA Today in August. "We can't go much above 18% in our coders [Facebook has 7,185 total employees] if there's only 18% coming into the workplace."


Read more of this story at Slashdot.

03 Oct 15:09

The Sound So Loud That It Circled the Earth Four Times

by S. Abbas Raza

Aatish Bhatia in Nautilus:

On 27 August 1883, the Earth let out a noise louder than any it has made since.

It was 10:02 AM local time when the sound emerged from the island of Krakatoa, which sits between Java and Sumatra in Indonesia. It was heard 1,300 miles away in the Andaman and Nicobar islands (“extraordinary sounds were heard, as of guns firing”); 2,000 miles away in New Guinea and Western Australia (“a series of loud reports, resembling those of artillery in a north-westerly direction”); and even 3,000 miles away on the Indian Ocean island of Rodrigues, near Mauritius (“coming from the eastward, like the distant roar of heavy guns”). In all, it was heard by people in over 50 different geographical locations, together spanning an area covering a thirteenth of the globe.

Think, for a moment, just how crazy this is. If you’re in Boston and someone tells you that they heard a sound coming from New York City, you’re probably going to give them a funny look. But Boston is a mere 200 miles from New York. What we’re talking about here is like being in Boston and clearly hearing a noise coming from Dublin, Ireland. Travelling at the speed of sound (766 miles or 1,233 kilometers per hour), it takes a noise about 4 hours to cover that distance. This is the most distant sound that has ever been heard in recorded history.
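The "about 4 hours" figure is easy to sanity-check. In this quick calculation, the speed of sound comes from the article; the Boston-to-Dublin great-circle distance of roughly 2,990 miles is my approximation, not a figure from the text.

```python
# Rough check of the travel-time claim above
speed_mph = 766             # speed of sound quoted in the article
boston_dublin_miles = 2990  # approximate great-circle distance (my figure)
hours = boston_dublin_miles / speed_mph
print(round(hours, 1))      # -> 3.9, i.e. "about 4 hours"
```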

So what could possibly create such an earth-shatteringly loud bang? A volcano on Krakatoa had just erupted with a force so great that it tore the island apart, emitting a plume of smoke that reached 17 miles into the atmosphere, according to a geologist who witnessed it.

More here.

04 Oct 17:35

Why is software OS specific?

by Stack Exchange

A Stack Exchange user asks:

I'm trying to determine the technical details of why software produced using programming languages for certain operating systems only work with them.

It is my understanding that binaries are specific to certain processors due to the processor specific machine language they understand and the differing instruction sets between different processors. But where does the operating system specificity come from? I used to assume it was APIs provided by the OS but then I saw this diagram in a book.
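The question is about the layer above the instruction set: the operating system's APIs and conventions. As a small illustration, portable source code can branch on the platform at run time, whereas a compiled binary bakes exactly one OS's interfaces in. This sketch only shows the environment-variable naming convention; it is an illustration of the idea, not Ars' answer to the question.

```python
import sys

def home_directory_env():
    # Windows and POSIX expose the same concept (the user's home
    # directory) through different OS conventions. Portable source must
    # branch like this; a compiled binary targets one set of interfaces.
    if sys.platform == "win32":
        return "USERPROFILE"  # Win32 convention
    return "HOME"             # POSIX convention

print(home_directory_env())
```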


05 Oct 18:27

Download the “Great American Comic Sci Fi Novel,” Buddy Holly is Alive and Well on Ganymede

by Dan Colman


Back in 1991, Bradley Denton published Buddy Holly is Alive and Well on Ganymede. The next year, it won the John W. Campbell Memorial Award for Best Science Fiction Novel.

Writes Cory Doctorow on BoingBoing: Buddy Holly is Alive and Well on Ganymede “is the great American comic science fiction novel, a book about the quest to exhume Buddy Holly’s corpse from Lubbock, TX to prove that he can’t possibly be broadcasting an all-powerful jamming signal from a hermetically sealed bubble on a distant, airless moon.”

Taking advantage of new innovations (new since 1991), Denton has made the novel available for free download on his website, publishing it under a Creative Commons Attribution-NonCommercial-NoDerivatives license. You can access the text in four parts here: Part 1, Part 2, Part 3, Part 4.

If you become a fan, keep an eye out for a film adaptation of the novel starring Jon Heder. It’s been in development for some time, but you can watch a trailer online.

Buddy Holly is Alive and Well on Ganymede will be added to our collection, 600 Free eBooks for iPad, Kindle & Other Devices.

You can find more free sci-fi below:

Free Science Fiction Classics on the Web: Huxley, Orwell, Asimov, Gaiman & Beyond

33 Sci-Fi Stories by Philip K. Dick as Free Audio Books & Free eBooks

Read Hundreds of Free Sci-Fi Stories from Asimov, Lovecraft, Bradbury, Dick, Clarke & More

The Ware Tetralogy: Free SciFi Download


The post Download the “Great American Comic Sci Fi Novel,” Buddy Holly is Alive and Well on Ganymede appeared first on Open Culture.

29 Sep 20:09

Geophysics: Taking signal and noise to new dimensions (call for papers)

by Laurent Duval
The journal Geophysics (SCImago journal ranking) will soon issue a call for papers for a special issue devoted to signal processing, with a submission deadline at the end of January 2015.

[Figure: an original seismic stack; the same stack with large Gaussian noise corruption; the result denoised with dual-tree wavelets]

Taking signal and noise to new dimensions

Editors: Laurent Duval (IFP Energies nouvelles), Sergey Fomel (University of Texas, Austin), Mostafa Naghizadeh (Shell Canada), Mauricio Sacchi (University of Alberta)

The inherent complexity of seismic data has sparked, for about half a century, the development of innovative techniques to separate signal and noise. Localized time-scale representations (e.g., wavelets), parsimonious deconvolution, and sparsity-promoted restoration and reconstruction are now at the core of modern signal processing and image analysis algorithms. Together with advances from computer science and machine learning, they shaped the field of data science, which aims at retrieving the inner structure of feature-rich, complex, high-dimensional datasets. This special issue is devoted to novel methodologies and strategies capable of tackling the large data volumes necessary to harness the future of subsurface exploration. A common trait is the ability to capture both signal and noise properties along lower-dimensional spaces, with dedicated metrics, to allow their joint use for seismic information enhancement. The traditional frontier between signal and noise is dimming, as incoherent seismic perturbations and formerly detrimental coherent wavefields, such as multiple reflections, are nowadays recognized as additional information for seismic processing, imaging, and interpretation. We welcome contributions pertaining (but not limited) to:

  • emerging seismic data acquisition and management technologies
  • more compact, sparser and optimized multidimensional seismic data representations
  • filtering, noise attenuation, signal enhancement and source separation
  • advanced optimization methods and related metrics, regularizations and penalizations
  • artificial intelligence and machine learning applications for seismic data characterization
  • novel applications of signal and image processing to geophysics
  • hardware and software algorithmic breakthroughs
Timeline (tentative) for the Geophysics call for papers
  • Submission deadline: 31 Jan 2015
  • Peer review complete: 10 July 2015
  • All files submitted for production: 15 August 2015
  • Publication of issue: November-December 2015
This session could be seen as a follow-up to the special session on geophysics at the 2014 International Conference on Acoustics, Speech and Signal Processing (ICASSP 2014).
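The sparsity-promoting idea running through the call can be shown with a toy example: soft-thresholding a noisy, sparse series. The spike positions, amplitudes, noise level, and threshold below are all invented for illustration, and real wavelet-domain denoisers are far more elaborate than this single shrinkage step.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
clean = np.zeros(n)
clean[[40, 90, 200]] = [3.0, -2.5, 4.0]  # sparse "reflectivity" spikes
noisy = clean + 0.3 * rng.standard_normal(n)

def soft_threshold(x, lam):
    # Shrinkage operator behind many sparsity-promoted restoration schemes
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

denoised = soft_threshold(noisy, 1.2)
# Only coefficients well above the noise floor survive the threshold
print(np.flatnonzero(denoised))
```

Applied in a wavelet (rather than sample) domain, the same shrinkage separates coherent signal from incoherent noise because the signal concentrates in a few large coefficients.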


29 Sep 22:49

3-D printing of rocks and fossils

But there it was, a Mars model about the size of a golf ball and just detailed enough to show Olympus Mons, a Martian volcano nearly 14 miles high and three times the height of Mt. Everest.

"You get a sense of how high it sticks up from the rest of the planet," said Hasiuk, an Iowa State University assistant professor of geological and atmospheric sciences and David Morehouse Faculty Fellow. "It's just spectacular."

That little globe is just one product of Hasiuk's Geological Fabrication Laboratory (or GeoFabLab), a narrow corner room in the basement of Iowa State's Science Hall. The lab specializes in 3-D scanning and printing -- as it says on the lab website, it's all about "making things geological!"

Hasiuk, who came to Iowa State two years ago from the oil and gas industry, said a research and teaching goal has been to find "projects that students could work on that would make them interesting to industry -- and employable."

When he worked in industry, Hasiuk used 2-D CT scans to study the pores within limestone reservoir rocks. It's a major industry challenge to understand fluid flow through the pore networks of rocks so oil can be extracted from the smallest pores.

When Hasiuk arrived on campus, he found researcher Joseph Gray and the CT tools at the Center for Nondestructive Evaluation. That led to 3-D scans and then 3-D prints of rock porosity.

"We're taking really small holes in rock and then printing them at magnification," Hasiuk said. "We're not getting perfect photocopying yet, but we're getting there."

With better scans, data management and 3-D prints, "We can make models of pore networks and see how fluids flow through them," he said. "Eventually, we'll get to the point of making predictions and increasing the accuracy of predictions. What geology does for the economy is reduce uncertainty when you need to get something from underground -- like oil and gas."

Hasiuk said the oil and gas industry is showing significant interest in the research project.

Communicating geology

While printing the reservoir rocks, Hasiuk discovered something about the tools he was using:

"3-D printing is a great communication tool," he said. "You don't have to teach someone a shape. You can understand by touching."

And so he's also using the GeoFabLab's scanner and two 3-D printers (purchased with his faculty startup package and student computer fees) to print materials for geology classes.

As he wrote in a paper ("Making Things Geological: 3-D Printing in the Geosciences") published this summer by the Geological Society of America, "Geoscientists are some of the most prolific producers of three-dimensional (3-D) data. These data do not belong in our computers -- they belong in our hands."

"Importantly," he wrote, "3-D printing produces tangible objects that are obviously intuitive to students, non-geoscientists, and decision makers."

In his own classes, Hasiuk has printed plastic fossils, crystals, dinosaur bones and even the topography of Ames south of campus, including Jack Trice Stadium. To spread the word about 3-D printing for classrooms, he's made the data for about 100 of his 3-D models available on the Internet. And he's collaborated with the Science Education Resource Center at Carleton College in Northfield, Minn.

Hasiuk notes that it makes a lot of sense to replace fragile, $10 to $50 classroom specimens with 25-cent pieces of printed plastic. But that's not the first point he makes about the advantages of printing 3-D models for the classroom.

"These sort of things get people engaged," said Hasiuk, pointing to a T. rex skull with a moving jawbone that he printed in Iowa State cardinal and gold. "These are chomp-able, flexible fossils. Using this technology, the GeoFabLab can bring dead things to life."

Story Source:

The above story is based on materials provided by Iowa State University. Note: Materials may be edited for content and length.

29 Sep 12:00

How To Master Copywriting

by John Zeratsky

Google Ventures partner John Zeratsky offers four tips for seamlessly integrating text into your design process.

After publishing an essay on five principles for great interface copywriting, I heard from hundreds of entrepreneurs, designers, and product managers who wanted to learn more about copywriting.


    29 Sep 22:00

    Photoshop Comes To Chromebooks

    by Mark Wilson

    That's right, Google's $200 laptop can now run Photoshop.

    Chances are, you don't own one of Google's cheap, cloud-based "Chromebook" laptops, but they accounted for 21% of all laptops sold last year in the US. And designers may have new incentive to drop $200 on their next knock-around computer because Photoshop is coming to Chromebooks.


    14 Aug 04:48

    dirty his hands, it comes right off

    by brimful
This is just a quick additional thing I have to get out of my head. A couple of years ago, I was sitting in a seminar about racism, institutional and otherwise, with a bunch of very bored interns who were suffering through it as a necessary evil. Many of them thought it was a waste of time. I was inclined to agree with them, mostly because I don't think the seminar changed anyone's outlook, since it was preaching to the choir for some of us, and easily tuned out by the rest.

    Anyway, though, there was a statistic that was quoted, and I'm not going to quote it correctly, and I'm too tired to hunt it down. But it had to do with the amount of time people spend thinking about race. The result of the research showed that the average white person thinks about race hardly at all compared to a person of color who thinks about race several times a day.

    And that is very, very true. I think about that every time someone makes the always-irritating 'I don't see race' remark, because that is such a sign of privilege, to have the luxury not to think about race. I always just want to reply, 'how nice for you.'

    It's exhausting. I truly, honestly despise how often I think about it, about being not-white, and not-male. And let me tell you, I myself am speaking from a place of extreme privilege, I am well aware. I have never been treated ludicrously by the police. I have not been shot. I have not been tear gassed. I live in a part of the country where I hope we would not tolerate this Ferguson noise. But then again, there was Oscar Grant. So who knows really.

    And that's the thing. We just can't stop thinking about it.
    23 Sep 15:18

    Picking parameters

    by Matt Hall

    One of the reasons I got interested in programming was to get smarter about broken workflows like this one from a generic seismic interpretation tool (I'm thinking of Poststack-PAL, but does that even exist any more?)...

    1. I want to make a coherence volume, which requires me to choose a window length.
    2. I use the default on a single line and see how it looks, then try some other values at random.
    3. I can't remember what I did so I get systematic: I try 8 ms, 16 ms, 32 ms, and 64 ms, saving each one as a result with _XXms appended so I can remember what I did
    4. I display them side by side but the windows are annoying to line up and resize, so instead I do it once, display them one at a time, grab screenshots, and import the images into PowerPoint because let's face it I'll need that slide eventually anyway
    5. I can't decide between 16 ms and 32 ms so I try 20 ms, 24 ms, and 28 ms as well, and do it all again, and gaaah I HATE THIS STUPID SOFTWARE

    There has to be a better way.

    Stumbling towards optimization

    Regular readers will know that this is the time to break out the IPython Notebook. Fear not: I will focus on the outcomes here — for the real meat, go to the Notebook. Or click on these images to see larger versions, and code.

    Let's run through using the Canny edge detector in scikit-image, a brilliant image processing Python library. The algo uses the derivative of a Gaussian to compute gradient, and I have to choose 3 parameters. First, we'll try to optimize 'sigma', the width of the Gaussian. Let's try the default value of 1:

    Clearly, there is too much noise in the result. Let's try the interval method that drove me crazy in desktop software:

    Well, I think something between 8 and 16 might work. I could compute the average intensity of each image, choose a value in between them, and then use the sigma that gives that result. OK, it's a horrible hack, but turns out to be 10:
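That hack is easy to sketch in scikit-image. This is a rough reconstruction, not the Notebook's code: I'm using one of skimage's sample images as a stand-in for a seismic section, and measuring "average intensity" as the fraction of pixels flagged as edges.

```python
import numpy as np
from skimage import data
from skimage.feature import canny

# Sample image as a stand-in for a seismic section.
img = data.camera().astype(float) / 255.0

# Edge density at the two bracketing sigmas, and the target halfway between.
lo = canny(img, sigma=8).mean()
hi = canny(img, sigma=16).mean()
target = (lo + hi) / 2

# Scan sigma and keep the value whose edge density is closest to the target.
sigmas = np.arange(1, 20, 0.5)
densities = np.array([canny(img, sigma=s).mean() for s in sigmas])
best = sigmas[np.abs(densities - target).argmin()]
print(best)
```

It's still a hack — the "goal" is just the midpoint of two arbitrary runs — but at least it's a reproducible hack.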

    But the whole point of scientific computing is the efficient application of informed human judgment. So let's try adding some interactivity — then we can explore the 3D parameter space in a near-parallel instead of purely serial way:
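The interactive version is a one-liner to set up with ipywidgets. Here's a minimal sketch — the function name and slider ranges are my own choices, not values from the Notebook:

```python
import numpy as np
from skimage import data
from skimage.feature import canny

# Sample image standing in for a seismic line.
img = data.camera().astype(float) / 255.0

def detect(sigma=1.0, low=0.1, high=0.2):
    """One point in the 3-D parameter space: return the Canny edge image."""
    return canny(img, sigma=sigma, low_threshold=low, high_threshold=high)

# In a notebook, wrapping the function gives live sliders for all 3 parameters:
#   from ipywidgets import interact
#   interact(detect, sigma=(0.5, 20.0), low=(0.0, 0.5), high=(0.1, 1.0))
edges = detect(sigma=10)
print(edges.mean())
```

Dragging the sliders replaces the serial save-and-compare loop with near-instant feedback.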

    I finally feel like we're getting somewhere... But it still feels a bit arbitrary. I still don't know I'm getting the optimal result.

    What can I try next? I could try to extend the 'goal seek' option, and come up with a more sophisticated cost function. If I could define something well enough — for edge detection, like coherence, I might be interested in contrast — then I could potentially just find the best answers, in much the same way that a digital camera autofocuses (indeed, many of them look for the highest contrast image). But goal seeking, if the cost function is too literal, in a way begs the question. I mean, you sort of have to know the answer — or something about the answer — before you find it.

    Social machines

    Social machines are the hot new thing in computing (Big Data is so 2013). Perhaps instead I can turn to other humans, in my social and professional networks. I could...

    • Ask my colleagues — perhaps my company has a knowledge sharing network I can go to.
    • Ask t'Internet — I could ask Twitter, or my friends on Facebook, or a seismic interpretation group in LinkedIn. Better yet, Earth Science Stack Exchange!
    • What if the software I was using just told me what other people had used for these parameters? Maybe this is only one step up from the programmer's default... especially if most people just use the programmer's default.
    • But what if people could rate the outcome of the algorithm? What if their colleagues or managers could rate the outcome? Then I could weight the results with these ratings.
    • What if there was a game that involved optimizing images (OK, maybe a bit of a stretch... maybe more like a Mechanical Turk). Then we might have a vast crowd of people all interested in really pushing the edge of what is intuitively reasonable, and maybe exploring the part of the parameter space I'm most interested in.

    What if I could combine the best of all these approaches? Interactive exploration, with guided optimization, constrained by some cost function or other expectation. That could be interesting, but unfortunately I have absolutely no idea how that would work. I do think the optimization workflow of the future will contain all of these elements.

    What do you think? Do you have an awesome way to optimize the parameters of seismic attributes? Do you have a vision for how it could be better? It occurs to me this could be a great topic for a future hackathon...

    Click here for an IPython Notebook version of this blog post. If you don't have it, IPython is easy to install. The easiest way is to install all of scientific Python, or use Canopy or Anaconda.
    24 Sep 19:08

    Pensive monkey, a photo shared in the Boing Boing Flickr Pool

    by Xeni Jardin


Reader Alice Heck shared this lovely photograph, and this one, in our Flickr Pool.

    04 Sep 04:38

    My Favorite Charts

    by Robert Kosara

    There are many charts I hate, because they’re badly done, sloppy, meaningless, deceiving, ugly, or for any number of other reasons. But then there are the ones I keep coming back to because they’re just so clear, well-designed, and effective.

    All of these are a few years old. Like a fine wine analogy that I could insert here, it probably takes a while for a chart to come up again and again in conversation and when looking for examples to realize how good it is.


    My favorite scatterplot, and perhaps my favorite chart ever, is Why Is Her Paycheck Smaller? by Hannah Fairfield and Graham Roberts. It shows men’s versus women’s weekly earnings, with men on the horizontal axis and women on the vertical. A heavy black diagonal line shows equal wages, three additional lines show where women make 10%, 20%, and 30% less. Any point to the bottom right of the line means that women make less money than men.

    Why Is Her Paycheck Smaller?

    The diagonal lines are a stroke of genius (pun fully intended). When you see a line in a scatterplot, it’s usually a regression line that models the data; i.e., a line that follows the points. But such a line only helps reinforce the difficulty of judging the differences between the two axes, which is something we’re not good at, and which is not typically something you do in a scatterplot anyway.

    But the diagonal line, as simple as it is, makes it not just possible, but effortless. It’s such a simple device and yet so clear and effective. All the points on the line indicate occupations where men and women make the same amount of money. To the top left of the line is the area where women make more money than men, and to the bottom right where women make less.

    The additional lines show 10%, 20%, and 30% less for women. If it’s hard to tell if a point is lying on the main diagonal of a scatterplot, it becomes impossible to guess the percentage it is off. The additional lines make it possible to guesstimate that number to within a few percent. That is a remarkable level of precision, and it is achieved with three simple lines.

    There is some interactivity: mousing over points brings up a tooltip that shows the occupation the point represents and how much more one gender makes than the other. Filters in the top left corner let you focus on just a small number of occupations, which include annotations for a few select jobs.

    But the key element is the inclusion of the reference lines that help people make sense of the scatterplot and read it with a high level of precision. Simple but effective, and powerful.

    Line Chart

    My favorite line chart is The Jobless Rate for People Like You by Shan Carter, Amanda Cox, and Kevin Quealy. This chart is somewhat ancient, having been created in Flash and showing unemployment data from January 2007 to September 2009. But its brilliant design and interaction make it timeless.

    The Jobless Rate for People Like You

    It’s a line chart, but with a twist. The first thing you see is the heavy blue line, showing the overall unemployment rate. But there are more lines in the background, what are those? So you mouse over and they respond: they light up and there’s a tooltip telling you what they represent. Each is the unemployment rate for a subset of the population, defined as the combination of race, gender, age group, and education. How are hispanic men over 45 with only a high school diploma doing compared to the overall rate? What about black women 15–24? Or white college grads of any age and gender?

    Clicking on a line moves the blue line there so it’s easier to see, but the overall rate stays easily visible. The y axis also rescales nicely when the values go above what it can currently display.

    In addition, the filters at the top also respond to the selection to show who is selected. Clicking around inside the chart updates them. Hm, so maybe I can use those to explore too? And of course you can, broadening or narrowing your selection, or clicking through different age groups of the same subset of the population, etc.

The Human-Computer Interaction field has a nice term for an indication of more data and interaction: information scent. This is usually used with widgets, which indicate where more information can be found (like the little tick marks on the scrollbar in Chrome when you search within the page). What makes this chart so good is its clever use of information scent to entice viewers to dig deeper, explore, and ask questions.

    It also brilliantly and clearly demonstrates the fact that the overall unemployment rate is a rather meaningless number. The actual rate in your demographic is likely to look very different, and the range is huge. This was the inspiration for my What Means Mean piece, though I don’t think that was nearly as clear as this.

    The chart shows interesting data, explains a somewhat advanced concept, and invites people to interact with it. This comes in a package that is understated and elegant in its design. Best line chart ever.

    Bar Chart

    I have already written about the Bikini Chart, and it remains my favorite bar chart. It’s an incredibly effective piece of communication, and it’s all just based on a simple time series. The fact that the bars point down clearly communicates how it is supposed to be read: down is bad, less down is better than more down.

    obama job loss chart

    Bar charts are not exactly a common medium for artistic expression, but the designers of this chart managed to subtly but clearly get a message across.

    Bubble Chart/Animated Scatterplot

    Animated scatterplots may not have been invented by Hans Rosling and gapminder, but they certainly were not a common thing until his TED talk in 2007. And while it may seem a bit facetious to point to the only reasonably well-known example of a particular chart type as my favorite one, this is clearly one of my favorite charts, no matter what type.


The animation may seem like a bit of a gimmick – and it has been criticized as not being terribly effective – but it works to communicate a number of important pieces of information.

    The main piece of information, of course, is change over time. How have different countries changed in terms of their wealth, healthcare, etc.? This is reasonably effective, because there are trends, and many countries follow them. The outliers are reasonably easy to spot, especially when you can turn on trails and replay the animation. It’s not always immediately possible to see everything, but it does invite people to play and explore.

    But then, there are the explanations. There is the clever animation that constructs the two-dimensional scatterplot from a one-dimensional distribution. There is the clever drill-down animation that breaks continents down into countries, and countries down into quintiles, to show the enormous range of values covered by each. This is not just a simple data display, but a way to introduce people to statistical concepts and data operations they may have heard of but don’t understand (drill-down), or never have heard of in the first place (quintiles).

    Rosling’s video, and the gapminder software, not only introduced millions of people to data they knew nothing about (the video has over 8.5 million views!), it also demonstrated how a compelling story can be told without a single photograph or other image, just with data. That is an incredible achievement that opened our eyes to the possibilities of data visualization for communication.

    Appreciating Good Work

    It’s easy to find, and make fun of, bad charts. But between all the pie chart bashing and general criticism of bad charts, it is equally important to find the good examples and try to figure out what makes them work so well. Even if it may be more fun to beat up the bad examples, we will ultimately learn more from understanding the design choices and ideas that went into the good ones.

    05 Sep 21:09

    Oil, Gas Companies Laying Groundwork for the Internet of Things


    At this stage, companies are building programs internally to address how they will adopt Internet of Things technologies and how to get the most benefits in terms of managing risks, reducing costs and bringing in new revenues, said Serhii Konovalov, who leads Cisco’s O&G Global Industry Solutions Practice and O&G Innovations, in an interview with Rigzone.

    Officials with Software AG are also seeing oil and gas companies starting to lay the framework and mapping their business processes to prepare their organizations for Internet of Things monitoring solutions. Software AG, which focuses on helping companies, including oil and gas companies, become digital enterprises, is seeing oil and gas companies pursue Internet of Things solutions to enhance their cash flow, while others are focused on ways to use sensor technology to achieve greater safety records.

    To prepare for the Internet of Things, companies are first seeking to map their business processes to understand how work is done as a way to foster change in their corporate culture. Many firms believe that if they first focus on change management, the tools they use to facilitate operations, including technology, will fall into place.

    This strategy of change includes using technology to enable processes as simple as attaching a document that outlines the methodology for a specific process. For example, a document that specifies the requirements for the type of conduit or drilling pipe a company needs can greatly enhance a buyer’s ability to procure the right supply.

    “This is a long way from the Internet of Things, but is the top layer of the process automation piece that allows for sensor data to be fed into processes that will give a company enhanced visibility over its operations,” said Donald Fisher, senior director of Software AG Energy Solutions, in an interview with Rigzone.

    “Rising CAPEX [capital expenditures] are increasing dramatically in the industry, so firms are very interested in controlling capital costs and being able to prioritize how they spend money,” said Fisher.

    Another driver that Software AG officials are seeing in the move towards Internet of Things technology includes operational efficiency, such as being able to secure operational permits for exploration.



Karen Boman has more than 10 years of experience covering the upstream oil and gas sector.



    05 Sep 21:09

    IoT-enabled devices to top 40 billion by 2020, say researchers

    The number of active wireless connected devices will exceed 40.9 billion by 2020 – more than double the current total, according to ABI Research. The explosion in connected devices will be driven by the internet of things (IoT).

    ABI’s estimate of 40 billion devices is significantly more than the figure suggested by Gartner in 2013, when it forecast that the IoT will see 26 billion units installed by 2020.


    ABI also predicted that the installed base of active wireless connected devices will exceed 16 billion this year – 20% more than in 2013.

    “The driving force behind the surge in connections is that usual buzzword suspect, the internet of things,” said ABI Research principal analyst Aapo Markkanen.

    IoT is a scenario in which objects, animals or people are able to transfer data over a network and talk to each other without requiring human-to-human or human-to-computer interaction.

    “If we look at this year’s installed base, smartphones, PCs and other ‘hub’ devices represent 44% of the active total, but by the end of 2020 their share is set to drop to 32%,” said Markkanen. “In other words, 75% of the growth between today and the end of the decade will come from non-hub devices – sensor nodes and accessories.”

    From every technology supplier’s strategic point of view, the critical question is how this plethora of IoT devices will ultimately be connected, said ABI. Until recently, the choices that OEMs have faced have been fairly straightforward, with cellular, Wi-Fi, Bluetooth and others all generally addressing their relative comfort zones.

    But going forward, they will be in increasing competition with each other, so for the suppliers, the strategic stakes are getting much higher, said Markkanen.

    Practice director Dan Shey added: “The recently introduced Thread protocol, spearheaded by Nest Labs, is the clearest example of this convergence.”

    Thread is a new IP-based wireless networking protocol initiated by IoT players such as Google Nest to find a new and better way to connect products in the home. It is thought that existing wireless networking approaches, devised and executed before the rise of IoT, are not capable of connecting large numbers of devices.

    Thread is an IPv6 networking protocol built on open standards, designed for low-power 802.15.4 mesh networks. But with billions of devices powered by IoT, IPv4 will not be good enough, said Ian McDonald, IT director at Swiftkey.

    Shey said it is not only setting the bar higher for ZigBee in the 802.15.4 space, but is also piling pressure on Bluetooth suppliers to enable mesh networking.

    ZigBee is a specification for wireless personal area networks (WPANs) operating at 868MHz, 902-928MHz and 2.4GHz. Devices can communicate at speeds of up to 250kbps, and can be physically separated by up to 50 metres in typical applications, more in an ideal environment. ZigBee is based on the 802.15 specification approved by the Institute of Electrical and Electronics Engineers Standards Association (IEEE-SA).

    Shey added: “In the meantime, the LTE-MTC and LTE-M initiatives may well expand the market for cellular M2M, while startups like Electric Imp and Spark could do the same for Wi-Fi. And finally, we also should not ignore what is going on with passive, proximity-based connectivity offered by RFID and NFC.”

    28 Aug 11:24

    Burrowing by burning

    by Matt Hall

Most kinds of mining are low-yield games. For example, the world's annual gold production would fit in a 55 m² room. But few mining operations I'm aware of are as low-yield as the one that ran in Melle, France, from about 500 till 950 CE, producing silver for the Carolingian empire and Charlemagne's coins. I visited the site on Saturday.

    The tour made it clear just how hard humans had to work to bring about commerce and industry in the Middle Ages. For a start, of course they had no machines, just picks and shovels. But the Middle Jurassic limestone is silicic and very hard, so to weaken the rock they set fires against the face and thermally shocked the rock to bits. The technique, called fire-setting, was common in the Middle Ages, and was described in detail by Georgius Agricola in his book De Re Metallica (right; aside: the best translation of this book is by Herbert Hoover!). Apart from being stupefyingly dangerous, the method is slow: each fire got the miners about 4 cm further into the earth. Incredibly, they excavated about 20 km of galleries this way, all within a few metres of the surface.

    The fires were set against the walls and fuelled with wood, mostly beech. Recent experiments have found that one tonne of wood yielded about one tonne of rock. Since a tonne of rock yields 5 kg of galena, and this in turn yields 10 g of silver, we see that producing 1.1 tonnes of silver per year — enough for 640,000 deniers — was quite a feat!
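The arithmetic is worth writing down; a back-of-envelope version using only the figures quoted above (the tonnes-per-year of rock and wood are my derived numbers, not from the tour):

```python
# All figures from the post; masses in grams and tonnes.
silver_per_year = 1.1e6              # 1.1 tonnes of silver, in grams
silver_per_tonne_rock = 10.0         # 10 g of silver per tonne of rock

rock_per_year = silver_per_year / silver_per_tonne_rock   # tonnes of rock
wood_per_year = rock_per_year        # 1 tonne of wood frees ~1 tonne of rock

denier_mass = silver_per_year / 640_000   # grams of silver per coin

print(rock_per_year, wood_per_year, round(denier_mass, 2))
# → 110000.0 110000.0 1.72
```

On these numbers, a year's coinage meant moving on the order of a hundred thousand tonnes of rock — and burning as much wood — which makes the eventual fuel shortage easy to believe.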

    There are several limits to such a resource intensive operation: wood, distance from face to works, maintenance, and willing human labour, not to mention the usual geological constraints. It is thought that, in the end, operations ended due to a shortage of wood.

    Several archaeologists visit the site regularly (here's one geospatial paper I found mentioning the site: Arles et al. 2013), and the evidence of their attempts to reproduce the primitive metallurgical methods were on display. Here's my attempt to label everything, based on what I could glean from the tour guide's rapid French:

    The image of the denier coin is licensed CC-BY-SA by Wikipedia user Lequenne Gwendoline

    25 Aug 20:06


    by jeffrey
    It's Katrina Week so get ready for the run of obligatory retrospectives.  Not that there's anything wrong with that.  I don't count myself among the number who would tell you this is old irrelevant news.  On the contrary, there's almost nothing going on in New Orleans today that can be understood without first understanding the effect of the flood.  So it's worth it for everyone to take time for reflection and examination. 

    On the other hand, this is the week where national news outlets and producers of commentary feel obliged to weigh in with gross distortions and harmful agenda-laced false narratives.  For example, here is an article suggesting that the forced upheaval of so many lives actually held a "silver lining" for displaced residents. Get ready to read more of this as we perfect our rationalizations for the subsequent gentrification of New Orleans.  Ask around the (relatively) more affluent corners of New Orleans these days and you might be surprised to hear how many people seem to think Barbara Bush was right all along. Not that anyone you ask will remember that quote... especially if they've only recently moved here.

    Anyway, what else is there?  Oh yeah clickbait. Professional media runs more on clickbait now than it did nine years ago so there will be plenty of that.  Keep an eye out for Buzzfeed's Katrina Remembered In GIFs of Beyonce Making Funny Faces.. or maybe they've got another slideshow of Six Flags photos to share. 

    If you're looking for a less cynical presentation of images, though, this T-P Then and Now gallery is worth a moment of your time.
    13 Aug 14:57

    Wikipedia: Mass Collaboration or Mass Amateurism?

Because Wikipedia is a process, not a product, it replaces guarantees offered by institutions with probabilities supported by process: if enough people care enough about an article to read it, then enough people will care enough to improve it, and over time this will lead to a large enough body of good enough work to begin to take both availability and quality of articles for granted, and to integrate Wikipedia into daily use by millions.
    — Clay Shirky (2008), Here Comes Everybody

This study intends to contribute to a better understanding of the Wiki phenomenon as a knowledge management system which aggregates private knowledge, and to check to what extent information generated through anonymous and freely bestowed mass collaboration is reliable compared with the traditional approach. To achieve that goal, we developed a comparative study between the Wikipedia and Britannica encyclopedias, confronting the quality of the knowledge repositories they produce. That allows us to reach a conclusion about the efficacy of the business models behind them.
So, we intend to find out which of the scenarios represented above describes mass collaboration most accurately: the infinite monkeys theorem used by Keen in The Cult of the Amateur, or the ode to the "power of the masses" of Tapscott & Williams in Wikinomics.
We used a representative random sample composed of articles that appear in both encyclopedias. Each pair of articles was first reformatted to hide its source and then graded by an expert in its subject area using a five-point scale. We asked the experts to concentrate only on intrinsic aspects of the articles' quality, namely accuracy and objectivity, and to set aside contextual, representational and accessibility aspects. Whenever possible, the experts invited to participate in the study were university teachers, because they are used to grading students' work without relying on the reputation of the source.
    The articles have been divided into four main categories: Arts & Entertainment, History & Society, Science & Technology and Travel & Geography. Each main category has further been subdivided in order to find the most suitable expert to evaluate it. The average results obtained are presented below:

To calculate the impact of having only one evaluator per article - imposed by the sample size and the articles' length - we collected a small convenience sample consisting only of Management articles. Each pair of articles was graded by several experts in order to determine the uncertainty associated with having diverse gradings of the same article. The uncertainty indicators obtained are α = 0.9 and MAD = 0.6.
To further normalize the results, we transformed the {1, 2, 3, 4, 5} scale used by each evaluator into a nine-point scale {-4, -3, -2, -1, 0, 1, 2, 3, 4} by subtracting, for each pair of articles, the Britannica grade from the Wikipedia grade. This step allows us to concentrate on the difference in quality and mitigates the noise induced by eventual differences in interpretation of the absolute scale among the evaluators.
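As a sketch, the normalization is just a per-pair subtraction. The grades below are made up for illustration (they are not the study's data); with this convention, positive differences favour Wikipedia:

```python
# Hypothetical five-point grades for four article pairs.
wikipedia = [4, 5, 3, 2]
britannica = [3, 3, 4, 2]

# Per-pair difference on the nine-point {-4, ..., 4} scale.
diff = [w - b for w, b in zip(wikipedia, britannica)]
print(diff)   # → [1, 2, -1, 0]
```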
To deal with the lack of Geography grades, we found a significant nonlinear correlation between the average difference in the number of words per article (grouped by grade difference) and the difference in grades, shown in the following chart:
Using that relation, we predicted that the global average grade difference of the Geography articles would be 1.7. The function cannot be used to predict individual differences for a particular pair of articles; but in terms of the global result, and despite all the assumptions and weaknesses of this approach, it is one indicator that a future assessment of these articles may lead to a global value of the same order of magnitude as those obtained for the assessed areas.
In global terms, and setting aside the Geography articles, the conclusion was that the average quality of the Wikipedia articles analyzed was superior to that of their Britannica counterparts, and that this difference was statistically significant. The graphic below shows that 90% of the Wikipedia articles were considered to be of equivalent or better quality than their Britannica counterparts.

The global value of the difference among the pairs of articles assessed has an average of 1.4 ± 0.9 (average absolute deviation) and a median of 1.0 ± 0.6 (median absolute deviation). Those uncertainty values were calculated using the uncertainty of the Management article pairs' grades, assessed by multiple evaluators as mentioned above.
In parallel with this study, a survey answered by university professors was used to characterize the universe of evaluators. It showed that only a few of them (25%) used traditional information sources as a first approach when seeking information. Nevertheless, the survey also made clear that reliance on those sources was considerably greater than reliance on information obtained through Wikipedia or other non-traditional sources.
This perception of quality, diametrically opposed to the results of the blind-test assessment, reinforces the impartiality of the evaluating panel. The following graph shows reliance on information obtained through alternative sources compared with classical ones:
However representative the chosen sample is of the universe under study, the results depended on the evaluators' personal opinions and chosen criteria. This means that the reproducibility of this study's conclusions with a different grading panel cannot be guaranteed. Nevertheless, this is not reason enough to reject the results obtained from more than five hundred evaluations.
    One explanation for the success of Wikipedia can be found in the altruism of individuals who come together around themes they share the same passion for, and it is expected they have remarkable knowledge on these. In other words, in the case studied, mass collaboration seems self-organized, leading to an organization of self-assessment and self-correction among peers which produces impressive results, against all odds.

    What if information skills become a mass amateur activity?
