Shared posts

09 Feb 00:16

Imagining the future trumps intelligence…

by Daniel Lemire

Whenever I meet young people, I always stress that their future will be very different from the present.

Anyone who lived through the first Great War (1914-1918) would have expected the Second World War, if it were to happen, to be quite similar in nature. But, in fact, nothing of the sort happened. The first Great War saw soldiers stuck in place in dirty holes for years… the Second World War saw soldiers literally running forward (or away) on the battlefield.

Importantly, the strategies that worked well in 1916 were totally obsolete by 1940. The difference between 1916 and 1940 is obvious to anyone who studies history. It should be equally obvious that the difference between 2016 and 2040 is going to be much larger.

This means that whatever strategies you have today are going to be called into question in the next 25 years in radical ways. If you plan on doing the same things for the next 25 years, you are planning to be a fool.

The Nazis were not smarter than the rest of Europe, but, as far as warfare went, they were able to out-innovate their competitors in a radical manner. The Nazis were able to invade entire countries in hours… not months or weeks, hours…

Progress is at least an order of magnitude faster in 2016 than it was in 1916… So the difference between 2016 and 2040 is probably going to feel more like the difference between 1916 and 1990.

I am never worried about my kids lacking intelligence, but I am often concerned when I see that they can’t imagine the future being different… If you are unable to imagine the future, how are you going to contribute toward inventing it?

06 Jan 19:52

Carbonate Short Course

by Prof. Christopher L. Liner
After teaching version 1.0 of this course last year in China, Carbonate Essentials is being significantly updated and expanded for presentation at SEG/AAPG ICE in Barcelona on Saturday, April 2, 2016, and at the EAGE Annual Meeting in Vienna on Monday, May 30, 2016. Over the holiday break I reviewed and marked up nearly 80 papers and books on carbonate topics and am working the best of this into the course material. A big shout out to Gerard Wieggerink, who directed me to excellent case histories on the Laconia carbonates of Malaysia.

The figure below shows the overall scope of my Carbonate Essentials short course. It is a 1-day course for now, but I'll be adding 2- and 3-day versions by late 2016 or early 2017.


07 Sep 01:59

My Favorite Charts

by Robert Kosara

There are many charts I hate, because they’re badly done, sloppy, meaningless, deceiving, ugly, or for any number of other reasons. But then there are the ones I keep coming back to because they’re just so clear, well-designed, and effective.

All of these are a few years old. Like the fine wine analogy I could insert here, it probably takes a while for a chart to prove itself: it has to come up again and again in conversation, and when you're looking for examples, before you realize how good it is.

Scatterplot

My favorite scatterplot, and perhaps my favorite chart ever, is Why Is Her Paycheck Smaller? by Hannah Fairfield and Graham Roberts. It shows men’s versus women’s weekly earnings, with men on the horizontal axis and women on the vertical. A heavy black diagonal line shows equal wages, three additional lines show where women make 10%, 20%, and 30% less. Any point to the bottom right of the line means that women make less money than men.

Why Is Her Paycheck Smaller?

The diagonal lines are a stroke of genius (pun fully intended). When you see a line in a scatterplot, it’s usually a regression line that models the data; i.e., a line that follows the points. But such a line only helps reinforce the difficulty of judging the differences between the two axes, which is something we’re not good at, and which is not typically something you do in a scatterplot anyway.

But the diagonal line, as simple as it is, makes it not just possible, but effortless. It’s such a simple device and yet so clear and effective. All the points on the line indicate occupations where men and women make the same amount of money. To the top left of the line is the area where women make more money than men, and to the bottom right where women make less.

The additional lines show 10%, 20%, and 30% less for women. If it's already hard to tell whether a point lies on the main diagonal of a scatterplot, it is all but impossible to guess the percentage by which it is off. The additional lines make it possible to guesstimate that number to within a few percent. That is a remarkable level of precision, and it is achieved with three simple lines.
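The device is also easy to reproduce. Here is a minimal matplotlib sketch of the reference-line idea, using made-up earnings numbers rather than the BLS data behind the original chart:

    import numpy as np
    import matplotlib.pyplot as plt

    # Made-up weekly earnings for illustration (dollars): men (x) vs women (y).
    rng = np.random.default_rng(0)
    men = rng.uniform(400, 2000, 60)
    women = men * rng.uniform(0.65, 1.05, 60)

    x = np.linspace(0, 2200, 2)
    plt.plot(x, x, color="black", lw=2, label="equal pay")
    for frac in (0.9, 0.8, 0.7):                  # 10%, 20%, 30% less
        plt.plot(x, frac * x, color="gray", lw=1, ls="--")
    plt.scatter(men, women, s=15)
    plt.xlabel("Men's median weekly earnings ($)")
    plt.ylabel("Women's median weekly earnings ($)")
    plt.legend()
    plt.show()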

There is some interactivity: mousing over points brings up a tooltip that shows the occupation the point represents and how much more one gender makes than the other. Filters in the top left corner let you focus on just a small number of occupations, which include annotations for a few select jobs.

But the key element is the inclusion of the reference lines that help people make sense of the scatterplot and read it with a high level of precision. Simple but effective, and powerful.

Line Chart

My favorite line chart is The Jobless Rate for People Like You by Shan Carter, Amanda Cox, and Kevin Quealy. This chart is somewhat ancient, having been created in Flash and showing unemployment data from January 2007 to September 2009. But its brilliant design and interaction make it timeless.

The Jobless Rate for People Like You

It’s a line chart, but with a twist. The first thing you see is the heavy blue line, showing the overall unemployment rate. But there are more lines in the background – what are those? So you mouse over and they respond: they light up and there’s a tooltip telling you what they represent. Each is the unemployment rate for a subset of the population, defined as the combination of race, gender, age group, and education. How are Hispanic men over 45 with only a high school diploma doing compared to the overall rate? What about black women 15–24? Or white college grads of any age and gender?

Clicking on a line moves the blue line there so it’s easier to see, but the overall rate stays easily visible. The y axis also rescales nicely when the values go above what it can currently display.

In addition, the filters at the top also respond to the selection to show who is selected. Clicking around inside the chart updates them. Hm, so maybe I can use those to explore too? And of course you can, broadening or narrowing your selection, or clicking through different age groups of the same subset of the population, etc.

The Human-Computer Interaction field has a nice term for an indication of more data and interaction: information scent. This is usually used with widgets, which indicate where more information can be found (like the little tick marks on the scrollbar in Chrome when you search within the page). What makes this chart so good is its clever use of information scent to entice viewers to dig deeper, explore, and ask questions.

It also brilliantly and clearly demonstrates the fact that the overall unemployment rate is a rather meaningless number. The actual rate in your demographic is likely to look very different, and the range is huge. This was the inspiration for my What Means Mean piece, though I don’t think that was nearly as clear as this.

The chart shows interesting data, explains a somewhat advanced concept, and invites people to interact with it. This comes in a package that is understated and elegant in its design. Best line chart ever.

Bar Chart

I have already written about the Bikini Chart, and it remains my favorite bar chart. It’s an incredibly effective piece of communication, and it’s all just based on a simple time series. The fact that the bars point down clearly communicates how it is supposed to be read: down is bad, less down is better than more down.

obama job loss chart

Bar charts are not exactly a common medium for artistic expression, but the designers of this chart managed to subtly but clearly get a message across.

Bubble Chart/Animated Scatterplot

Animated scatterplots may not have been invented by Hans Rosling and gapminder, but they certainly were not a common thing until his TED talk in 2007. And while it may seem a bit facetious to point to the only reasonably well-known example of a particular chart type as my favorite one, this is clearly one of my favorite charts, no matter what type.

gapminder

The animation may seem like a bit of a gimmick – and it has been criticized as not being terribly effective – but it works to communicate a number of important pieces of information.

The main piece of information, of course, is change over time. How have different countries changed in terms of their wealth, healthcare, etc.? This is reasonably effective, because there are trends, and many countries follow them. The outliers are reasonably easy to spot, especially when you can turn on trails and replay the animation. It’s not always immediately possible to see everything, but it does invite people to play and explore.

But then, there are the explanations. There is the clever animation that constructs the two-dimensional scatterplot from a one-dimensional distribution. There is the clever drill-down animation that breaks continents down into countries, and countries down into quintiles, to show the enormous range of values covered by each. This is not just a simple data display, but a way to introduce people to statistical concepts and data operations they may have heard of but don’t understand (drill-down), or never have heard of in the first place (quintiles).

Rosling’s video, and the gapminder software, not only introduced millions of people to data they knew nothing about (the video has over 8.5 million views!), it also demonstrated how a compelling story can be told without a single photograph or other image, just with data. That is an incredible achievement that opened our eyes to the possibilities of data visualization for communication.

Appreciating Good Work

It’s easy to find, and make fun of, bad charts. But between all the pie chart bashing and general criticism of bad charts, it is equally important to find the good examples and try to figure out what makes them work so well. Even if it may be more fun to beat up the bad examples, we will ultimately learn more from understanding the design choices and ideas that went into the good ones.

22 Mar 15:38

Friday Rocks

by Maitri

Friday Rocks post image

Another Friday is upon us. Here is a list of items of geo-interest from the past week:

First and foremost, the Agile trio have introduced Modelr, a web-based 2D seismic forward modeling app that outputs a wedge/geometry and the related seismic response, gathers, and stochastic and deterministic AVO and intercept-gradient crossplots, given a rock sandwich, a wavelet and a few other parameters (a rough sketch of the convolutional idea behind this kind of wedge modelling follows the list below). The free version of Modelr allows the user to plot most of the above for a simple wedge using a standard suite of rock types. If you want to introduce your own rocks and use them in different geometries, like a channel (see image above) or dipping wedge, or have stochastic fun, it’s a meager $9 per month. While I have access to RokDoc 2D, I’ve signed up for Modelr because it:

a) requires minimal setup for quick testing of simple scenarios,
b) accepts not only rock data (fluids and anisotropy coming soon) but also modeling scripts that you are willing to add to the lineup (Modelr itself is open source),
c) stresses the capture and visualization of uncertainty starting at the seismic-response level,
d) has the potential to become a 3D forward modeling tool with which to close the loop between the static reservoir model and seismic,
e) works on my iPad and phone, and
f) is an Agile product and I’d be stupid not to help poke at and break cool open tools to make them better.
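Modelr itself is open source, so the real thing is on GitHub; the snippet below is not Modelr code or its API, just a minimal sketch of the convolutional wedge-modelling idea referenced above: build a three-layer impedance wedge, convert it to reflection coefficients, and convolve each trace with a Ricker wavelet. All parameter values here are invented for illustration.

    import numpy as np

    def ricker(f, length=0.128, dt=0.001):
        """Ricker wavelet of peak frequency f (Hz); length and dt are assumed."""
        t = np.arange(-length / 2, length / 2, dt)
        return (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

    # Three-layer wedge: the middle layer thins from 50 samples to 0 across 100 traces.
    n_samples, n_traces = 200, 100
    imp = np.full((n_samples, n_traces), 6.0e6)    # impedance of layers 1 and 3
    top = 80
    for j in range(n_traces):
        thickness = int(round(50 * (n_traces - 1 - j) / (n_traces - 1)))
        imp[top:top + thickness, j] = 4.5e6        # softer wedge layer

    # Reflection coefficients at each sample interface, convolved with the wavelet.
    rc = (imp[1:, :] - imp[:-1, :]) / (imp[1:, :] + imp[:-1, :])
    w = ricker(25.0)
    synthetic = np.apply_along_axis(lambda tr: np.convolve(tr, w, mode='same'), 0, rc)
    print(synthetic.shape)   # (199, 100) synthetic wedge section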

***

You may know that I love and often work to electronic music. Geophysical data and electronic music have a lot in common – they’re both a result of digital signal processing and it’s amazing what can be done to sound in the time and frequency domains. One of the first pieces of electronica I heard was Timesteps by Wendy Carlos, so it is nice that this second list of women who paved the way for electronic music begins with her. Don’t neglect Part 1 which starts with Delia Derbyshire, best known for creating the Doctor Who theme music, and ends with the amazing Laurie Anderson.

Dave Guarino reminds us of the real problem with open data: extract-transform-load. In other words, “taking the disparate data sets living in a variety of locations and formats (SQL Server databases, exports from ancient ERP systems, Excel spreadsheets on people’s desktops) and getting them into a place and shape where they’re actually usable.” While opening up data is critical, the data itself tells us nothing, especially when stored in myriad formats across various systems. So, more than Code and Data for America, “let’s ETL for America.” And then comes the analysis.
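As a toy illustration of what that ETL step looks like in practice (the file, table and column names below are all invented), a few lines of pandas can pull a spreadsheet export into a queryable database:

    import pandas as pd
    import sqlite3

    # Extract: a department's spreadsheet export (hypothetical file and columns).
    df = pd.read_excel("permits_2013.xlsx")

    # Transform: normalise column names, parse dates, drop rows with no usable date.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["issue_date"] = pd.to_datetime(df["issue_date"], errors="coerce")
    df = df.dropna(subset=["issue_date"])

    # Load: into a shared SQLite database anyone can query.
    with sqlite3.connect("city_data.db") as conn:
        df.to_sql("permits", conn, if_exists="replace", index=False)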

Interested in an Earth Science forum on Stack Exchange? Say YES here.

How Academics Learn To Write Badly. I’m not a book-burner, but there are geophysics papers I’ve come across recently that would make great kindling. How purposefully opaque and disrespectful with “their tendency to banish actual people from their writing.” Simply publishing a paper is not sharing.

After St. Patrick’s Day, keep in mind that the water in REAL Guinness is sourced from the Wicklow mountains outside Dublin.

Looking forward to homemade gumbo and Cosmos Episode 3 this weekend.

16 Mar 10:42

Physics Forum At Fermilab Bans Powerpoint

by Unknown Lamer
Hugh Pickens DOT Com writes "Amanda Solliday reports at Symmetry that six months ago, organizers of a biweekly forum on Large Hadron Collider physics at Fermilab banned PowerPoint presentations in favor of old-fashioned, chalkboard-style talks. 'Without slides, the participants go further off-script, with more interaction and curiosity,' says Andrew Askew. 'We wanted to draw out the importance of the audience.' In one recent meeting, physics professor John Paul Chou of Rutgers University presented to a full room holding a single page of handwritten notes and a marker. The talk became more dialogue than monologue as members of the audience, freed from their usual need to follow a series of information-stuffed slides flying by at top speed, managed to interrupt with questions and comments. Elliot Hughes, a Rutgers University doctoral student and a participant in the forum, says the ban on slides has encouraged the physicists to connect with their audience. 'Frequently, in physics, presenters design slides for people who didn't even listen to the talk in the first place,' says Hughes. 'In my experience, the best talks could not possibly be fully understood without the speaker.'"

Read more of this story at Slashdot.








02 Mar 11:55

Oppia – A Learning Journey Platform From Google…

by Tony Hirst

I couldn’t get to sleep last night mulling over thoughts that had surfaced after posting Time to Drop Calculators in Favour of Notebook Programming?. This sort of thing: what goes on when you get someone to add three and four?

Part of the problem is associated with converting the written problem into mathematical notation:

3 + 4

For more complex problems it may require invoking some equations, or mathematical tricks or operations (chain rule, dot product, and so on).

3 + 4

Cast the problem into numbers then try to solve it:

3 + 4 =

That equals sign gets me doing some mental arithmetic. On a calculator, there’s a sequence of button presses, then the equals key gives the answer.

In a notebook, I type:

3 + 4

that is, I write the program in mathematicalese, hit the right sort of return, and get the answer:

7

The mechanics of finding the right hand side by executing the operations on the left hand side are handled for me.

Try this on WolframAlpha: integral of x squared times x minus three from -3 to 4

Or don’t.. do it by hand if you prefer.

I may be able to figure out the maths bit – figure out how to cast my problem into a mathematical statement – but not necessarily have the skill to solve the problem. I can get the method marks but not do the calculation and get the result. I can write the program. But running the programme, dividing 3847835 by 343, calculating the square root of 26,863 using log tables or whatever means, that’s the blocker – that could put me off trying to make use of maths, could put me off learning how to cast a problem into a mathematical form, if all that means is that I can do no more than look at the form as if it were a picture, poem, or any other piece of useless abstraction.

So why don’t we help people see that casting the problem into the mathematical form is the creative bit, the bit the machines can’t do? Because the machines can do the mechanical bit:

wolfram alpha
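A notebook offers the same division of labour. Reading the phrase above as the integral of x² · (x − 3) from −3 to 4 (one interpretation of the wording), SymPy does the mechanical bit:

    import sympy as sp

    x = sp.symbols("x")
    sp.integrate(x**2 * (x - 3), (x, -3, 4))   # evaluates to -189/4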

Maybe this is the approach that the folk over at Computer Based Math are thinking about (h/t Simon Knight/@sjgknight for the link), or maybe it isn’t… But I get the feeling I need to look over what they’re up to… I also note Conrad Wolfram is behind it; we kept crossing paths a few years ago… I was taken by his passion, and ideas, about how we should be helping folk see that maths can be useful and how you can use it, but there was always the commercial blocker: the need for Mathematica licenses, and the TM, as in Computer-Based Math™.

Then tonight, another example of interactivity, wired in to a new “learning journey” platform that again @sjgknight informs me is released out of Google 20% time…: Oppia (Oppia: a tool for interactive learning).

Here’s an example….

oppia project euler 1

The radio button choice determines where we go next on the learning journey:

oppia euler2

Nice – interactive coding environment… 3 + 4 …

What happens if I make a mistake?

oppia euler 3

History of what I did wrong, inline, which is richer than a normal notebook style, where my repeated attempts would overwrite previous ones…

Depending how many common incorrect or systematic errors we can identify, we may be able to add richer diagnostic next step pathways…

..but then, eventually, success:

oppia euler 4

The platform is designed as a social one where users can create their own learning journeys and collaborate on their development with others. Licensing is mandated as “CC-BY-SA 4.0 with a waiver of the attribution requirement”. The code for the platform is also open.

The learning journey model is richer and potentially far more complex in graph structure terms than I remember the attempts developed for the [redacted] SocialLearn platform, but the vision appears similar. SocialLearn was also more heavily geared to narrative textual elements in the exposition; by contrast, the current editing tools in Oppia make you feel as if using too much text is not a Good Thing.

So – how are these put together… The blurb suggests it should be easy, but Google folk are clever folk (and I’m not sure how successful they’ve been at getting their previous geek-style learning platform attempts into education)… here’s an example learning journey – it’s a state machine:

Example learning design in oppia
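This is not Oppia's actual data model, just a toy sketch of what "learning journey as state machine" means: each state has some content, an interaction, and rules mapping the learner's answer to the next state.

    # Toy learning-journey state machine (illustrative only, not Oppia's format).
    journey = {
        "intro": {
            "content": "What is 3 + 4?",
            "rules": [(lambda ans: ans == 7, "well_done"),
                      (lambda ans: True, "try_again")],   # default rule
        },
        "try_again": {
            "content": "Not quite. Remember you can just type 3 + 4 and run it.",
            "rules": [(lambda ans: ans == 7, "well_done"),
                      (lambda ans: True, "try_again")],
        },
        "well_done": {"content": "Correct!", "rules": []},
    }

    def step(state, answer):
        """Return the next state for a given answer."""
        for matches, target in journey[state]["rules"]:
            if matches(answer):
                return target
        return state

    print(step("intro", 8))   # -> try_again
    print(step("intro", 7))   # -> well_done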

Each block can be edited:

oppia state editor

When creating new blocks, the first thing you need is some content:

oppia - content - interaction

Then some interaction. For the interactions, a range of input types you might expect:

oppia interaction inputs

and some you might not. For example, these are the interactive/executable coding style blocks you can use:

oppia progr languages dialogue

There’s also a map input, though I’m not sure what dialogic information you can get from it when you use it?

After the interaction definition, you can define a set of rules that determine where the next step takes you, depending on the input received.

oppia state rules

The rule definitions allow you to trap on the answer provided by the interaction dialogue, optionally provide some feedback, and then identify the next step.

oppia - rules

The rule branches are determined by the interaction type. For radio buttons, rules are triggered on the selected answer. For text inputs, simple string distance measures:

oppia text rules

For numeric inputs, various bounds:

oppia numeric input

For the map, what looks like a point within a particular distance of a target point?

oppia map rule

The choice of programming languages currently available in the code interaction type kinda sets the tone about who might play with this…but then, maybe as I suggested to colleague Ray Corrigan yesterday, “I don’t think we’ve started to consider consequences of 2nd half of the chessboard programming languages in/for edu yet?”

All in all – I don’t know… getting the design right is what’ll make for a successful learning journey, and that’s not something you can necessarily do quickly. The interface is, as with many Google interfaces, not the friendliest I’ve seen (function over form, but bad form can sometimes get in the way of the function…).

I was interested to see they’re calling the learning journeys explorations. The Digital Worlds course I ran some time ago used that word too, in the guise of Topic Explorations, but they were a little more open ended, and used a question mechanic used to guide reading within a set of suggested resources.

Anyway, one to watch, perhaps, erm, maybe… No badges as yet, but that would be candy to offer at the end of a course, as well as a way of splashing your own brand via the badges. But before that, a state machine design mountain to climb…


15 Feb 17:16

Doodling With IPython Notebooks for Education

by Tony Hirst

One of the big issues when it comes to working with data is that things are often done most easily using small fragments of code. Something I’ve been playing with recently is IPython notebooks, which provide an interactive, browser-based interface to a Python shell.

ipythonnotebook1

The notebooks are built around the idea of cells of various types, including header and markdown/HTML interpreted cells and executable code cells. Here are a few immediate thoughts on how we might start to use these notebooks to support distance and self-paced education:

ipynb_1

The output from executing a code cell is displayed in the lower part of the cell when the cell is run. Code execution causes state changes in the underlying IPython session, the current state of which is accessible to all cells.

Graphical outputs, such as chart objects generated using libraries such as matplotlib, can also be displayed inline (not shown).
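For example, here is a minimal two-cell sketch (my own illustration, not taken from the screenshots in this post): a variable defined in one code cell persists in the kernel's state and can be plotted inline from a later cell.

    # Cell 1: define some data; the result persists in the IPython kernel.
    import numpy as np
    samples = np.random.normal(loc=10, scale=2, size=1000)
    samples.mean()

    # Cell 2: plot it inline (in a notebook, run %matplotlib inline first).
    import matplotlib.pyplot as plt
    plt.hist(samples, bins=30)
    plt.show()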

There are several ways we might include a call to action for students:

* invitations to run code cells;
* invitations to edit and run code cells;
* invitations to enter commentary or reflective notes.

ipynb_4

We can also explore ways of providing “revealed” content. One way is to make use of the ability to execute code, for example by importing a crib class that we can call…

coursecribber

Here’s how we might prompt its use:

ipynb5
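I don't have the original coursecribber code to hand; a minimal sketch of the sort of crib class I mean (module name, class name and answer text all invented) might look like this:

    # cribber.py -- a sketch of a minimal 'crib' module (hypothetical names).
    class Cribber(object):
        """Holds worked answers keyed by exercise ID; reveals them on demand."""
        def __init__(self):
            self._answers = {
                "ex1": "3 + 4 evaluates to 7; the kernel does the arithmetic.",
            }

        def answer(self, exercise_id):
            return self._answers.get(exercise_id, "No crib available for this exercise.")

    # In a notebook cell, the student would then run something like:
    # from cribber import Cribber
    # crib = Cribber()
    # crib.answer("ex1")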

That mechanic is quite clunky and doesn’t have very good production values. It would be far cleaner if we could integrate reveals at the higher presentation level. Since markdown cells allow the inclusion of styled HTML and javascript, we could think about traditional e-learning reveals. For example, we might include an HTML button that can be clicked to reveal an answer:

ipynb_3
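One way to sketch that reveal is to emit the button from a code cell with IPython's display machinery (the same markup could equally sit directly in a markdown cell; the button and answer text here are just placeholders):

    from IPython.display import HTML

    HTML("""
    <button onclick="document.getElementById('ans1').style.display='block'">
      Reveal answer
    </button>
    <div id="ans1" style="display:none">
      3 + 4 evaluates to 7.
    </div>
    """)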

All code cells in a notebook can be executed one after another, or a cell at a time (You can also execute all cells above or below the cursor). This makes for easy testing of the notebook and self-paced working through it.

A helper application, nbconvert, allows you to generate alternative versions of a notebook, whether as HTML, Python code, LaTeX, or HTML slides (reveal.js). There is also a notebook viewer available that displays an HTML view of a specified ipynb notebook. (There is also an online version: IPython notebook viewer.)

Another advantage of the browser based approach is that the IPython shell can run in a virtual machine (for example, Cursory Thoughts on Virtual Machines in Distance Education Courses) and expose the notebook as a service that can be accessed via a browser on a host machine:

ipyrhonNotebook2

A simple configuration tweak allows notebook files and data files to occupy a folder that is shared across both the host machine and the guest virtual machine.
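The sort of tweak I mean lives in the notebook server's profile config (ipython_notebook_config.py for the IPython-era server). The values below, particularly the shared-folder path, are assumptions for illustration rather than a recommended setup:

    # ipython_notebook_config.py (illustrative values)
    c = get_config()

    c.NotebookApp.ip = '0.0.0.0'            # listen on all interfaces, not just localhost
    c.NotebookApp.port = 8888               # forward this port from guest VM to host
    c.NotebookApp.open_browser = False      # headless VM, no browser to open
    c.NotebookApp.notebook_dir = '/vagrant/notebooks'   # assumed host/guest shared folder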

It is also possible to run remote notebook servers that can be accessed via the web. It would be nice if institutional IT services could support the sort of Agile EdTech that Jim Groom has been writing about recently, which would allow course teams and developers to experiment with this sort of technology quickly and easily, but in the meantime, we can still do it ourselves…

For example, I am currently running a couple of virtual machines whose configurations I have “borrowed” from elsewhere – Matthew Russell’s Mining the Social Web 2nd Edition VM, and @datamineruk’s pandas infinite-intern machine.

I’ve only really just started exploring what may be possible with IPython notebooks, and how we might be able to use them as part of an e-learning course offering. If you’ve seen any particularly good (or bad!) examples of IPython notebooks being used in an educational context, please let me know via the comments…:-)


02 Feb 02:10

What Makes Code Hard to Understand?

[arXiv paper] [eyeCode data set]

What factors impact the comprehensibility of code? In this blog post, I'll describe an experiment I did with my advisors Andrew Lumsdaine (Computer Science) and Rob Goldstone (Cognitive Science) at Indiana University.

We asked 162 programmers to predict the output of 10 small Python programs. Each program had 2 or 3 different versions, and we used subtle differences between program versions to demonstrate that seemingly insignificant notational changes can have big effects on correctness and response times. I'll go over some of the results here, hopefully to whet your appetite for the paper.
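To give a flavour of what a "seemingly insignificant notational change" can look like (this is my own illustrative pair, not one of the ten programs from the experiment): both snippets below print the same value, but the spacing in the second visually groups the addition and invites a wrong prediction.

    # Version A: spacing matches Python's actual precedence.
    a = 1 + 2*3
    print(a)        # 7

    # Version B: same expression, but the spacing visually groups (1+2).
    b = 1+2 * 3
    print(b)        # also 7 -- precedence ignores the whitespace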

Read more… (7 min remaining to read)

01 Feb 22:23

Software Carpentry: What I've Learned

My first exposure to Software Carpentry was during a bootcamp at Indiana University in March of 2012. I volunteered to be a helper, and found myself really enjoying helping motivated students learn how to incorporate the material into their own research. I've since volunteered as an instructor at a handful of other bootcamps (including one at IU again this past summer). In this post, I'll briefly discuss some "lessons learned" from Software Carpentry.

Read more… (3 min remaining to read)

01 Feb 14:07

Clarifying Coding

by Atul

With the upcoming Hour of Code, there’s been a lot of confusion as to the definition of what “coding” is and why it’s useful, and I thought I’d contribute my thoughts.

Rather than talking about “coding”, I prefer to think of “communicating with computers”. Coding, depending on its definition, is one of many ways that a human can communicate with a computer; but I feel that the word “communicating” is more powerful than “coding” because it gets to the heart of why we use computers in the first place.

We communicate with computers for many different reasons: to express ourselves, to create solutions to problems, to reuse solutions that others have created. At a minimum, this requires basic explorational literacy: knowing how to use a mouse and keyboard, using them to navigate an operating system and the Web, and so forth. Nouns in this language of interaction include terms like application, browser tab and URL; verbs include click, search, and paste.

These sorts of activities aren’t purely consumptive: we express ourselves every time we write a Facebook post, use a word processor, or take a photo and upload it to Instagram. Just because someone’s literacies are limited to this baseline doesn’t mean they can’t do incredibly creative things with them.

And yet communicating with computers at this level may still prevent us from doing what we want. Many of our nouns, like application, are difficult to create or modify using the baseline literacies alone. Sometimes we need to learn the more advanced skills that were used to create the kinds of things that we want to build or modify.

This is usually how coders learn how to code: they see the digital world around them and ask, “how was that made?” Repeatedly asking this question of everything one sees eventually leads to something one might call “coding”.

This is, however, a situation where the journey may be more important than the destination: taking something you really care about and asking how it’s made–or conversely, taking something imaginary you’d like to build and asking how it might be built–is both more useful and edifying than learning “coding” in the abstract. Indeed, learning “coding” without a context could easily make it the next Algebra II, which is a terrifying prospect.

So, my recommendation: don’t embark on a journey to “learn to code”. Just ask “how was that made?” of things that interest you, and ask “how might one build that?” of things you’d like to create. You may or may not end up learning how to code; you might actually end up learning how to knit. Or cook. Or use Popcorn Maker. Regardless of where your interests lead you, you’ll have a better understanding of the world around you, and you’ll be better able to express yourself in ways that matter.

01 Feb 13:55

Figuring Out Good Figures

by Richard Wheeler
The main point of doing scientific research is to share the things you discover. After all, what is the point of discovering something if no one knows about it to work with or learn from it? Science is typically shared in research papers, but the actual data are normally just in the figures (the graphs and images), while the text describes what they mean (as I have talked about before). Sharing scientific data is important; therefore good design of figures is also important. So how do you make a good figure?

Each term I make a research comic for the Oxford University Biochemical Society magazine called Phenotype. This one is all about figuring out figures. How do you make a good one, and how can you avoid getting tricked by bad ones?



Check out the comic here; the whole issue is available to download for free from here.

Software used:
Inkscape: Page layout and drawing.
01 Feb 13:50

A real-life geological map, no colouring in required

by Chris Rowan

A post by Chris Rowan

There’s much more to geological mapping than colouring in, but a big part of the process of reconstructing the geological history of an area is spending a lot of time examining the exposed rocks to work out how to distinguish the different units in the field, and then marking their distribution on your map using easily distinguishable colours that enable you to more easily see the regional patterns that reveal sandstones and limestones; stratigraphy and structure; and faults and folds. It can be a lot of hard, meticulous work.

But then, there are places where the Earth just decides to do all that work for you.

Satellite view of the foothills of the Tien Shan mountains, northern Tarim Basin, Xinjiang province. Click to enlarge. Source: NASA Earth Observatory

This amazing Landsat 8 image was an Image of the Day at the NASA Earth Observatory last week, and it’s pretty much a geological map all on its own. A convenient combination of active tectonics that leads to erosion and exposure of fresh surfaces (the Tien Shan mountains on the northern edge of this image are a distant product of the ongoing collision of India and Eurasia), virtually no obscuring vegetation due to the arid climate, and some extremely distinctive rock units formed in very different environmental conditions 400 to 500 million years ago allows us to easily identify the different rock units. We can easily trace the horizons of tan Cambrian and Ordovician limestones, green Silurian marine sandstones and red Devonian terrestrial sandstones as they get contorted by a complicated set of faults and folds.

Just like a man-made geological map, the arrangement of colourful strata contains a lot of information about the structure and geological evolution in this area – information that can be retrieved. The easiest things to spot are the places where the linear ridges jump sideways, offset by strike-slip faults.

Strike-slip faults cutting through and displacing the multiple ridges composed of multicoloured, dipping Paleozoic rocks.

But what of the ridges themselves? There are a number, all running parallel to each other in the same east-west direction, all with the same tan-green-red striping that suggests that we are seeing the same units geologically cut-and-pasted across the landscape. This repetition of the same layers of rock over and over tells us that faults and folds are afoot. Heading south from the top of the image, on the first ridge you would encounter you would cross cream, then green, then red, going up section from the oldest unit to the youngest. But then on the next ridge the order is reversed: continuing south you would cross red, then green, then cream, going down-section from youngest to oldest. This apparent reversal in the flow of geological time tells us that we have crossed a trough-like fold – a syncline. The edges of the trough have been tilted up and eroded, exposing the older rocks, whereas the younger rocks are preserved in the less tumultuous middle. Thus the sequence is mirrored on either side of the east-west trending fold axis, which indicates this fold was formed by north-south compression.

The change from moving up section to moving down section as you traverse the two northern ridges indicates the presence of a syncline – a trough or bowl shaped fold.

An oblique view clearly shows beds dipping towards the centre of the image, which marks the axis of a syncline.

But what of the third ridge, to the south? Here, the sequence is repeated but not reversed: you would again go down section, crossing red, then green, then cream rock units. Not a simple traverse over another fold axis, then: this repetition is most likely due to faulting. Even when tectonic compression starts off by forming a series of gentle folds, continued convergence and shortening will make the rocks continue to crumple up: eventually the rock units refuse to bend any more and break up into faulted segments that can then get stacked more closely (imbricated) together.

Interpreted compressional (thrust or reverse fault) between the southernmost two ridges.

Originally flat lying sediments form a syncline in the northern part of the image, and have been broken apart and stacked by thrust faulting in the south.

For proof of this, we just need to go a little further east, where we get three ridges apparently meeting at a central point.

It’s a bit of a geometrical headache, until you realise that what we really have is a single ridge running across the top of the image, and the ridge entering from the bottom left simply terminates against it – clear evidence of a fault.

Interpretation of the fault that has caused two ridges to ‘merge’.

It’s even more obvious in oblique view.

An oblique view clearly shows how the ridge on the right terminates against the ridge on the left – a clear sign of a faulted contact.

I really could spend all day looking at this – there’s lots more to see. Grab the KML file from the Earth Observatory page and have a look for yourselves.

05 Oct 23:43

Wish list of digital ‘tools’ for stratigraphic analysis

by Brian Romans

Matt Hall and Evan Bianco of Agile Geoscience recently ran a two-day ‘hackathon’ where participants got together in a room and created some digital tools for working with geophysical data of subsurface geology. It’s a great idea — get some creative and passionate geoscientists and programmers (many participants are both of those) into the same room for a couple of days and build something. And they did.

The success of the Geophysics Hackathon has initiated discussion among the more stratigraphically oriented subsurface interpreters to have a similar event. The when, where, how, and other logistical matters need to be figured out. It takes a critical mass of interested people willing to do the leg work to make stuff like this happen, so we’ll see what happens.

But, for this post, I figured I would brainstorm some digital tools (apps?) that I would love to have for the work I do. Some may be easy to deal with, some may not … it’s a wish list. My focus is on outcrop-derived data because outcrop-based stratigraphy is a focus of my research group. But, these outcrop data are used directly or indirectly to inform how subsurface interpreters make predictions and/or characterize their data.

  • Tools for converting outcrop measured sections (and core descriptions) into useable data. To me, this is the primary need. I could envision an entire event focused on this. As outcrop stratigraphers, we typically draft up our 1-D measured sections from scanned images of our field notebooks into a figure. There is value in simply looking at a detailed section of sedimentological information in graphic form. This is why we spend time to draft these illustrations. However, we also need to extract quantitative information from these illustrations (an image file of some kind) — information including bed thickness stats, facies proportions, grain-size ‘logs’, package/cycle thickness stats, and more. The key is that it has to be flexible and able to work with an image file. That is, the data extraction workflow cannot dictate how the original data are collected in the field or how the section is drafted. Sections are drafted in different ways, at different resolutions, and are dependent on the exposure quality, purpose of study, etc. The post-processing must be able to deal with a huge variety.
  • Tools for processing outcrop/core data (above) and generating metrics/statistics. Whether integrated into a single tool or as part of a multi-step workflow, we then need some tools that automate the processing of the outcrop/core data to generate the metrics we are interested in. For example, if I have 10 sections and I want to calculate the percentages of various facies (e.g., as different colors in an image or based on a grain-size log), it would be amazing to have a script to automate that (see the sketch after this list for one way that could look).
  • A simple and accurate ‘ruler’ app for subsurface visualization software. I find myself constantly wanting to do quick (but not dirty, I want it to be accurate) measurements of dimensions of stratigraphy of interest. Some software packages have built-in rulers to make measurements and some of these are okay, but it’s usually clunky and I end up unsatisfied. I want to know how wide (in meters) and how thick (in TWT milliseconds) something is in seismic-reflection data, and I want it in a matter of seconds. Launch the app, a couple of clicks, and I’ve got it. I can move on and don’t have to mess around with any display settings.  And if I want to do the same on a different subsurface visualization software, I can use the same app. I have no idea if this is even possible across multiple proprietary software packages, but this is what I want.
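As a sketch of the facies-percentage item above (everything here is hypothetical: the colour key, the file name, and the assumption that facies are drafted as flat RGB fills), a short script could simply tally pixel colours in a drafted section image:

    import numpy as np
    from PIL import Image

    # Hypothetical colour key: RGB used when drafting the section -> facies name.
    FACIES_COLOURS = {
        (255, 255, 0): "sandstone",
        (150, 75, 0): "mudstone",
        (128, 128, 128): "conglomerate",
    }

    def facies_proportions(png_path):
        """Percentage of each keyed colour in a drafted measured-section image."""
        rgb = np.asarray(Image.open(png_path).convert("RGB")).reshape(-1, 3)
        counts = {}
        for colour, name in FACIES_COLOURS.items():
            counts[name] = int(np.all(rgb == colour, axis=1).sum())
        total = sum(counts.values())
        return {name: 100.0 * n / total for name, n in counts.items() if total}

    # print(facies_proportions("section_01.png"))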

Please add comments, write your own post, start a wiki page, etc. to add to this.



01 Sep 13:08

UArk Physics/Geology double major... Approved!

by Prof. Christopher L. Liner
At spring 2013 commencement for the University of Arkansas Fulbright College of Arts and Sciences, I happened to sit next to a couple of physics professors.  Lucky me, because I had been thinking about cooking up a physics option for geology undergraduate students.  Dr. Gay Stewart was very encouraging.  We set up a coffee meeting, then had another, and another.  She quickly locked into the idea of a double major, not just a minor.  After we had the rough outline, we brought in Dr. Matt Covington, a new Geosciences faculty member and a physicist by training (he did his PhD in Astrophysics).  By late summer it was ready to present to Geosciences department chairman Ralph Davis, who made some suggestions but also strongly supported the plan.  Last Monday the plan was emailed to the entire Geosciences faculty.  We got important feedback from Drs. Guccione and Boss, and their suggestions were quickly incorporated into the plan.

Wednesday of this week we had the first faculty meeting of the academic year and the Physics/Geology Double Major was top of the agenda. The University of Arkansas is never likely to have a geophysics undergraduate degree, but that is fine.  It can be persuasively argued that geophysics is best treated as a graduate degree, and that undergraduate years should be spent on more fundamental studies.  This is particularly the case with petroleum geophysics that is aimed at a terminal MS degree. I explained to the assembled faculty that the double degree would take the same time as a single degree (8 semesters), would have the same number of total hours as a single degree (120), all by judicious use of elective hours in both degree programs.  And it would be a solid degree on both sides, not something watered down.  Physics all the way through quantum mechanics and senior project; geology all the way through field camp.  The student who completed such a program would be attractive to the top geophysics or quantitative geology programs in the country.  And it can all be accomplished with existing faculty and resources.

The thought here is to catch a few early physics majors who want a specialization that benefits from physics training.  It might be possible for a geology major to get into this double major, but it is unlikely, since few geology students have the quantitative skills necessary for the advanced mathematics and physics required.

Anyway, the vote was taken: Unanimous yes in support of the new Physics/Geology double degree program.  Physics had already given approval thanks to the tireless efforts of Gay Stewart.  One degree, two majors.  Still much work to get it in the catalog and recruit a few good students.  So it begins.

The 27 Aug 2013 form of the Physics/Geology double degree.  Now we just need a few strong students.