Shared posts

17 Dec 03:26

2014 Data Science Salary Survey [R + Python?]

by Patrick Durusau

2014 Data Science Salary Survey: Tools, Trends, What Pays (and What Doesn’t) for Data Professionals by John King and Roger Magoulas.

From the webpage:

For the second year, O’Reilly Media conducted an anonymous survey to expose the tools successful data analysts and engineers use, and how those tool choices might relate to their salary. We heard from over 800 respondents who work in and around the data space, and from a variety of industries across 53 countries and 41 U.S. states.

Findings from the survey include:

  • Average number of tools and median income for all respondents
  • Distribution of responses by a variety of factors, including age, location, industry, position, and cloud computing
  • Detailed analysis of tool use, including tool clusters
  • Correlation of tool usage and salary

Gain insight from these potentially career-changing findings—download this free report to learn the details, and plug your own variables into the regression model to find out where you fit into the data space.

The best take on this publication can be found in O’Reilly Data Scientist Salary and Tools Survey, November 2014 by David Smith where he notes:

The big surprise for me was the low ranking of NumPy and SciPy, two toolkits that are essential for doing statistical analysis with Python. In this survey and others, Python and R are often similarly ranked for data science applications, but this result suggests that Python is used about 90% for data science tasks other than statistical analysis and predictive analytics (my guess: mainly data munging). From these survey results, it seems that much of the “deep data science” is done by R.

My initial observation is that “over 800 respondents” is too small a sample to draw useful conclusions about the tools data scientists use. Especially when the #1 tool listed in that survey was Windows.

Why a majority of “data scientists” would confuse an OS with data processing tools like SQL or Excel, both of which ranked higher than Python or R, is unknown, but it casts further doubt on the data sample.

My suggestion would be to have a primary tool or language (other than an OS) whether it is R or Python but to be familiar with the strengths of other approaches. Religious bigotry about approaches is a poor substitute for useful results.

08 Dec 15:52

The Caltech-JPL Summer School on Big Data Analytics

by Patrick Durusau

The Caltech-JPL Summer School on Big Data Analytics

From the webpage:

This is not a class as it is commonly understood; it is the set of materials from a summer school offered by Caltech and JPL, in the sense used by most scientists: an intensive period of learning of some advanced topics, not on an introductory level.

The school will cover a variety of topics, with a focus on practical computing applications in research: the skills needed for a computational (“big data”) science, not computer science. The specific focus will be on applications in astrophysics, earth science (e.g., climate science) and other areas of space science, but with an emphasis on the general tools, methods, and skills that would apply across other domains as well. It is aimed at an audience of practicing researchers who already have a strong background in computation and data analysis. The lecturers include computational science and technology experts from Caltech and JPL.

Students can evaluate their own progress, but there will be no tests or exams, and no formal credit or certificates will be offered.

Syllabus:

  1. Introduction to the school. Software architectures. Introduction to Machine Learning.
  2. Best programming practices. Information retrieval.
  3. Introduction to R. Markov Chain Monte Carlo.
  4. Statistical resampling and inference.
  5. Databases.
  6. Data visualization.
  7. Clustering and classification.
  8. Decision trees and random forests.
  9. Dimensionality reduction. Closing remarks.

If this sounds challenging, imagine doing it in nine (9) days!

The real advantage of intensive courses is that you are not trying to juggle work, study, eldercare and other duties while taking the course. The opportunity to focus on one task, and that task alone, may account for much of their benefit.

I first saw this in a tweet by Gregory Piatetsky.

08 Dec 07:43

Types and Functions

by Patrick Durusau

Types and Functions by Bartosz Milewski.

From the post:

The category of types and functions plays an important role in programming, so let’s talk about what types are and why we need them.

Who Needs Types?

There seems to be some controversy about the advantages of static vs. dynamic and strong vs. weak typing. Let me illustrate these choices with a thought experiment. Imagine millions of monkeys at computer keyboards happily hitting random keys, producing programs, compiling, and running them.

monkey with keyboard

With machine language, any combination of bytes produced by monkeys would be accepted and run. But with higher level languages, we do appreciate the fact that a compiler is able to detect lexical and grammatical errors. Lots of monkeys will go without bananas, but the remaining programs will have a better chance of being useful. Type checking provides yet another barrier against nonsensical programs. Moreover, whereas in a dynamically typed language, type mismatches would be discovered at runtime, in strongly typed statically checked languages type mismatches are discovered at compile time, eliminating lots of incorrect programs before they have a chance to run.

So the question is, do we want to make monkeys happy, or do we want to produce correct programs?

That is a sample of the direct, literate prose that awaits you if you follow this series on category theory.
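
Bartosz’s examples are in C++ and Haskell; as a rough Python sketch of the compile-time versus run-time distinction he describes (not taken from his post), an annotated function lets a static checker such as mypy reject a bad call before the program runs, while an unannotated one only fails once it is running:

    from typing import Any

    def add_numbers(x: int, y: int) -> int:
        # The annotations let a static checker such as mypy reject a bad call
        # before the program ever runs: the "type checking" barrier described above.
        return x + y

    def add_anything(x: Any, y: Any) -> Any:
        # The dynamically typed version: nothing rejects a nonsensical call until run time.
        return x + y

    print(add_numbers(2, 3))       # 5
    try:
        add_anything(2, "three")   # passes the checker, fails at run time
    except TypeError as err:
        print("caught at run time:", err)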

08 Dec 06:25

Functional and Reactive Domain Modeling

by Patrick Durusau

Functional and Reactive Domain Modeling by Debasish Ghosh.

From the post:

Manning has launched the MEAP of my upcoming book on Domain Modeling.

functional-reactive programming cover

The first time I was formally introduced to the topic was way back when I played around with Eric Evans’ awesome text on the subject of Domain Driven Design. In the book he discusses various object lifecycle patterns like the Factory, Aggregate or Repository that help separation of concerns when you are implementing the various interactions between the elements of the domain model. Entities are artifacts with identities, value objects are pure values while services model the coarse level use cases of the model components.

In Functional and Reactive Domain Modeling I look at the problem with a different lens. The primary focus of the book is to encourage building domain models using the principles of functional programming. It’s a completely orthogonal approach to OO and focuses on verbs first (as opposed to nouns first in OO), algebra first (as opposed to objects in OO), function composition first (as opposed to object composition in OO), lightweight objects as ADTs (instead of rich class models).

The book starts with the basics of functional programming principles and discusses the virtues of purity and the advantages of keeping side-effects decoupled from the core business logic. The book uses Scala as the programming language and does an extensive discussion on why the OO and functional features of Scala are a perfect fit for modelling complex domains. Chapter 3 starts the core subject of functional domain modeling with real world examples illustrating how we can make good use of patterns like smart constructors, monads and monoids in implementing your domain model. The main virtue that these patterns bring to your model is genericity – they help you extract generic algebra from domain specific logic into parametric functions which are far more reusable and less error prone. Chapter 4 focuses on advanced usages like typeclass based design and patterns like monad transformers, kleislis and other forms of compositional idioms of functional programming. One of the primary focuses of the book is an emphasis on algebraic API design and on developing an appreciation of the ability to reason about your model.

An easy choice for your holiday wish list! Being a MEAP, it will continue to be “new” for quite some time.
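
The book’s examples are in Scala. Purely as a hedged Python analogue (the Account type and make_account function below are invented for illustration, not taken from the book), a smart constructor funnels all construction through a function that validates its inputs and refuses to build an invalid domain object:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class Account:
        number: str
        balance: float

    def make_account(number: str, balance: float) -> Optional[Account]:
        # Smart constructor: the one sanctioned way to build an Account.
        # Invalid input yields None rather than an invalid domain object.
        if not number or balance < 0:
            return None
        return Account(number, balance)

    print(make_account("A-123", 100.0))   # Account(number='A-123', balance=100.0)
    print(make_account("", -5.0))         # None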

Enjoy!

08 Dec 06:25

Category: The Essence of Composition

by Patrick Durusau

Category: The Essence of Composition by Bartosz Milewski.

From the post:

I was overwhelmed by the positive response to my previous post, the Preface to Category Theory for Programmers. At the same time, it scared the heck out of me because I realized what high expectations people were placing in me. I’m afraid that no matter what I’ll write, a lot of readers will be disappointed. Some readers would like the book to be more practical, others more abstract. Some hate C++ and would like all examples in Haskell, others hate Haskell and demand examples in Java. And I know that the pace of exposition will be too slow for some and too fast for others. This will not be the perfect book. It will be a compromise. All I can hope is that I’ll be able to share some of my aha! moments with my readers. Let’s start with the basics.

Bartosz’s post includes pigs, examples in C and Haskell, and ends with:

Challenges

  1. Implement, as best as you can, the identity function in your favorite language (or the second favorite, if your favorite language happens to be Haskell).
  2. Implement the composition function in your favorite language. It takes two functions as arguments and returns a function that is their composition.
  3. Write a program that tries to test that your composition function respects identity.
  4. Is the world-wide web a category in any sense? Are links morphisms?
  5. Is Facebook a category, with people as objects and friendships as morphisms?
  6. When is a directed graph a category?

My suggestion is that you follow Bartosz’s posts and after mastering them, try less well explained treatments of category theory.
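
Bartosz leaves the choice of language open. One possible take on the first three challenges, sketched here in Python, might look like this:

    import random

    def identity(x):
        # Challenge 1: the identity function.
        return x

    def compose(f, g):
        # Challenge 2: composition; applies g first, then f.
        return lambda x: f(g(x))

    def square(n):
        return n * n

    # Challenge 3: spot-check that composition respects identity.
    for _ in range(1000):
        n = random.randint(-100, 100)
        assert compose(square, identity)(n) == square(n)
        assert compose(identity, square)(n) == square(n)
    print("composition respects identity on the sampled inputs")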

08 Dec 06:25

Category Theory for Programmers: The Preface

by Patrick Durusau

Category Theory for Programmers: The Preface by Bartosz Milewski.

From the post:

For some time now I’ve been floating the idea of writing a book about category theory that would be targeted at programmers. Mind you, not computer scientists but programmers — engineers rather than scientists. I know this sounds crazy and I am properly scared. I can’t deny that there is a huge gap between science and engineering because I have worked on both sides of the divide. But I’ve always felt a very strong compulsion to explain things. I have tremendous admiration for Richard Feynman who was the master of simple explanations. I know I’m no Feynman, but I will try my best. I’m starting by publishing this preface — which is supposed to motivate the reader to learn category theory — in hopes of starting a discussion and soliciting feedback.

I will attempt, in the space of a few paragraphs, to convince you that this book is written for you, and whatever objections you might have to learning one of the most abstracts branches of mathematics in your “copious spare time” are totally unfounded.

My optimism is based on several observations. First, category theory is a treasure trove of extremely useful programming ideas. Haskell programmers have been tapping this resource for a long time, and the ideas are slowly percolating into other languages, but this process is too slow. We need to speed it up.

Second, there are many different kinds of math, and they appeal to different audiences. You might be allergic to calculus or algebra, but it doesn’t mean you won’t enjoy category theory. I would go as far as to argue that category theory is the kind of math that is particularly well suited for the minds of programmers. That’s because category theory — rather than dealing with particulars — deals with structure. It deals with the kind of structure that makes programs composable.

Composition is at the very root of category theory — it’s part of the definition of the category itself. And I will argue strongly that composition is the essence of programming. We’ve been composing things forever, long before some great engineer came up with the idea of a subroutine. Some time ago the principles of structural programming revolutionized programming because they made blocks of code composable. Then came object oriented programming, which is all about composing objects. Functional programming is not only about composing functions and algebraic data structures — it makes concurrency composable — something that’s virtually impossible with other programming paradigms.

See the rest of the preface and the promise to provide examples in code for most major concepts.

Are you ready for discussion and feedback?

23 Nov 05:08

Compojure Address Book

by Patrick Durusau

Jarrod C. Taylor writes in part 1:

Introduction

Clojure is a great language that is continuing to improve itself and expand its user base year over year. The Clojure ecosystem has many great libraries focused on being highly composable. This composability allows developers to easily build impressive applications from seemingly simple parts. Once you have a solid understanding of how Clojure libraries fit together, integration between them can become very intuitive. However, if you have not reached this level of understanding, knowing how all of the parts fit together can be daunting. Fear not, this series will walk you through start to finish, building a tested compojure web app backed by a Postgres Database.

Where We Are Going

The project we will build and test over the course of this blog series is an address book application. We will build the app using ring and Compojure and persist the data in a Postgres Database. The app will be a traditional client server app with no JavaScript. Here is a teaser of the final product.

Not that I need another address book but as an exercise in onboarding, this rocks!

Compojure Address Book Part 1 by Jarrod C. Taylor (excerpted above)

Compojure Address Book Part 2

Recap and Restructure

So far we have modified the default Compojure template to include a basic POST route and used Midje and Ring-Mock to write a test to confirm that it works. Before we get started with templates and creating our address book we should provide some additional structure to our application in an effort to keep things organized as the project grows.

Compojure Address Book Part 3

Introduction

In this installment of the address book series we are finally ready to start building the actual application. We have laid all of the ground work required to finally get to work.

Compojure Address Book Part 4

Persisting Data in Postgres

At this point we have an address book that will allow us to add new contacts. However, we are not persisting our new additions. It’s time to change that. You will need to have Postgres installed. If you are using a Mac, postgresapp is a very simple way of installing. If you are on another OS you will need to follow the install instructions from the Postgres website.

Once you have Postgres installed and running we are going to create a test user and two databases.

Compojure Address Book Part 5

The Finish Line

Our address book application has finally taken shape and we are in a position to put the finishing touches on it. All that remains is to allow the user the ability to edit and delete existing contacts.

One clever thing Jarrod has done is post all five (5) parts to this series on one day. You can go as fast or as slow as you choose to go.

Another clever thing is that testing is part of the development process.

How many programmers actually incorporate testing day to day? Given the prevalence of security bugs (to say nothing at all of other bugs), I would say less than one hundred percent (100%).

You?

How much less than 100% I won’t hazard a guess.

23 Nov 00:15

A modern guide to getting started with Data Science and Python

by Patrick Durusau

A modern guide to getting started with Data Science and Python by Thomas Wiecki.

From the post:

Python has an extremely rich and healthy ecosystem of data science tools. Unfortunately, to outsiders this ecosystem can look like a jungle (cue snake joke). In this blog post I will provide a step-by-step guide to venturing into this PyData jungle.

What’s wrong with the many lists of PyData packages out there already you might ask? I think that providing too many options can easily overwhelm someone who is just getting started. So instead, I will keep a very narrow scope and focus on the 10% of tools that allow you to do 90% of the work. After you mastered these essentials you can browse the long lists of PyData packages to decide which to try next.

The upside is that the few tools I will introduce already allow you to do most things a data scientist does in his day-to-day (i.e. data i/o, data munging, and data analysis).

A great “start small” post on Python.

Very appropriate, considering that over sixty percent (60%) of software skill job postings mention Python (see Popular Software Skills in Data Science Job Postings). If you have a good set of basic tools, you can add specialized ones later.
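
Wiecki’s short list of essentials boils down to data i/o, data munging and data analysis. A minimal pandas sketch of that workflow (the file name and column names below are hypothetical) looks something like this:

    import pandas as pd

    # Data i/o: read a CSV file (hypothetical file and columns).
    df = pd.read_csv("salaries.csv")

    # Data munging: drop incomplete rows and normalize a text column.
    df = df.dropna(subset=["salary", "country"])
    df["country"] = df["country"].str.strip().str.upper()

    # Data analysis: summary statistics per group.
    print(df.groupby("country")["salary"].describe())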

23 Nov 00:14

Using Load CSV in the Real World

by Patrick Durusau

Using Load CSV in the Real World by Nicole White.

From the description:

In this live-coding session, Nicole will demonstrate the process of downloading a raw .csv file from the Internet and importing it into Neo4j. This will include cleaning the .csv file, visualizing a data model, and writing the Cypher query that will import the data. This presentation is meant to make Neo4j users aware of common obstacles when dealing with real-world data in .csv format, along with best practices when using LOAD CSV.

A webinar with substantive content and not marketing pitches! Unusual but it does happen.

A very good walk through importing a CSV file into Neo4j, with some modeling comments along the way and hints of best practices.

The “next” thing for users after a brief introduction to graphs and Neo4j.

The experience will build their confidence and they will learn from experience what works best for modeling their data sets.
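
If you would rather script the LOAD CSV step than type it into the browser, a rough sketch using the official Neo4j Python driver might look like the following. The connection details, CSV URL and column names are placeholders, not values from Nicole’s session:

    from neo4j import GraphDatabase

    # Hypothetical connection details and CSV layout.
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))

    IMPORT_QUERY = """
    LOAD CSV WITH HEADERS FROM 'https://example.com/people.csv' AS row
    MERGE (p:Person {name: row.name})
    MERGE (c:City {name: row.city})
    MERGE (p)-[:LIVES_IN]->(c)
    """

    with driver.session() as session:
        session.run(IMPORT_QUERY)
    driver.close()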

19 Nov 00:43

6 links that will show you what Google knows about you

by Patrick Durusau

6 links that will show you what Google knows about you by Cloud Fender.

After reviewing these links, ask yourself: “How do I keep Google, etc. from knowing more about me?”

14 Nov 01:48

Using Clojure To Generate Java To Reimplement Clojure

by Patrick Durusau

Using Clojure To Generate Java To Reimplement Clojure by Zach Tellman.

From the post:

Most data structures are designed to hold arbitrary amounts of data. When we talk about their complexity in time and space, we use big O notation, which is only concerned with performance characteristics as n grows arbitrarily large. Understanding how to cast an O(n) problem as O(log n) or even O(1) is certainly valuable, and necessary for much of the work we do at Factual. And yet, most instances of data structures used in non-numerical software are very small. Most lists are tuples of a few entries, and most maps are a few keys representing different facets of related data. These may be elements in a much larger collection, but this still means that the majority of operations we perform are on small instances.

But except in special cases, like 2 or 3-vectors that represent coordinates, it’s rarely practical to specify that a particular tuple or map will always have a certain number of entries. And so our data structures have to straddle both cases, behaving efficiently at all possible sizes. Clojure, however, uses immutable data structures, which means it can do an end run on this problem. Each operation returns a new collection, which means that if we add an element to a small collection, it can return something more suited to hold a large collection.

Tellman describes this problem and his solution in Predictably Fast Clojure. (The URL is to a time mark but I think the entire video is worth your time.)

If that weren’t cool enough, Tellman details the creation of 1000 lines of Clojure that generate 5500 lines of Java so his proposal can be rolled into Clojure.

What other data structures can be different when immutability is a feature?
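
Tellman’s actual implementation is Clojure generating Java. As a language-neutral toy sketch of the underlying idea, here is a Python version in which an immutable collection’s “add” operation returns a representation chosen for the new size; the class names and the threshold of 8 are made up:

    class SmallVec:
        """Immutable vector backed by a plain tuple; fine for a few items."""
        def __init__(self, items=()):
            self._items = tuple(items)

        def conj(self, x):
            new = self._items + (x,)
            # Past a made-up threshold, adding returns a different,
            # large-collection representation instead of a bigger SmallVec.
            return BigVec(new) if len(new) > 8 else SmallVec(new)

        def __len__(self):
            return len(self._items)

    class BigVec(SmallVec):
        """Stand-in for a structure tuned for large sizes (e.g. a trie)."""
        def conj(self, x):
            return BigVec(self._items + (x,))

    v = SmallVec()
    for i in range(10):
        v = v.conj(i)
    print(type(v).__name__, len(v))   # BigVec 10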

10 Nov 19:31

Vagrant Manager

Vagrant Manager:

Manage your Vagrant machines in one place with Vagrant Manager for OS X

10 Nov 03:05

The Concert Programmer

by Patrick Durusau

From the description:

From OSCON 2014: Is it possible to imagine a future where “concert programmers” are as common a fixture in the world’s auditoriums as concert pianists? In this presentation Andrew will be live-coding the generative algorithms that will be producing the music that the audience will be listening to. As Andrew is typing he will also attempt to narrate the journey, discussing the various computational and musical choices made along the way. A must see for anyone interested in creative computing.

This impressive demonstration is performed using Extempore.

From the GitHub page:

Extempore is a systems programming language designed to support the programming of real-time systems in real-time. Extempore promotes human orchestration as a meta model of real-time man-machine interaction in an increasingly distributed and environmentally aware computing context.

Extempore is designed to support a style of programming dubbed ‘cyberphysical’ programming. Cyberphysical programming supports the notion of a human programmer operating as an active agent in a real-time distributed network of environmentally aware systems. The programmer interacts with the distributed real-time system procedurally by modifying code on-the-fly. In order to achieve this level of on-the-fly interaction Extempore is designed from the ground up to support code hot-swapping across a distributed heterogeneous network, compiler as service, real-time task scheduling and a first class semantics for time.

Extempore is designed to mix the high-level expressiveness of Lisp with the low-level expressiveness of C. Extempore is a statically typed, type-inferencing language with strong temporal semantics and a flexible concurrency architecture in a completely hot-swappable runtime environment. Extempore makes extensive use of the LLVM project to provide back-end code generation across a variety of architectures.

For more detail on what the Extempore project is all about, see the Extempore philosophy.

For programmers only at this stage but can you imagine the impact of “live searching?” Where data structures and indexes arise from interaction with searchers? Definitely worth a long look!

I first saw this in a tweet by Alan Zucconi.

04 Nov 18:04

Mapping Out Lambda Land:…

by Patrick Durusau

Mapping Out Lambda Land: An Introduction to Functional Programming by Katie Miller.

From the post:

Anyone who has met me will probably know that I am wildly enthusiastic about functional programming (FP). I co-founded a group for women in FP, have presented a series of talks and workshops about functional concepts, and have even been known to create lambda-branded clothing and jewellery. In this blog post, I will try to give some insight into what the fuss is about. I will briefly explain what functional programming is, why you should care, and how you can use OpenShift to learn more about FP.

Good introduction to functional programming and resources on using OpenShift to learn FP.

Just in case you don’t recognize the name, Katie is the author of Lingo of Lambda Land, a poem depending on who is reading it.

02 Nov 18:51

Extracting SVO Triples from Wikipedia

by Patrick Durusau

Extracting SVO Triples from Wikipedia by Sujit Pal.

From the post:

I recently came across this discussion (login required) on LinkedIn about extracting (subject, verb, object) (SVO) triples from text. Jack Park, owner of the SolrSherlock project, suggested using ReVerb to do this. I remembered an entertaining Programming Assignment from when I did the Natural Language Processing Course on Coursera, that involved finding spouse names from a small subset of Wikipedia, so I figured it would be interesting to try using ReVerb against this data.

This post describes that work. As before, given the difference between this and the “preferred” approach that the automatic grader expects, results are likely to be wildly off the mark. BTW, I highly recommend taking the course if you haven’t already, there are lots of great ideas in there. One of the ideas deals with generating “raw” triples, then filtering them using known (subject, object) pairs to find candidate verbs, then turning around and using the verbs to find unknown (subject, object) pairs.

So in order to find the known (subject, object) pairs, I decided to parse the Infobox content (the “semi-structured” part of Wikipedia pages). Wikipedia markup is a mini programming language in itself, so I went looking for some pointers on how to parse it (third party parsers or just ideas) on StackOverflow. Someone suggested using DBPedia instead, since they have already done the Infobox extraction for you. I tried both, and somewhat surprisingly, manually parsing Infobox gave me better results in some cases, so I describe both approaches below.
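
Sujit’s pipeline uses ReVerb over real Wikipedia text. The bootstrapping idea he mentions (known pairs suggest candidate verbs, candidate verbs suggest new pairs) can be sketched in a few lines of Python over invented toy triples:

    # Raw (subject, verb, object) triples, a toy stand-in for ReVerb output.
    raw_triples = [
        ("Barack Obama", "married", "Michelle Obama"),
        ("Barack Obama", "visited", "Chicago"),
        ("John Lennon", "married", "Yoko Ono"),
        ("John Lennon", "wed", "Yoko Ono"),
        ("Pierre Curie", "wed", "Marie Curie"),
    ]

    # Known (subject, object) pairs, e.g. spouses parsed from Infoboxes.
    known_pairs = {("Barack Obama", "Michelle Obama"), ("John Lennon", "Yoko Ono")}

    # Step 1: use known pairs to find candidate relation verbs.
    candidate_verbs = {v for s, v, o in raw_triples if (s, o) in known_pairs}

    # Step 2: use those verbs to propose previously unknown pairs.
    new_pairs = {(s, o) for s, v, o in raw_triples
                 if v in candidate_verbs and (s, o) not in known_pairs}

    print(candidate_verbs)   # {'married', 'wed'}
    print(new_pairs)         # {('Pierre Curie', 'Marie Curie')}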

As Sujit points out, you will want to go beyond Wikipedia with this technique but it is a good place to start!

If somebody does leak the Senate Report on CIA Torture, that would be a great text (hopefully the full version) to mine with such techniques.

Remembering that anonymity = no accountability.

02 Nov 18:50

Querying Graphs with Neo4j [cheatsheet]

by Patrick Durusau

Querying Graphs with Neo4j by Michael Hunger.

Download the refcard by usual process, login into Dzone, etc.

When you open the PDF file in a viewer, do be careful. (Page references are to the DZone cheatsheet.)

Cover: The entire cover is a download link. Touch it at all and you will be taken to a download page for Neo4j.

Page 1 covers “What is a Graph Database?” and “What is Neo4j?,” just in case you have been forced by home invaders to download a refcard for a technology you know nothing about.

Page 2 pitches the Neo4j server and then Getting Started with Neo4j, perhaps to annoy the NSA with repetitive content.

The DZone cheatsheet replicates the cheatsheet at: http://neo4j.com/docs/2.0/cypher-refcard/, with the following changes:

Page 3

WITH

Re-written. Old version:

MATCH (user)-[:FRIEND]-(friend) WHERE user.name = {name} WITH user, count(friend) AS friends WHERE friends > 10 RETURN user

The WITH syntax is similar to RETURN. It separates query parts explicitly, allowing you to declare which identifiers to carry over to the next part.

MATCH (user)-[:FRIEND]-(friend) WITH user, count(friend) AS friends ORDER BY friends DESC SKIP 1 LIMIT 3 RETURN user

You can also use ORDER BY, SKIP, LIMIT with WITH.

New version:

MATCH (user)-[:KNOWS]-(friend) WHERE user.name = {name} WITH user, count(*) AS friends WHERE friends > 10 RETURN user

WITH chains query parts. It allows you to specify which projection of your data is available after WITH.

You can also use ORDER BY, SKIP, LIMIT and aggregation with WITH. You might have to alias expressions to give them a name.

I leave it to your judgement which version was the clearer.

Page 4

MERGE

inserts: typo “{name: {value3}} )” on last line of final example under MERGE.

SET

inserts: “SET n += {map} Add and update properties, while keeping existing ones.”

INDEX

inserts: “MATCH (n:Person) WHERE n.name IN {values} An index can be automatically used for the IN collection checks.”

Page 5

PATTERNS

changes: “(n)-[*1..5]->(m) Variable length paths.” to “(n)-[*1..5]->(m) Variable length paths can span 1 to 5 hops.”

changes: “(n)-[*]->(m) Any depth. See the performance tips.” to “(n)-[*]->(m) Variable length path of any depth. See performance tips.”

changes: “shortestPath((n1:Person)-[*..6]-(n2:Person)) Find a single shortest path.” to “shortestPath((n1)-[*..6]-(n2))”

COLLECTIONS

changes: “range({first_num},{last_num},{step}) AS coll Range creates a collection of numbers (step is optional), other functions returning collections are: labels, nodes, relationships, rels, filter, extract.” to “range({from},{to},{step}) AS coll Range creates a collection of numbers (step is optional).” [Loss of information from the earlier version.]

inserts: “UNWIND {names} AS name MATCH (n:Person {name:name}) RETURN avg(n.age) With UNWIND, you can transform any collection back into individual rows. The example matches all names from a list of names.”

MAPS

inserts: “range({start},{end},{step}) AS coll Range creates a collection of numbers (step is optional).”

Page 6

PREDICATES

changes: “NOT (n)-[:KNOWS]->(m) Exclude matches to (n)-[:KNOWS]->(m) from the result.” to “NOT (n)-[:KNOWS]->(m) Make sure the pattern has at least one match.” [Older version more precise?]

replaces: mixed case, true/TRUE with TRUE

FUNCTIONS

inserts: “toInt({expr}) Converts the given input in an integer if possible; otherwise it returns NULL.”

inserts: “toFloat({expr}) Converts the given input in a floating point number if possible; otherwise it returns NULL.”

PATH FUNCTIONS

changes: “MATCH path = (begin) -[*]-> (end) FOREACH (n IN rels(path) | SET n.marked = TRUE) Execute a mutating operation for each relationship of a path.” to “MATCH path = (begin) -[*]-> (end) FOREACH (n IN rels(path) | SET n.marked = TRUE) Execute an update operation for each relationship of a path.”

COLLECTION FUNCTIONS

changes: “FOREACH (value IN coll | CREATE (:Person {name:value})) Execute a mutating operation for each element in a collection.” to “FOREACH (value IN coll | CREATE (:Person {name:value})) Execute an update operation for each element in a collection.”

MATHEMATICAL FUNCTIONS

changes: “degrees({expr}), radians({expr}), pi() Converts radians into degrees, use radians for the reverse. pi for π.” to “degrees({expr}), radians({expr}), pi() Converts radians into degrees, use radians for the reverse.” Loses “pi for π.”

changes: “log10({expr}), log({expr}), exp({expr}), e() Logarithm base 10, natural logarithm, e to the power of the parameter. Value of e.” to “log10({expr}), log({expr}), exp({expr}), e() Logarithm base 10, natural logarithm, e to the power of the parameter.” Loses “Value of e.”

Page 7

STRING FUNCTIONS

inserts: “split({string}, {delim}) Split a string into a collection of strings.”

AGGREGATION

changes: “collect(n.property) Collection from the values, ignores NULL.” to “collect(n.property) Value collection, ignores NULL.”

START

remove: “START n=node(*) Start from all nodes.”

remove: “START n=node({ids}) Start from one or more nodes specified by id.”

remove: “START n=node({id1}), m=node({id2}) Multiple starting points.”

remove: “START n=node:nodeIndexName(key={value}) Query the index with an exact query. Use node_auto_index for the automatic index.”

inserts: “START n = node:indexName(key={value}) Query the index with an exact query. Use node_auto_index for the old automatic index.”

inserts: ‘START n = node:indexName({query}) Query the index by passing the query string directly, can be used with lucene or spatial syntax. E.g.: “name:Jo*” or “withinDistance:[60,15,100]“‘


I may have missed some changes because as you know, the “cheatsheets” for Cypher have no particular order for the entries. Alphabetical order suggests itself for future editions, sans the marketing materials.

Changes to a query language should appear where a user would expect to find the command in question. For example, the note that “CREATE a={property:’value’}” has been removed should appear where that command would be expected on the cheatsheet. Users should not have to hunt high and low for “CREATE a={property:’value’}” on a cheatsheet.

I have passed over incorrect use of the definite article and other problems without comment.

Despite the shortcomings of the DZone refcard, I suggest that you upgrade to it.

31 Oct 23:48

Enhancing open data with identifiers

by Patrick Durusau

Enhancing open data with identifiers

From the webpage:

The Open Data Institute and Thomson Reuters have published a new white paper, explaining how to use identifiers to create extra value in open data.

Identifiers are at the heart of how data becomes linked. It’s a subject that is fundamentally important to the open data community, and to the evolution of the web itself. However, identifiers are also in relatively early stages of adoption, and not many are aware of what they are.
Put simply, identifiers are labels used to refer to an object being discussed or exchanged, such as products, companies or people. The foundation of the web is formed by connections that hold pieces of information together. Identifiers are the anchors that facilitate those links.

This white paper, ‘Creating value with identifiers in an open data world’ is a joint effort between Thomson Reuters and the Open Data Institute. It is written as a guide to identifier schemes:

  • why identity can be difficult to manage;
  • why it is important for open data;
  • what challenges there are today and recommendations for the community to address these in the future.

Illustrative examples of identifier schemes are used to explain these points.

The recommendations are based on specific issues found to occur across different datasets, and should be relevant for anyone using, publishing or handling open data, closed data and/or their own proprietary data sets.

Are you a data consumer?
Learn how identifiers can help you create value from discovering and connecting to other sources of data that add relevant context.

Are you a data publisher?
Learn how understanding and engaging with identifier schemes can reduce your costs, and help you manage complexity.

Are you an identifier publisher?
Learn how open licensing can grow the open data commons and bring you extra value by increasing the use of your identifier scheme.

The design and use of successful identifier schemes requires a mix of social, data and technical engineering. We hope that this white paper will act as a starting point for discussion about how identifiers can and will create value by empowering linked data.

Read the blog post on Linked data and the future of the web, from Chief Enterprise Architect for Thomson Reuters, Dave Weller.

When citing this white paper, please use the following text: Open Data Institute and Thomson Reuters, 2014, Creating Value with Identifiers in an Open Data World, retrieved from thomsonreuters.com/site/data-identifiers/

Creating Value with Identifiers in an Open Data World [full paper]

Creating Value with Identifiers in an Open Data World [management summary]

From the paper:

The coordination of identity is thus not just an inherent component of dataset design, but should be acknowledged as a distinct discipline in its own right.

A great presentation on identity and management of identifiers, echoing many of the themes discussed in topic maps.

A must read!
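
To make the “identifiers as anchors” point concrete, here is a minimal pandas sketch that links two toy data sets on a shared company identifier. The identifier scheme, column names and values are invented for illustration:

    import pandas as pd

    # Two toy data sets that share a (hypothetical) company identifier scheme.
    revenue = pd.DataFrame({"company_id": ["C001", "C002"],
                            "revenue_musd": [120.5, 75.0]})
    registry = pd.DataFrame({"company_id": ["C001", "C002"],
                             "name": ["Acme Ltd", "Globex Plc"],
                             "country": ["GB", "US"]})

    # The shared identifier is the anchor that lets the two sources link.
    print(revenue.merge(registry, on="company_id"))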

Next week I will begin a series of posts on the individual issues identified in this white paper.

I first saw this in a tweet by Bob DuCharme.

30 Oct 21:43

Announcing Clasp

by Patrick Durusau

Announcing Clasp by Christian Schafmeister.

From the post:

Click here for up to date build instructions

Today I am happy to make the first release of the Common Lisp implementation “Clasp”. Clasp uses LLVM as its back-end and generates native code. Clasp is a super-set of Common Lisp that interoperates smoothly with C++. The goal is to integrate these two very different languages together as seamlessly as possible to provide the best of both worlds. The C++ interoperation allows Common Lisp programmers to easily expose powerful C++ libraries to Common Lisp and solve complex programming challenges using the expressive power of Common Lisp. Clasp is licensed under the LGPL.

Common Lisp is considered by many to be one of the most expressive programming languages in existence. Individuals and small teams of programmers have created fantastic applications and operating systems within Common Lisp that require much larger effort when written in other languages. Common Lisp has many language features that have not yet made it into the C++ standard. Common Lisp has first-class functions, dynamic variables, true macros for meta-programming, generic functions, multiple return values, first-class symbols, exact arithmetic, conditions and restarts, optional type declarations, a programmable reader, a programmable printer and a configurable compiler. Common Lisp is the ultimate programmable programming language.

Clojure is a dialect of Lisp, which means you may spot situations where Lisp would be the better solution. Especially if you can draw upon C++ libraries.

The project is “actively looking” for new developers. Could be your opportunity to get in on the ground floor!

26 Oct 20:32

Analyzing Schema.org

by Patrick Durusau

Analyzing Schema.org by Peter F. Patel-Schneider.

Abstract:

Schema.org is a way to add machine-understandable information to web pages that is processed by the major search engines to improve search performance. The definition of schema.org is provided as a set of web pages plus a partial mapping into RDF triples with unusual properties, and is incomplete in a number of places. This analysis of and formal semantics for schema.org provides a complete basis for a plausible version of what schema.org should be.

Peter’s analysis is summarized when he says:

The lack of a complete definition of schema.org limits the possibility of extracting the correct information from web pages that have schema.org markup.

Ah, yes, “…the correct information from web pages….”

I suspect the lack of semantic precision has powered the success of schema.org. Each user of schema.org markup has their private notion of the meaning of their use of the markup and there is no formal definition to disabuse them of that notion. Not that formal definitions were enough to save owl:sameAs from varying interpretations.

Schema.org empowers varying interpretations without requiring users to ignore OWL or description logic.

For the domains that schema.org covers, eateries, movies, bars, whore houses, etc., the semantic slippage permitted by schema.org lowers the bar to usage of its markup. Which has resulted in its adoption more widely than other proposals.

The lesson of schema.org is the degree of semantic slippage you can tolerate depends upon your domain. For pharmaceuticals, I would assume that degree of slippage is as close to zero as possible. For movie reviews, not so much.

Any effort to impose the same degree of semantic slippage across all domains is doomed to failure.

I first saw this in a tweet by Bob DuCharme.

26 Oct 20:29

R Programming for Beginners

by Patrick Durusau

R Programming for Beginners by LearnR.

Short videos on R programming, running from a low of two (2) minutes (the intro) up to eight minutes (the debugging session) but generally three (3) to five (5) minutes in length. I have cleaned up the YouTube listing to make it suitable for sharing and/or incorporation into other R resources.

Enjoy!

24 Oct 17:35

Can You Beat Aimai?

by bspencer

Do you remember my “one pot, one skillet” post a while back? Well, a lot of you shared some of your favorite one-pot meals. I read through just about the entire thread, and there were LOTS of delicious offerings, but one dish really stood out (to me) and that was Aimai’s Jamie Oliver roasted chicken recipe. It was truly one-pot cooking, with an ease, a hint of sophistication, a depth of flavor, and a built-in sauce that you simply do not often find, even in beloved one-pot recipes.

I’ve made Jamie’s Aimai’s recipe probably close to ten times since she shared it, varying ingredients (only very slightly) and cooking methods. It’s never turned out poorly. In fact, each time it’s been somewhere between “this is great” and “my tastebuds are climaxing.” The bones of this recipe are that solid.

Take one 4 lb chicken
10 garlic cloves
Handful of fresh sage
1 cinnamon stick
1 dried red pepper (or two)
Peel of one lemon
Juice of one lemon
1 cup milk
1/2 cup or 1 cup of dry white wine
Onions–Sliced in quarters longitudinally so the shape will be preserved.
Celery–sliced in long batons about 2 inches
Carrots–Chunks
sometimes parsnips–same size as the carrots

Salt and pepper the chicken all over. Brown it in butter or olive oil. Drain pan but save any sticky bits. Then put the vegetables and all the other ingredients in a roasting pan or a Dutch oven and put the chicken down on top, nested in the liquid, and cook until the top is browned and golden, chicken is done, and the base vegetables are cooked. The milk and the lemon will “break” and make a classic sauce. For extra killer depression repair you can add some cream at the last minute to the sauce and you get an unbelievably rich sauce for pouring over potatoes or rice or dipping bread.

My challenge for you is this: Can you top Aimai’s recipe?

Also…marinated cheese? Yes.

22 Oct 02:24

Big Data: 20 Free Big Data Sources Everyone Should Know

by Patrick Durusau

Big Data: 20 Free Big Data Sources Everyone Should Know by Bernard Marr.

From the post:

I always make the point that data is everywhere – and that a lot of it is free. Companies don’t necessarily have to build their own massive data repositories before starting with big data analytics. The moves by companies and governments to put large amounts of information into the public domain have made large volumes of data accessible to everyone.

Any company, from big blue chip corporations to the tiniest start-up can now leverage more data than ever before. Many of my clients ask me for the top data sources they could use in their big data endeavour and here’s my rundown of some of the best free big data sources available today.

I didn’t see anything startling but it is a good top 20 list for a starting point. Would make a great start on a one to two page big data cheat sheet. Will have to give some thought to that idea.

21 Oct 02:26

LSD Dimensions

by Patrick Durusau

LSD Dimensions

From the about page: http://lsd-dimensions.org/dimensions

LSD Dimensions is an observatory of the current usage of dimensions and codes in Linked Statistical Data (LSD).

LSD Dimensions is an aggregator of all qb:DimensionProperty resources (and their associated triples), as defined in the RDF Data Cube vocabulary (W3C recommendation for publishing statistical data on the Web), that can be currently found in the Linked Data Cloud (read: the SPARQL endpoints in Datahub.io). Its purpose is to improve the reusability of statistical dimensions, codes and concept schemes in the Web of Data, providing an interface for users (future work: also for programs) to search for resources commonly used to describe open statistical datasets.

Usage

The main view shows the count of queried SPARQL endpoints and the number of retrieved dimensions, together with a table that displays these dimensions.

  • Sorting. Dimensions can be sorted by their dimension URI, label and number of references (i.e. number of times a dimension is used in the endpoints) by clicking on the column headers.
  • Pagination. The number of rows per page can be customized and browsed by clicking at the bottom selectors.
  • Search. String-based search can be performed by writing the search query in the top search field.

Any of these dimensions can be further explored by clicking at the eye icon on the left. The dimension detail view shows

  • Endpoints. The endpoints that make use of that dimension.
  • Codes. Popular codes that are defined (future work: also assigned) as valid values for that dimension.

Motivation

RDF Data Cube (QB) has boosted the publication of Linked Statistical Data (LSD) as Linked Open Data (LOD) by providing a means “to publish multi-dimensional data, such as statistics, on the web in such a way that they can be linked to related data sets and concepts”. QB defines cubes as sets of observations affected by dimensions, measures and attributes. For example, the observation “the measured life expectancy of males in Newport in the period 2004-2006 is 76.7 years” has three dimensions (time period, with value 2004-2006; region, with value Newport; and sex, with value male), a measure (population life expectancy) and two attributes (the units of measure, years; and the metadata status, measured, to make explicit that the observation was measured instead of, for instance, estimated or interpolated). In some cases, it is useful to also define codes, a closed set of values taken by a dimension (e.g. sensible codes for the dimension sex could be male and female).

There is a vast diversity of domains to publish LSD about, and quite some dimensions and codes can be very heterogeneous, domain specific and hardly comparable. To this end, QB allows users to mint their own URIs to create arbitrary dimensions and associated codes. Conversely, some other dimensions and codes are quite common in statistics, and could be easily reused. However, publishers of LSD have no means to monitor the dimensions and codes currently used in other datasets published in QB as LOD, and consequently they cannot (a) link to them; nor (b) reuse them.

This is the motivation behind LSD Dimensions: it monitors the usage of existing dimensions and codes in LSD. It allows users to browse, search and gain insight into these dimensions and codes. We depict the diversity of statistical variables in LOD, improving their reusability.

(Emphasis added.)

The highlighted text:

There is a vast diversity of domains to publish LSD about, and quite some dimensions and codes can be very heterogeneous, domain specific and hardly comparable.

is the key isn’t it? If you can’t rely on data titles, users must examine the data and determine which sets can or should be compared.

The question then is how do you capture the information such users developed in making those decisions and pass it on to following users? Or do you just allow following users make their own way afresh?

If you document the additional information for each data set, by using a topic map, each use of this resource becomes richer for the following users. Richer or stays the same. Your call.

I first saw this in a tweet by Bob DuCharme. Who remarked this organization has a great title!

If you have made it this far, you realize that, with all the talk of data sets, RDF and statistics, this isn’t the post you were looking for. ;-)

PS: Yes Bob, it is a great title!

19 Oct 07:24

Tupleware: Redefining Modern Analytics

by Patrick Durusau

Tupleware: Redefining Modern Analytics by Andrew Crotty and Alexander Galakatos.

From the post:

Up until a decade ago, most companies sufficed with simple statistics and offline reporting, relying on traditional database management systems (DBMSs) to meet their basic business intelligence needs. This model prevailed in a time when data was small and analysis was simple.

But data has gone from being scarce to superabundant, and now companies want to leverage this wealth of information in order to make smarter business decisions. This data explosion has given rise to a host of new analytics platforms aimed at flexible processing in the cloud. Well-known systems like Hadoop and Spark are built upon the MapReduce paradigm and fulfill a role beyond the capabilities of traditional DBMSs. However, these systems are engineered for deployment on hundreds or thousands of cheap commodity machines, but non-tech companies like banks or retailers rarely operate clusters larger than a few dozen nodes. Analytics platforms, then, should no longer be built specifically to accommodate the bottlenecks of large cloud deployments, focusing instead on small clusters with more reliable hardware.

Furthermore, computational complexity is rapidly increasing, as companies seek to incorporate advanced data mining and probabilistic models into their business intelligence repertoire. Users commonly express these types of tasks as a workflow of user-defined functions (UDFs), and they want the ability to compose jobs in their favorite programming language. Yet, existing analytics systems fail to adequately serve this new generation of highly complex, UDF-centric jobs, especially when companies have limited resources or require sub-second response times. So what is the next logical step?

It’s time for a new breed of systems. In particular, a platform geared toward modern analytics needs the ability to (1) concisely express complex workflows, (2) optimize specifically for UDFs, and (3) leverage the characteristics of the underlying hardware. To meet these requirements, the Database Group at Brown University is developing Tupleware, a parallel high-performance UDF processing system that considers the data, computations, and hardware together to produce results as efficiently as possible.

The article is the “lite” introduction to Tupleware. You may be more interested in:

Tupleware: Redefining Modern Analytics (the paper):

Abstract:

There is a fundamental discrepancy between the targeted and actual users of current analytics frameworks. Most systems are designed for the data and infrastructure of the Googles and Facebooks of the world—petabytes of data distributed across large cloud deployments consisting of thousands of cheap commodity machines. Yet, the vast majority of users operate clusters ranging from a few to a few dozen nodes, analyze relatively small datasets of up to several terabytes, and perform primarily compute-intensive operations. Targeting these users fundamentally changes the way we should build analytics systems.

This paper describes the design of Tupleware, a new system specifically aimed at the challenges faced by the typical user. Tupleware’s architecture brings together ideas from the database, compiler, and programming languages communities to create a powerful end-to-end solution for data analysis. We propose novel techniques that consider the data, computations, and hardware together to achieve maximum performance on a case-by-case basis. Our experimental evaluation quantifies the impact of our novel techniques and shows orders of magnitude performance improvement over alternative systems.

Subject to the “in memory” limitation, speedups of 10 – 6,000x over other systems are nothing to dismiss without further consideration.

Interesting to see that “medium” data now reaches into the terabyte range. ;-)

Are “mini-clouds” in the offing that provide specialized processing models?
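
Tupleware’s own API is not shown in the post, so the following is emphatically not it. It is only a plain Python illustration of what a UDF-centric workflow looks like: small user-defined functions composed into a pipeline over in-memory tuples:

    from functools import reduce

    # Toy in-memory data: (customer, amount) tuples.
    purchases = [("alice", 30.0), ("bob", 12.5), ("alice", 7.5), ("carol", 99.0)]

    # User-defined functions composed into a small analytics workflow.
    def is_large(t):
        return t[1] >= 10.0

    def to_amount(t):
        return t[1]

    def add(total, amount):
        return total + amount

    total_large = reduce(add, map(to_amount, filter(is_large, purchases)), 0.0)
    print(total_large)   # 141.5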

The Tupleware website.

I first saw this in a post by Danny Bickson, Tupleware.

19 Oct 07:24

Data Sources for Cool Data Science Projects: Part 1

by Patrick Durusau

Data Sources for Cool Data Science Projects: Part 1

From the post:

At The Data Incubator, we run a free six week data science fellowship to help our Fellows land industry jobs. Our hiring partners love considering Fellows who don’t mind getting their hands dirty with data. That’s why our Fellows work on cool capstone projects that showcase those skills. One of the biggest obstacles to successful projects has been getting access to interesting data. Here are a few cool public data sources you can use for your next project:

Nothing surprising or unfamiliar but at least you know what the folks at Data Incubator think is “cool” and/or important. Intell is never a waste.

Enjoy!

17 Oct 17:22

COLD 2014 Consuming Linked Data

by Patrick Durusau

COLD 2014 Consuming Linked Data

Table of Contents

You can get an early start on your weekend reading now! ;-)

17 Oct 01:16

For Programmers, There Is No "Normal Person" Feeling

by Eugene Wallingford

I see this in the lab every week. One minute, my students sit peering at their monitors, their heads buried in their hands. They can't do anything right. The next minute, I hear shouts of exultation and turn to see them, arms thrust in the air, celebrating their latest victory over the Gods of Programming. Moments later I look up and see their heads again in their hands. They are despondent. "When will this madness end?"

Last week, I ran across a tweet from Christina Cacioppo that expresses nicely a feeling that has been vexing so many of my intro CS students this semester:

I still find programming odd, in part, because I'm either amazed by how brilliant or how idiotic I am. There's no normal-person feeling.

Christina is no beginner, and neither am I. Yet we know this feeling well. Most programmers do, because it's a natural part of tackling problems that challenge us. If we didn't bounce between feeling puzzlement and exultation, we wouldn't be tackling hard-enough problems.

What seems strange to my students, and even to programmers with years of experience, is that there doesn't seem to be a middle ground. It's up or down. The only time we feel like normal people is when we aren't programming at all. (Even then, I don't have many normal-person feelings, but that's probably just me.)

I've always been comfortable with this bipolarity, which is part of why I have always felt comfortable as a programmer. I don't know how much of this comfort is natural inclination -- a personality trait -- and how much of it is learned attitude. I am sure it's a mixture of both. I've always liked solving puzzles, which inspired me to struggle with them, which helped me get better at struggling with them.

Part of the job in teaching beginners to program is to convince them that this is a habit they can learn. Whatever their natural inclination, persistence and practice will help them develop the stamina they need to stick with hard problems and the emotional balance they need to handle the oscillations between exultation and despondency.

I try to help my students see that persistence and practice are the answer to most questions involving missing skills or bad habits. A big part of helping them with this is coaching and cheerleading, not teaching programming language syntax and computational concepts. Coaching and cheerleading are not always tasks that come naturally to computer science PhDs, who are often most comfortable with syntax and abstractions. As a result, many CS profs are uncomfortable performing them, even when that's what our students need most. How do we get better at performing them? Persistence and practice.

The "no normal-person feeling" feature of programming is an instance of a more general feature of doing science. Martin Schwartz, a microbiologist at the University of Virginia, wrote a marvelous one-page article called The importance of stupidity in scientific research that discusses this element of being a scientist. Here's a representative sentence:

One of the beautiful things about science is that it allows us to bumble along, getting it wrong time after time, and feel perfectly fine as long as we learn something each time.

Scientists get used to this feeling. My students can, too. I already see the resilience growing in many of them. After the moment of exultation passes following their latest conquest, they dive into the next task. I see a gleam in their eyes as they realize they have no idea what to do. It's time to bury their heads in their hands and think.

16 Oct 04:31

5 Machine Learning Areas You Should Be Cultivating

by Patrick Durusau

5 Machine Learning Areas You Should Be Cultivating by Jason Brownlee.

From the post:

You want to learn machine learning to have more opportunities at work or to get a job. You may already be working as a data scientist or machine learning engineer and looking to improve your skills.

It is about as easy to pigeonhole machine learning skills as it is programming skills (you can’t).

There is a wide array of tasks that require some skill in data mining and machine learning in business from data analysis type work to full systems architecture and integration.

Nevertheless there are common tasks and common skills that you will want to develop, just like you could suggest for an aspiring software developer.

In this post we will look at 5 key areas where you might want to develop skills and the types of activities that you could take on to practice in those areas.

Jason has a number of useful suggestions for the five areas and you will profit from taking his advice.

At the same time, I would be keeping a notebook of assumptions or exploits that are possible with every technique or process that you learn. Results and data will be presented to you as though both are clean. It is your responsibility to test that presentation.

14 Oct 21:36

RNeo4j: Neo4j graph database combined with R statistical programming language

by Patrick Durusau

From the description:

RNeo4j combines the power of a Neo4j graph database with the R statistical programming language to easily build predictive models based on connected data. From calculating the probability of friends of friends connections to plotting an adjacency heat map based on graph analytics, the RNeo4j package allows for easy interaction with a Neo4j graph database.

Nicole is the author of the RNeo4j R package. Don’t be dismayed by the “What is a Graph” and “What is R” in the presentation outline. Mercifully only three minutes followed by a rocking live coding demonstration of the package!

Beyond Neo4j and R, use this webinar as a standard for the useful content that should appear in a webinar!

RNeo4j at Github.

13 Oct 18:54

Measuring Search Relevance

by Patrick Durusau

Measuring Search Relevance by Hugh E. Williams.

From the post:

The process of asking many judges to assess search performance is known as relevance judgment: collecting human judgments on the relevance of search results. The basic task goes like this: you present a judge with a search result, and a search engine query, and you ask the judge to assess how relevant the item is to the query on (say) a four-point scale.

Suppose the query you want to assess is ipod nano 16Gb. Imagine that one of the results is a link to Apple’s page that describes the latest Apple iPod nano 16Gb. A judge might decide that this is a “great result” (which might be, say, our top rating on the four-point scale). They’d then click on a radio button to record their vote and move on to the next task. If the result we showed them was a story about a giraffe, the judge might decide this result is “irrelevant” (say the lowest rating on the four point scale). If it were information about an iPhone, it might be “partially relevant” (say the second-to-lowest), and if it were a review of the latest iPod nano, the judge might say “relevant” (it’s not perfect, but it sure is useful information about an Apple iPod).

The human judgment process itself is subjective, and different people will make different choices. You could argue that a review of the latest iPod nano is a “great result” — maybe you think it’s even better than Apple’s page on the topic. You could also argue that the definitive Apple page isn’t terribly useful in making a buying decision, and you might only rate it as relevant. A judge who knows everything about Apple’s products might make a different decision to someone who’s never owned a digital music player. You get the idea. In practice, judging decisions depend on training, experience, context, knowledge, and quality — it’s an art at best.

There are a few different ways to address subjectivity and get meaningful results. First, you can ask multiple judges to assess the same results to get an average score. Second, you can judge thousands of queries, so that you can compute metrics and be confident statistically that the numbers you see represent true differences in performance between algorithms. Last, you can train your judges carefully, and give them information about what you think relevance means.

An illustrated walk through measuring search relevance. Useful for a basic understanding of the measurement process and its parameters.
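
As a small sketch of how graded judgments become numbers, here is one way to average per-result scores and compute a simple discounted cumulative gain (DCG). The scale follows the four-point example above and the scores are invented:

    import math

    # Graded judgments from three judges for one query's top results,
    # on a four-point 0..3 scale as in the example (scores are invented).
    judgments = [
        [3, 3, 2],   # result 1: Apple's iPod nano 16GB page
        [2, 3, 2],   # result 2: review of the latest iPod nano
        [0, 0, 1],   # result 3: story about a giraffe
    ]

    # Average across judges to soften individual subjectivity.
    avg = [sum(js) / len(js) for js in judgments]

    # Discounted cumulative gain: results further down the ranking count less.
    dcg = sum(gain / math.log2(rank + 1) for rank, gain in enumerate(avg, start=1))

    print([round(a, 2) for a in avg], round(dcg, 3))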

Bookmark this post so that when you tell your judges what “…relevance means”, you can return here and post what you told them.

I ask because I deeply suspect that our ideas of “relevance” vary widely from subject to subject.

Thanks!