Shared posts

19 Jun 08:21

You Can Be Busy or Remarkable — But Not Both

by Study Hacks


The Remarkably Relaxed

Terence Tao is one of the world’s best mathematicians. He won a Fields Medal when he was 31. He is, we can agree, remarkable.

He is not, however, busy.

I should be careful about definitions. By “busy,” I mean a schedule packed with non-optional professional responsibilities.

My evidence that Tao is not overwhelmed by such obligations is the time he spends on non-obligatory, non-time sensitive hobbies. In particular, his blog.

Since the new year, he’s written nine long posts, full of mathematical equations and fun titles, like “Matrix identities as derivatives of determinant identities.” His most recent post is 3700 words long! And that’s a normal length.

As a professor who also blogs, I know that posts are something you do only when you have down time. I conjecture, therefore, that Tao’s large volume of posting implies he enjoys a large amount of down time in his professional life.

Here’s why you should care: Tao’s downtime is not an aberration — a quirk of a quirky prodigy — it is instead, I argue, essential to his success.

The Phases of Deep Work

Deep work is phasic.

Put another way, to ape Rushkoff, we’re not computer processors. We can’t be expected to accomplish any job any time we have the available cycles. There are rhythms to our psychology. Certain times of the day, week, month, and even year (e.g., the professor I discussed in my last post) are better suited for deep work than other times.

To respect this reality, you must leave sufficient time in your schedule to handle the intense bursts of such work when they occur. This requires that you constrain the other obligations in your life — perhaps by being reluctant to agree to things or start projects, or by ruthlessly batching and streamlining your regular obligations.

When it’s time to work deeply, this approach leaves you the schedule space necessary to immerse.

When you’ve shifted temporarily out of deep work mode, however, this approach leaves you with down time.

This is why people who do remarkable things can seem remarkably under-committed — it’s a side-effect of the scheduling philosophy necessary to accommodate depth.

Returning to Tao’s blog, the specific dates of his posts support my theory. As mentioned, he posted nine long posts since the New Year. On closer inspection, it turns out that most of the posts occurred in a single month: February.

We can imagine that this month was a down cycle between two periods of more intense thinking.

If my theory is true — and I don’t know that it is — its implication is striking: busyness stymies accomplishment.

If you’re looking for the next Tao, in other words, ignore the guy checking e-mail while running to his next meeting, and look instead towards the quiet fellow, staring off at the clouds, trying to figure out what to do with his afternoon.

(Photo by The Other Dan)

19 Jun 08:17

Controlling Your Schedule with Deadline Buffers

by Study Hacks

[Screenshot: a deadline buffer shown as a week-long all-day event in Google Calendar]

A Hard Week

Last week was hard. Four large deadlines landed within a four day period. The result was a week (and weekend) where I was forced to violate my fixed-schedule productivity boundaries.

I get upset when I violate these boundaries, so, as I do, I conducted a post-mortem on my schedule to find out what happened.

The high-level explanation was clear: bad luck. I originally had two big deadlines on my calendar, each separated by a week. But then two unfortunate things happened in rapid succession:

  1. One of my two big deadlines was shifted to coincide with the second big deadline. Because I was working with collaborators, I couldn’t just ignore the shift. The new deadline would become the real deadline.
  2. The other issue was due to shadow commitments – work obligations you accept before you know the specific dates the work will be due. I had made two such commitments months earlier. Not long ago, however, their due dates were announced, and they both fell square within this brutal week.

The easy conclusion from this post-mortem is that sometimes you have a hard week. Make sure you recharge afterward and then move on.

This is a valid conclusion, and I took it to heart. But it’s not complete…

The Deadline Buffer

As I dug deeper through the forensic detritus of this brutal week I noticed that I could have made it less brutal. As deadlines popped up or shifted on my schedule, I dutifully updated them on my calendar. But in doing so, I didn’t appreciate the monumental work pile-up these shifts were creating. If I had noticed this, I could have invoked some emergency measures earlier to lessen the load.

In response to this revelation I am now toying with a simple tweak to how I use my calendar: the deadline buffer.

The idea is simple…

Any serious deadline should not exist on your calendar just as a note on a single day. It should instead be an event that spans the entire week preceding the actual deadline. (In Google Calendar, I do this by making it an “all day” event that lasts the full duration; e.g., as in the screenshot at the top of this post.)

The motivation behind this hack is to eliminate the possibility for pile-ups to happen without your knowledge. If you buffer each deadline with a week-long event, any overlap will become immediately apparent.
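To make the pile-up check concrete, here is a minimal sketch (Python, my own illustration rather than anything from the post) that flags deadlines whose week-long buffers overlap; the dates are invented:

    from datetime import date, timedelta

    def buffer_span(deadline, days=7):
        """The week-long buffer preceding a deadline, as (start, end) dates."""
        return (deadline - timedelta(days=days), deadline)

    def overlapping_buffers(deadlines):
        """Yield pairs of deadlines whose buffer weeks overlap."""
        spans = sorted((buffer_span(d), d) for d in deadlines)
        for ((_, end1), d1), ((start2, _), d2) in zip(spans, spans[1:]):
            if start2 < end1:  # next buffer starts before the previous one ends
                yield d1, d2

    # Hypothetical deadlines
    deadlines = [date(2013, 6, 10), date(2013, 6, 12), date(2013, 7, 1)]
    for a, b in overlapping_buffers(deadlines):
        print(f"Pile-up warning: deadlines on {a} and {b} share buffer time")

The calendar version is purely visual, of course; the point is that the overlap check happens on the calendar rather than in your head.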

As a bonus, this approach also helps you keep these key pre-deadline weeks clear of excessive meetings. It’s easy, for example, to agree to a non-urgent interview months in the future. But when you see that this date has a deadline buffer in place, you become more likely to say, “actually, let’s schedule this for the week after…that week is going to be a little tight.”

This is the type of prescient scheduling you’ll appreciate when the deadline looms and you see before you a delightfully light schedule.

In the spirit of anti-planning, I don’t know how well this will work, but it’s worth some experimentation.

19 Jun 07:52

The MOOC will soon die. Long live the MOOR

by Keith Devlin

A real-time chronicle of a seasoned professor who just completed giving his second massively open online course.

The second running of my MOOC (massive open online course) Introduction to Mathematical Thinking ended recently. The basic stats were:

Total enrollment: 27,930

Number still active during final week of lectures: ca 4,000

Total submitting exam: 870

Number of students receiving a Statement of Accomplishment: 1,950

Number of students awarded a SoA with Distinction: 390

From my perspective, it went better than the first time, but this remains very much a research project, and will do for many more iterations. It is a research project with at least as many “Can we?” questions as “How do we?”

From the start, I took the viewpoint that, given the novelty of the MOOC platform, we need to examine the purpose, structure, and use of all the familiar educational elements: “lecture,” “quiz,” “assignment,” “discussion,” “grading,” “evaluation,” etc. All bets are off. Some changes to the way we use these elements might be minor, but on the other hand, some could be significant.

For instance, my course is not offered for any form of college credit. The goal is purely learning. This could be learning solely for its own sake, and many of my students approached it as such. On the other hand, as a course in basic analytic thinking and problem solving, with an emphasis on mathematical thinking in the second half of the course, it can clearly prepare a student to take (and hopefully do better in) future mathematics or STEM courses that do earn credit – and I have had students taking it with that goal in mind.

Separating learning from evaluation of what has been learned is enormously freeing, both to the instructor and to the student. In particular, evaluation of student work and the awarding of grades can be devoted purely to providing students with a useful (formative) indication of their progress, not a (summative) measure of their performance or ability.

To be sure, many of my students, conditioned by years of high stakes testing, have a hard time adjusting to the fact that a grade of 30% on a piece of work can be very respectable, indeed worth an A in many cases.

My typical response to students who lament their “low” grade is to say that their goal should be that a problem for which they struggle to get 30% in week 2 should be solvable for 80% or more by week 5 (say). And for problems they struggle with in week 8 (the final week of curriculum in my course), they should be able to do them more successfully if they take the course again the next time it is offered – something else that is possible in the brave new world of MOOCs. (Many of the students in my second offering of the course had attempted the first one a few months earlier.)

Incidentally, I think I have to make a comment regarding my statement above that the MOOC platform is novel. A number of commentators have observed that “online education is not new,” and they are right. But they miss the point that even this first generation of MOOC platforms represents a significant phase shift, not only in terms of the aggregate functionality but also the social and cultural context in which today’s MOOCs are being offered.

Regarding the context, not only have many of us grown accustomed to much of our interpersonal interaction being mediated by the internet, but the vast majority of people under twenty now interact far more using social media than in person.

We could, of course, spend (I would say “waste”) our time debating whether or not this transition from physical space to cyberspace is a good thing. Personally, however, I think it is more productive to take steps to make sure it is – or at least ends up – a good thing. That means we need to take good education online, and we need to do so for the same reason that it’s important to embed good learning into video games.

The fact is, we have created for the new and future generations a world in which social media and video games are prevalent and attractive – just as earlier generations created worlds of books and magazines, and later mass broadcast media (radio, films, television) which were equally as widespread and attractive in their times. The media of any age are the ones through which we must pass on our culture and our cumulative learning. (See my other blog profkeithdevlin.org for my argument regarding learning in video games.)

Incidentally, I see the points I am making here (and will be making in future posts) as very much in alignment with, and definitely guided by, the views Sir Ken Robinson has expressed in a series of provocative lectures, 1, 2, 3.

Sir Ken’s thoughts influenced me a lot in my thinking about MOOCs. To be sure, there is much in the current version of my MOOC that looks very familiar. That is partly because of my professional caution as an academic, which tells me to proceed in small steps, starting from what I myself am familiar with; but in part also because the more significant changes I am presently introducing are the novel uses I am making (or trying to make) of familiar educational elements.

The design of my course was also heavily influenced by the expectation (more accurately a recognition, given how fast MOOCs are developing) that no single MOOC should see itself as the primary educational resource for a particular learning topic. Rather, those of us currently engaged in developing and offering MOOCs are, surely, creating resources that will be part of a vast smorgasbord from which people will pick and choose what they want or need at any particular time.

Given the way names get assigned and used, we may find we are stuck with the name MOOC (massive open online course), but a better term would be MOOR, for massive open online resource.

For basic, instructional learning, which makes up the bulk of K-12 mathematics teaching (wrongly in my view, but the US will only recognize that when virtually none of our home educated students are able to land the best jobs, which is about a generation away), that transition from course to resource has already taken place. YouTube is littered with short, instructional videos that teach people how to carry out certain procedures.

[By the way, I used the term “mathematical thinking” to describe my course, to distinguish it from the far more prevalent instructional math course that focuses on procedures. Students who did not recognize the distinction in the first three weeks, and approached the material accordingly, dropped out in droves in week four when they suddenly found themselves totally lost.]

By professional standards, many of the instructional video resources you can find on the Web (not just in mathematics but other subjects as well) are not very good, but that does not prevent them being very effective. As a professional mathematician and mathematics educator, I cringe when I watch a Khan Academy video, but millions find them of personal value. Analogously, in a domain where I am not an expert, bicycle mechanics, I watch Web videos to learn how to repair or tune my (high end) bicycles, and to assemble and disassemble my travel bike (a fairly complex process that literally has potential life and death consequences for me), and they serve my need, though I suspect a good bike mechanic would find much to critique in them. In both cases, mathematics and bicycle mechanics, some sites will iterate and improve, and in time they will dominate.

That last point, by the way, is another where many commentators miss the point. Something else that digital technologies and the Web make possible is rapid iteration guided by huge amounts of user feedback data – data obtained with great ease in almost real time.

In the days when products took a long time, and often considerable money, to plan and create, careful planning was essential. Today, we can proceed by a cycle of rapid prototypes. To be sure, it would be (in my view) unwise and unethical to proceed that way if a MOOC were being offered for payment or for some form of college credit, but for a cost-free, non-credit MOOC, learning on a platform that is itself under development, where the course designer is learning how to do it, can be in many ways a better learning experience than taking a polished product that has stood the test of time.

You don’t believe me? Consider this. Textbooks have been in regular use for over two thousand years, and millions of dollars have been poured into their development and production. Yet, take a look at practically any college textbook and ask yourself if you could, or would like to, learn from that source. In a system where the base level is the current college textbook and the bog-standard course built on it, the bar you have to reach with a MOOC to call it an improvement on the status quo is low indeed.

Again, Khan Academy provides the most dramatic illustration. Compared with what you will find in a good math classroom with a well trained teacher, it’s not good. But it’s a lot better than what is available to millions of students. More to the point, I know for a fact that Sal Khan is working on iterating from the starting point that caught Bill Gates’ attention, and has been for some time. Will he succeed? It hardly matters. (Well, I guess it does to Sal and his employees!) Someone will. (At least for a while, until someone else comes along and innovates a crucial step further.)

This, as I see it, is what, in general terms, is going on with MOOCs right now. We are experimenting. Needless to say – at least, it should be needless but there are worrying developments to the contrary – it would be unwise for any individual, any educational institution, or any educational district to make MOOCs (as courses) an important component of university education at this very early stage in their development. (And foolish to the point of criminality to take them into the K-12 system, but that’s a whole separate can of worms.)

Experimentation and rapid prototyping are fine in their place, but only when we all have more experience with them and have hard evidence of their efficacy (assuming they have such), should we start to think about giving them any critical significance in an educational system which (when executed properly) has served humankind well for several hundred years. Anyone who claims otherwise is probably trying to sell you something.

A final remark. I’m not saying that massive open online courses will go away. Indeed, I plan to continue offering mine – as a course – and I expect and hope many students will continue to take it as a complete course. I also expect that higher education institutions will increasingly incorporate MOOCs into their overall offerings, possibly for credit. (Stanford Online High School already offers a for-certificate course built around my MOOC.) So my use of the word “die” in the title involved a bit of poetic license.

But I believe my title is correct in its overall message. We already know from the research we’ve done at Stanford that only a minority of people enroll for a MOOC with the intention of taking it through to completion. (Though that “minority” can comprise several thousand students!) Most MOOC students already approach it as a resource, not a course! With an open online educational entity, it is the entire community of users that ultimately determines what it primarily is and how it fits in the overall educational landscape. According to the evidence, they already have, thereby giving us a new (and more accurate) MOOC mantra: resources, not courses. (Even when they are courses and when some people take them as such.)

In the coming posts to this blog, I’ll report on the changes I made in the second version of my MOOC, reflect on how things turned out, and speculate about the changes I am thinking of making in version 3, which is scheduled to start in September. First topic up will be peer evaluation – something that I regard as key to the success of a MOOC on mathematical thinking.

Those of us in education are fortunate to be living in a time where there is so much potential for change. The last time anything happened on this scale in the world of education was the invention of the printing press in the Fifteenth Century. As you can probably tell, I am having a blast.

To be continued …


28 Mar 08:46

Google NegativeSEO: Case Study in User-Generated Censorship

by Petey
Jun

hilarious!

The origin myth is as familiar as that of any dominant empire. Romulus and Remus (read: Larry Page and Sergey Brin) met as graduate students at Stanford. Page, casting about for a dissertation topic, settled on the Web as a graph of links. He and Brin began building a program called BackRub, which would “crawl” the graph, visiting and counting links between pages, and aggregate the weighted counts into a value called PageRank. This value, they claimed, constituted “an objective measure of its citation importance that corresponds well with people's subjective idea of importance.” BackRub became Google, a prototype search engine which, powered by PageRank, incorporated user input into its ranking algorithm.
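As a rough illustration of how those weighted link counts get aggregated, here is a toy power-iteration sketch (my own, in Python; it is not Google's production algorithm, and the three-page "web" is invented):

    # Toy PageRank by power iteration over a tiny hand-made link graph.
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                targets = outlinks or pages  # a dangling page spreads its rank evenly
                for target in targets:
                    new[target] += damping * rank[page] / len(targets)
            rank = new
        return rank

    # "a" is linked to by both "b" and "c", so it ends up with the highest rank
    web = {"a": ["b"], "b": ["a"], "c": ["a"]}
    print(pagerank(web))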

At first, Page and Brin were not only confident that they’d built a better search engine, but, as they wrote in a paper explaining PageRank, one “virtually immune to manipulation by commercial interests”:

For a page to get a high PageRank, it must convince an important page, or a lot of non-important pages, to link to it. At worst, you can have manipulation in the form of buying advertisements (links) on important sites. But, this seems well under control since it costs money. This immunity to manipulation is an extremely important property.

Of course, in hindsight, Google wasn't immune to manipulation at all. Instead, SEO marketers just started using bots to create lots of "non-important pages" to link to sites they want to push up in the Google results. Kolari et al. have found that 88% of all English-language blogs on the Internet are so-called "splogs" meant to game Google. As Professor Finn Brunton writes in his forthcoming book on Internet spam:

Terra's [splog] links to other splogs, which link to still more, forming an insular community on a huge range of sites, a kind of PageRank greenhouse which is not in itself meant to be read by people. Splogs of Terra's type are not meant to interact with humans at all; they are created solely for the benefit of search engine spiders.

In April 2012 Matt Cutts, the head of Google’s antispam team, announced that an important algorithm update would, among other things, “decrease rankings for sites that we believe are violating Google’s existing quality guidelines.” In a video, Cutts explained to webmasters that Google’s antispam algorithms had “gotten better and better at assessing bad links” to the point where the company felt comfortable penalizing people for various kinds of “unnatural links,” including paid links, low quality syndicated articles, or splogs. This was a strong move by Google to punish this kind of bad behavior by making it backfire.

The thing about predictable and powerful backfiring, though, is that the shooter can simply turn the weapon around. If pointing bad links at your own site is now punishable, some SEO artists reasoned, why not point them at a competitor's site and sink it instead?

Indeed, even before the Panda algorithm had been fully deployed, two members of the TrafficPlanet forums, posting under the names Pixelgrinder and Jammy, decided to test whether NegativeSEO (as they called it) was possible. As a target they selected SeoFastStart.com, a site run by an SEO consultant they disliked. Pixelgrinder and Jammy began a lightweight NegativeSEO campaign using a simple tool, while an unknown third party simultaneously organized a much more powerful linking campaign.

The campaign, which lasted a month, dramatically dropped their target’s rankings in key Google results.

The first day Google began notifying webmasters of “unnatural links,” Dan Thies, the consultant targeted by TrafficPlanet, received one. He posted to the Google Webmaster Support forums that he had been warned for having thousands of unnatural links pointing to his site, despite never having engaged in “link building” activity himself. The warning coincided with the campaign conducted by TrafficPlanet.

In October 2012 Cutts announced a new “disavow links” tool. “If you’ve been notified of a manual spam action based on ‘unnatural links’ pointing to your site, this tool can help you address the issue,” wrote Google analyst Jonathan Simon. In a December 2012 video Cutts addressed NegativeSEO directly for the first time. He cautioned that “while a lot of people talk about NegativeSEO, very few try it, and fewer still actually succeed,” and reminded viewers that, if they were worried about the issue, the disavow links tool could be used to “defuse” any action taken against them. At present the extent and efficacy of NegativeSEO as a tactic remain largely unexplored: the next uncertain step in the ongoing dance between Google and those who would game it.

social networks
28 Mar 08:30

There's a Hole in 1,951 Amazon S3 Buckets

28 Mar 08:28

Meet Gabe Newell, Microsoft’s next CEO

28 Mar 08:13

A new English language model release

by admin

A new English language model is available (updated) for download on our new Torrent tracker.

This is a good trigram language model for general transcription, trained on various open sources, for example Gutenberg texts.

It achieves good transcription performance on various types of text; for example, the perplexities on the following test sets are:

  • TED talks from IWSLT 2012: perplexity 158.3
  • Lecture transcription task: perplexity 206.677

Besides the transcription task, this model should also be significantly better on conversational data like movie transcripts.

The language model was pruned with a beam of 5e-9 to reduce its size. It can be pruned further if needed, or the vocabulary can be reduced to fit the target domain.
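For readers less familiar with the metric: perplexity is the exponentiated average negative log-probability the model assigns to a test set, so lower is better. A minimal sketch (my own, assuming per-word log10 probabilities of the kind most LM toolkits report):

    def perplexity(log10_probs):
        """Perplexity = 10 ** (negative average per-word log10 probability)."""
        avg = sum(log10_probs) / len(log10_probs)
        return 10 ** (-avg)

    # Three words scored by a hypothetical trigram model
    print(perplexity([-2.1, -1.7, -3.0]))  # roughly 185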

27 Mar 11:20

We Need an Economic Study on Lost Productivity from Poor Computing Education

by Mark Guzdial

How much does it cost the American economy that most American workers are not computer literate?  How much would be saved if all students were taught computer science?

These questions occurred to me when trying to explain why we need ubiquitous computing education.  I am not an economist, so I do not know how to measure the costs of lost productivity. I imagine that the methods would be similar to those used in measuring the Productivity Paradox.

We do have evidence that there are costs associated with people not understanding computing:

  • We know from Scaffidi, Shaw, and Myers that there are a great many end-user programmers in the United States.  Brian Dorn’s research on graphic designers identified examples of lost productivity because of self-taught programming knowledge.  Brian’s participants did useless Google searches like “javascript <variablename>” because they didn’t know which variable or function names were meaningful and which were arbitrary.  Brian saw one participant spend half an hour studying a Web resource on Java, before Brian pointed out that he was programming in JavaScript, which is a different language.  I bet that many end-users flail like this — what’s the cost of that exploration time?
  • Erika Poole documented participants failing at simple tasks (like editing Wikipedia pages) because they didn’t understand basic computing ideas like IP addresses.  Her participants gave up on tasks and rebooted their computer, because they were afraid that someone would record their IP address.  How much time is lost because users take action out of ignorance of basic computing concepts?

We typically argue for “Computing for All” as part of a jobs argument. That’s what Code.org is arguing, when they talk about the huge gap between those who are majoring in computing and the vast number of jobs that need people who know computing.  It’s part of the Computing in the Core argument, too.  It’s a good argument, and a strong case, but it’s missing a bigger issue.  Everyday people need computing knowledge, even if they are not professional software developers.  What is the cost for not having that knowledge?

Now, I expect Mike Byrne (and other readers who push back in interesting ways on my “Computing for Everyone” shtick) to point out that people also need to know about probability and statistics (for example), and there may be a greater cost for not understanding those topics.  I agree, but I am even harder pressed to imagine how to measure that.  One uses knowledge of probability and statistics all the time (e.g., when deciding whether to bring your umbrella to work, and whether you can go another 10K miles on your current tires). How do you identify (a) all the times you need that knowledge and (b) all the times you make a bad prediction because you don’t have the right knowledge?  There is also a question of whether having the knowledge would change your decision-making, or whether you would still be predictably irrational.  Can I teach you probability and statistics in such a way that it can influence your everyday decision making?  Will you transfer that knowledge?  I’m pretty sure that once you know IP addresses and that Java is not the same as JavaScript, you won’t forget those definitions — you don’t need far-transfer for that to be useful.  While it is a bit of a “drunk under the streetlight” argument, I can characterize the behaviors where computing knowledge would be useful and when there are costs for not having that knowledge, as in Brian and Erika’s work.  I am trying to address problems that I have some idea of how to address.

Consider this a call to economics researchers: How do we measure the lost productivity from computing illiteracy?


Tagged: computing for everyone, economics, public policy
27 Mar 10:30

Don't be an April Fool. Participate in World Backup Day on 3/31

Posted by Jovan Washington

Stop for a second and think about this: if you were to wake up one morning and find that your computer would not boot up, what would you do? Unfortunately, many of you have probably experienced a similar tragedy.

Some facts:

  • A hard drive is a mechanical device, and as such is the most failure-prone component in your computer.
  • If you need to recover your data when the drive fails, you will probably have to pay hundreds of dollars to a recovery service, and there is no guarantee that your data will be fully recovered.
  • The real-world observed failure rate on hard drives is around 3%, and much worse under non-ideal conditions.
  • Accidents happen, whether it’s a coffee dumped on your laptop or a stuck delete key that sends an important folder to the trash without you noticing. Human error is one of the main culprits.
  • Laptops and cellphones are attractive targets for a thief.

Remember, it's not only making backups that is important; it's making sure that your backups actually work! Participate in World Backup Day (March 31) with us and start backing up your data with SpiderOak today, and we'll make sure your data is safe so you're not an April fool:

5 free GB for new users - your World Backup Day deal

Visit SpiderOak.com/signup/ and use the promo code "WorldBackupDay" in your account settings.

Instructions for using the code:

  • Go to www.spideroak.com/signup if you are not currently signed up.
  • You must first activate your account on your computer by opening the SpiderOak downloaded application and selecting "Activate First Device."
  • If you have not yet downloaded SpiderOak, you may do so here: Download SpiderOak.
  • Once activated, go to our homepage.
  • At the top right side, you will see "Login." Click here and enter your credentials.
  • When you are logged in, you will click "Account" in the top right corner.
  • You will then select the orange "Buy More Space" button.
  • Once on the Account Details page, you will select "Upgrade My Plan" to the right.
  • On this page, you will see a "Promotional Code" box.
  • Type "WorldBackupDay" in this box and select "Update"
  • You should see the discount in the 'Yearly Billing' drop down. If so, click "Next."
  • Your account is now updated. Enjoy!
  • Note: This code will replace your current amount with 5GBs if you are an existing member.

    We'll also be offering deals throughout the rest of the month so be sure to keep an eye on our blog, or follow us on Facebook.

    DON'T BE AN APRIL FOOL! Join us and learn more at worldbackupday.com.

    26 Mar 19:42

    Case 81: Bubble Sort

    Walking the halls of his abbey late at night, master Bawan came upon a monk in distress. A morning deadline was fast approaching, and after three all-nighters the exhausted monk had dropped his only remaining coins into the soda machine without first noticing the paper sign taped to it. The sign read OUT OF ORDER in large red capitals.

    “Can you fix it?” asked the monk, who knew of Bawan’s skill with primitive machines.

    Bawan studied the machine for a moment, then crossed out the letters of the sign and under them wrote DEFOOORRTU. Immediately a soda can dropped into the chute.

    Satisfied, Bawan continued on his way.
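    (Bawan’s fix, spelled out: the sign’s letters sorted into order, as the case’s title hints. A bubble sort sketch in Python, purely for illustration:)

        def bubble_sort(items):
            """Repeatedly swap adjacent out-of-order elements until none remain."""
            items = list(items)
            for limit in range(len(items) - 1, 0, -1):
                for i in range(limit):
                    if items[i] > items[i + 1]:
                        items[i], items[i + 1] = items[i + 1], items[i]
            return items

        print("".join(bubble_sort("OUTOFORDER")))  # prints DEFOOORRTU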

    26 Mar 19:40

    Case 83: Consequences

    Jun

    haha...multiple inheritance can really bite you in the ass sometimes...

    A senior monk had applied for admittance into the temple. The Abbess Jinyu was called in to investigate the man.

    “I will dictate a domain,” she said, gesturing at the whiteboard with her cane. “You will model an implementation in Java.”

    The monk bowed and uncapped a green marker.

    “All soldiers of the Imperial Army must know their rank,” began the abbess. “The Emperor may order a soldier to fight to the death, and no one but a soldier can be told to do this...”

    The monk bowed and drew a rectangle, inscribed with instance variables and methods as was his custom.

    “Some soldiers are archers,” continued the abbess, “each of whom must know the number of arrows in their possession. The Emperor may order an archer to shoot a distant foe, and no one but an archer can be told to do this...”

    The monk bowed and drew a second rectangle, joined to the first.

    “Some soldiers are horsemen,” continued the abbess, “each of whom must know the horse they have been assigned. The Emperor may order a horseman to trample the enemies in his path, and no one but a horseman can be told to do this...”

    The monk bowed and drew a third rectangle, similar to the second.

    “Finally,” concluded the abbess, “Some soldiers belong to the Flying Rain of Fire, a cadre whose members are both archers and horsemen in every respect. The Emperor may order his Flying Rain to lead the charge, and no one but the Flying Rain has this privilege.”

    The monk hesitated. For a full minute he did nothing but frown at the whiteboard; all present could sense the fierce calculations taking place behind the monk’s calm visage.

    A nun of the temple whispered to Jinyu: “This problem has several solutions, but I dislike all of them.”

    “Therein lies its value,” whispered the abbess in reply. “For we are all of us doomed in this profession: our designs may aspire to celestial purity, yet all requirements are born in the muck of a pig-sty. * I trust that this monk can succeed when the stars align in his favor, but when they do not, how will he choose to fail? By cowardly surrender? By costly victory? By erroneous compromise? For it is not he alone but the temple that must bear the consequences.”

    * In Jinyu's parlance, the "pig-sty" usually meant "the world outside the temple walls," or sometimes, "my youngest son's bedroom."
    26 Mar 01:39

    How to Write Six Important Papers a Year without Breaking a Sweat: The Deep Immersion Approach to Deep Work

    by Study Hacks


    The Productive Professor

    I’m fascinated by people who produce a large volume of valuable output. Motivated by this interest, I recently set up a conversation with a hot shot young professor who rose quickly in his field.

    I asked him about his work habits.

    Though his answer was detailed — he had obviously put great thought into these issues — there was one strategy that caught my attention: he confines his deep work to long, uninterrupted bursts.

    On small time scales, this means each day is either completely dedicated to a single deep work task, or is left open to deal with all the  e-mail and meetings and revisions that also define academic life.

    If he’s going to write a paper, for example, he puts aside two days, and does nothing else, emerging from his immersion with a completed first draft.

    If he’s going to instead deal with requests and logistics, he’ll spend the whole day doing so.

    On longer time scales, his schedule echoes this immersion strategy. He teaches all three of his courses during the fall. He can, therefore, dedicate the entire semester to two main goals: teaching his courses and conceiving/discussing potential research ideas (the teaching often stimulates new ideas as it forces him to review the key ideas and techniques in his field).

    Then, in the spring and summer that follow, he attacks his new research projects with the burst strategy mentioned above, turning out 1 – 2 papers every 2 months. (He aims for — and achieves — around 6 major papers a year.)

    Notice, this immersion approach to deep work is different than the more common approach of  integrating a couple hours of deep work into most days of your schedule, which we can call the chain approach, in honor of Seinfeld’s “don’t break the chain” advice (which I have previously cast some doubt on in the context of writing).

    There are two reasons why deep immersion might work better than chaining:

    1. It reduces overhead. When you put aside only a couple of hours to go deep on a problem, you lose a fair fraction of this time to remembering where you left off and getting your mind ready to concentrate. It’s also easy, when the required time is short, to fall into the minimal progress trap, where you do just enough thinking that you can avoid breaking your deep work chain, but end up making little real progress. When you focus on a specific deep work goal for 10 – 15 hours, on the other hand, you pay the overhead cost just once, and it’s impossible to get away with minimal progress. In other words, two days immersed in deep work might produce more results than two months of scheduling an hour a day for such efforts. (See the back-of-envelope sketch after this list.)
    2. It better matches our rhythms. There’s an increasing understanding that the human body works in cycles. Some parts of the week/month/year are better for certain types of work than others. This professor’s approach of spending the fall thinking and discussing ideas, and then the spring and summer actually executing, probably yields better results than trying to mix everything together throughout the whole year. During the fall, he rests the part of his mind required to tease out and write up results. During the spring and summer he rests the part of his mind responsible for having original thoughts and making new connections. (See Douglas Rushkoff’s recent writing for more on these ideas).
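    As a back-of-envelope illustration of the overhead point in item 1 (the numbers are my own assumptions, not figures from the post):

        RAMP_UP_HOURS = 0.5  # assumed time lost re-loading context per session

        # Chain approach: roughly 40 one-hour sessions spread over two months
        chain_deep_hours = 40 * (1.0 - RAMP_UP_HOURS)

        # Immersion approach: two fully dedicated 10-hour days
        immersion_deep_hours = 2 * (10.0 - RAMP_UP_HOURS)

        print(f"chain: {chain_deep_hours:.0f} deep hours out of 40 scheduled")
        print(f"immersion: {immersion_deep_hours:.0f} deep hours out of 20 scheduled")

    Under these (invented) assumptions, two immersed days deliver nearly as much focused time as two months of hour-a-day sessions, before even accounting for the minimal progress trap.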

    I’m intrigued by the deep immersion approach to deep work mainly because I don’t usually apply it, but tend to generate more results when I do. I’m also intrigued by its ancillary consequences. If immersion is optimal for deep work, for example, do weekly research meetings make sense? When you check in weekly on a long term project, it’s easy to fall into a minimal progress trap and watch whole semesters pass with little results. What if, instead, weekly meetings were replaced with occasionally taking a couple days to do nothing but try to make real progress on the problem? Even doing this just a few times a semester might produce better results than checking in every week.

    I don’t know the answers here, but the implications are interesting enough to keep the immersion strategy on my productivity radar.

    (Photo by moriza)

    26 Mar 01:23

    Building a New Prose

    by Development Seed

    For many of the sites we build, Jekyll is a prominent tool we use to build dynamic sites served from static pages. When we launched Prose last year, we set out to build a lightweight editor to create and manage Jekyll sites hosted on GitHub. We open sourced Prose, and the response from the GitHub community was overwhelming. Prose is a project with over 900 followers, and many actively use it each day.

    Along with the relaunch of Healthcare.gov in June, we will dedicate time to improving the user experience and reliability of Prose, as well as adding new features.

    This new version will also establish a clear direction to move forward - to make Prose a great interface for authoring content. We will focus Prose entirely on writing Markdown based documents, streamlining the interface for content creators.

    A New Interface

    As part of envisioning a new version of Prose, we have started on wireframes. You can view the entire set on Flickr, and below are an early look at key screens.

    Authenticated Landing Page

    For logged-in users, the landing page features a filterable directory of projects per organization.

    Project Page Settings

    The project settings panel controls page deletion and publish modes, and sets front matter values in clean form fields populated by a project schema.

    Project Page Settings Prototype

    Here's a more detailed version of the page settings panel with vertical navigation.

    These wires should represent an interface that strikes a good balance between users who belong to many projects and organizations and those who belong to just one. Filenames and directories should be quickly scannable and filterable through autocomplete search. Filenames can be long and there can be many per directory. The interface should scale to accommodate volume and use subtle styling cues when content would otherwise overflow. The main navigation in the wires is presented in a vertical format and split into page-level sections. It scales depending on the context of the page:

    Main Navigation
    - Authenticated Landing

    Project Navigation
    - Project Page
    - Create New File

    Page Navigation
    - Editing
    - Preview
    - Media/Assets
    - Page Settings

    New features

    Prose already provides the ability to specify configurable options in _config.yml

    auto: true
    server: true
    prose: 
        rooturl: '_posts'
        metadata:
            - name: layout
              field:
                element: select
                defaultText: 'Select a Layout'
                options:
                  - value: default
                  - value: page
    

    We want to expand on this by providing developers a way to specify a project schema in YAML that props up a sandboxed world of a site, free of system files. A metadata schema provides helpful defaults to YAML front matter, specifies an element type, and lists out values available to the site. Other ideas include specifying an assets directory for images or media.

    We will also enhance the editing interface overall, including improving markup formatting and building a simple mechanism to drop images or media into a page.

    Stay tuned. Work will begin in the master branch of Prose on GitHub - watch there for new development.

    Prose users: Let us know your thoughts over on GitHub

    26 Mar 01:04

    Post "Good Google," Who Will Defend the Open Web?

    by timothy
    psykocrime writes "The crazy kids at Fogbeam Labs have started a discussion about Google and their relationship with the Open Web, and questioning who will step up to defend these principles, even as Google seem to be abdicating their position as such a champion. Some candidates mentioned include Yahoo, IBM, Red Hat, Mozilla, Microsoft and The Wikimedia Foundation, among others. The question is, what organization(s) have both the necessary clout and the required ethical principles, to truly champion the Open Web, in the face of commercial efforts which are clearly inimical to Open Source, Open Standards, Libre Culture and other elements of an Open Web?"


    Read more of this story at Slashdot.



    26 Mar 01:03

    Matthew Garrett Has a Fix To Prevent Bricked UEFI Linux Laptops

    by timothy
    hypnosec writes "UEFI guru Matthew Garrett, who cleared the Linux kernel of blame in the Samsung laptop bricking issues, has come to the rescue of beleaguered users by offering a survival guide enabling them to avoid similar issues. According to Garrett, storage space constraints in UEFI storage variables are the reason Samsung laptops end up bricking themselves. Garrett said that if the storage space utilized by the UEFI firmware is more than 50 percent full, the laptop will refuse to start and will end up bricked. To prevent this from happening, he has provided a kernel patch."


    Read more of this story at Slashdot.



    26 Mar 00:48

    Our Internet Surveillance State

    by schneier

    I'm going to start with three data points.

    One: Some of the Chinese military hackers who were implicated in a broad set of attacks against the U.S. government and corporations were identified because they accessed Facebook from the same network infrastructure they used to carry out their attacks.

    Two: Hector Monsegur, one of the leaders of the LulzSec hacker movement, was identified and arrested last year by the FBI. Although he practiced good computer security and used an anonymous relay service to protect his identity, he slipped up.

    And three: Paula Broadwell, who had an affair with CIA director David Petraeus, similarly took extensive precautions to hide her identity. She never logged in to her anonymous e-mail service from her home network. Instead, she used hotel and other public networks when she e-mailed him. The FBI correlated hotel registration data from several different hotels -- and hers was the common name.

    The Internet is a surveillance state. Whether we admit it to ourselves or not, and whether we like it or not, we're being tracked all the time. Google tracks us, both on its pages and on other pages it has access to. Facebook does the same; it even tracks non-Facebook users. Apple tracks us on our iPhones and iPads. One reporter used a tool called Collusion to track who was tracking him; 105 companies tracked his Internet use during one 36-hour period.

    Increasingly, what we do on the Internet is being combined with other data about us. Unmasking Broadwell's identity involved correlating her Internet activity with her hotel stays. Everything we do now involves computers, and computers produce data as a natural by-product. Everything is now being saved and correlated, and many big-data companies make money by building up intimate profiles of our lives from a variety of sources.

    Facebook, for example, correlates your online behavior with your purchasing habits offline. And there's more. There's location data from your cell phone, there's a record of your movements from closed-circuit TVs.

    This is ubiquitous surveillance: All of us being watched, all the time, and that data being stored forever. This is what a surveillance state looks like, and it's efficient beyond the wildest dreams of George Orwell.

    Sure, we can take measures to prevent this. We can limit what we search on Google from our iPhones, and instead use computer web browsers that allow us to delete cookies. We can use an alias on Facebook. We can turn our cell phones off and spend cash. But increasingly, none of it matters.

    There are simply too many ways to be tracked. The Internet, e-mail, cell phones, web browsers, social networking sites, search engines: these have become necessities, and it's fanciful to expect people to simply refuse to use them just because they don't like the spying, especially since the full extent of such spying is deliberately hidden from us and there are few alternatives being marketed by companies that don't spy.

    This isn't something the free market can fix. We consumers have no choice in the matter. All the major companies that provide us with Internet services are interested in tracking us. Visit a website and it will almost certainly know who you are; there are lots of ways to be tracked without cookies. Cell phone companies routinely undo the web's privacy protection. One experiment at Carnegie Mellon took real-time videos of students on campus and was able to identify one-third of them by comparing their photos with publicly available tagged Facebook photos.

    Maintaining privacy on the Internet is nearly impossible. If you forget even once to enable your protections, or click on the wrong link, or type the wrong thing, you've permanently attached your name to whatever anonymous service you're using. Monsegur slipped up once, and the FBI got him. If the director of the CIA can't maintain his privacy on the Internet, we've got no hope.

    In today's world, governments and corporations are working together to keep things that way. Governments are happy to use the data corporations collect -- occasionally demanding that they collect more and save it longer -- to spy on us. And corporations are happy to buy data from governments. Together the powerful spy on the powerless, and they're not going to give up their positions of power, despite what the people want.

    Fixing this requires strong government will, but governments are just as punch-drunk on data as the corporations. Slap-on-the-wrist fines notwithstanding, no one is agitating for better privacy laws.

    So, we're done. Welcome to a world where Google knows exactly what sort of porn you all like, and more about your interests than your spouse does. Welcome to a world where your cell phone company knows exactly where you are all the time. Welcome to the end of private conversations, because increasingly your conversations are conducted by e-mail, text, or social networking sites.

    And welcome to a world where all of this, and everything else that you do or is done on a computer, is saved, correlated, studied, passed around from company to company without your knowledge or consent; and where the government accesses it at will without a warrant.

    Welcome to an Internet without privacy, and we've ended up here with hardly a fight.

    This essay previously appeared on CNN.com, where it got 23,000 Facebook likes and 2,500 tweets -- by far the most widely distributed essay I've ever written.

    Commentary.

    EDITED TO ADD (3/26): More commentary.

    EDITED TO ADD (3/28): This Communist commentary seems to be mostly semantic drivel, but parts of it are interesting. The author doesn’t seem to have a problem with State surveillance, but he thinks the incentives that cause businesses to use the same tools should be revisited. This seems just as wrong-headed as the Libertarians who have no problem with corporations using surveillance tools, but don't want governments to use them.

    26 Mar 00:29

    Fallible beings

    by Buttonwood

    A LOT of faith is placed in the wisdom of central bankers, by politicians and investors. The former hope that monetary policy can prop up the economy while they attempt to reduce budget deficits; the latter tend to buy equities as soon as they think central bankers are easing.

    But it is worth remembering that central bankers are fallible. I've quoted Ben Bernanke before, asked about the possibility of a housing bubble in July 2005

    Well, I guess I don't buy your premise...We've never had a decline in house prices on a nationwide basis.

    And I just came across this quote* from Janet Yellen, Bernanke's potential successor, in a 2005 speech on housing bubbles and monetary policy. She acknowledges that house prices may be high relative to rents but adds that

    In my view, the ... decision to deflate an asset price bubble rests on positive answers to three questions. First, if the bubble were to collapse on its own, would the effect on the economy be exceedingly large? Second, is it unlikely that the Fed could mitigate the consequences? Third, is monetary policy the best tool to use to deflate a house-price bubble? My answers to these questions in the shortest possible form are, "no," "no," and "no." ... In answer to the first question on the size of the effect, it could be large enough to feel like a good-sized bump in the road, but the economy would likely be able to absorb the shock... In answer to the second question on timing, the spending slowdown that would ensue is likely to kick in gradually... That would give the Fed time to cushion the impact with an easier policy.

    Her answer to the third point is left out for reasons of space, but it echoes Alan Greenspan's argument that a rise in interest rates is too blunt a tool and might do unnecessary damage to the rest of the economy.

    Of course, Ms Yellen was not alone in failing to predict the damage that would be caused by the collapse of the housing bubble. But the fact she didn't get it right should make us pause when we assume that she, and other central bankers, will get other things right. In mid-2010, for example, the Fed thought the US economy would grow by between 2.9% and 4.5% in 2011; it actually grew 1.7%. In August 2010, the Bank of England thought the most likely UK GDP growth rate in 2011 was 3%; it managed 0.7%. Yes, one could argue both banks were blindsided by the EU crisis but Greece had been bailed out in May 2010 and the problems of Ireland and Portugal were already apparent.

    Looking forward, will central banks be able to exit their current policy with anything like the ease they assume? Here is Sir Mervyn King

    I have absolutely no doubt that when the time comes for us to reduce the size of our balance sheet that we'll find that a whole lot easier than we did when expanding it

    Absolutely no doubt? Hmm. With that infallibility, Sir Mervyn is a shoo-in for the Papacy.

    * The quote came from a proof of Stephen King's forthcoming book When the Money Runs Out, which looks very good indeed. Trying to find the original speech proved difficult; the link is to Mark Thoma's blog of the time. The link from there calls up a notice from the San Francisco Fed that the speech is no longer available. Let us charitably assume that it doesn't keep details of eight-year-old speeches.

    26 Mar 00:27

    The Most Difficult Environment for Generating Income in 140 Years

    by mfaber

    There are a few shops that I consider to be absolute must reads for everything they publish. Research Affiliates, GMO, and Leuthold are a few such shops we have featured on The Idea Farm, and this month’s research note by O’Shaughnessy Asset Management is another one.  Jim wrote the excellent “What Works on Wall Street: A Guide to the Best Performing Investment Strategies of All Time“, a classic quant read.

    Their research style is right up my alley – lots of pictures and charts backed up with plenty of data and original thinking.  You can access their archives from this link that includes some great pieces on “The Fiscal Cliff and Your Portfolio” and “Stocks, Bonds, and the Efficacy of Global Dividends”.

    A few pics below before the download:

    25 Mar 23:41

    Paul Hammant: The Cost of Unmerge

    One of the reasons you’re going to choose a Trunk Based Development (TBD) model is because you’re doing concurrent development of consecutive releases. Maybe that’s not even your choice as a development team, but the people funding the development work already have a series of big releases planned, and even eighteen months out, they have a clear idea of the order of releases and are planning the marketing campaigns around them, and it’s pretty clear that you can’t delay the start of each project because of the amount of work needed.

    What if the unthinkable happens? Specifically, the order of releases is changed, or a release is cancelled after usability testing shows it to be about as popular with end-users as dirt frosting on a donut.

    I think we all know the solution I’ll propose, but let’s drill into the problem as if we were using non-trunk branching models.

    Cascading branches.

    Remember: Your development teams are all working concurrently.

    In this model the release 1.0 project team has it easiest. They just work towards their completion. Periodically, though, the release 1.1 project team takes merges (everything, not just cherry-picks). Most likely a dev-pair will work for between half a day and a couple of days to run through all the merge conflict resolutions. This is especially true if the changes were happening to the same parts of a web page, or the same classes. It’s also true if the team has been authorized to do refactorings as they go. If the ‘hit’ is taken weekly it’ll take longer, whereas daily merges from ‘upstream’ might be easier. Then again, if the 1.0 team is 30 people and the 1.1 team is 20 people, there is a likelihood that somebody in the 1.1 team is permanently arbitrating over merge activities. At least, that’s true for high-throughput teams.

    Release 1.1 being cancelled, or being re-sequenced after what would have been release 1.2, is pretty much a nuclear option, specifically as it pertains to part-finished projects and merges you wish you’d not made. If either of those happens, you’re going to have some down-time for development while many dozens of people are involved in work to neutralize (temporarily or permanently) work that’s been merged to 1.2 (and/or 1.3), then start merging that ‘neutralization’ again. This is the costly “unmerge” of the article title. The internal customer paying for the dev work wonders why, from their point of view, dev work essentially stops for a week or a month, yet salaries are still being paid.

    It need not be a whole release that is scrapped or resequenced; it could be just some aspects of it.

    Classic ClearCase Branching

    ClearCase has a classic branching style, which counts a main-line branch as a pristine representation of previously released work, but actual development work-streams happen on a branch that is named for a particular release. As well as developer commits, actual releases happen from that release branch. Because both are happening on the same branch, a code freeze incrementally kicks in to protect the release from mistakes. After the release, everything is merged back to the mainline. Of course defects happen, so the release branch stays alive longer than is intended, and more merges happen after successive releases from there.

    Meanwhile, perhaps slightly in advance of the incremental code-freeze, a new release branch is cut, and preparatory work happens there towards that later release. After the release from the former branch, and the merge down to mainline, there will be a consequential merge out from mainline to the new release branch, bringing them up to date. These merges are “all” rather than cherry picks, and are not cheap.

    The problem is when you’re canceling or delaying some components of the release. You’re going to have to devote some resources to commenting out code, or wrapping it in conditionals tied to a feature-toggle, when you’d not previously intended to. The latter is better of course. Following that there’s a larger testing schedule to ensure you got them all. Again, that’s essentially the costly unmerge.

    The correct solution

    The right thing to do is Trunk Based Development together with Branch by Abstraction, and Feature Toggles.

    When you have multiple teams working in one code-line, and they’re toggling everything because their code is going out long before marketing are ready to trumpet the arrival of the associated features, then there’s no cost of unmerge. The only cost is setting up another Jenkins (substitute your choice of CI server) pipeline for a permutation of toggles that you’ve not previously tested. Maybe there’s some small work after that to iron out the kinks.
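
    To make that concrete, here is a minimal sketch of a feature toggle, assuming a simple in-process flag map; the class and toggle names are made up for illustration and are not from the original post:

    import java.util.Map;

    // Minimal feature-toggle sketch: unfinished release work ships on trunk but stays dark.
    public class ReleaseToggles {

        // In a real system these values would come from configuration or a toggle service.
        private static final Map<String, Boolean> TOGGLES = Map.of(
                "newCheckoutFlow", false,     // release 1.2 work, not yet announced
                "newReportingScreens", true   // release 1.1 work, live
        );

        public static boolean isOn(String name) {
            return TOGGLES.getOrDefault(name, false);
        }

        public static void main(String[] args) {
            if (isOn("newCheckoutFlow")) {
                System.out.println("Render the new checkout flow");
            } else {
                System.out.println("Render the existing checkout flow");
            }
        }
    }

    Cancelling or re-sequencing a release then means leaving its toggle off (and deleting the dead code at leisure) rather than paying for an unmerge, and each toggle permutation you actually ship gets its own CI pipeline as described above.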

    This actually happened to a client of mine. We’d advocated for a shift to trunk based development for many months, and won it because of the huge pain of merging changes from an upstream branch (high-throughput team), and because Subversion was actually breaking in a particular scenario and we were struggling to find support for it. Start a merge, abandon it after two days, start over and abandon again after two days, and you begin to re-evaluate your branching strategy quite seriously.

    Source-Control tool trends.

    These numbers, from Indeed.com, are gameable by those with vested interest, but let’s assume nobody would go to that huge effort:

    Subversion, Git and TFS overtook ClearCase in ‘09 to ’12. ClearCase slowly trends downwards. Thank goodness for that I say. I’ll be pleased when IBM/Rational eventually pull the plug on it. Rational now have “Rational Team Concert” (RTC), which I have not shown. It is at the same level of interest as StarTeam (the black line at the bottom). StarTeam should also go the way of the dodo in my opinion. Mercurial is as good as Git really, but has been wrong-footed by the rise of Git and Github. Even Microsoft realized that Git is unstoppable when it endorsed it in their TFS ecosystem. Perforce chugs along as the darling of Google, Apple, and multi-platform games companies. TFS and Subversion spent a lot of time matching the feature-set and mode of operation of Perforce of course. Subversion has peaked, obviously, but still remains the gorilla in the room. I think Mercurial and Subversion should merge, as Selenium and WebDriver did, and Struts2 and WebWork2 and Rails3 & Merb did.

    Mar 21, 2013: This article was syndicated by DZone

    25 Mar 23:37

    ISO 8601


    YYYYMMDD hooray!

    ISO 8601 was published on 06/05/88 and most recently amended on 12/01/04.
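
    As an aside, most modern standard libraries will emit the unambiguous ISO 8601 form for you; a tiny Java sketch with an arbitrary example date:

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;

    public class Iso8601Demo {
        public static void main(String[] args) {
            LocalDate date = LocalDate.of(2004, 12, 1);
            // ISO_LOCAL_DATE prints YYYY-MM-DD, so there is no 06/05/88-style ambiguity: 2004-12-01
            System.out.println(date.format(DateTimeFormatter.ISO_LOCAL_DATE));
        }
    }
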
    22 Mar 16:13

    Three Indispensable Regular Expressions for OmegaT

    by Roman Mironov

    Learning how to use regular expressions in OmegaT pays off. One of the best things about OmegaT… Wait, do I say this too often? :)

    Anyway, OmegaT supports regular expressions for many tasks, enabling us users to do extremely useful things. I already wrote about using regular expressions to build a list of common errors. Other areas where they come in handy include segmentation and text search. Read this article to learn about some of the most important REs that make it so much easier and more satisfying to work on the translation in OmegaT.

    Searching for whole words

    OmegaT comes with two major search options: exact search and keywords:

    • Exact search yields the exact match of what you’re searching for:

    “open box” results in “open box,” “open boxes,” “re-open boxes,” and so on, but not “opened box.”

    • To find “opened box,” you need to search by keywords, and OmegaT will be looking for any number of individual search terms in any order.

    “open box” results in “opened box,” “An opened box fell to the floor,” and “The mailbox was left opened.”

    This second option is very useful, by the way. And it’s pretty much unique to OmegaT, since it isn’t available in most of its commercial counterparts.

    Now, what if you want to find just “open box,” without any inflections? You need the whole word option. While this option is standard in many applications, OmegaT doesn’t include it yet. No problem, just put \b, denoting a word boundary, right before and after your search term. Enable the Regular expressions checkbox and go ahead.

    Like this: \bopen box\b
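
    OmegaT is a Java application, and its regular expressions follow Java’s java.util.regex syntax as far as I know; here is a small standalone sketch (with made-up sample sentences) of what the \b boundaries buy you:

    import java.util.regex.Pattern;

    public class WholeWordDemo {
        public static void main(String[] args) {
            Pattern substring = Pattern.compile("open box");        // what a plain substring search effectively does
            Pattern wholeWord = Pattern.compile("\\bopen box\\b");  // whole-word search via word boundaries

            String hit  = "Please open box 5.";
            String miss = "Please re-open boxes 5-7.";

            System.out.println(substring.matcher(miss).find()); // true  - substring search also hits "re-open boxes"
            System.out.println(wholeWord.matcher(hit).find());  // true  - "open box" as whole words
            System.out.println(wholeWord.matcher(miss).find()); // false - "boxes" is not the whole word "box"
        }
    }

    The same pattern works in the OmegaT search field once the Regular expressions checkbox is ticked.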

    Finding the untranslated segments

    You can open the next untranslated segment by jumping to it with Ctrl+U. But what if you want to see a bigger picture than just one segment at a time? You can do so with this simple regular expression: ^$, which matches only an empty (i.e. untranslated) target segment. This is what you can use its results for (a small sketch follows the list below):

    1. Double-checking whether every segment requiring translation has been translated.
    2. Filtering out the already translated segments to display the untranslated segments only in the Editor pane. This “uncluttered” view can be very conducive to concentrated and productive work.
    3. Extracting the untranslated segments into a separate file for further use. For example, you may want to create such a file to avoid processing the 100% matches that your client isn’t paying for. If you keep them in the project, they might distract you and will actually appear in the quality assurance results when you run QA in a separate program such as Verifika.
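
    For a rough picture of why ^$ finds the untranslated segments, here is a small Java sketch with made-up segment texts; only the empty target matches:

    import java.util.List;
    import java.util.regex.Pattern;

    public class UntranslatedDemo {
        public static void main(String[] args) {
            Pattern empty = Pattern.compile("^$"); // matches only a completely empty string
            List<String> targets = List.of("Ouvrez la fenêtre Paramètres.", "", "Ouvrez l'onglet Fichiers.");

            for (int i = 0; i < targets.size(); i++) {
                if (empty.matcher(targets.get(i)).find()) {
                    System.out.println("Segment " + (i + 1) + " still needs translating");
                }
            }
        }
    }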

    Matching any single character

    I didn’t realize until recently how easy it was to create segmentation rules in OmegaT using a regular expression as simple as the period, which matches any single character. Wherever you want a segment to break, you just add that text as a “pattern before,” with a period as the “pattern after,” or vice versa. Here is an example:

    Open the Settings window.\n\nOpen the Files tab.

    This kind of segment can be a pain to manage in a translation project because while the two sentences are glued together into one segment here, they can also occur in the project as two separate sentences:

    Open the Settings window.

    Open the Files tab.

    As a result, they might be translated inconsistently.

    But this is where the mighty period comes into play. Just add these two segmentation rules:

    (Break/Exception enabled)

    Pattern Before: \\n\\n (you need to escape both backslashes because otherwise \n will be treated as a regular expression for the newline; see the sketch after the example below)

    Pattern After: .

    And the second one, mirroring the first one:

    (Break/Exception enabled)

    Pattern Before: .

    Pattern After: \\n\\n

    That’s it. You’ll produce two much cleaner segments, and sometimes, they’ll even turn out to be repetitions identical to some other sentences in your project:

    Open the Settings window.

    \n\n

    Open the Files tab.
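
    Since the double escaping trips people up, here is a small Java analogy (not OmegaT code): the text holds a literal backslash-n pair rather than a real newline, so the pattern has to escape the backslash itself:

    import java.util.Arrays;

    public class SegmentationDemo {
        public static void main(String[] args) {
            // The segment holds the literal characters \n\n, not actual newlines.
            String segment = "Open the Settings window.\\n\\nOpen the Files tab.";

            // Regex \\n\\n (written "\\\\n\\\\n" in a Java literal) matches that literal \n\n marker.
            String[] parts = segment.split("\\\\n\\\\n");

            System.out.println(Arrays.toString(parts));
            // [Open the Settings window., Open the Files tab.]
        }
    }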

    Most of the suggestions above originally came from the extremely helpful community at the OmegaT forum. Once again, kudos to the OmegaT “ecosystem”!

    22 Mar 04:22

    Browser Security

    by schneier

    Interesting discussion on browser security from Communications of the ACM. Also, an article on browser and web privacy from the same issue.

    22 Mar 03:46

    How the FBI Intercepts Cell Phone Data

    by schneier

    Good article on "Stingrays," which the FBI uses to monitor cell phone data. Basically, they trick the phone into joining a fake network. And, since cell phones inherently trust the network -- as opposed to computers which inherently do not trust the Internet -- it's easy to track people and collect data. There are lots of questions about whether or not it is illegal for the FBI to do this without a warrant. We know that the FBI has been doing this for almost twenty years, and that they know that they're on shaky legal ground.

    The latest release, amounting to some 300 selectively redacted pages, not only suggests that sophisticated cellphone spy gear has been widely deployed since the mid-'90s. It reveals that the FBI conducted training sessions on cell tracking techniques in 2007 and around the same time was operating an internal "secret" website with the purpose of sharing information and interactive media about "effective tools" for surveillance. There are also some previously classified emails between FBI agents that show the feds joking about using the spy gear. "Are you smart enough to turn the knobs by yourself?" one agent asks a colleague.

    Of course, if a policeman actually has your phone, he can suck pretty much everything out of it -- again, without a warrant.

    Using a single "data extraction session" they were able to pull:
    • call activity
    • phone book directory information
    • stored voicemails and text messages
    • photos and videos
    • apps
    • eight different passwords
    • 659 geolocation points, including 227 cell towers and 403 WiFi networks with which the cell phone had previously connected.
    22 Mar 03:44

    Gauss

    by schneier

    Nice summary article on the state-sponsored Gauss malware.

    22 Mar 03:43

    FinSpy

    by schneier

    Twenty five countries are using the FinSpy surveillance software package (also called FinFisher) to spy on their own citizens:

    The list of countries with servers running FinSpy is now Australia, Bahrain, Bangladesh, Britain, Brunei, Canada, the Czech Republic, Estonia, Ethiopia, Germany, India, Indonesia, Japan, Latvia, Malaysia, Mexico, Mongolia, Netherlands, Qatar, Serbia, Singapore, Turkmenistan, the United Arab Emirates, the United States and Vietnam.

    It's sold by the British company Gamma Group.

    Older news.

    EDITED TO ADD (3/20): The report.

    22 Mar 03:42

    Matthew Garrett: Using pstore to debug awkward kernel crashes

    The problem with Samsung laptops bricking themselves turned out to be down to the UEFI variable store becoming more than 50% full and Samsung's firmware being dreadful, but the trigger was us writing a crash dump to the nvram. I ended up using this feature to help someone get a backtrace from a kernel oops during suspend today, and realised that it's not been terribly well publicised, so.

    First, make sure pstore is mounted. If you're on 3.9 then do:

    mount -t pstore /sys/fs/pstore /sys/fs/pstore

    For earlier kernels you'll need to find somewhere else to stick it. If there's anything in there, delete it - we want to make sure there's enough space to save future dumps. Now reboot twice[1]. Next time you get a system crash that doesn't make it to system logs, mount pstore again and (with luck) there'll be a bunch of files there. For tedious reasons these need to be assembled in reverse order (part 12 comes before part 11, and so on) but you should have a crash log. Report that, delete the files again and marvel at the benefits that technology has brought to your life.

    [1] UEFI implementations generally handle variable deletion by flagging the space as reclaimable rather than immediately making it available again. You need to reboot in order for the firmware to garbage collect it. Some firmware seems to require two reboot cycles to do this properly. Thanks, firmware.

    22 Mar 03:34

    What happens when the Secret Service uses a NSL on you

    22 Mar 03:33

    Escher Illusions in LaTeX

    22 Mar 03:31

    What’s New in 0.148u2

    by Haze

    There are two ways you can judge an emulator: either by what it can do, or by what it can’t do. Personally I’ve always favored the first option; it’s a more positive outlook, and it’s a good one to take with MAME, and especially MESS, or the combined UME project (it’s part of the whole philosophy behind UME, which allows you to do more than MAME).

    It’s with that introduction I’ll mention some of the progress in 0.148u2 coming from the MESS side of things. It goes without saying that the Jaguar driver is a bit rubbish; in terms of what it *can’t* do you could fill a list covering most of the system library. But I’m going to focus on what it can do here, and 0.148u2, thanks to some work from Ville, expands that into the realm of being able to run Tempest 2000, the psychedelic follow-up to Atari’s arcade classic. The game runs and is very playable on my Core2 system with full sound. I don’t think the graphics are perfect, but it’s a huge improvement on before, where it would simply crash the emulator due to unsafe blitter operations overwriting the game ROM in memory!


    (Tempest 2000 benefited from recent improvements to the Jaguar emulation and is now mostly playable with sound)

    Purists might say it lacks the crisp vector look of the original, although that’s obvious given that it isn’t running on vector hardware here, but it has a style of its own, and I feel it could easily have been an arcade release. Jaguar will need a lot more significant work applied (probably a full rewrite) before it’s anywhere near a good driver, but it’s nice to see it getting a bit of attention in the same way that u2 has given some attention to several older MAME drivers. The original Jaguar version of Rayman also benefited from the changes, with the collisions now being fixed, although that version still lacks sound and the save feature shows ‘ERROR’ in all slots; still, it complements the progress made with the Saturn version of the same game in u1 nicely!

    (Rayman was also improved, but still lacks sound)

    Speaking of Saturn, there have been continued improvements to its emulation in u2 from Kale, mostly targeting specific issues highlighted by specific games, but many of the improvements likely have a wider impact than the individual test cases observed. A knock-on effect of this Saturn work has been improvements to the ST-V emulation in MAME, and as featured on Kale’s Blog, the game ‘Zenkoku Seifuku Bishoujo Grand Prix Find Love’ is now playable in MAME. If it wasn’t clear from the title, it’s an adult game where you undress Japanese ‘gals’ by playing various mini-games, including a ‘find the differences’ one, ‘pairs’ and a jigsaw game. Typical 90s garbage really, but at least it works now; it’s really only interesting because of the platform it runs on.


    (Kale got Zenkoku Seifuku Bishoujo Grand Prix Find Love working thanks to a better understanding of ST-V / Saturn hardware)

    Some observations were also made about the protected ST-V games during this development cycle, and Kale identified that the broken player / ball movements in Tecmo World Cup ’98 were indeed related to a protection check, although even with that fixed the game still hangs if the USA goalkeeper gets the ball; it isn’t clear if that is further protection on the GK’s special move, or just a bug in the Saturn / ST-V emulation. From some brief studying of the protection it looks like Sega actually used a complex encryption / compression system on some of the game data for these ST-V games, similar (possibly identical) to the one used on the Naomi cartridges. Further research suggests it might also be used on Model 3, both for the games with compressed graphics there and even the ones doing a dumb ‘string check’. If true, that would mean Sega were using an encryption scheme as complex as CPS2 to perform nothing more than a hardcoded check against a string in some cases. Mind boggling!


    (Kale improved Tecmo World Cup ’98, but it still has game breaking bugs and bad performance)

    Either way, a lot more work, and possibly extensive data collection + analysis, will be needed to either confirm or deny that theory. Decathlete remains the odd one out and seems to have the compression more for functional purposes than protection; the way it talks to the device is different to the rest, and it explicitly uploads what look like Huffman tables for the compression. I plan on having another look at that one in the future. There were other ST-V fixes as well: the long-standing ‘some games give 2 credits on startup’ bug appears to have been fixed, and while only a minor irritation it is a sign that the driver, and understanding of the hardware, are starting to mature, even if a huge amount of work remains, especially in terms of performance and video accuracy.

    The work on Gunpey is included in 0.148u2, although no progress was made on the compression, which remains problematic. My early optimism that I would have that one sorted in a week definitely didn’t pay off! It’s actually rather playable, but it is still marked as not-working because of the sheer number of broken graphics caused by the missing decompression.


    (Gunpey, quite playable but understanding the compression scheme has become a sticking point)

    You’re probably sick of seeing Cool Riders by now, but that is included as well. I’d had the driver frozen for a while so as not to break anything prior to the 0.148u2 release; I might give some of the remaining issues another look now that u2 is out of the way. The main irritation is still the sound. *edit* Cool Riders will have problems on some Mac systems in 0.148u2 due to a bug in the Clang compiler used; a workaround has been submitted by Phil B. Thanks for the reports.


    (Cool Riders is officially supported in 0.148u2)

    On the MESS side etabeta has been giving some attention to expanded cartridges for a number of systems. The Genesis and SNES were beneficiaries of this, as was the documentation of the cartridge contents (ROM names, chip locations, extra hardware etc.), thanks to notes / information from ‘ICEknight’, ‘Sunbeam’ on the MESS forums and other contributors from elsewhere. This kind of information is essential to the documentation value of MESS, and of course to the proper identification and emulation of any important extra chips found in the cartridges.

    I mentioned expanded cartridges, and for the Genesis that more often than not means 3rd party ones, usually unlicensed (by Sega) Chinese titles and the like, many of which had unusual / custom mapping for things like their battery-backed save systems, or custom protections. A number of games previously requiring hacked dumps to run can now be used with the proper ones, including things like ‘Tiny Toon Adventures 3′. I am however still noticing some stability issues, with the driver randomly crashing, especially on startup with certain games (Mulan for example). This may or may not be related to some overall stability issues I’ve noticed since the recent HLSL work went in, where games with dynamic resolution changes are bombing out on me at random. The Genesis does do dynamic resolution changes…


    (Screenshots: Tiny Toon Adventures 3, Legend of Wukong, Mulan, Super Mario World 64 and Beggar Prince – a selection of unlicensed Genesis titles where the extra hardware in the cartridges is now emulated)

    Other systems saw work on obscure carts / pirate releases too, with the Gameboy not being excluded from that. Work was done on support for the custom hardware used by “Shi Qi Shi Dai – Jing Ling Wang Dan Sheng”, a Chinese RPG; it still has issues with drivers other than gbpocket due to unhandled bios interactions (maybe it only works on specific machines anyway?). Improvements were made to the Rockman 8 support too, a pirate sequel to the famous series, although I’m not convinced it REALLY works yet; most enemies seem to be missing until you die, and the glitches with the video timing are irritating.


    (Some unlicensed Gameboy titles had their emulation improved by emulating cart specific hardware)

    Another Gameboy-like handheld, the Megaduck, had its software list hooked up, although both the Software List and driver have existed in their current form for a long time, so I’m guessing the lack of a hookup (a one line addition) was more of an oversight than anything else. The emulation seems to have some minor timing glitches causing the occasional bad line, unsurprisingly similar to those seen with the regular Gameboy. *edit* Apparently this was actually a regression fix, and the Softlist had been hooked up in the past, but the hookup was lost at some point in recent refactoring.


    (The Megaduck was hooked up to the Software List, although both the driver and list have been there for a while!)

    As part of etabeta’s work the SNES saw a large number of cleanups, and refactoring to make things work in a more ‘logical’ way. For cartridges with expanded hardware you no longer need to specify different base machines; instead the software lists specify what hardware was in the cartridge and dynamically add the extra required CPUs to the emulation etc. The downside to this is that the slot system seems to be causing some pretty significant performance issues, with a drop of 20% reported on some of the SNES titles for which performance was already worrying (the DSP based ones). Hopefully this is temporary, and some more intelligent coding somewhere can win it back. The SNES also had a number of pirate titles and the like, including an odd multi-game cart consisting of an original Tetris game alongside many ported NES titles. Not all the games run especially well in MESS, although many are glitchy on (at least some models of) original hardware too. Like most multi-game pirates the cartridge contains extra bankswitching logic, and etabeta needed to emulate this in order for it to run at all.

    (The Korean 20-in-1 bootleg, a funky SNES cart containing NES games, required custom bankswitch emulation in order to boot, although it still has issues)

    While that pirate cart is quite funky, there were also a number of multi-games containing regular SNES games. Unfortunately most dumps of these are bad, containing only the first bank, probably due to being dumped with cart copiers incapable of dealing with the bankswitching. A similar problem exists for many Genesis multi-game pirate carts.

    There were SNES games with additional protection as well, although many of these seem to have been of an even lower quality than the Genesis ones. Tekken 2 is one such example. A cracked version of the pirate game has been supported for a while, but 0.148u2 adds emulation of the original protected pirate cartridge too. The screenshots might look reasonable but the actual gameplay on offer makes Fit of Fighting in MAME look like Marvel vs. Capcom. Imagine if you will controls, animation and scrolling running at something like 5 frames per second, and a game where most of your attacks, if you do manage to pull any off, get blocked. It’s diabolical. As it turns out, emulating the protection on this also allowed another game to run, the Street Fighter EX plus alpha pirate; that’s marginally better, but we’re talking very, very small margins here.


    (Some more terrible SNES pirate games are also now working)

    You’ll have noticed something of a recurring theme with the emulation of these pirate carts, and might be wondering why etabeta has spent his time looking at these rather than spending time improving the actual SNES and Genesis emulation. I think a lot of it comes down simply to levels of interest: it’s always enjoyable to be discovering, documenting and understanding something new, then recording those findings in MAME / MESS as a record of how they work, and while the games are often terrible that’s not really the point. When I was working on HazeMD it was one of my goals, to understand the things nobody had put much effort into understanding before, so I’m glad to see that continued here, and it looks like people have even started performing hardware tests for some of the protected carts to document how they really work. I’ve always said the value of MAME / MESS is the knowledge contained within it, and obviously understanding these things increases that knowledge even when the games are of little worth.

    Back to the MAME side, we decided to mark Stadium Hero 96, the game featured in the previous update here, as working. Kale played through the game, completing it, and reported that there only seem to be a few issues with the clipping windows in places, and no game breaking bugs, so the IMPERFECT GRAPHICS flag seems more appropriate than the non-working one at this point.


    (The previously shown progress on Data East’s Stadium Hero 96 is included)

    I don’t usually cover bootleg clones, but ‘RevisionX’ dumped an unknown board which turned out to be a new bootleg of Moon Cresta called Star Fighter (making it the 3rd game we have supported to carry that name). It has some redrawn graphics, fast shooting, and more aggressive wave advancement compared to the original at least; I’m not sure how it compares to some of the other bootlegs. The colours might be wrong because no colour PROM was dumped, and we’re using the one from the original.


    (Star Fighter is an interesting bootleg of Moon Cresta)

    PoPo Bear, another game featured here in a previous update is also officially supported in 0.148u2.


    (Snake meets Bomberman in PoPo Bear)

    From a more technical side of things, one of the most significant updates in 0.148u2 is the complete rewrite of the 6809/6309/KONAMI CPU cores by Nathan Woods. This was done as an attempt to make the cores cycle-accurate and should lead to far better raster effects in systems like the CoCo in MESS, with some other supporting improvements. This is a very big change, and has the potential to break a lot; in fact when it first went in there was widespread breakage of many drivers in MAME and MESS, but during the course of the u1->u2 cycle the majority of the obvious ones were caught and fixed. Just be warned, some bugs could still be lurking because it’s fresh code replacing tried and tested old code. Naturally, if you see any new suspicious behavior introduced for games using a CPU from this family in 0.148u2, it should be reported on Mametesters.

    Kale also put some work into the Casio Loopy, allowing preliminary screens of that. The system is a difficult target primarily because there is no dump of the BIOS. The system uses an SH1 with internal ROM, and boots the games from that, with some code pointing back to the bios for various interrupts / system calls. The goal of trying to emulate some of the system like this is in part to gain an understanding of how things work, and potentially come up with a way to dump the content of the internal ROM, if the hardware allows for reading of it from the game code. The speech bubbles shown in one of the games might actually be an ideal starting place for such work, so while most of this looks like meaningless garbage right now it is actually potentially very important progress.


    (Preliminary work on Casio Loopy might help find a way to get the BIOS dumped)

    Back to the ‘technical’ changes, we’ve seen a LOT of modernizations in MAME and MESS during this update, in addition to Firewave going over a lot of drivers and devices adding proper initialization of many member variables in an effort to keep the project clean, deterministic and stable. Many of these were missed during older initialization passes because the older (non C++) device model would automatically zero out memory when creating devices, but the new one does not, and was leaving many things in undetermined states, which generally isn’t a good thing because a lot of those are detectable by the emulated system and/or can cause stability issues. I do wonder if the Genesis stability issues I’ve been seeing while writing this article aren’t related to a similar problem, if we can rule out the D3D code updates made for HLSL / multithreading by default. Tracking random issues like that can be a nightmare though, especially when they stop happening altogether in your debug builds, so I’m glad to see preemptive steps being taken to prevent many potential problems from cropping up later; it certainly helps narrow things down when you have to start looking for potential issues manually.

    From the oddity department, LoganB submitted a software list for the Sega Dreamcast VMU quickload files. While these aren’t technically ‘roms’ as such (but rather memory dumps), they do provide an accessible way to make use of Sandro Ronco’s driver by validating that you’re using recognized software, software that would definitely work on the original unit. While the approach used is not very ‘pure’, it is convenient, and very useful for regression testing and the like. Here are some screenshots of the Powerstone VMU memory dump running. No new work has been done on the actual driver, but with the software list now being present and hooked up it’s the first time I’ve given it a run through, so I thought it warranted putting some shots up as a reference to how it runs right now (even if it’s still marked as NOT WORKING).


    (The Sega VMU got a Software List, this was the controller Memory Card thing used on the Dreamcast)

    Not all work involving MAME and MESS involves changing the actual projects directly either. The support network is vital as well, and while I feel we should keep a lot of this tied as closely to the actual emulation distributions as possible (rather than relying on miscellaneous forum posts and wiki entries), it is good to see when people make an active effort to teach people how to use MESS, and show what can be done. It’s with that in mind I’d like to mention a thread R.Belmont created on the MESS forums to help people understand MESS to the point of being able to install an actual operating system inside an emulated PC in MESS. While doing this isn’t very practical for performance reasons, it is important people realise that both MESS and UME can make use of far more advanced features of the MAME framework than MAME alone, and it shows how some of them are used. Now none of this would be possible without the underlying improvements to the PC emulation which have been ongoing for some time now, but the two really go hand in hand, and one of the motivating factors for the project(s) is curiosity, and that desire to do things just because they can be done. There is another amusing thread of screenshots on the MESS forums taken from within MESS, again showing that you can squeeze a lot of entertainment out of the projects beyond the obvious ones.

    Let’s look at some more of the clone additions in MESS too. You’ll see that there has been a steady flow of Megatouch clones for the older Megatouch units showing up for a while now (most of the newer ones are PC based I believe, although I don’t know if anybody has looked at them recently after all the work done on the PC drivers)

    Pit Boss Megatouch II (9255-10-01 ROG, Standard version) [Brian Troha, The Dumping Union]
    Megatouch III (9255-20-01 ROK, Standard version) [Brian Troha, The Dumping Union]
    Megatouch III (9255-20-01 ROB, Standard version) [Brian Troha, The Dumping Union]
    Megatouch III (9255-20-01 ROA, Standard version) [Brian Troha, The Dumping Union]
    Super Megatouch IV (9255-41-01 ROE, Standard version) [Brian Troha, The Dumping Union]
    Super Megatouch IV (9255-41-01 ROC, Standard version) [Brian Troha, The Dumping Union]

    That’s the whatsnew listing for Megatouch clones this time around. A question was put forward on the Mameworld board asking what the difference was between the various Megatouch clones, and ‘anoid’ put the following information forward about Megatouch III:

    9255-20-01 - Standard Version - Includes all options and no restrictions
    9255-20-02 - Minnesota Version - Excludes Casino Games
    9255-20-03 - Louisiana Version - Excludes All Poker Games
    9255-20-04 - Wisconsin Version - Game cannot end if player busts; 1000 points are added to end of each hand
    9255-20-06 - California Version - Excludes Poker Double-Up feature and no free game in solitaire
    9255-20-07 - New Jersey Version - Excludes sex trivia and includes 2-coin limit with lockout coil
    9255-20-50 - Bi-lingual ENG/GER - Same as standard version, without word games
    9255-20-54 - Bi-lingual ENG/SPA - Same as standard version, without word games
    9255-20-56 - No Free Credits - Same as standard version, without word games and no free credits
    9255-20-57 - International - Same as standard version, without word games

    Most of that doesn’t really apply here, because all these are ‘Standard versions’, i.e. the most complete ones, so it’s possible (likely?) that the ROx codes are just revision numbers used to indicate bug fixes etc. The information provided is interesting however because of the restrictions present in some of the other versions. Things like ‘no word games’ are obvious: the word games are going to be heavily Americanized, and unsuitable for regions not using a US-English dictionary. The rest are more complex, clearly indicating that some regions consider the standard version of the games to fall foul of gambling laws, even if the majority of people would not consider these to be gambling games. Rules such as “Game cannot end if player busts; 1000 points are added to end of each hand” would appear to be tailored around very specifically worded legislation!

    While on the subject of gambling, the Data East hardware ‘Dream Ball’ (as featured in a previous update) is also included.


    (Dream Ball is supported)

    A bunch of new clones were added to the Igrosoft Multifish driver by MetalliC too, including the final game released on that platform, ‘Crazy Monkey 2′, but the graphics ROMs and palette have some unhandled address line swapping on that one, so while you can initialize it like any other it currently only plays ‘blind’.

    The new Head On set is also an interesting clone because it uses a different maze to any of the other sets. ANY dumped this one, and I added support for it, bypassing what seems like it might be a trivial protection check on startup. It’s since been revealed that there was a much older dump of the same set, also from Italy, and that it should run on the Sidam boardset for the game (which lacks a colour PROM, so chances are it should also be black & white). I guess this must have been a relatively common version of the game in Italy?


    (This odd Italian clone of Head On features a different maze to the usual version)

    ShouTime also continued the run of tracking down important clones and thus we can say the World version of Starblade is supported now as well. Visually it looks identical to the Japan one, sans the warning at startup, so there isn’t really anything to show or write about for it other than this quick mention, but by having a World version confirmed and supported the documentation value of MAME is enhanced because it confirms the game was released outside of Japan, and that a specific version was made for that.

    In MESS, R.Belmont along with some others continued to add support for the various Apple II expansion cards, the Street Electronics Echo Plus, Zip Technologies ZipDrive and Apple II Rev. C SCSI Cards. The first of those is apparently a speech board, with the others being more self-explanatory.

    Numerous other fixes were made across both projects, a couple of which you’re unlikely to even notice unless you’re intimately familiar with the games in question, but they are nevertheless important fixes. An example of this is the fix made to the Deniam Logic Pro 2 sound banking; apparently just one sample differs between the banks, and the error / missing banking had gone unnoticed for years.

    Wrapping things up, Kale put in an early driver for the Casio FP-200, and you can already enter basic programs (make sure to reset the memory first, though, by typing RESET).


    (The Casio FP200 had a small screen built into the keyboard unit)

    As you can see, a lot of the progress again comes from the MESS side of things, and that’s even with me having worked flat out on MAME stuff for the past 2 months, but there really isn’t much else to write about in terms of visible progress that I haven’t covered. There are definitely a few more interesting bits and pieces in the pipeline from other devs, but quite when they’ll hit nobody knows (Phil’s progress on Turret Tower still hasn’t been submitted, for example, and the possibility of being able to submit another driver I worked on jointly with him, but which we’ve had to keep private for the past 3 years, is increasing, so I’m crossing my fingers over that).