Beneath a mess of half-baked systems and massively detrimental online requirements, SimCity actually had some pretty cool ideas. Simulation of individual people and entities? Community options for those who want them? Curved roads? All interesting stuff on paper. Unfortunately, the reality of Maxis’ latest city builder failed (rather miserably) to live up to those promises, and Maxis has been struggling to build something workable from the pieces ever since. Enter Citybound. Its goal? To construct a city sim from the ground-up with a focus on single-player, out-of-the-not-a-box moddability, and simulating a truly sizable geographical region – not an itsy bitsy ant hill town. Also curved roads. Always curved roads.
Giannis Farsaris is attempting an experimental publication of a literary book using collective micro-funding (crowd-funding). The aim is to independently publish a collection of short stories he is currently writing, titled “Η διαφορά του άσπρου απ’ το λευκό” (roughly, “The Difference Between White and White”), as a work of the OPENBOOK open library, which he himself created in 2010. The venture is non-profit: the money raised will cover the costs of the printing house, the editor, the graphic designer, and the postal delivery of the books to the campaign’s backers.
Shortly after the campaign is completed, the e-book will be released freely on the internet under a Creative Commons license. Free copies of the book will also be donated to public and municipal lending libraries and reading clubs across the country. In addition, copies of the book will be set loose via BookCrossing.
Publishing a book through crowd-funding continues a series of ventures in the direction of open, collaborative literature. The possibilities the internet offers are limitless and can become a powerful tool in the hands of creators.
Recently I inherited some systems running Chef client, with a script that has to be called via sudo to edit some root-owned, Chef-provisioned config files (e.g. munin.conf).
The /etc/sudoers contents looked like this:
%sudoers hostname=(root) NOPASSWD: /usr/bin/chef-man-provision
and it actually executed rvim
rvim -u /root/.rvimrc /etc/munin/munin.conf
The contents of /root/.rvimrc looked like this:
" all following options are for securing vi for chef-man-provision script.
" Security option: disable file opening with `:r` command
cmap r
cmap o
This “security option” had a pretty nasty effect… We couldn’t use characters “r” and “o” in vi command mode, even in simple search strings…
Enter sudoedit! To quote the sudoedit(8) manpage
-e  The -e (edit) option indicates that, instead of running a command, the user wishes to edit one or more files. In lieu of a command, the string “sudoedit” is used when consulting the security policy. If the user is authorized by the policy, the following steps are taken:
1. Temporary copies are made of the files to be edited with the owner set to the invoking user.
2. The editor specified by the policy is run to edit the temporary files. The sudoers policy uses the SUDO_EDITOR, VISUAL and EDITOR environment variables (in that order). If none of SUDO_EDITOR, VISUAL or EDITOR are set, the first program listed in the editor sudoers(5) option is used.
3. If they have been modified, the temporary files are copied back to their original location and the temporary versions are removed.
If the specified file does not exist, it will be created. Note that unlike most commands run by sudo, the editor is run with the invoking user’s environment unmodified. If, for some reason, sudo is unable to update a file with its edited version, the user will receive a warning and the edited copy will remain in a temporary file.
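The three steps above can be sketched in a few lines of Python. This is a toy illustration of the copy-out/edit/copy-back flow only, not sudo's actual implementation: the function name and the editor-as-argument-list parameter are invented for the example, and real sudoedit additionally drops privileges and resolves the editor from SUDO_EDITOR, VISUAL, or EDITOR.

```python
import os
import shutil
import subprocess
import tempfile

def sudoedit_style_edit(path, editor):
    """Toy version of the sudoedit workflow (name is made up):
    1. copy the file to a temporary location,
    2. run the user's editor on the copy only,
    3. copy back if (and only if) the copy was modified."""
    fd, tmp = tempfile.mkstemp(suffix="-" + os.path.basename(path))
    os.close(fd)
    shutil.copyfile(path, tmp)                   # step 1: temporary copy
    with open(tmp, "rb") as f:
        before = f.read()
    subprocess.run([*editor, tmp], check=True)   # step 2: editor only ever sees the copy
    with open(tmp, "rb") as f:
        after = f.read()
    if after != before:                          # step 3: copy back only if modified
        shutil.copyfile(tmp, path)
    os.remove(tmp)
```

Because the editor only ever touches the temporary copy, a locked-down editor like the rvim hack above becomes unnecessary: the edit session never runs with root privileges.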
So, /etc/sudoers was modified to look like this
%sudoers hostname=(root) NOPASSWD: sudoedit /etc/munin/munin.conf
We now type “sudoedit /etc/munin/munin.conf” and everything works as it should have worked in the first place, i.e. securely, and we can use all the letters!
I’ve talked before about the joys of continuous software delivery.
Well, I’ve been building a couple of micro sites recently and thought it would be nice to try out a few new technologies and techniques.
We use TeamCity and Octopus Deploy for another much larger site. For micro sites I think that Octopus is overkill. A large complicated deployment workflow isn’t necessary for a simple micro site. Octopus really comes into its own when you need to deploy to a large web farm, or you need a workflow prior to releasing the website into production.
I thought I’d give Appharbor a go. Appharbor is a newish form of web hosting. They host your website as you’d expect, but they’ve also integrated source code control and source code building into their offering.
All you need to do is create an ASP.Net website project, initialize your git repository then push it to Appharbor. Your website is then built, any tests are executed and then, if you’re green, the website is deployed. Done!
One lesson learned. Your tests need to give you a high degree of confidence that your site is working. Your tests are your only gatekeeper stopping you from deploying a broken site. So, if your tests are green, the site had better work.
I’ve aimed for 100% code coverage with the tests and have managed to achieve 94.41%. The remaining 5-odd percent is code that calls a third-party website’s API. Code that talks to external services is tough to test, so I test that part manually. Not ideal, but I can live with it.
The visuals on the sites are a little bit minimal, but the sites do work.
Putting aside the particulars of Bitcoin, the potential it represents is absolutely a very big deal.
As I’ve written multiple times on stratechery, the defining characteristic of anything digital is its zero marginal cost. Take apps for example:
What makes the software market so fascinating from an economic perspective is that the marginal cost of software is $0. After all, software is simply bits on a drive, replicated at the blink of an eye. Again, it doesn’t matter how much effort was needed to create said software; that’s a sunk cost. All that matters is how much it costs to make one more copy – $0.
The implication for apps is clear: any undifferentiated software product, such as your garden variety app, will inevitably be free. This is why the market for paid apps has largely evaporated. Over time substitutes have entered the market at ever lower prices, ultimately landing at their marginal cost of production – $0.
The same story applies for music, movies, content, etc., and this has fundamentally changed what it means to do business on the Internet. It’s why, for example, WhatsApp was so valuable to Facebook: attention is the true finite resource, and how it’s commanded is, in some ways, beside the point.
Bitcoin and the breakthrough it represents, broadly speaking, changes all that. For the first time something can be both digital and unique, without any real world representation. The particulars of Bitcoin and its hotly-debated value as a currency I think cloud this fact for many observers; the breakthrough I’m talking about in fact has nothing to do with currency, and could in theory be applied to all kinds of objects that can’t be duplicated, from stock certificates to property deeds to wills and more.
What makes Bitcoin so clever is how it assumes self-interest and uses incentives. To put it in the simplest possible terms, instead of a paid broker for transactions, tens of thousands of distributed computers working independently do the verification, at no cost to those involved in the transaction. Their reward is the possibility of more Bitcoin – the verification process is also the mining process. Most people are focused on the “mining” part of the process, but it’s the verification aspect that is profound.
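The asymmetry described above (finding a valid proof is expensive, while checking one is a single hash) can be sketched with a toy proof-of-work in Python. This is an illustration of the idea only, not Bitcoin's protocol: real mining double-SHA-256-hashes an 80-byte block header against a 256-bit target, and the function names here are invented for the example.

```python
import hashlib

def mine(payload: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with
    `difficulty` hex zeros. Expected cost grows ~16x per extra zero."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{payload}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(payload: str, nonce: int, difficulty: int) -> bool:
    """Checking a claimed nonce costs a single hash: no search at all."""
    digest = hashlib.sha256(f"{payload}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Whoever finds the nonce has, as a side effect, stamped the payload (in Bitcoin, a batch of transactions) with a proof that anyone can check instantly; the protocol pays for that search with newly released Bitcoin.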
Unfortunately, it’s not clear you can really divorce this verification process from the speculation involved with mining; it’s the speculation that incentivizes the verification. In other words, while the process behind Bitcoin enables unique digital goods beyond currency, the incentives only really work if said digital good has stored monetary value. Absent those incentives those doing the verification would need to earn some sort of commission, and then we’re right back where we started.1
Still, currency is something – surely no-fee transfers are worth celebrating! And, as someone who regularly deals with wire transfers, I’m sympathetic to this point. Yet even if zero-fee transfers became seamless, Bitcoin as presently architected would be anything but free, and every one of us would have to pay the price.
The problem is the externalities of verification/mining. From the Wikipedia article on externalities:
In economics, an externality is the cost or benefit that affects a party who did not choose to incur that cost or benefit.
For example, manufacturing activities that cause air pollution impose health and clean-up costs on the whole society, whereas the neighbors of an individual who chooses to fire-proof his home may benefit from a reduced risk of a fire spreading to their own houses. If external costs exist, such as pollution, the producer may choose to produce more of the product than would be produced if the producer were required to pay all associated environmental costs. If there are external benefits, such as in public safety, less of the good may be produced than would be the case if the producer were to receive payment for the external benefits to others. For the purpose of these statements, overall cost and benefit to society is defined as the sum of the imputed monetary value of benefits and costs to all parties involved. Thus, it is said that, for goods with externalities, unregulated market prices do not reflect the full social costs or benefit of the transaction.
Recall the magic that makes Bitcoin profound: scores of independent computers all over the world running at full speed in the hope of capturing new Bitcoin, and in the process verifying transactions for free. Those computers need power, and that power needs to be generated. True, whoever owns the servers is paying a huge electricity bill, but (in most areas of the world) that electricity bill does not include the societal cost of pollution generated by electricity production.2
Moreover, the design of Bitcoin guarantees that electrical consumption increases dramatically and indefinitely. Normally, you would expect the supply of computing power for a digital currency to initially increase, thus increasing the supply of said digital currency, which then lowers the price, ultimately reducing demand:
Under normal conditions, as the supply of computing power increases, the amount of a digital currency would increase as well. This lowers the price, eventually reducing demand.
That’s not the case with Bitcoin though. Anticipating the amount of power that would be thrown at mining Bitcoin, Satoshi Nakamoto built in a simple escalator that ensured new Bitcoin would be released about every 10 minutes no matter the amount of power being applied to mining/verification. This has effectively locked Bitcoin miners into a zero sum contest wherein greater and greater computing power serves only to steal opportunity from fellow miners; there is no corresponding increase in Bitcoin to be had.
Bitcoin is designed to be released on a regular schedule, no matter how much computing power is applied to it. This means supply will never catch up to demand, resulting in ever higher prices paid for with more computing power, i.e. more electricity.
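The “escalator” can be sketched as a simple feedback loop: each retarget period, difficulty is scaled by how far the actual block interval drifted from the ten-minute target. The function names below are invented for illustration; the real protocol retargets every 2016 blocks and clamps how far the adjustment can move in one step.

```python
def block_interval(hashrate: float, difficulty: float) -> float:
    """Expected seconds per block: work required divided by work per second."""
    return difficulty / hashrate

def next_difficulty(current_difficulty: float, actual_seconds: float,
                    target_seconds: float = 600.0) -> float:
    """If blocks arrived faster than the 10-minute target, scale
    difficulty up proportionally; if slower, scale it down."""
    return current_difficulty * (target_seconds / actual_seconds)
```

Doubling the hash rate halves the block interval only until the next retarget, after which the interval returns to ten minutes: the extra electricity bought no extra Bitcoin, only a larger share of a fixed flow.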
The only possible increase is in computing power, which ultimately means Bitcoin effectively uses electricity as a release valve for inflation, compounding the externalities that accompany power production.
For what it’s worth, the structure of Bitcoin dictates that the price continue to rise, presuming it remains a viable currency. That price, though, is not free, and no one asked me if I were willing to pay.
- In fact, Bitcoin, which has a cap on the total amount of Bitcoin that will ever exist, is ultimately headed this way
- For the record, I am a major proponent of carbon taxes as both a means of reducing pollution as well as spurring innovation
A few days ago I tweeted: "Professional blogging --> no. It can be incidental. A professional reporter can blog. But being a blogger is not a professional thing."
At the time, I promised I'd write a blog post to explain. This is that post.
It is possible for a professional reporter to blog, even when they're doing their job as a reporter. But it is not a professional act. A reporter might blog about what they learn being a reporter, or covering a certain story. The reporter is an expert on reporting, at least. But a reporter could also be an expert at model trains. Or skiing. Or even be a kibitzer about politics or sports. Expertise comes at all levels. Let's not judge someone as not being expert enough. If you have something to say, it's cool to say it, and a blog is a great place to say it.
However, a blogger is not a reporter who uses WordPress. Yes it's a blogging system, but it's also a content management system, fully capable of running the web presence of all sizes of news organizations. How your writing is transmitted to readers has nothing to do with the act of writing news. It's a trivial distinction.
What's the harm in letting reporters call themselves bloggers? Well, we need a word for a person who shares his or her expertise for free. And we have such a word -- blogger, which derives from weblog, a term coined by an early blogger, Jorn Barger, shortened to blog by Peter Merholz, and adopted by the community years before reporters started calling themselves bloggers.
We need a word for what we do because it is an important activity. You can't understand how news works today without understanding the role that bloggers play. So in a sense professional reporters hurt themselves by usurping a term that meant something before they applied it to themselves.
Why are bloggers important to reporters? Bloggers are your sources. They are the people who previous generations of reporters had to reach by telephone. These days reporters can skim hundreds of perspectives on the web, prioritized by search engines. The reach of reporters in the age of blogging is far greater than it was in the age of the telephone. Understanding this synergy is key to understanding how news will evolve in the future.
But the real reason to let amateurs have this word is it's the right thing to do. It's fair. Reporters already have a word to describe what they do. Let us keep ours.
See also: Why blog?
Mobile World Congress came and went last week, and we finally have some time to sit down and look at it in hindsight, without the rush of all the announcements and hands-on videos. Here are the six most …