Shared posts

21 Apr 07:08

Sourdough Starter

Once the lockdown is over, let's all get together and swap starters!
02 Mar 14:52

Ringtone Timeline

No one likes my novelty ringtone, an audio recording of a phone on vibrate sitting on a hard surface.
15 Aug 10:30

Old Game Worlds

Ok, how many coins for a cinnamon roll?
22 Jan 12:14

Mastodon 2.7

Polish translation is available: Mastodon 2.7. The fresh release of Mastodon brings long-overdue improvements to the discoverability of content and to the administration interface, as well as a large number of bug fixes and extra polish. The 2.7 release consists of 376 commits by 36 contributors since October 31, 2018. For line-by-line attributions you can peruse the changelog file, and for a historically complete list of contributors and translators you can refer to the authors file, both included in the release.
14 Jan 22:51

2019 - The year of leaving Instagram

In early 2018 I finally deleted my Facebook.

I had already complained about Facebook back in 2015; at the time my concern was more about how I consumed information on the internet. I looked for alternatives to Facebook, which had become my way of consuming news and passing it along.

The privacy concerns started around the same time: posts like this one from Splinter, from 2016, or this one from Gizmodo, from 2017, showed that Facebook's friend-recommendation algorithms had much more information than they let on.

Then the whole Cambridge Analytica scandal gained traction, followed shortly by a mountain of even worse revelations, and the #DeleteFacebook campaign started picking up steam. A New York Times article describes how Facebook fought the scandals using lobbying and smear campaigns against other companies.

However, one thing always worth remembering is that Facebook owns Instagram and WhatsApp. These days there are even those who say that Facebook's purchase of Instagram was the biggest regulatory failure of the past decade.

Leaving Facebook was an important step, but to keep in touch with other people it's always very hard to escape WhatsApp and Instagram. The truth is that, ideally, Facebook (and Google, while we're on the subject) would go through a process similar to what Microsoft went through in the late '90s, when the company faced being broken up; the outcome gave breathing room to other browsers like Firefox and Chrome, which could finally bring change to an internet that had been basically dedicated to Internet Explorer.

Still, since we are talking about applications that are social by nature, individual actions like #DeleteFacebook can indeed have effects that drain the power of the network as a whole. Proof of this is how Facebook has already started shifting its attention to Instagram in several ways, such as when notifications that people you follow had posted photos on Facebook were being tested on Instagram. Meanwhile, WhatsApp is expected to get ads soon.

All of this shows how Facebook intends to keep extending its influence into its other apps, in an attempt to maintain its control over its users' data.

That's why one of my resolutions for 2019 is to try to leave Instagram. My idea is to start moving more and more to the Fediverse, now that Pixelfed, an alternative to Instagram, is starting to take shape.

It remains to be seen whether the move will be possible. How about joining me on the Fediverse? 😉

03 Jan 12:40

Why does decentralization matter?

Japanese translation is available: なぜ脱中央集権(decentralization)が重要なのか? I’ve been writing about Mastodon for two whole years now, and it occurred to me that at no point did I lay out why anyone should care about decentralization in clear and concise text. I have, of course, explained it in interviews, and you will find some of the arguments here and there in promotional material, but this article should answer that question once and for all.
28 Nov 09:50

The Depressing Phenomenon of Men Who Ask Their Dates No Questions

by Madeleine Holden

They do, however, talk a lot about snowboarding, ‘Mad Men,’ Socrates, their own penises, Amnesty International, mushrooms, foot fetishes, monogamy, war and trash bags

Nikki, a 22-year-old journalism student from Minneapolis, is telling me about the worst date she’s ever been on, with a man called Athens she met at college. “He talked about his goals, his week, his career, his meditation, his favorite books, his respect for ‘real’ musicians and how most people pronounce ‘namaste’ wrong,” she says. Nikki waited in vain the entire date to be asked a single question about herself while Athens raved about philosophy, monogamy, wanting to live in a van and how acid could lead to a higher sense of self. “He texted me the next day about how much fun the date was,” she continues, “and he spelled my name wrong in the text.”

Nikki’s experience is bleakly funny, but it’s far from an anomaly. In the past week, I’ve heard from more than 250 women, men and non-binary people about their experiences with men asking them zero questions on dates. For example, Diana, a 25-year-old New Zealander currently based in Indiana, recently went on a date with the man who fixed her dishwasher. Assuming she was from Australia, he monologued about snakes, Steve Irwin and prison colonies while ordering pork nachos for the two of them (Diana is a vegetarian). After several hours of unidirectional conversation, Diana hadn’t been asked to share a single personal detail. “He didn’t ask me anything,” she tells me. “Like, not one thing. To this day, I’m not sure if he knows my name.”

Some of these men went into excruciating detail about dull topics while their dates sat across from them uninterrogated about their own jobs, dreams, values, favorite TV shows and best jokes. Vanessa, a 49-year-old consultant in Wellington, tells me about a date who treated her to a speech about his new office layout without learning a single detail about her. “He talked about how Bryan at work had got a desk next to the window, which was obviously a travesty,” she says. “Then he explained at length how his phone charger wouldn’t fit the electrical plug on his desk.” I heard from people whose dates — all men — Chromecasted their haircut pictures, performed feeble magic tricks, sang songs, broadcast the date on Instagram, adopted the downward dog position, watched the bar TV or pulled out their phones and began texting; anything but ask a solitary question of their dates in return, most of whom had been sitting like free therapists for hours.

To add insult to injury, many of the women who shared these stories with me said that the men told them later that they felt the dates had gone swimmingly, often asking for a second. This makes sense: being able to speak about oneself freely and without interruption to a patient, attentive audience is a service that usually costs upwards of $150 a session. If some smart, attractive social media editor from Ohio is willing to act as a free therapist for a few hours — and as a semi-relevant aside, almost all of these men refused to pick up the check for dinner — it’s no wonder the same men were lining up for more. As Anna, another woman I spoke to about her zero-question date, puts it: “Of course he thought the date went well. He’d been able to talk about himself uninterrupted for hours, while I looked on bored.”

Most of the people I spoke to about this phenomenon were women, but several gay men and non-binary people had near-identical experiences with romantic prospects who asked them no questions. “That happens so frequently dating other queer or gay men,” Kyle Turner, a 24-year-old freelance writer based in Brooklyn, tells me. “I spend a majority of the time asking them questions and they rarely return the favor, so at a certain point, I either try to slip in things about myself in response or give up.” Several women told me that, at a certain point, they began to treat the lack of reciprocity like a game, waiting in amusement to see how long it might take to be asked something about themselves. “I invented a bad first date drinking game,” Allie, a 27-year-old organizer from the Bay Area, tells me. “See how many sips and songs you can get through before he stops talking.” The date typically ends with Allie drunk, bemused and still a stranger to her date, despite her being treated to pretty much his entire inner world.

When I asked these ignored daters to hazard a guess as to the cause of this self-absorption, I got a variety of responses. Some thought it may have been nerves, while others felt men in general were more likely to view dates as a personal marketing exercise (“Here’s why you should find me attractive”) rather than an opportunity to get to know a romantic prospect. For a professional opinion, I spoke to Elise Franklin, a psychotherapist based in L.A., who tells me that the nerves hypothesis has limited applicability. “Sure, it can definitely be nerves for some,” she says. “I know I ramble when I’m nervous, and that’s common.” However, she says that a more significant explanation for the phenomenon is narcissism, a personality trait more common in men than in women. “Narcissists can’t tolerate being told, ‘Your feelings don’t match my feelings,’” she explains. “To them, their feelings are everyone’s feelings — if I feel this way, then you feel this way, and if I’m interested in this, you are too.”

“Narcissism is encouraged in men,” Franklin continues. “Men are discouraged from mirroring their parents, and other members of society in general.” Because of this, she says, men are more likely to end up in the position of the oblivious, raving dater than women are. “Women are, in general, expected to be people pleasers,” she says. “We’ve learned our worth through social currency, and we’ve been the understanding ear for centuries.” She points out that her own listening profession, therapy, is dominated by women — the American Psychological Association found that there were 2.1 female psychologists for every male, and in less professionalized roles such as counseling, the gender gap is even larger.

Is this really such a gendered phenomenon, though? Aren’t women just as capable of being bloviating, self-absorbed bores? Yes, but with the significant proviso that social attitudes to gender mean that narcissism is tolerated in men but punished in women — an argument made by Jeffrey Kluger in his 2014 book The Narcissist Next Door, and confirmed in part by studies that show men interrupt women more than the reverse and that listener bias means even when men and women are speaking equal amounts, the women are perceived as having contributed 55 percent of the conversation and the men only 45 percent.

As far as I’m aware, there’s no statistically significant data on this topic, and it’s a phenomenon that receives little media attention or academic inquiry. But my Twitter DMs and Gmail inbox are swollen with hundreds of anecdotes, all of which make one thing clear: There’s no shortage of men more willing to wax lyrical about snowboarding, Mad Men, Socrates, their own penises, Amnesty International, mushrooms, foot fetishes, monogamy and war — and to sing songs, strike yoga poses, share the contents of their entire camera rolls and perform magic tricks — than to ask the flesh-and-blood women and men they’re presently on a date with a single question about themselves.

The kicker? Most of them walk away thinking they nailed it.

The post The Depressing Phenomenon of Men Who Ask Their Dates No Questions appeared first on MEL Magazine.

05 Nov 08:06

Round and round. A fear submitted by Alec to Deep Dark Fears -...

Round and round. A fear submitted by Alec to Deep Dark Fears - thanks!
My two Deep Dark Fears books are available now from your local bookstore, Amazon, Barnes & Noble, Book Depository, iBooks, IndieBound, and wherever books are sold. You can find more information here!

02 Nov 10:28

Mastodon 2.6 released

After more than a month of work, I am happy to announce the new version of Mastodon, with improved visuals, a new way to assert your identity, and a lot of bug fixes.

Verification

Verifying identity in a network with no central authority is not straightforward. But there is a way. It requires a change in mindset, though. Twitter teaches us that people who have a checkmark next to their name are real and important, and those that don’t are not.
24 Sep 09:29

by dorrismccomics
31 Aug 14:38

Mastodon quick start guide

Polish translation is available: Przewodnik po Mastodonie. So you want to join Mastodon and get tooting. Great! Here’s how to dive straight in. Let’s start with the basics. What is this? Mastodon is a microblogging platform akin to others you may have seen, such as Twitter, but instead of being centralised it is a federated network which operates in a similar way to email. Like email, you choose your server, and whether it’s Gmail, Outlook or iCloud, wherever you sign up you know you’ll be able to email everyone you need to as long as you know their address.
09 Aug 09:36

Voting Software

There are lots of very smart people doing fascinating work on cryptographic voting protocols. We should be funding and encouraging them, and doing all our elections with paper ballots until everyone currently working in that field has retired.
23 Jul 15:06

Setting up the development environment for Mastodon on Arch Linux

Well, in the last post I described how to run a Mastodon instance using Arch Linux. But what if you also want to contribute to Mastodon?

I still plan to write a small demo on how to get your hands dirty in Mastodon’s codebase, maybe by fixing a small bug, but before that we need to have the development environment up and working!

Now, as is the case with the guide on how to run your instance, this guide is very similar to the official guide, and when in doubt you should double-check the official guide, because it’s more likely to be up to date. This guide is also very similar to how to run an instance; I mean, it’s the same software, right?

There is also an official guide to setting up your environment using Vagrant, which might be easier if you have enough resources for a VM running side by side with your environment and/or you are not running Linux.

This guide is focused on Mastodon, but most of the setup done here will work for other Ruby on Rails projects you might want to contribute to.

This was last updated on 31st of January, 2019.

Note on the choices made in this guide

The official guide recommends rbenv, but I’m more used to rvm. rbenv is likely to be more lightweight, so if you don’t have any preferences you might want to stick to rbenv and ruby-build when installing ruby.

Since this is a development setup, I’m not mentioning any security concerns.
⚠️ Do not use this guide for running a production instance. ⚠️
Refer to how to run a mastodon instance using Arch Linux instead.

Questions are super welcome, you can contact me using any of the methods listed in the about page. Also if you notice that something doesn’t seem right, don’t hesitate to hit me up.

As with the other guide, I tested the steps in this guide on a virtual machine, and they should work if you copy-paste them. Things might not work well if your computer has less than 2GB of RAM.

On this page

  1. General Mastodon development tips
  2. Dependencies
  3. PostgreSQL configuration
  4. Redis
  5. Setting up ruby and node
  6. Cloning the repo and installing dependencies
    1. Run each service separately
    2. Run everything using Foreman
  7. Working on master
  8. Tests
  9. Other useful commands
  10. Troubleshooting
    1. RVM says it’s not a function
    2. Mastodon has no css

General Mastodon development tips

From the official guide:

You can use a localhost->world tunneling service like ngrok if you want to test federation, however that should not be your primary mode of operation. If you want to have a permanently federating server, set up a proper instance on a VPS with a domain name, and simply keep it up to date with your own fork of the project while doing development on localhost.

Ngrok and similar services give you a random domain on each start up. This is good enough to test how the code you’re working on handles real-world situations. But as soon as your domain changes, for everybody else concerned you’re a different instance than before.

Generally, federation bits are tricky to work on for exactly this reason - it’s hard to test. And when you are testing with a disposable instance you are polluting the databases of the real servers you’re testing against, usually not a big deal but can be annoying. The way I have handled this so far was thus: I have used ngrok for one session, and recorded the exchanges from its web interface to create fixtures and test suites. From then on I’ve been working with those rather than live servers.

I advise to study the existing code and the RFCs before trying to implement any federation-related changes. It’s not that difficult, but I think “here be dragons” applies because it’s easy to break.

If your development environment is running remotely (e.g. on a VPS or virtual machine), setting the REMOTE_DEV environment variable will swap your instance from using “letter opener” (which launches a local browser) to “letter opener web” (which collects emails and displays them at /letter_opener ).
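As a sketch (assuming you start the web server the same way as described later in this guide), setting the variable looks like this:

```shell
# Hypothetical example: with REMOTE_DEV set, outgoing dev emails are
# collected and shown at /letter_opener instead of opening a local browser.
export REMOTE_DEV=true
bundle exec puma -C config/puma.rb
```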

When trying to fix a bug or implement a new feature, it is a good idea to branch off the master branch with a new branch and then submit your pull request using that branch.

A good way to check that your environment is working as it should is to check out the latest stable release (for instance, at the time of writing the latest stable release is v2.7.1) and then run the tests as suggested in the Tests section. They should all pass, because the tests in stable releases should always be passing.
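That sanity check might look like this (v2.7.1 being just the example version mentioned above; these commands assume a working clone with its dependencies installed):

```shell
git fetch --tags        # make sure the release tags are available locally
git checkout v2.7.1     # the latest stable release at the time of writing
bundle exec rspec       # the whole suite should pass on a stable release
```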


Dependencies

Since we’re trying to run the same software as in the production guide, we’ll need mostly the same dependencies. This is what we’ll need:

  • postgresql: The SQL database used by Mastodon
  • redis: Used by mastodon for in-memory data store
  • ffmpeg: Used by mastodon for conversion of GIFs to MP4s.
  • imagemagick: Used by mastodon for image related operations
  • protobuf: Used by mastodon for language detection
  • git: Used for version control.
  • python2: Used by gyp, a node tool that builds native addon modules for node.js

Besides those, it’s a good idea to install the base-devel group, since it comes with gcc, and some ruby modules need to compile native extensions.

Now, you can install those with:

sudo pacman -S postgresql redis ffmpeg imagemagick protobuf git python2 base-devel

PostgreSQL configuration

Take a look at Arch Linux’s wiki about PostgreSQL. The first thing to do is to initialize the database cluster. This is done by running:

sudo -u postgres initdb --locale en_US.UTF-8 -E UTF8 -D '/var/lib/postgres/data'

If you want to use a different locale, that’s not a problem.

After this completes, you can then do

sudo systemctl start postgresql # will start postgresql

You will need to start postgresql every time you want to use it for development. You could also enable it so it starts with your system, but then it will be running and using resources even when you don’t need it.
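If you do want it to start with your system, the standard systemd commands for that are:

```shell
sudo systemctl enable --now postgresql   # start now and at every boot
sudo systemctl disable postgresql        # revert to starting it manually
```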

Now that postgresql is running, we can create your user in postgresql:

# Launch psql as the postgres user
sudo -u postgres psql

In the prompt that opens, run the command below, replacing <your username here> with the username you use on your setup.

-- Creates user with SUPERUSER permission level
CREATE USER <your username here> SUPERUSER;

The SUPERUSER level will let you do anything without having to change users. With great powers…


Redis

We also need to start redis, same as postgresql:

sudo systemctl start redis # will start redis

As with postgres, you can enable it to make it start with the system, but personally I prefer to start it on demand.

Setting up ruby and node

This part is very similar to the production guide, so I’ll copy and paste a bit:

The first step is to install rvm, which will be used for managing ruby versions. For that, we’ll follow the instructions from the RVM website. Before running the following command, check the RVM installation page for which GPG keys need to be added with gpg --keyserver hkp:// --recv-keys.

\curl -sSL | bash -s stable

After that, we’ll have rvm installed. To use rvm in the same session, you need to execute an additional command:

source $HOME/.rvm/scripts/rvm

With rvm installed, we can then install the ruby version that Mastodon uses:

rvm install 2.6.1

Now, this will take some time, drink some water, stretch and come back.

Similarly, we will install nvm for managing which node version we’ll use.

curl -o- | bash

Refer to nvm github (opens in a new window ) for the latest version.

You will also need to run

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/" ] && \. "$NVM_DIR/"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion

And add these same lines to ~/.bash_profile

And then to install the node version we’re using:

nvm install 8.11.3

And to install yarn:

npm install -g yarn

While we’re at it, we also need to install bundler:

gem install bundler

And with that we have ruby and npm dependencies ready to go.

Cloning the repo and installing dependencies

You need to clone the repo somewhere on your computer. I usually clone my projects into a source folder in my home directory; if you do it differently, change the following instructions accordingly.

cd ~/source
git clone

And then, cd ~/source/mastodon and we will install the dependencies of the project:

bundle install # install ruby dependencies
yarn install --pure-lockfile # install node dependencies

This will also take a while, try to relax a bit, have you listened to your favorite song today? 🎶

Since we created the postgres user before, we can set up the development database using:

bundle exec rails db:setup

This will use the default development configuration to set up the database. Which means: no password, the same user as your username, using a database named mastodon_development on localhost.

In development mode the database is seeded with an admin account for you to test with. The email address will be admin@YOURDOMAIN (e.g. admin@localhost:3000) and the password will be mastodonadmin.
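Those defaults come from config/database.yml; its development section looks roughly like this (a sketch, double-check the file in your checkout):

```yaml
development:
  <<: *default
  database: mastodon_development
```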

Now, you have two options.

Run each service separately

If you checked out the guide to run an instance, you probably noticed that Mastodon has three parts: a web service, a sidekiq service to run background jobs and a streaming service. In development you need those three components too, plus the webpack development server, which compiles assets (JavaScript, CSS) as needed. In production we don’t need webpack running all the time because we compile the assets only once after updating Mastodon.

To run them separately, you will need one terminal window for each, since each of them holds your terminal while it’s running.

To run the web server:

bundle exec puma -C config/puma.rb

To run sidekiq:

bundle exec sidekiq

To run the streaming service:

PORT=4000 yarn run start

And to run webpack:

./bin/webpack-dev-server --listen-host

All of those should start immediately, except for the webpack server, which compiles the assets before starting.

To check that everything is working as expected, open your browser at http://localhost:3000 and you should see the Mastodon landing page!

Run everything using Foreman

Now, most of the time this method is more practical. Running each service by itself is good when one of them is not starting and you want to see the error, but most of the time you’ll want to start everything at once so you can get coding. In that case, first you’ll want to install foreman:

gem install foreman

And then, when you need to start your dev environment you can do:

foreman start -f

Working on master

When working on master, the steps are similar to when updating an instance, but they happen much more often, since master changes much more frequently.

This means that every time you pull changes into your computer (for instance, when you do git pull origin master), you might need to:

# Update any gems that were changed
bundle install
# Update any node packages that were changed
yarn install --pure-lockfile
# Update the database to the latest version
bin/rails db:migrate RAILS_ENV=development

Now, you don’t need to run these every time; you will notice when one of them is needed. How?

Bundler complains like this:

Could not find proper version of railties (5.2.0) in any of the sources
Run `bundle install` to install missing gems.

The name of the gem and version will change, but this means that one of your dependencies is not up to date and you need to run bundle install again.

If the database is missing a migration, rails will complain with:

ActiveRecord::PendingMigrationError - Migrations are pending. To resolve this issue, run:

        bin/rails db:migrate RAILS_ENV=development

This will appear on your console, but also on your browser.

If the ruby being used in the project is updated, you will also see some complaints from rvm (in this example, with a hypothetical ruby 2.6.2):

$ cd .
Required ruby-2.6.2 is not installed.
To install do: 'rvm install "ruby-2.6.2"'

In that case we need to do the same as we did to install it the first time, that is:

rvm install 2.6.2

And since rvm manages gems by ruby version, you’ll need to install the dependencies again using bundle install.


Tests

Tests in the mastodon project live in the spec folder. Tests also use migrations, so if the database was updated since you last ran the tests, you will need to run something like this:

bin/rails db:migrate RAILS_ENV=test

But when you try to run tests with a database missing migrations, you’ll get an error from Rails that will explain exactly that.

To run all the tests, you need to run:

rspec
To run only one test, you can run it like so:

rspec spec/validators/status_length_validator_spec.rb

Other useful commands

If you add a new string that needs to be translated, you can run

yarn manage:translations

to update the localization files. This is needed so that weblate can inform translators that there are new strings to be translated into other languages.

You can check code quality using:

rubocop
Keep in mind that it might complain about code violations that you did not introduce, but you should always try not to introduce new violations.


Troubleshooting

RVM says it’s not a function

Follow the instructions recommended in the RVM documentation for this error.

Mastodon has no css

If Mastodon has no CSS and you see something like #<Errno::ECONNREFUSED: Failed to open TCP connection to localhost:3035 (Connection refused - connect(2) for "::1" port 3035)> in your console, the issue is the address webpacker is trying to connect to. You can fix it by changing config/webpacker.yml. Instead of

    host: localhost


16 Jul 11:45

Running a Mastodon instance using Arch Linux

I’ve been running my instance on Arch Linux for a while now, and since the official guide recommends using Ubuntu 18.04, I figured that describing what I did could help someone out there. If in doubt, use the official guide’s instructions instead, since they’re more likely to be up to date.

This was last updated on 31st of January, 2019.

Notes about some choices made in this guide

The official guide recommends rbenv, but I’m more used to rvm. rbenv is likely to be more lightweight, so if you don’t have any preferences you might want to stick to rbenv and ruby-build when installing ruby.

There are also choices made regarding the firewall. You don’t need to follow those exact steps if you already have another firewall on your server or if you want to use a different one. However, do use some firewall. :)

Like the official guide, this guide assumes you’re using Let’s Encrypt for the certificates. If that’s not the case, you can ignore the Let’s Encrypt references and configure your own certificate in Nginx.

Also note that the SSL configurations for nginx are slightly different from the ones in the official guide. They are aimed at being compatible with older Android phones and were generated using the Mozilla SSL Configuration Generator with the intermediate configuration and HSTS enabled. Keep in mind that HSTS disables non-secure traffic and gets cached on the client side, so if you are unsure, generate a new configuration without HSTS.

Questions are super welcome, you can contact me using any of the methods listed in the about page. Also if you notice that something doesn’t seem right, don’t hesitate to hit me up.

I tested this guide using a DigitalOcean droplet with 1GB of memory and 1 vCPU. I had to enable swap to be able to compile the assets. There’s more information about this in the relevant section.

On this page

  1. Before starting the guide
  2. What you should have by the end of this
  3. General suggestions
  4. DNS
  5. Dependencies
  6. Configuring ufw
  7. Configuring Nginx
  8. Intermission: Configuring Let’s Encrypt
  9. Finishing off nginx configuration
  10. Mastodon user setup
  11. Cloning mastodon repository and installing dependencies
  12. PostgreSQL configuration
  13. Redis configuration
  14. Mastodon application configuration
  15. Intermission: Mastodon directory permissions
  16. Mastodon systemd service files
  17. Emails
  18. Monitoring with uptime robot
  19. Remote media attachment cache cleanup
  20. Renew Let’s Encrypt certificates
  21. Updating between Mastodon versions
  22. Upgrading Arch Linux
  23. (Optional) Adding elasticsearch for searching authorized statuses

Before starting the guide

You need:

  • A server running at least a base install of Arch Linux
  • Root access
  • A domain or sub-domain to use for the instance.

What is assumed:

  • There’s no service running on the same ports Mastodon will use. If there is, adjustments will need to be made throughout the guide.
  • You’re not using root as your base user, and you do have a user configured with sudo access.
  • You have already configured NTP or something similar. Some operations, like 2-factor authentication, need the correct time on your server.

What you should have by the end of this

You should have an instance running with a basic firewall and a valid HTTPS certificate, prepared to be upgraded when needed, plus basic monitoring to know whether your instance is up. All the services needed will be on the same machine.

General suggestions

If you have no experience with Linux system administration, it’s a good idea to read a bit about it. You will need to keep this system up to date, since it will be facing the internet.

Do not reuse passwords, and require public-key authentication for your SSH user. Use sudo instead of running everything as root, and disable root login over SSH.
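In sshd terms, that boils down to something like the following lines in /etc/ssh/sshd_config (a minimal sketch; reload sshd after editing, and keep a working session open while you test):

```
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```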

The official guide already recommends this, but I’ll go one step further: always use tmux or screen when doing operations on your server. You will need to learn the basic commands, but it’s well worth it to avoid losing work if your connection goes down, and also for long operations, during which you can disconnect and leave them running.

If you have 1GB of RAM, it’s quite likely that asset compilation will fail. Remember to set up a swap partition or use systemd-swap.
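If you prefer a swap file over a swap partition or systemd-swap, a common recipe looks like this (the 2GB size is just an example; adjust to taste):

```shell
sudo fallocate -l 2G /swapfile    # allocate the file
sudo chmod 600 /swapfile          # restrict permissions
sudo mkswap /swapfile             # format it as swap
sudo swapon /swapfile             # enable it right away
# to keep it across reboots:
echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab
```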


DNS

The domain you’re planning to use should have DNS records pointing to your server. If your server has an IPv6 address, you should also configure an AAAA record; otherwise, the A record alone is enough.
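For example, assuming the instance will live at mastodon.example.com on the (hypothetical) addresses below, the records would look like:

```
mastodon.example.com.  3600  IN  A     203.0.113.10
mastodon.example.com.  3600  IN  AAAA  2001:db8::10
```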

Now, this guide will not get into serving a different domain. Just keep in mind that:

  • The domain will be part of the identifier of your instance users. Once it’s defined, you cannot change it anymore or you’ll get all kinds of federation weirdness.
  • Because of that, avoid using “temporary” domains, like the ones coming from ngrok or similar.


Installing the needed packages

Here’s what each package will be used for:

  • ufw: An easy-to-use firewall
  • certbot and certbot-nginx: used for generating the certificates from Let’s Encrypt.
  • nginx: Frontend web server that will be used in this setup
  • jemalloc: Different memory management library that improves memory usage for this setup.
  • postgresql: The SQL database used by Mastodon
  • redis: Used by mastodon as an in-memory data store
  • ffmpeg: Used by mastodon for conversion of GIFs to MP4s.
  • imagemagick: Used by mastodon for image related operations
  • protobuf: Used by mastodon for language detection
  • git: Used for version control.
  • python2: Used by gyp, a node tool that builds native addon modules for node.js
  • libxslt, libyaml: I don’t know. They were in the official guide so I’m installing them, but I have to say: I don’t have them installed in other instances and never noticed an issue 🤷🏽‍♂️

Besides those, it’s a good idea to install the base-devel group. It comes with sudo and other tools which might come in handy.

Now, you can install those with:

sudo pacman -S ufw certbot nginx jemalloc postgresql redis ffmpeg imagemagick protobuf git base-devel python2 libxslt libyaml
sudo pacman -S --asdeps certbot-nginx

Configuring ufw

🛑WARNING: Configuring a firewall is quite important, but if something goes wrong you might lose connectivity to your server. Make sure you have other ways of reaching your server if something goes wrong. 🛑

Now, Mastodon runs a couple of different services, and we will be running supporting services too. Since everything lives on the same server, the only ports that should be reachable from the outside world are the HTTP/HTTPS ports used to connect to the instance, plus the SSH port so that we can manage the server remotely.

You should read into Arch Linux’s wiki about ufw. For this guide, what you want to do is:

sudo ufw allow SSH # this allows SSH traffic to your server
sudo ufw allow WWW # this allows traffic on port 80 to your server
sudo ufw allow "WWW Secure" # this allows traffic on port 443 to your server

And then you can do:

sudo ufw enable
sudo systemctl enable ufw # Enables ufw to be started at startup
sudo systemctl start ufw # starts ufw

And with this the firewall should be up :)
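
To double-check the resulting rule set at any point, ufw can print it back to you:

```
sudo ufw status verbose
```

It should list SSH, WWW and WWW Secure as allowed, for both IPv4 and IPv6.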

Configuring Nginx

You should read into Arch Linux’s wiki about nginx, but again, what you want to do is something along these lines:

First, you want to edit nginx.conf. To remove the “welcome to nginx” page, you want to change the beginning of your server block to something like this:

    server {
        listen       80 default_server;
        server_name  '';

        return 444;

And at the very end of the http block, add:

    types_hash_max_size 4096; # sets the maximum size of the types hash tables
    include sites-enabled/*; # Includes any configuration located in /etc/nginx/sites-enabled

And then create these two directories:

sudo mkdir /etc/nginx/sites-available # All domain configurations will live here
sudo mkdir /etc/nginx/sites-enabled # The enabled ones will be linked here

Now, let’s say we’re using as the instance domain/sub-domain. You will need to replace this accordingly throughout these next steps.

Create a new file /etc/nginx/sites-available/, replacing by your domain, and then add to it the following content:

map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
  listen 80;
  listen [::]:80;
  root /home/mastodon/live/public;
  # Useful for Let's Encrypt
  location /.well-known/acme-challenge/ { allow all; }
  location / { return 301 https://$host$request_uri; }
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  ssl_certificate     /etc/letsencrypt/live/;
  ssl_certificate_key /etc/letsencrypt/live/;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;

  ssl_dhparam /etc/ssl/certs/dhparam.pem;

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;

  add_header Strict-Transport-Security max-age=15768000;

  ssl_stapling on;
  ssl_stapling_verify on;

  ssl_trusted_certificate /etc/letsencrypt/live/;

  resolver valid=300s;
  resolver_timeout 5s;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 8m;

  root /home/mastodon/live/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  add_header Strict-Transport-Security "max-age=31536000";

  location / {
    try_files $uri @proxy;
  }

  location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) {
    add_header Cache-Control "public, max-age=31536000, immutable";
    try_files $uri @proxy;
  }

  location /sw.js {
    add_header Cache-Control "public, max-age=0";
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://127.0.0.1:3000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  location /api/v1/streaming {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";

    proxy_pass http://127.0.0.1:4000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  error_page 500 501 502 503 504 /500.html;
}

⚠️ It’s a good idea to take a look if something changed in relation to the official guide ⚠️

At this point nginx still doesn’t know about our instance (because we’re including files from /etc/nginx/sites-enabled and we created the file in /etc/nginx/sites-available), however, we should be able to start nginx already.

For that, we need to do:

sudo systemctl start nginx # Starts the nginx service
sudo systemctl enable nginx # Makes the service start automatically at boot

If you do curl -v <your server ip> now, you should see something like this:

$ curl -v <your server's ip>
* Rebuilt URL to: <your server's ip>/
*   Trying <your server's ip>...
* Connected to <your server's ip> (<your server's ip>) port 80 (#0)
> GET / HTTP/1.1
> Host: <your server's ip>
> User-Agent: curl/7.60.0
> Accept: */*
* Empty reply from server
* Connection #0 to host <your server's ip> left intact
curl: (52) Empty reply from server

And that means nginx was correctly started and that ufw is allowing connections as expected. We will now get our certificates from Let’s Encrypt before jumping back to the nginx configuration.

Intermission: Configuring Let’s Encrypt

Now, for Let’s Encrypt we will use certbot, that we installed previously. For more information about it you can take a look at Arch Linux’s wiki about Certbot. For this guide, you need to run the following command:

sudo certbot --nginx certonly -d

As usual, remember to change the URL to your actual instance’s URL. You will need to follow the instructions on screen.

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
Enter email address (used for urgent renewal and security notices) (Enter 'c' to

Please read the Terms of Service at You must
agree in order to register with the ACME server at

Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about our work
encrypting the web, EFF news, campaigns, and ways to support digital freedom.
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for
Using default address 80 for authentication.
2018/07/13 18:28:47 [notice] 4617#4617: signal process started
Waiting for verification...
Cleaning up challenges
2018/07/13 18:28:53 [notice] 4619#4619: signal process started

If everything goes as expected, you should see something like this:

 - Congratulations! Your certificate and chain have been saved at:
   Your key file has been saved at:
   Your cert will expire on 2018-10-11. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"

This means that we now have a valid certificate and that we can go back to nginx. Double check that the path informed by certbot (in this example /etc/letsencrypt/live/) matches the one in your nginx file.

Let’s Encrypt certificates only last for 90 days, so we will still come back to this. But for now, let’s go back to nginx.

If you have an error like

Saving debug log to /var/log/letsencrypt/letsencrypt.log
An unexpected error occurred:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 10453: ordinal not in range(128)
Please see the logfiles in /var/log/letsencrypt for more details.

Check that you have set up your locale correctly! If your locale is set to C, certbot will fail.

Finishing off nginx configuration

At this point the certificate should be working. Since this configuration is using HSTS, we also need to generate a dhparam. You can do that with the following (it might take a little while!):

openssl dhparam -out dhparam.pem 2048
sudo mv dhparam.pem /etc/ssl/certs/dhparam.pem

What is left for us to do is to enable the instance configuration and reload nginx. We should do this (remember to replace with your instance config!):

sudo ln -s /etc/nginx/sites-available/ /etc/nginx/sites-enabled/ # creates a softlink of the configuration we created previously to the enabled sites directory
sudo systemctl reload nginx

Now, if everything went fine, your nginx should reload. Otherwise, it will throw some error like this:

$ sudo systemctl reload nginx
Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xe" for details.

In that case, you need to execute one of those commands to see what went wrong.

However, if everything went right until now, doing curl -v with your domain should show something like this:

$ curl -v
* Rebuilt URL to:
*   Trying <your server's ip>...
* Connected to (<your server's ip>) port 80 (#0)
> GET / HTTP/1.1
> Host:
> User-Agent: curl/7.60.0
> Accept: */*
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.14.0
< Date: Fri, 13 Jul 2018 18:41:17 GMT
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
< Location:
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
* Connection #0 to host left intact

And if you curl or visit the https address you should get a 502 Bad Gateway.

Mastodon user setup

We need to create the Mastodon user:

sudo useradd -m mastodon # create the user

Then, we will start using this user for the following commands:

sudo su - mastodon

The first step is to install rvm, which will be used for managing ruby versions. For that we’ll follow the instructions at Before running the following command, visit and check which keys need to be added with gpg --keyserver hkp:// --recv-keys.

\curl -sSL | bash -s stable

After that, we’ll have rvm. You will see that to use rvm in the same session you need to execute additional commands:

source /home/mastodon/.rvm/scripts/rvm

With rvm installed, we can then install the ruby version that Mastodon uses:

rvm install 2.6.1 -C --with-jemalloc

Note that the -C --with-jemalloc parameter is there so that we use jemalloc instead of the standard memory allocation library, since it’s more efficient in Mastodon’s case. Now, this will take some time; drink some water, stretch and come back.

Similarly, we will install nvm for managing which node version we’ll use.

curl -o- | bash

Refer to nvm github for the latest version.

You will also need to run

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/" ] && \. "$NVM_DIR/"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion

And add these same lines to ~/.bash_profile

And then to install the node version we’re using:

nvm install 8.11.4

And to install yarn:

npm install -g yarn

And with that we have our mastodon user ready.

Cloning mastodon repository and installing dependencies

For these next instructions we still need to be logged in as the mastodon user. First, we will clone the repo:

# Return to mastodon user's home directory
cd ~
# Clone the mastodon git repository into ~/live
git clone live

Now, it’s highly recommended to run a stable release. Why? Stable releases are bundles of finished features; if you’re running an instance for day-to-day use, they are recommended because they are the least likely to have breaking bugs.

The stable release is the latest on tootsuite’s releases without any “rc”. At the time of writing the latest one is v2.7.1. With that in mind, we will do:

# Change directory to ~/live
cd ~/live
# Checkout to the latest stable branch
git checkout v2.7.1

And then, we will install the dependencies of the project:

# Install bundler
gem install bundler
# Use bundler to install the rest of the Ruby dependencies
bundle install -j$(getconf _NPROCESSORS_ONLN) --without development test
# Use yarn to install node.js dependencies
yarn install --pure-lockfile
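
As an aside on the -j$(getconf _NPROCESSORS_ONLN) part: getconf simply reports the number of online CPU cores, which bundler then uses as its parallel job count. You can see what it resolves to on any machine:

```shell
# getconf _NPROCESSORS_ONLN prints the number of online CPU cores;
# bundler's -j flag above uses it as the number of parallel install jobs.
cores=$(getconf _NPROCESSORS_ONLN)
echo "bundle would run $cores parallel jobs"
```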

After this finishes you can go back to the user you were using before. This will also take a while, try to relax a bit, have you listened to your favorite song today? 🎶

PostgreSQL configuration

Now, once more, check out Arch Linux’s wiki about PostgreSQL. The first thing to do is to initialize the database cluster. This is done by doing:

sudo -u postgres initdb --locale en_US.UTF-8 -E UTF8 -D '/var/lib/postgres/data'

If you want to use a different locale, that’s not a problem.

After this completes, you can then do

sudo systemctl enable postgresql # will enable postgresql to start together with the system
sudo systemctl start postgresql # will start postgresql

Now that postgresql is running, we can create the mastodon user in postgresql:

# Launch psql as the postgres user
sudo -u postgres psql

In the prompt that opens, create the mastodon user with:

-- Creates mastodon user with CREATEDB permission level
CREATE USER mastodon CREATEDB;
\q

Okay, after this we’re done with postgresql. Let’s move on!

Redis configuration

The last service we need to start is redis. Check out Arch Linux’s wiki about Redis.

We need to start redis and enable it on initialization:

sudo systemctl enable redis
sudo systemctl start redis

Mastodon application configuration

We’re approaching the end, I promise!

We need to go back to Mastodon user:

sudo su - mastodon

Then we change to the live directory and run the setup wizard:

cd ~/live
RAILS_ENV=production bundle exec rake mastodon:setup

This will do the instance setup: ask you about some options, generate needed secrets, setup the database and precompile the assets.

For the PostgreSQL host, port, etc., you can just press enter and it will use the default values. The same goes for redis. For email options, refer to the email section. You will want to allow the setup to prepare the database and compile the assets.

Precompiling the assets will take a little while! Also, pay attention to the output. It might print:

That failed! Maybe you need swap space?

All done! You can now power on the Mastodon server 🐘

Even though it ends with “All done”, the “That failed!” line means the asset compilation failed and you will need to try again with more memory. You can try again using RAILS_ENV=production bundle exec rails assets:precompile

Intermission: Mastodon directory permissions

By default, the mastodon user’s home folder cannot be accessed by nginx. The path /home/mastodon/live/public needs to be accessible by nginx because it’s where images and css are served from.

You have some options, the one I chose for this guide is:

sudo chmod 751 /home/mastodon/ # Owner keeps full access, the group can read and traverse the home folder, and all other users can only traverse it
sudo chmod 755 /home/mastodon/live/public # Makes mastodon public folder readable and traversable by all users in the server
sudo chmod 640 /home/mastodon/live/.env.production # Gives read/write to the owner and read-only to the group for the file with production secrets

Other subfolders will also be readable by other users if they know what to search for.
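
If the octal modes above look opaque, here’s a scratch-directory sketch of what they mean. It only touches a throwaway folder created with mktemp, never the real /home/mastodon:

```shell
# Recreate the layout in a throwaway directory and apply the same modes.
tmp=$(mktemp -d)
mkdir -p "$tmp/mastodon/live/public"
touch "$tmp/mastodon/live/.env.production"
chmod 751 "$tmp/mastodon"                      # owner rwx, group r-x, others --x (traverse only)
chmod 755 "$tmp/mastodon/live/public"          # readable and traversable by everyone
chmod 640 "$tmp/mastodon/live/.env.production" # owner rw, group r, others nothing
perms=$(stat -c '%a %n' "$tmp/mastodon" "$tmp/mastodon/live/public" "$tmp/mastodon/live/.env.production")
echo "$perms"
rm -rf "$tmp"
```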

Mastodon systemd service files

Now, you can go back to your user and we’ll create service files for Mastodon. You again should compare with the official guide to see if something changed, but have in mind that since we’re using rvm and nvm in this guide the final result will be a bit different.

This is what our services will look like, first the one in /etc/systemd/system/mastodon-web.service, responsible for Mastodon’s frontend and API:


[Unit]
Description=mastodon-web
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="PORT=3000"
ExecStart=/bin/bash -lc "bundle exec puma -C config/puma.rb"
ExecReload=/bin/kill -SIGUSR1 $MAINPID
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target


Then, the one in /etc/systemd/system/mastodon-sidekiq.service, responsible for running background jobs:


[Unit]
Description=mastodon-sidekiq
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="DB_POOL=5"
ExecStart=/bin/bash -lc "bundle exec sidekiq -c 5 -q default -q push -q mailers -q pull"
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target


Lastly, the one in /etc/systemd/system/mastodon-streaming.service, responsible for sending new content to users in real time:


[Unit]
Description=mastodon-streaming
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="NODE_ENV=production"
Environment="PORT=4000"
ExecStart=/bin/bash -lc "npm run start"
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target


Now, you can enable these services using:

sudo systemctl enable /etc/systemd/system/mastodon-*.service

And then run them using

sudo systemctl start mastodon-*.service

Check that they are running as they should using:

systemctl status mastodon-*.service

At this point, if everything is as it should be, visiting your instance’s address should give you the Mastodon landing page! 🐘


Email configuration

Now, you’ll probably want to send emails, since new users get an email to confirm their emails.

You should really follow the official guide on this one, because there’s no difference in this case.

Monitoring with uptime robot

I’m giving an example with Uptime Robot because they have a free tier, but you can use other services if you prefer. The idea is just to be pinged if your instance goes down and also to have an independent page where your users can be sure if everything is working as expected.

After creating an UptimeRobot account, you can create an HTTP(s) type monitor pointing to your instance’s full URL; don’t forget to change it accordingly.

If you have IPv6, you should also create another monitor with the Ping type, in which you should use your server’s IPv6 as the IP.

Now, in the settings page, you can click on “add public status page”, then select “for selected monitors” and pick the ones you just created. You can create a CNAME DNS entry so that a sub-domain of yours would show this new status page. There are more instructions on Uptime Robot’s page.

Now if your instance goes down or your IPv6 stops working, you should get an email.

Remote media attachment cache cleanup

Mastodon downloads media from other instances and caches them locally. If you don’t clean this from time to time, this will only keep growing. Using mastodon user, you can add a cron job that cleans it up daily using crontab -e and adding:

0 2 * * * /bin/bash -lc "cd live; RAILS_ENV=production bundle exec bin/tootctl media remove" > /home/mastodon/remove_media.output 2>&1

If you don’t have any cron installed in your server, you need to take a look in Arch Linux’s wiki page about cron.

Renew Let’s Encrypt certificates

The best way for this is to follow Arch Linux’s wiki about Certbot automatic renewal, which is:

Create a file /etc/systemd/system/certbot.service:

[Unit]
Description=Let's Encrypt renewal

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet --agree-tos

The nginx plugin should take care of making sure the server is reloaded automatically after renewal.

Then, create a second file /etc/systemd/system/certbot.timer:

[Unit]
Description=Daily renewal of Let's Encrypt's certificates

[Timer]
OnCalendar=daily
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target

Now, enable and start the timer service:

sudo systemctl start certbot.timer
sudo systemctl enable certbot.timer
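
You can also confirm the renewal setup works end-to-end without waiting for the real expiry, since certbot supports a dry run against Let’s Encrypt’s staging environment:

```
sudo certbot renew --dry-run
```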

Updating between Mastodon versions

Okay, you set it all up, everything is running and then Mastodon v2.8.0 comes out. What do you do?

Do not despair, dear reader, all is well.

Remember our tip about tmux? When updating, it’s always a good idea to be running tmux. Database migrations can take some time, and tmux will help you avoid losing work if your connection fails in the meantime.

First, we will go to the Mastodon user once again:

sudo su - mastodon

Okay, first things first, let’s go into the live directory and get the new version:

cd ~/live
git fetch origin --tags
git checkout v2.8.0
cd . # This is to force rvm to check if we're in the right ruby version

Now, suppose the ruby version changed since the last time you were here, and instead of 2.6.1 it’s now 2.6.2. After you do cd ., rvm will complain:

$ cd .
Required ruby-2.6.2 is not installed.
To install do: 'rvm install "ruby-2.6.2"'

In this case, we will need to use rvm to install the new version. The command is the same as last time:

rvm install 2.6.2 -C --with-jemalloc

This will take some time, and at the end you will be ready to continue. Notice that this won’t happen very often. Also, after you make sure everything is running as expected, you can remove the old ruby version with rvm remove <version>. Wait until you’re sure the new version is running fine, though!

Now, you’ll always want to make sure that you look at the release notes for the release you’re going to. Sometimes there are special tasks that need to be done before proceeding.

If dependencies were updated, you need to do:

bundle install --without development test # if you need to update ruby dependencies or if you installed a new ruby
yarn install --pure-lockfile # if you need to update node dependencies

In most of the updates you will need to update the assets:

RAILS_ENV=production bundle exec rails assets:precompile

For comparison: in the digital ocean droplet I tested this guide on, compiling assets on v2.4.3 took around 5 minutes.

If the update includes database migrations, you’ll need to do:

RAILS_ENV=production bundle exec rails db:migrate

Sometimes database migrations will change the database in a way that the instance will stop working for a little bit until you restart the services, that’s why I usually leave them for last to reduce downtime.

⚠️ Backup your database regularly ⚠️
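
One quick way to take that backup right before migrating — a sketch assuming the default database name mastodon_production chosen by mastodon:setup:

```
sudo -u postgres pg_dump -Fc mastodon_production -f /var/lib/postgres/mastodon-backup.dump
```

The -Fc custom format lets you restore selectively later with pg_restore.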

After the migration is finished running, you can leave the mastodon user and then restart the services:

sudo systemctl restart mastodon-sidekiq
sudo systemctl restart mastodon-streaming
sudo systemctl reload mastodon-web

Now, if there were database changes, you need to restart mastodon-web instead of reloading it.

Alright, you should be on the latest version of Mastodon now!

Upgrading Arch Linux

Some special notes about upgrading Arch Linux itself. If you haven’t yet, read through Arch Linux’s wiki on upgrading the system. Since Arch Linux is rolling-release, there are some differences if you’re coming from other distros.

Ruby uses native modules in some of the gems, that is, modules compiled against local libraries. This means that if your system changes radically from one version to the other, you might have issues starting services.

However, to make your life a bit easier, you can re-compile native modules by doing (using mastodon user):

cd ~/live
bundle pristine

This will take a little while but will recompile needed gems. When in doubt, do that after a system upgrade.

I had issues in the past with gems that Mastodon uses which have native extensions and are installed straight from git, namely posix-spawn and http_parser.rb. They were not reinstalled with bundle pristine and I had to manually rebuild them. This seems to be fixed in the most recent rvm, but in case you need to do that, find where they are installed by doing:

bundle show posix-spawn

With the output of that (which will be something like /home/mastodon/.rvm/gems/ruby-2.5.1/bundler/gems/posix-spawn-58465d2e2139), do:

rm -rf /home/mastodon/.rvm/gems/ruby-2.5.1/bundler/gems/posix-spawn-58465d2e2139 /home/mastodon/.rvm/gems/ruby-2.5.1/bundler/gems/extensions/x86_64-linux/2.5.0/posix-spawn-58465d2e2139

This is just an example and you will have to replace this with the output of your bundle show command, and then find the equivalent path in the gems/extensions folder. Do it for both posix-spawn and http_parser.rb (and any other gem that comes from git if it gives you trouble).

And after that you can do bundle install --without development test to install them again.

Now, the second thing to note: PostgreSQL minor versions are compatible with each other. This means that 9.6.8 is compatible with 9.6.9; after version 10 they adopted two-number versioning, which means that 10.3 is compatible with 10.4. However, 9.6 is not compatible with 10, and 10 will not be compatible with 11. This means that when upgrading from one major version to another (say, 10 to 11) you need to follow the official documentation and Arch Linux’s wiki orientation. With that in mind, be careful when upgrading.

🛑 Upgrading wrongly may cause data loss. 🛑
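
For illustration only — re-read the wiki before running any of this — a major-version upgrade on Arch usually goes through pg_upgrade with the postgresql-old-upgrade package, roughly like this sketch for a hypothetical 10 → 11 jump:

```
systemctl stop postgresql
pacman -S postgresql postgresql-old-upgrade
mv /var/lib/postgres/data /var/lib/postgres/olddata
mkdir /var/lib/postgres/data /var/lib/postgres/tmp
chown postgres:postgres /var/lib/postgres/data /var/lib/postgres/tmp
sudo -u postgres initdb --locale en_US.UTF-8 -E UTF8 -D /var/lib/postgres/data
cd /var/lib/postgres/tmp
sudo -u postgres pg_upgrade -b /opt/pgsql-10/bin -B /usr/bin -d /var/lib/postgres/olddata -D /var/lib/postgres/data
```

Take a backup first, and only remove olddata once you’ve verified the new cluster works.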

(Optional) Adding elasticsearch for searching authorized statuses

Since Mastodon v2.3.0, you can enable full text search for authorized statuses. That means toots you have written, boosted, favourited or were mentioned in. For this functionality, Mastodon uses Elasticsearch. As usual, you should take a look in Arch Linux’s wiki about Elasticsearch.

Note: I was able to run elasticsearch on my test instance using the 1GB/1vCPU droplet from Digital Ocean with 1GB of Swap by using the memory configurations suggested at the Arch Linux’s wiki about Elasticsearch, that is, -Xms128m -Xmx512m. However, I don’t have any load and I don’t know how the system would behave with more real loads.

To install elasticsearch do:

sudo pacman -S elasticsearch

Pacman will then ask which version of the JDK you want to use. Once installed, you can start Elasticsearch by doing:

sudo systemctl enable elasticsearch # Enables elasticsearch to be started at startup
sudo systemctl start elasticsearch # starts elasticsearch

Then you need to switch to the mastodon user, cd ~/live and edit .env.production to add the configuration related to Elasticsearch. Look for the commented configs and change them:

ES_ENABLED=true
ES_HOST=localhost
ES_PORT=9200
Then, you need to build the index. This might take a while if your database is big!

RAILS_ENV=production bundle exec rails chewy:deploy

When this is finished, you need to restart all mastodon services.

The official docs have some tips on how to tune Elasticsearch.

24 May 07:30

Morning News

Support your local paper, unless it's just been bought by some sinister hedge fund or something, which it probably has.
19 May 16:33

Saturday Morning Breakfast Cereal - Gojirasaurus


Click here to go see the bonus panel!

Also it didn't want to destroy the city because it mostly feeds off of aquatic insects.

New comic!
Today's News:
19 May 16:31

Saturday Morning Breakfast Cereal - Whistle


Click here to go see the bonus panel!

The really creepy part is how it requires you to install a tiny mouth.

New comic!
Today's News:
03 May 12:54

Python Environment

The Python environmental protection agency wants to seal it in a cement chamber, with pictorial messages to future civilizations warning them about the danger of using sudo to install random Python packages.
06 Apr 11:13

Friendly Questions

Just tell me everything you're thinking about in order from most important to last, and then we'll be friends and we can eat apples together.
23 Mar 11:13

#DeleteFacebook

by Eugen Rochko

Perspective from a platform that doesn’t put democracy in peril

Deep down you always knew it. On the edge of your perception, you always heard the people who talked about the erosion of privacy, that there was no such thing as free cheese, that if you don’t pay — then you’re the product. Now you know that it’s true. Cambridge Analytica has sucked the data so kindly and diligently collected by Facebook and used that data to influence the US elections (and who knows what else).

It doesn’t matter if you call it a “data breach” or not. The problem is how much data Facebook collects, stores and analyzes about us. You now know how Facebook’s platform was used by 3rd parties to meddle in elections. Now imagine how much more effective it would be, if it wasn’t 3rd parties, but Facebook itself putting its tools to use. Imagine, for example, if Mark Zuckerberg decided to run for president.

#DeleteFacebook is trending on Twitter. Rightfully so. Some say, “even without an account, Facebook tracks you across the web and builds a shadow profile.” And that is true. So what? Use browser extensions that block Facebook’s domains. Make them work for it. Don’t just hand them the data.

Some say, “I don’t want to stop using Facebook, I want them to change.” And that is wrong. Keeping up with your friends is good. But Facebook’s business and data model is fundamentally flawed. For you, your data is who you are. For Facebook, your data is their money. Taking it from you is their entire business, everything else is fancy decoration.

Others will say, “I need Facebook because that’s where my audience is, and my livelihood depends on that.” And it is true. But depending on Facebook is not safe in the long-term, as others have learned the hard way. Ever changing, opaque algorithms make it harder and harder to reach “your” audience. So even in this case it’s wise to look for other options and have contingency plans.

There are ways to keep up with friends without Facebook. Ways that don’t require selling yourself to Big Data in exchange for a system designed around delivering bursts of dopamine in just the right way to keep you hooked indefinitely.

Mastodon is one of them. There are others, too, like Diaspora, Scuttlebutt, and Hubzilla, but I am, for obvious reasons, more familiar with Mastodon.

Mastodon is not built around data collection. No real name policies, no dates of birth, no locations — it stores only what is necessary for you to talk to and interact with your friends and followers. It does not track you across the web. The data it stores for you is yours — to delete or to download.

Mastodon does not have any investors to please or impress, because it’s not a commercial social network. It’s freely available, crowdfunded software. Its incentives are naturally aligned with its users, so there are no ads, no dark UX patterns. It’s there, growing and growing: Over 130,000 people were active on Mastodon last week.

To make an impact, we must act. It is tempting to wait until others make the switch, because what if others don’t follow? But individual actions definitely add up. One of my favourite stories from a Mastodon user is how they were asked for social media handles at a game developer conference, and when they replied with Mastodon, received understanding nods instead of confused stares. Step by step, with every new person, switching to Mastodon will become easier and easier.

Now is the time to act. Join Mastodon today.

#DeleteFacebook was originally published in Mastodon Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

19 Mar 11:17

My Political Orientation

by alexcastro

Sempre voto, apoio, milito em prol de movimentos, partidos, pessoas que se propõem a lutar por grupos, classes, categorias que não conseguem lutar por si mesmas.

Because a State that acts on behalf of the upper class is a redundant State.

The upper class knows how to defend itself with its own resources: the State justifies its existence by defending the rights of those who cannot.


* * *

Demonizing the upper class is childish and counterproductive.

I grew up in the upper class of Barra da Tijuca and studied at the most expensive school in the country. In my formative years, my friends, classmates, and relatives were all business owners and entrepreneurs, multinational executives and captains of industry.

I can attest that the proportion of bad people among them is roughly the same as in every other group I have been part of.

Even so, I systematically vote against their interests.

Not because they are bad people. (They're not.)

But because they are people who know how to defend themselves on their own.


* * *

Any tax reform should be designed to simplify life for the individual who does their own income taxes, not for the company that has its own accounting department. And so on.

So, for example, I don't know the details of the recent labor reform, but I know that the employers' associations were unanimously in favor, and the workers' ones against.

So I'm against it.

Not because members of the employer class are "scoundrels who lead easy lives."

(They are hard-working people waging a herculean struggle to run a business in Brazil.)

I'm against it because the people who work for them work just as hard and face infinitely greater difficulties.

By any metric, if the factory owner's life is hard, the life of the worker who has to negotiate with her as an equal, without the support of a legal or tax department, without savings in the bank, living month to month, is harder.

So, if they come into conflict (and it's natural that they do, since that is the basis of our democracy), I will always stand with the worker, recognizing that she needs all the help she can get just so the conflict isn't absurdly unequal.

The State exists not to decide who is right, but to ensure the conflict is as little unequal as possible.

For that, paradoxically, it must always position itself beside the weaker, more vulnerable, more defenseless party.


* * *

I am privileged in every respect: white, straight, upper-class, well-traveled, urban, with a graduate degree.

The State has already handed me every advantage imaginable: I don't want any more.

The State doesn't need to do anything for me. I don't want the State to do anything for me. The State has already done everything for me. The State has already done too much for me.

I vote for, support, and campaign for the project of a country that promises to do the least for me. That promises to tax my iGadget extra and reinvest in health care. That promises to tax my inheritance extra and reinvest in education. That promises to pay women the same wages as men. That recognizes gay rights as much as straight ones. Whose police treat Black people the same as white ones.

All my life, the State has prepared me not to need it. I know the ropes, I have the means. If the State turns against me, I can defend myself.

I want a State that defends the people who cannot defend themselves from it.

I want a State that defends the people who, through that same State's failure, have a worse education than mine, worse health care than mine, worse prospects than mine.

I want a State that racks its brain to make life easier for those who have little, even at the cost of making life harder for those who have much.

That is my political orientation.


* * *

All of it can be summed up in a bit of dialogue from a film released the year I turned 18, A Few Good Men.

Two marines accused of murdering a colleague, Willy, are talking:

"What did we do wrong? We did nothing wrong!"

"Yeah, we did. We were supposed to fight for the people who couldn't fight for themselves. We were supposed to fight for Willy."



15 Mar 12:02

Twitter is not a public utility

by Eugen Rochko
Photo by Tobin Rogers on Unsplash

Isn’t it a bit strange that the entire world has to wait on the CEO of Twitter to come around on what constitutes healthy discourse? I am not talking about it being too little, too late. Rather, my issue is with “instant, public, global messaging and conversation” being entirely dependent on one single privately held company’s whims. Perhaps they want to go in the right direction right now for once, but who’s to say how their opinion changes in the future? Who is Twitter really accountable to except their board of directors?

I still find it hard to believe when Jack Dorsey says that Twitter’s actions are not motivated by a drive to increase their share price. Twitter must make their shareholders happy to stay alive, and it just so happens that bots and negative interactions on their platform drive their engagement metrics upwards. Every time someone quote-tweets to highlight something toxic, it gets their followers to interact with it and continue the cycle. It is known that outrage spreads quicker than positive and uplifting content, so from a financial point of view, it makes no sense for Twitter to get rid of the sources of outrage, and their track record is a testament to that.

In my opinion, “instant, public, global messaging and conversation” should, in fact, be global. Distributed between independent organizations and actors who can self-govern. A public utility, without incentives to exploit the conversations for profit. A public utility, to outsurvive all the burn-rate-limited throwaway social networks. This is what motivated me to create Mastodon.

Besides, Twitter is still approaching the issue from the wrong end. It’s fashionable to use machine learning for everything in Silicon Valley, and so Twitter is going to be doing sentiment analysis and whatnot when in reality… you just need human moderators. Someone users can talk to, who can understand context. Unscalable for Twitter, where millions of people are huddled together under one rule, but natural for Mastodon, where servers are small and have their own admins.

Twitter is not a public utility. This will never change. And every tweet complaining about it simply makes their quarterly report look better.

To get started with Mastodon, go to and pick a place to call home! Use the drop-down menus to help narrow your search by interest and language, and find a community to call your own! Don’t let the fediverse miss out on what you have to say!

Twitter is not a public utility was originally published in Mastodon Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

27 Dec 10:23

For more info on Kevin A. Patterson’s book,...

For more info on Kevin A. Patterson’s book, “Love’s Not Color Blind”, check out:

06 Dec 10:53

Amazon Logic

by CommitStrip

04 Dec 18:04

Last.fm Was the Only Music Social Network That Made Sense

by Elia Alovisi

A version of this article originally appeared on Noisey Italy.

My first profile on Last.fm was called "Nergal-Behemoth," in honor of the frontman of my favorite Polish death metal band. The first two tracks I scrobbled, on February 21, 2006, were "Africa" by Toto and "Electric Crown" by Testament. I didn't know it at the time, but the keyboards—soft as Steve Porcaro's velvet—had broken my faith in the God of Metal. As time passed, I'd start listening to folk music, and then classical, psych, and prog rock; I'd become obsessed with Johnny Cash, I'd go through a phase in which I resembled a fanboy of De André; I would discover emo and electronica and indie and hip-hop, and then more classical music and pop. And since I've always kept my account active, today, more than ten years later, I can study how I listened to music throughout a good part of my life. Day by day, song by song.

Between two profiles, the aforementioned Nergal-Behemoth and the subsequent "EliaSingsMiFaMi" (dedicated to that splendid album), I listened to 164,624 songs. I've listened to Sufjan Stevens 1,864 times, Drake 1,120, Kanye West 1,058, and Caneda 985. Forty times—many more than necessary—the notes of "Follow the Reaper" by Children of Bodom entered my ears, whereas I don't regret the 48 times I listened to the crystalline ambience of "Requiem For The Static King Part One" by A Winged Victory For The Sullen. If I hadn't read the comments and messages I received on my profile, I probably never would have met a few of my closest friends. If it hadn't been for the site's diary feature, I wouldn't have a list of all the concerts I attended between 2006 and the present day. But time passes, and today all that remains of Last.fm is the promise of a musical democracy based on exchange and sharing—a promise that wasn't kept and which was obliterated by the evolution of the music market and by the internet economy.

Last.fm was born shortly after the start of the millennium as the union of two projects. The first was an idea by Richard Jones, an Englishman who developed, for his Bachelor's thesis in Computer Science, a project called Audioscrobbler: a plug-in that, once installed, tracked all the songs you listened to on your computer. The information gathered—the songs scrobbled—was then uploaded to an online database, which users of the service could access to build a library of their personal listening history and compare it with that of other users. The second project, Last.fm, was a web radio created by a group of German and Austrian boys who used the same program to gauge the tastes of each individual user, via an algorithm with two buttons the user could click to pass a positive or negative judgment on the track they were listening to. Jones and the Last.fm team started collaborating in 2003, and in 2005 they merged into a single website. They gave their users the ability to scrobble songs from different players. It was the beginning of a unique, collective musical experience, one that seemed impossible to replicate in the future.

A screenshot of my profile in 2007. Fortunately, Last.fm has immortalized the moment when I discovered Impaled Northern Moonforest, the best band in history.

When the site flourished, the music market of the previous decade wasn't prepared for the foundational revolution that Last.fm brought shortly thereafter. The traditional gatekeepers of content—record labels, print magazines, radio, and television—always addressed a formless public, molding their audience's tastes through commercial channels and a top-down criticism that had been consolidated in the preceding decades. Listeners who didn't identify with this top-down approach gathered in online communities such as forums to create, on a smaller scale, a musical democracy that functioned laterally.

Even within forums and message boards there were structures of power, defined by admin roles and by the number of posts a user made over the course of a year, a symbol of authority earned through tenure. Instead of enjoying a flux of content on various music-related topics—things that, to listeners who experienced music solely through mainstream means, were fleeting, impalpable moments (a phone call into a radio or TV show, a text message confined to the screen of your phone)—forum participants united and created online communities endowed with their own values, communication codes, and musical tastes that were constructed collectively over time. Last.fm captured this spirit, seized upon it to perfection, and made its users feel like they were playing an important role in the creation of a common musical discourse.

The site functioned like a personal musical museum ("Here's everything that I listened to!") based in part on competition ("Look how much I listened to!") and recognition ("You listen to what I listen to, so we're compatible"—there was even a compatibility meter that ranked how much you had in common with other users). The site's structure encouraged such interactions: Everything was clickable, organized, up to date, and accessible in real time. The idea wasn't to apply this structure to a set catalogue of music, but to the unorganized ecosystem of MP3 files on an individual's computer. That way, even if you'd ripped the demo of a local band, you could find other people who'd also listened to them through the artist's dedicated page and talk to them about it.

These exchanges were the driving factor behind the platform's implementation of various communication methods: a comment section on every artist page and on a user's personal profile, a private messaging service, and the ability to create groups. Since it was a site for people who were passionate about music—and in turn easily intrigued by other people who shared that same passion—it wasn't rare that friendships and loves were born between one scrobble and the next. It wasn't all that weird to come across the profile of someone who listened to that very tiny post-punk band that broke up after their first EP, the one you loved so much, and fall head over heels for a 180 x 180 pixelated avatar. What could start as a "Hey, your library is bomb!" could turn into a tangential conversation about your respective message boards, and possibly turn into something more.

Last.fm predicted the shift of online communication towards something hyper-fragmented and specialized. No one chose the music you listened to: you were the person who created a personalized stream beginning with an artist, a tag, or the profile of another user, and then tweaked that algorithm until it produced a track agreeable to your ears. You weren't obligated to insert yourself into a general discussion; instead, you were able to make connections with people who listened to things that interested you, in an online environment designed to foster micro-conversations. There was also a blogging element, which has since disappeared: each user could keep a personal diary, which prompted different forms of posts adopted by other profiles (surveys, lists, advice). "All the concerts I've gone to" was the one most people took to, taking advantage of a function that also later fell out of use: events that could be added and updated directly by users, and searched according to geographic criteria.

A screenshot of my profile from 2009. There's also a link to my Netlog, with a quote from Vasco Brondi attached at the beginning of the "About Me" section. I was 18 years old. But right below there's GY!BE, come on.

The golden year of Last.fm was 2007, when it was acquired by CBS. The network's investment was poorly timed—a year later, Facebook (which barely resembled what it does today) experienced a popularity boom and started to dominate the internet. The music site's problems started a few years later, when it found itself in the middle of its first major media crisis: in 2009, No Line On The Horizon by U2 prematurely appeared online. TechCrunch accused Last.fm and CBS of having provided the Recording Industry Association of America (RIAA), an organization that safeguards the interests of the music industry (and which fought peer-to-peer and torrenting services for years), with the personal data of all the users who'd listened to songs from the album before its release date.

Both the website and the network denied it, but many users cancelled their accounts in protest. After it was acquired by a major player in the media market, the site had started to devolve into something different and less free. Already in 2007, the radio had started charging a membership fee of €3.00 in every country except Germany, the United States, and the United Kingdom. They removed the ability to stream individual tracks in full, swapping in short previews or a few sample songs selected by the artists themselves. The whole thing sawed the legs off many small, independent bands seeking visibility. In 2013, the radio was scaled back for the first time, then offered exclusively in a handful of countries, then substituted entirely by a series of embedded YouTube videos and by a now-defunct partnership with Spotify—an admission of surrender from the streaming component of the site, clearly crushed by the weight of competition that was already too strong and too organized for it to keep up.

All of this was compounded by a series of redesigns that pained the platform's long-standing users. The profiles became more standardized and less personal, which made Last.fm feel more sterile overall. Where there used to be an "About Me" bar on the left side of the page that each user could fill with words and images (it was common to make enormous PNGs with the logo of your favorite band, worn like a badge of pride above quoted lyrics, a link to your blog, or a list of concerts you'd recently attended), today, a user can only upload a profile picture or a link, and up to 200 characters of text without any formatting.

A screenshot of my present day profile. Notice how empty it is. All the white space is due to the fact that I have AdBlock enabled, I think.

Unfortunately, the height of Last.fm's success coincided with the moment that online music fell under stricter regulation. First came the crackdown on peer-to-peer services like eMule, Limewire, and Bearshare (but not Soulseek), then the death knell for RAR file-hosting services like Megaupload, Rapidshare, and Mediafire—all of which later culminated in attempts to kill torrenting. Before contemporary streaming services like Spotify, Apple Music, and YouTube came along and became the standard—bringing with them the constant presence of a 3G or WiFi signal—discovering music meant downloading it and building a personal trove of files. Last.fm was the service that leveraged this necessity, allowing its users to discover new music and, after a generic search like "[ARTIST NAME] [ALBUM NAME] blogspot megaupload," show it off in their scrobble history.

At present, Last.fm has a lot of difficulty generating a profit. Possibly because it no longer serves a purpose aside from logging what its users are listening to. It's no longer a catalyst for discussions and events, given that there's already Facebook and Songkick; nor is there a need for a personalized radio, thanks to algorithm-driven recommendations from various streaming services. In the end, the music industry to which Last.fm was a counterpoint no longer had the power to create renowned musicians from meager local artists, nor to direct public tastes: today, labels only try to acquire, through an artist's name, a preexisting community of fans that the artist garnered themselves. Last.fm didn't play a central role in the changing of this paradigm, maybe because it never understood how to make itself flourish economically. Investing in the concept of a personalized web radio and deciding to charge a fee for it turned out to be an unwise choice in an environment where music was practically becoming free and accessible, through tenuously legal YouTube uploads and the rise to prominence of streaming services.

"The idea of creating such a personalized space on the web acts as a counterpoint to the prevalent 'mass mentality' of the charts and invites the user to orient himself in an autonomous way, distancing himself from the typical consumer mentality," wrote one entity that awards the best European multimedia products each year, in 2006. "The user decides, criticizes, and therefore selects the music best adapted to his taste or humor. [Functioning] in this way, Last.fm will always be relevant." Fifteen years after its founding, "relevant" isn't the most suitable word to describe Last.fm's role in the digital media landscape. It's more the relic of a passionate moment of the online musical experience, a miniature era of rebellious freedom in which discovering music wasn't a question of algorithms but a personal undertaking, or a shared mission.


03 Nov 09:58

Easy and quick vegan chickpea curry

by Yasmine

Indian food has always been among my favorites, and before turning to a plant-based diet, I used to be a big fan of the chicken curry at Indian restaurants.

Naturally, I immediately looked for a plant-based alternative, and although there are different types of plant-based curries, I found that the chickpea version is the one I like the most.

Like most of the recipes I’ve shared so far, this curry is very quick and easy to make. You’ll need a couple of key ingredients (coconut milk and curry paste) that you may not have on hand but that you can find pretty much everywhere.

I serve this curry with brown rice or basmati rice and it’s absolutely delicious.

Let me know if you try the recipe by leaving a comment below or by tagging me in your pictures on Instagram (@theveganlifeofyas).


Easy and quick vegan chickpea curry

Created by Yasmine on August 14, 2017

You can swap the basil for cilantro if you’re not a fan of basil’s flavor. You can add a tablespoon of maple syrup if you’d like more sweetness.


  • onions, diced
  • 2 tbsp. tamari or soy sauce
  • 1/2 lime, juiced
  • 1/2 c basil, chopped
  • tomatoes, diced
  • 1 can chickpeas, drained and rinsed
  • 2 tbsp. tikka masala curry paste
  • 1 1/2 c coconut milk
  • tbsp. extra virgin olive oil
  • cloves garlic, minced
  • salt & pepper to taste


  1. Heat the olive oil over medium heat. Add the onion and garlic and cook until the onion is translucent or lightly browned, about 4-5 minutes.
  2. Stir in the coconut milk and the curry paste, and mix until the paste is fully incorporated.
  3. Season with salt and pepper.
  4. Add the chickpeas and the tamari (or soy sauce) and give everything a good stir.
  5. Bring to a boil; it should take about 5 minutes.
  6. Add the tomatoes, basil, and lime juice, mix well, and let it cook for a couple more minutes.
  7. It's ready! Enjoy.
20 Oct 09:33

Mastodon: how to navigate this new social network

by Renato Cerqueira

Update: a more up-to-date version of this guide can be found here; this Medium version will no longer be updated going forward.

On toots, servers, and custom emojis

Maybe you've heard of Mastodon: a few months ago the social network blew up in the international press as the network that came to shake up Twitter. But maybe not, since coverage in the Brazilian press was apparently quite limited. Either way, the network has just hit version 2.0 and is approaching 1 million users, along with more than 1,000 active servers.

Mastodon is a microblogging social network, similar to Twitter. It aims to be a place where users can post statuses of up to 500 characters. So far, much the same as Twitter.

The difference starts with the network's model, which resembles email, with many servers talking to each other, more than Twitter's model of one big server with everyone inside.

Starting with the hard part: how do the servers work?

Let's take the cute image from

Mastodon is made up of several servers. There's mastodon.social, maintained by the project's leader, Eugen Rochko. Or Mastodon(te), maintained by yours truly. The two are in different places, controlled by different people, but they still talk to each other. If I want to send Eugen a message, I just send it to his address there and he'll receive it on his side, and if he wants to answer me he'll reply to my address and I'll receive it over here. In other words, instead of being just a handle, you are a handle at an address, just like email.
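This email-like addressing means a full fediverse handle is just `user@server`. As a minimal sketch (the `parse_handle` helper below is hypothetical, not part of Mastodon itself), splitting such a handle looks like this:

```python
def parse_handle(handle: str) -> tuple[str, str]:
    """Split a fediverse address like 'Gargron@mastodon.social'
    into (username, server), the same way an email address splits."""
    handle = handle.lstrip("@")  # a leading @ is optional in most UIs
    username, _, server = handle.partition("@")
    if not server:
        raise ValueError(f"not a fully qualified handle: {handle!r}")
    return username, server

print(parse_handle("@Gargron@mastodon.social"))
# → ('Gargron', 'mastodon.social')
```

A bare `@username`, with no server part, only makes sense inside one server, just like a bare mailbox name only makes sense inside one mail domain.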

Just like Twitter in the early days, you can follow a special timeline, the local timeline, which has all the toots…

Hold on. I haven't told you about that yet, have I? When someone posts something on Mastodon, it's called a toot, pronounced "toot."

Toot is the English onomatopoeia for a horn. Source: toastmonster

So, as I was saying, you can follow a special timeline with all the public toots from the users on your server. It's a pretty nice way to discover new people and new content.

And then there's the global timeline (also called the federated timeline), which has the toots of all the users seen by the server you're on. It can be a little confusing, because there are people from all over the world posting. There are tools to filter languages in the local and global timelines to help a bit with that.
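These two timelines are also exposed through Mastodon's public REST API: `GET /api/v1/timelines/public`, where the `local=true` query parameter restricts the result to the server's own toots. As a sketch, with an instance hostname chosen purely for illustration, the endpoint URL can be built like this:

```python
import urllib.parse

def timeline_url(instance: str, local: bool = False) -> str:
    """Public-timeline endpoint of a Mastodon instance.
    local=True gives the local timeline; otherwise the federated one."""
    query = urllib.parse.urlencode({"local": "true"}) if local else ""
    base = f"https://{instance}/api/v1/timelines/public"
    return f"{base}?{query}" if query else base

print(timeline_url("masto.donte.com.br", local=True))
# → https://masto.donte.com.br/api/v1/timelines/public?local=true
```

Fetching that URL needs no authentication on most servers, which is why third-party apps can show these timelines before you even log in.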

OK, but that just made things more complicated for me. What's the advantage?

The advantage is that each server is run by different people. You can certainly find a server where you'll be free of the content you don't want to see and will see more of what you do want. Looking for a server made for Brazilians? There is one. Want a server more geared toward the LGBTQ community? There is one. Or maybe you're looking for a server aimed at people interested in books, and there's one of those too. And if you want to overthrow capitalism and talk about kittens, there's a corner for that as well.

What you see on the local timeline will vary a lot from server to server. What you see on the global one will vary because servers can block content from other servers. So if you're on a server that doesn't allow Nazism, fascism, and the like, you probably won't see that kind of content in your timeline (and if it shows up, you can report it to the administrators, who will probably block it).

And at the end of the day, anyone with technical knowledge or a bit of money can put a new server online. So if you want to run a server for fans of the Brazilian football championship, you can do that too. (Just throwing that out there. I don't think one exists yet. Go for it :)

OK, so how do I pick my server, then?

There's a site with a short questionnaire to help you with exactly that question: Mastodon Instances.

Toots and tweets

Toots are similar to tweets, but there are a few differences.

  1. Toots can have up to 500 characters*;
  2. Toots have privacy settings:
    > You can post publicly (i.e., everyone can see your toot, and it shows up in the local and global timelines)
    > You can post unlisted (i.e., everyone can see your toot, but it doesn't show up in the local and global timelines)
    > You can post privately (in which case your toot is only visible to your followers)
    > You can post a toot directly to specific users; in that case it works like a message, and only the users you mention will see it.
  3. Spoiler / content warning: this one is wonderful. While writing a toot, you can attach a content warning to it. It then shows up like this:
Careful. Contains spoilers!
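These visibility levels map directly onto Mastodon's status-posting API: `POST /api/v1/statuses` accepts a `visibility` field (`public`, `unlisted`, `private`, or `direct`) and a `spoiler_text` field for the content warning. A rough standard-library-only sketch (the helper name and the instance/token values are made up for illustration):

```python
import json
import urllib.request

VISIBILITIES = {"public", "unlisted", "private", "direct"}

def build_status_request(instance, token, text,
                         visibility="public", spoiler_text=""):
    """Build (but don't send) the POST /api/v1/statuses request for a toot."""
    if visibility not in VISIBILITIES:
        raise ValueError(f"unknown visibility: {visibility!r}")
    payload = {"status": text, "visibility": visibility}
    if spoiler_text:  # shown as the content warning, hiding the toot body
        payload["spoiler_text"] = spoiler_text
    return urllib.request.Request(
        f"https://{instance}/api/v1/statuses",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# A followers-only toot behind a spoiler warning (token is a placeholder):
req = build_status_request("masto.donte.com.br", "YOUR_TOKEN",
                           "He dies at the end!", visibility="private",
                           spoiler_text="Careful. Contains spoilers!")
```

Sending the request with `urllib.request.urlopen(req)` would publish the toot; the token comes from an OAuth app registered on your instance.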

Oh, and of course: everything in chronological order. No out-of-order toots or friends' likes showing up in your timeline.


Version 2.0 is fresh out of the oven! And it brings a new feature I find particularly cool: custom emojis!

Yes, there's party parrot and lots of flags!

Besides the regular emojis you find on your phone, instance administrators can add other emojis.


Yes, there are apps for Android, for iOS, for desktop, and even for certain text editors 😉

For example, on Android the most common ones are Tusky, Twidere (which works for both Mastodon and Twitter), Mastalab, Subway Tooter, and 11t.

For iOS, Amaroq and iMast.

Also, both Android and iOS recently gained support for PWAs, so you can use your instance's own website as an app on your phone.

For other systems and a more up-to-date list, take a look at this list maintained by the project: apps.

Since Mastodon is open-source, most of its apps are too. So you can keep looking until you find an app that makes you feel at home.


Switching social networks is a complicated business, which is why there are tools that try to help a bit with the transition.

Mastodon Bridge: Created by Eugen Rochko himself, the bridge helps you find your Twitter friends on Mastodon and vice versa. After creating your account on one of the servers, just go there and connect your Twitter and Mastodon accounts. It will then show you where you can follow your Twitter friends.

Mastodon Twitter Crossposter: This one is mine. You connect your Twitter and Mastodon accounts and then decide how you want to post between the networks: from Twitter to Mastodon or from Mastodon to Twitter, and which kinds of posts get crossposted. It's open-source, and there's plenty to do if you'd like to contribute.

More information

The project's page is a good starting point: The Mastodon Project. There's a Portuguese translation over there. Speaking of translation: both the project page and the Mastodon interface itself were translated into Brazilian Portuguese by Anna 🎉

There's much more information, in much more detail, in the project's documentation repository, but most of it hasn't been translated into Portuguese or Brazilian Portuguese yet. (There's an opportunity for you.)

A bit older but just as useful is Qina Liu's piece: What I wish I knew before joining Mastodon. Although it's outdated in a few places, it's still a fun read, and it's what inspired me to write this one :)

* Note that toots are 500 characters by default. In practice some servers allow more; on one now-defunct server, for example, the limit was 666 characters. 😜

14 Sep 08:09

C: \>_ A fear submitted by J. to Deep Dark Fears -...

C: \>_ A fear submitted by J. to Deep Dark Fears - thanks!

My new book “The Creeps” is available now from your local bookstore, Amazon, Barnes & Noble, Book Depository, iBooks, IndieBound, and wherever books are sold. You can find more information here.

01 Sep 12:59

Supervillain Plan

Someday, some big historical event will happen during the DST changeover, and all the tick-tock articles chronicling how it unfolded will have to include a really annoying explanation next to their timelines.
01 Sep 12:57

Eclipse Science

I was thinking of observing stars to verify Einstein's theory of relativity again, but I gotta say, that thing is looking pretty solid at this point.