Shared posts

06 Sep 09:30

Recreate the Conditions

We've almost finished constructing the piña collider.
02 Sep 12:12

Modern Tools

I tried to train an AI to repair my Python environment but it kept giving up and deleting itself.
02 Sep 12:11

:/ A fear submitted by Gladia to Deep Dark Fears - thanks!

You can pick up signed books and original artwork in my Etsy store - check it out!

12 Aug 07:45

Average Familiarity

How could anyone consider themselves a well-rounded adult without a basic understanding of silicate geochemistry? Silicates are everywhere! It's hard to throw a rock without throwing one!
24 Jun 08:24

Come out wherever you are! An anonymous fear submitted to Deep...

16 Jun 09:40

What was that? An anonymous fear submitted to Deep Dark Fears - thanks!

Looking for a gift for that weirdo you hang out with? You can find signed copies of my Deep Dark Fears books in my Etsy store -> CLICK HERE!

#comics #deepdarkfears

10 Jun 09:17

First Time Since Early 2020

Gotten the Ferris wheel operator's attention
27 Apr 08:19

Fun in the sun! A fear submitted by Brooke to Deep Dark Fears -...

17 Feb 13:36

mRNA Vaccine

To ensure lasting immunity, doctors recommend destroying a second Death Star some time after the first.
18 Aug 07:42

Dependency

Someday ImageMagick will finally break for good and we'll have a long period of scrambling as we try to reassemble civilization from the rubble.
25 Jul 11:41

Asterisk Corrections

I like trying to make it as hard as possible. "I'd love to meet up, maybe in a few days? Next week is looking pretty empty. *witchcraft"
09 Jul 11:23

Acceptable Risk

Good thing I'm not already prone to overthinking everyday decisions!
28 Jun 12:34

What it's like to be part of a cooperative supermarket

Anyone who knows me knows I've been talking about this cooperative for a while now. Sit down with me at a bar and odds are you'll hear, or have already heard, about how wonderful BEES is and so on. I noticed I hadn't written anything about it yet, and figured it would be a good way to spread the idea further.

I grew up in a big city, and although there was a little corner store close to home, with those glass jars full of candy I'd beg my mom for when we passed by on the way to school, for me a supermarket was generally something with "super" in the name. You know the type: a huge space, a thousand aisles and an endless stock of every kind of thing.

Most cooperative supermarkets are based on the idea of the Park Slope Food Coop in New York. The Park Slope Food Coop was founded in 1973 and operates on a system where, to shop at the supermarket, you have to be a member. To become a member, people pay a one-time fee (buying a share in the cooperative), and to keep their status active they work one 2h45 shift per month at the supermarket, doing the tasks that keep it running (stocking, maintenance, administration, etc.).

I've never visited Park Slope, but the first cooperative supermarket I did visit was La Cagette, in Montpellier, in early 2019, and I was immediately charmed by the idea. The atmosphere of the place was quite different, closer to the corner store I remembered from childhood. Instead of an environment full of advertising, with certain products lit up as if they were sacred, it was much calmer; the people coming and going knew each other and chatted at the checkout or while stocking a shelf.

A traditional supermarket has all the elements of modern capitalism: aggressive marketing, laughable discounts if you hand over your personal information and identify yourself every time you shop so they can build a profile of you, underpaid employees performing repetitive tasks for hours on end (and often more than one job at the same time). On top of that, supermarkets tend to apply variable profit margins: vegetables or pasta are often very cheap and carry a tiny margin, while things like a pen, a battery or an organic product carry a gigantic one.

When I got back to Brussels after discovering La Cagette, I went looking to see if anything like it existed here. And I found the BEES coop.

The BEES coop started as a buying group: the people in the group decided together what to buy and purchased collectively to get reduced prices for the group. But the idea kept evolving until, also taking inspiration from Park Slope, it became the cooperative supermarket that has been running since 2017.

The supermarket looks like a traditional one: there are baskets and carts and a range of products going from rice and meat to toiletries and bicycle parts. The big difference is in how what enters the store is chosen: that choice is also made in a participatory way, and it prioritizes organic and local products and products from cooperatives; small producers often get priority over big distributors, even without the "xyz" certification label. There's also a lot available in bulk: pasta, rice, lentils, liquid soap, wine and even olive oil. You can bring your own bag, jar or bottle and fill it up right there.

Besides the members, there are six salaried people working part-time. They also work in the store, but a large part of their job is placing product orders, deciding where products go in the store, receiving deliveries, and briefing the members at the start of each shift on which tasks are pending.

Every member has a card that identifies them at the store entrance and lets them shop. Members can also name two other people over 18 living under the same roof; these don't have to work, they are "eaters" and get their own cards to shop with.

As a member, I work 2h45 in the store per month. Sometimes I'm at the checkout, other times restocking shelves, at the store entrance, or tidying the stockroom. There are also people who work in the office, welcoming new members, updating the status of current members and other tasks of that kind; people who cut cheese; people who clean the store and put the vegetables in the cold room at the end of the day; and people who work on specific committees, like the one that organizes the general assembly or the one that selects the products.

A shift normally has a "super-member", someone who has done a training with the salaried staff to learn more about how the store works, and who acts as the point of contact for the other members in case of unexpected absences. It's also the super-member who the staff brief on which tasks have priority in the current shift. Besides the usual jobs, like the checkout and the store entrance, sometimes deliveries need to be organized in the stockroom, new products need to be registered in the store's system, or the beer shelves are almost empty and need special attention ;)

Each work shift in the store starts with the members gathering, getting the shift's tasks from the super-member, and then splitting up to get them done. A few hours later everyone says their goodbyes, with the next shift only four weeks away.

Working and shopping at BEES is a pleasant experience. Not only do you get to know the faces that share your shifts, you start running into the same people who shop at similar times as you, and later you run into those people at the general assemblies. The wine committee sometimes puts up a warm notice saying they've found a delicious new wine you really should try. When you're at the checkout it's normal to see people chatting about how this or that product is new and so good for this or that reason. And in the aisles it's common to see people recognizing each other and striking up a conversation.

One important thing people often ask me is: "okay, that's all very nice, but what about the prices?". And the truth is, it depends. As I said up top, a traditional supermarket works with variable margins; each product has a different one. At BEES the margin is constant: every product has a 20% margin. The store is non-profit; the margin is there to cover the store's upkeep and the employees' salaries.

Our experience is that the average price of a cart is very close to that of a traditional supermarket. We've also started buying more organic products, because their price is close to what non-organic costs at a normal supermarket. Some products are more expensive, and some traditional brands are simply not sold there.

And the amount of plastic we bring home has dropped spectacularly. Supermarket products here tend to come with much more plastic packaging than in Brazil, I feel. BEES actively encourages reuse, so we started using washable cloth bags for vegetables and bulk goods, we give preference to glass packaging that's returnable (and even when it isn't, there are special glass bins here and the glass gets recycled), and for things like oil we fill up from the bulk drums, reusing the same glass bottles for day-to-day cooking.

The project isn't perfect, of course, but it's a pleasant demonstration that it's possible to organize in different ways within capitalism, while we haven't left it behind. That it's possible to find alternative ways of doing essential things, like the month's grocery shopping, that operate collectively, with a concern for the environment and for the human side, for the worker at the other end of production, avoiding the big conglomerates that infect daily life and hold so much power.

I searched around a bit and couldn't find similar projects in Brazil. I know there are stores, like the MST's Armazém do Campo, that try to bring producers and consumers closer together and sell products from settlements and small farmers. But I found nothing with the same participatory model as BEES or Park Slope.

In any case, a few suggestions for you reading this: pay attention to how much of what you buy at the supermarket you buy because of advertising. Consider buying more things in bulk; in Rio I know there are traditional bulk stores, they're no BEES, but you'll feel how much it cuts down on plastic. And finally, look for projects that connect you to small producers; while we can't strip agribusiness of its power directly, we can chip away at it by going to the small producer whenever possible.

21 Apr 07:08

Sourdough Starter

Once the lockdown is over, let's all get together and swap starters!
02 Mar 14:52

Ringtone Timeline

No one likes my novelty ringtone, an audio recording of a phone on vibrate sitting on a hard surface.
15 Aug 10:30

Old Game Worlds

Ok, how many coins for a cinnamon roll?
22 Jan 12:14

Mastodon 2.7

Polish translation is available: Mastodon 2.7 The fresh release of Mastodon brings long-overdue improvements to discoverability of content and the administration interface, as well as a large number of bug fixes and extra polish. The 2.7 release consists of 376 commits by 36 contributors since October 31, 2018. For line-by-line attributions, you can peruse the changelog file, and for a historically complete list of contributors and translators, you can refer to the authors file, both included in the release.
14 Jan 22:51

2019 - The year of leaving Instagram

In early 2018 I finally deleted my Facebook.

I had already complained about Facebook back in 2015; at the time the concern was more about how I consumed information on the internet. I looked for alternatives to Facebook, which had become my way of consuming news and passing it along.

The privacy concerns started around the same time; posts like this one from Splinter, from 2016, or this one from Gizmodo, from 2017, showed that Facebook's friend-recommendation algorithms had far more information than they let on.

Then the whole Cambridge Analytica scandal gained traction, followed shortly by a mountain of even worse revelations, and the #DeleteFacebook campaign started picking up steam. A New York Times article describes how Facebook fought back against the scandals using lobbying and smear campaigns against other companies.

One thing always worth remembering, though, is that Facebook owns Instagram and WhatsApp. These days there are those who say that Facebook's acquisition of Instagram was the biggest regulatory failure of the past decade.

Leaving Facebook was an important step, but to stay in touch with people it's always very hard to escape WhatsApp and Instagram. The truth is that, ideally, Facebook (and Google, while we're on the subject) would go through something similar to what Microsoft went through at the end of the '90s, when the company was ordered to be broken up, and the result was breathing room for other browsers like Firefox and Chrome, which could finally bring change to an internet that until then had been basically dedicated to Internet Explorer.

However, since we're talking about applications that are social by nature, individual actions like #DeleteFacebook really can have effects that drain power from the network as a whole. Proof of this is how Facebook has already started shifting its attention to Instagram in various ways, like when Instagram tested notifications that people you followed had posted photos on Facebook. Meanwhile, WhatsApp is expected to get ads soon.

All of this shows how Facebook intends to keep extending its influence into its other apps, in an attempt to keep its control over its users' data.

That's why one of my resolutions for 2019 is to try to leave Instagram. My idea is to start moving more and more to the Fediverse, now that Pixelfed, an Instagram alternative, is taking shape.

It remains to be seen whether the move will be possible. How about joining me on the Fediverse? 😉

03 Jan 12:40

Why does decentralization matter?

Japanese translation is available: なぜ脱中央集権(decentralization)が重要なのか? I’ve been writing about Mastodon for two whole years now, and it occurred to me that at no point did I lay out why anyone should care about decentralization in clear and concise text. I have, of course, explained it in interviews, and you will find some of the arguments here and there in promotional material, but this article should answer that question once and for all.
28 Nov 09:50

The Depressing Phenomenon of Men Who Ask Their Dates No Questions

by Madeleine Holden

They do, however, talk a lot about snowboarding, ‘Mad Men,’ Socrates, their own penises, Amnesty International, mushrooms, foot fetishes, monogamy, war and trash bags

Nikki, a 22-year-old journalism student from Minneapolis, is telling me about the worst date she’s ever been on, with a man called Athens she met at college. “He talked about his goals, his week, his career, his meditation, his favorite books, his respect for ‘real’ musicians and how most people pronounce ‘namaste’ wrong,” she says. Nikki waited in vain the entire date to be asked a single question about herself while Athens raved about philosophy, monogamy, wanting to live in a van and how acid could lead to a higher sense of self. “He texted me the next day about how much fun the date was,” she continues, “and he spelled my name wrong in the text.”

Nikki’s experience is bleakly funny, but it’s far from an anomaly. In the past week, I’ve heard from more than 250 women, men and non-binary people about their experiences with men asking them zero questions on dates. For example, Diana, a 25-year-old New Zealander currently based in Indiana, recently went on a date with the man who fixed her dishwasher. Assuming she was from Australia, he monologued about snakes, Steve Irwin and prison colonies while ordering pork nachos for the two of them (Diana is a vegetarian). After several hours of unidirectional conversation, Diana hadn’t been asked to share a single personal detail. “He didn’t ask me anything,” she tells me. “Like, not one thing. To this day, I’m not sure if he knows my name.”

Some of these men went into excruciating detail about dull topics while their dates sat across from them uninterrogated about their own jobs, dreams, values, favorite TV shows and best jokes. Vanessa, a 49-year-old consultant in Wellington, tells me about a date who treated her to a speech about his new office layout without learning a single detail about her. “He talked about how Bryan at work had got a desk next to the window, which was obviously a travesty,” she says. “Then he explained at length how his phone charger wouldn’t fit the electrical plug on his desk.” I heard from people whose dates — all men — Chromecasted their haircut pictures, performed feeble magic tricks, sang songs, broadcast the date on Instagram, adopted the downward dog position, watched the bar TV or pulled out their phones and began texting; anything but ask a solitary question of their dates in return, most of whom had been sitting like free therapists for hours.

To add insult to injury, many of the women who shared these stories with me said that the men told them later that they felt the dates had gone swimmingly, often asking for a second. This makes sense: being able to speak about oneself freely and without interruption to a patient, attentive audience is a service that usually costs upwards of $150 a session. If some smart, attractive social media editor from Ohio is willing to act as a free therapist for a few hours — and as a semi-relevant aside, almost all of these men refused to pick up the check for dinner — it’s no wonder the same men were lining up for more. As Anna, another woman I spoke to about her zero-question date, puts it: “Of course he thought the date went well. He’d been able to talk about himself uninterrupted for hours, while I looked on bored.”

Most of the people I spoke to about this phenomenon were women, but several gay men and non-binary people had near-identical experiences with romantic prospects who asked them no questions. “That happens so frequently dating other queer or gay men,” Kyle Turner, a 24-year-old freelance writer based in Brooklyn, tells me. “I spend a majority of the time asking them questions and they rarely return the favor, so at a certain point, I either try to slip in things about myself in response or give up.” Several women told me that, at a certain point, they began to treat the lack of reciprocity like a game, waiting in amusement to see how long it might take to be asked something about themselves. “I invented a bad first date drinking game,” Allie, a 27-year-old organizer from the Bay Area, tells me. “See how many sips and songs you can get through before he stops talking.” The date typically ends with Allie drunk, bemused and still a stranger to her date, despite her being treated to pretty much his entire inner world.

When I asked these ignored daters to hazard a guess as to the cause of this self-absorption, I got a variety of responses. Some thought it may have been nerves, while others felt men in general were more likely to view dates as a personal marketing exercise (“Here’s why you should find me attractive”) rather than an opportunity to get to know a romantic prospect. For a professional opinion, I spoke to Elise Franklin, a psychotherapist based in L.A., who tells me that the nerves hypothesis has limited applicability. “Sure, it can definitely be nerves for some,” she says. “I know I ramble when I’m nervous, and that’s common.” However, she says that a more significant explanation for the phenomenon is narcissism; a personality trait more common in men than women. “Narcissists can’t tolerate being told, ‘Your feelings don’t match my feelings,’” she explains. “To them, their feelings are everyone’s feelings — if I feel this way, then you feel this way, and if I’m interested in this, you are too.”

“Narcissism is encouraged in men,” Franklin continues. “Men are discouraged from mirroring their parents, and other members of society in general.” Because of this, she says, men are more likely to end up in the position of the oblivious, raving dater than women are. “Women are, in general, expected to be people pleasers,” she says. “We’ve learned our worth through social currency, and we’ve been the understanding ear for centuries.” She points out that her own listening profession, therapy, is dominated by women — the American Psychological Association found that there were 2.1 female psychologists for every male, and in less professionalized roles such as counseling, the gender gap is even larger.

Is this really such a gendered phenomenon, though? Aren’t women just as capable of being bloviating, self-absorbed bores? Yes, but with the significant proviso that social attitudes to gender mean that narcissism is tolerated in men but punished in women — an argument made by Jeffrey Kluger in his 2014 book The Narcissist Next Door, and confirmed in part by studies that show men interrupt women more than the reverse and that listener bias means even when men and women are speaking equal amounts, women are perceived as speaking 55 percent more and men 45 percent less.

As far as I’m aware, there’s no statistically significant data on this topic, and it’s a phenomenon that receives little media attention or academic inquiry. But my Twitter DMs and Gmail inbox are swollen with hundreds of anecdotes, all of which make one thing clear: There’s no shortage of men more willing to wax lyrical about snowboarding, Mad Men, Socrates, their own penises, Amnesty International, mushrooms, foot fetishes, monogamy and war — and to sing songs, strike yoga poses, share the contents of their entire camera rolls and perform magic tricks — than to ask the flesh-and-blood women and men they’re presently on a date with a single question about themselves.

The kicker? Most of them walk away thinking they nailed it.

The post The Depressing Phenomenon of Men Who Ask Their Dates No Questions appeared first on MEL Magazine.

05 Nov 08:06

Round and round. A fear submitted by Alec to Deep Dark Fears - thanks!
My two Deep Dark Fears books are available now from your local bookstore, Amazon, Barnes & Noble, Book Depository, iBooks, IndieBound, and wherever books are sold. You can find more information here!

02 Nov 10:28

Mastodon 2.6 released

After more than a month of work, I am happy to announce the new version of Mastodon, with improved visuals, a new way to assert your identity, and a lot of bug fixes. Verification: Verifying identity in a network with no central authority is not straightforward. But there is a way. It requires a change in mindset, though. Twitter teaches us that people who have a checkmark next to their name are real and important, and those that don’t are not.
24 Sep 09:29

by dorrismccomics
31 Aug 14:38

Mastodon quick start guide

Polish translation is available: Przewodnik po Mastodonie So you want to join Mastodon and get tooting. Great! Here’s how to dive straight in. Let’s start with the basics. What is this? Mastodon is a microblogging platform akin to others you may have seen, such as Twitter, but instead of being centralised it is a federated network which operates in a similar way to email. Like email, you choose your server and whether it’s GMail, Outlook, iCloud, wherever you sign up you know you’ll be able to email everyone you need to so long as you know their address.
09 Aug 09:36

Voting Software

There are lots of very smart people doing fascinating work on cryptographic voting protocols. We should be funding and encouraging them, and doing all our elections with paper ballots until everyone currently working in that field has retired.
23 Jul 15:06

Setting up the development environment for Mastodon on Arch Linux

Well, in the last post I described how to run a Mastodon instance using Arch Linux. But what if you also want to contribute to Mastodon?

I still plan to write a small demo on how to get your hands dirty in Mastodon’s codebase, maybe by fixing a small bug, but before that we need the development environment up and working!

Now, as is the case with the guide on how to run your instance, this guide is very similar to the official guide, and when in doubt, you should double-check the official guide, since it’s more likely to be up to date. This guide is also very similar to the one on running an instance; I mean, it’s the same software, right?

There is also an official guide to setting up your environment using Vagrant, which might be easier if you have enough resources for a VM running side by side with your environment and/or are not running Linux.

This guide is focused on Mastodon, but most of the setup done here will work for other Ruby on Rails projects you might want to contribute to.

This was last updated on 31st of January, 2019.


Note on the choices made in this guide

The official guide recommends rbenv, but I’m more used to rvm. rbenv is likely to be more lightweight, so if you don’t have a preference, you might want to stick with rbenv and ruby-build when installing ruby.

Since this is a development setup, I’m not mentioning any security concerns.
⚠️ Do not use this guide for running a production instance. ⚠️
Refer to how to run a mastodon instance using Arch Linux instead.

Questions are super welcome, you can contact me using any of the methods listed in the about page. Also if you notice that something doesn’t seem right, don’t hesitate to hit me up.

As with the other guide, I tested the steps in this guide on a virtual machine and they should work if you copy-paste them. Things might not work well if your computer has less than 2GB of RAM.


On this page

  1. General Mastodon development tips
  2. Dependencies
  3. PostgreSQL configuration
  4. Redis
  5. Setting up ruby and node
  6. Cloning the repo and installing dependencies
    1. Run each service separately
    2. Run everything using Foreman
  7. Working on master
  8. Tests
  9. Other useful commands
  10. Troubleshooting
    1. RVM says it’s not a function
    2. Mastodon has no css

General Mastodon development tips

From the official guide:

You can use a localhost->world tunneling service like ngrok if you want to test federation, however that should not be your primary mode of operation. If you want to have a permanently federating server, set up a proper instance on a VPS with a domain name, and simply keep it up to date with your own fork of the project while doing development on localhost.

Ngrok and similar services give you a random domain on each start up. This is good enough to test how the code you’re working on handles real-world situations. But as soon as your domain changes, for everybody else concerned you’re a different instance than before.

Generally, federation bits are tricky to work on for exactly this reason - it’s hard to test. And when you are testing with a disposable instance you are polluting the databases of the real servers you’re testing against, usually not a big deal but can be annoying. The way I have handled this so far was thus: I have used ngrok for one session, and recorded the exchanges from its web interface to create fixtures and test suites. From then on I’ve been working with those rather than live servers.

I advise to study the existing code and the RFCs before trying to implement any federation-related changes. It’s not that difficult, but I think “here be dragons” applies because it’s easy to break.

If your development environment is running remotely (e.g. on a VPS or virtual machine), setting the REMOTE_DEV environment variable will swap your instance from using “letter opener” (which launches a local browser) to “letter opener web” (which collects emails and displays them at /letter_opener ).

When trying to fix a bug or implement a new feature, it is a good idea to branch off the master branch with a new branch and then submit your pull request using that branch.
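As a minimal sketch of that branching step (run in a throwaway repository here so the example is self-contained; the branch name fix/my-small-bug is just a placeholder):

```shell
# Demonstrate branching in a disposable repository; in your real clone,
# only the final checkout command matters.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial commit"
# Branch off the default branch before starting work on your fix:
git checkout -q -b fix/my-small-bug
git branch --show-current
```

In the real repository you would branch off an up-to-date master and push that branch to your fork when opening the pull request.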

A good way to check that your environment is working as it should is to check out the latest stable release (for instance, at the time of writing the latest stable release is v2.7.1) and then run the tests as suggested in the tests section. They should all pass, because the tests in stable releases should always be passing.
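To make the release-checkout step concrete, here is the tag checkout in a throwaway repository (the tag v2.7.1 mirrors the example above; running the actual suite is covered in the tests section):

```shell
# Demonstrate checking out a tagged release; self-contained throwaway repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "release commit"
git tag v2.7.1
# Check out the tag (this leaves you on a detached HEAD, which is fine for testing):
git checkout -q v2.7.1
git describe --tags
```

In the actual Mastodon clone you would run git checkout v2.7.1 and then the test commands; my assumption is that this means RSpec (RAILS_ENV=test bundle exec rspec), since Mastodon's suite is RSpec-based.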


Dependencies

Since we’re trying to run the same software as in the production guide, we’ll need mostly the same dependencies:

  • postgresql: The SQL database used by Mastodon
  • redis: Used by Mastodon as an in-memory data store
  • ffmpeg: Used by Mastodon to convert GIFs to MP4s
  • imagemagick: Used by Mastodon for image-related operations
  • protobuf: Used by Mastodon for language detection
  • git: Used for version control
  • python2: Used by gyp, a node tool that builds native addon modules for node.js

Besides those, it’s a good idea to install the base-devel group, since it comes with gcc and some ruby modules need to compile native extensions.

Now, you can install those with:

sudo pacman -S postgresql redis ffmpeg imagemagick protobuf git python2 base-devel

PostgreSQL configuration

Take a look at Arch Linux’s wiki page on PostgreSQL. The first thing to do is initialize the database cluster:

sudo -u postgres initdb --locale en_US.UTF-8 -E UTF8 -D '/var/lib/postgres/data'

If you want to use a different locale, that’s not a problem.

After this completes, you can then do

sudo systemctl start postgresql # will start postgresql

You will need to start postgresql every time you want to use it for development. You could also enable it so it starts with your system if you prefer, but it will be running and using resources even when you don’t need it.

Now that postgresql is running, we can create your user in postgresql:

# Launch psql as the postgres user
sudo -u postgres psql

In the prompt that opens, run the command below, replacing <your username here> with the username you use on your system.

-- Creates user with SUPERUSER permission level
CREATE USER <your username here> SUPERUSER;

The SUPERUSER level will let you do anything without having to switch users. With great power…


Redis

We also need to start redis. Same as postgresql:

sudo systemctl start redis # will start redis

As with postgres, you can enable it to start with the system too, but personally I prefer to start it on demand.


Setting up ruby and node

This part is very similar to the production guide, so I’ll copy and paste a bit:

The first step is to install rvm, which we’ll use to manage ruby versions. For that we’ll follow the instructions at rvm.io. Before running the following command, visit rvm.io and check which keys need to be added with gpg --keyserver hkp://keys.gnupg.net --recv-keys.

\curl -sSL https://get.rvm.io | bash -s stable

After that, we’ll have rvm. To use rvm in the same session, you need to source it:

source $HOME/.rvm/scripts/rvm

With rvm installed, we can then install the ruby version that Mastodon uses:

rvm install 2.6.1

Now, this will take some time, drink some water, stretch and come back.

Similarly, we will install nvm for managing which node version we’ll use.

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash

Refer to nvm’s GitHub page for the latest version.

You will also need to run

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion

And add these same lines to ~/.bash_profile.

And then to install the node version we’re using:

nvm install 8.11.3

And to install yarn:

npm install -g yarn

While we’re at it, we also need to install bundler:

gem install bundler

And with that we have ruby and npm dependencies ready to go.


Cloning the repo and installing dependencies

You need to clone the repo somewhere on your computer. I usually clone my projects into a source folder in my home directory; if you do it differently, adjust the following instructions accordingly.

cd ~/source
git clone https://github.com/tootsuite/mastodon.git

Then, cd ~/source/mastodon and install the project’s dependencies:

bundle install # install ruby dependencies
yarn install --pure-lockfile # install node dependencies

This will also take a while, try to relax a bit, have you listened to your favorite song today? 🎶

Since we created the postgres user before, we can set up the development database using:

bundle exec rails db:setup

This will use the default development configuration to set up the database. Which means: no password, a database user matching your username, and a database named mastodon_development on localhost.
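For reference, those defaults come from config/database.yml. This is only a sketch of what its development section amounts to (the real file in your checkout may use ENV-based defaults; check it for the exact contents):

```yaml
# Sketch of the development section of config/database.yml,
# assuming the usual Rails layout
development:
  adapter: postgresql
  database: mastodon_development
  host: localhost
  pool: 5
```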

In development mode the database is setup with an admin account for you to test with. The email address will be admin@YOURDOMAIN (e.g. admin@localhost:3000) and the password will be mastodonadmin.

Now, you have two options.


Run each service separately

If you checked out the guide to run an instance, you probably noticed that mastodon has three parts: A web service, a sidekiq service to run background jobs and a streaming service. In development you need those three components too, plus the webpack development server, which will compile assets (javascript, css) as needed. In production we don’t need webpack running all the time because we compile the assets only once after we update Mastodon.

To run those separately, you will need one window for each, since each of those holds your terminal while it’s running.

To run the web server:

bundle exec puma -C config/puma.rb

To run sidekiq:

bundle exec sidekiq

To run the streaming service:

PORT=4000 yarn run start

And to run webpack:

./bin/webpack-dev-server --listen-host 0.0.0.0

All of those should start immediately, except for the webpack server, which compiles the assets before starting.

To check that everything is working as expected, open your browser at http://localhost:3000 and you should see the Mastodon landing page!


Run everything using Foreman

Now, most of the time this method is more practical. Running each service by itself is useful when one of them fails to start and you want to see the error, but most of the time you’ll want to start everything at once so you can get coding. In that case, first install foreman:

gem install foreman

And then, when you need to start your dev environment you can do:

foreman start -f Procfile.dev
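Foreman reads Procfile.dev, which ties together the same four processes described in the previous section. Roughly (this is a sketch; check the file in your checkout for the exact commands):

```
web: env PORT=3000 RAILS_ENV=development bundle exec puma -C config/puma.rb
sidekiq: env RAILS_ENV=development bundle exec sidekiq
stream: env PORT=4000 yarn run start
webpack: ./bin/webpack-dev-server --listen-host 0.0.0.0
```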

Working on master

When working on master, the steps are similar to when updating an instance, but you’ll run them much more often, since master changes constantly.

This means, every time you pull changes into your computer (for instance, when you do git pull origin master), you might need to:

# Update any gems that were changed
bundle install
# Update any node packages that were changed
yarn install --pure-lockfile
# Update the database to the latest version
bin/rails db:migrate RAILS_ENV=development

Now, you don’t need to run them every time; you will notice when one of them is not working as it should. How?

Bundler complains like this:

Could not find proper version of railties (5.2.0) in any of the sources
Run `bundle install` to install missing gems.

The name of the gem and version will change, but this means that one of your dependencies is not up to date and you need to run bundle install again.

If the database is missing a migration, rails will complain with:

ActiveRecord::PendingMigrationError - Migrations are pending. To resolve this issue, run:

        bin/rails db:migrate RAILS_ENV=development

This will appear on your console, but also on your browser.

If the ruby being used in the project is updated, you will also see some complaints from rvm (in this example, with a hypothetical ruby 2.6.2):

$ cd .
Required ruby-2.6.2 is not installed.
To install do: 'rvm install "ruby-2.6.2"'

In that case we need to do the same as we did to install it the first time, that is:

rvm install 2.6.2

And since rvm manages gems by ruby version, you’ll need to install the dependencies again using bundle install.


Tests

Tests in the mastodon project live in the spec folder. Tests also use migrations, so if the database was updated since you last ran tests, you will need to run something like this:

bin/rails db:migrate RAILS_ENV=test

But when you try to run tests with a database missing migrations, you’ll get an error from Rails that will explain exactly that.

To run all the tests, you need to do:

rspec

To run only one test, you can run it like so:

rspec spec/validators/status_length_validator_spec.rb

Other useful commands

If you add a new string that needs to be translated, you can run

yarn manage:translations

to update the localization files. This is needed so that weblate can inform translators that there are new strings to be translated in other languages.

You can check code quality using

rubocop

Keep in mind that it might complain about code violations that you did not introduce, but you should always try not to introduce new ones.
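If you only want to check the files you changed, one way (assuming a git checkout with a master branch) is to feed the changed ruby files to rubocop:

```shell
# Run rubocop only on ruby files that differ from master
# (-r avoids running rubocop with no arguments when nothing changed)
git diff --name-only master -- '*.rb' | xargs -r rubocop
```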


Troubleshooting

RVM says it’s not a function

Follow the recommended instructions at https://rvm.io/integration/gnome-terminal

Mastodon has no css

If mastodon has no css and you see something like #<Errno::ECONNREFUSED: Failed to open TCP connection to localhost:3035 (Connection refused - connect(2) for "::1" port 3035)> in your console, the issue is the address webpacker is trying to connect to. You can fix it by editing config/webpacker.yml. Instead of

  dev_server:
    host: localhost

Use

  dev_server:
    host: 127.0.0.1

Running a Mastodon instance using Arch Linux

It’s been a while now that I’ve been running masto.donte.com.br using Arch Linux and since the official guide recommends using Ubuntu 18.04, I figured that describing what I did could help someone out there. If in doubt, use the official guide instructions instead, since they’re more likely to be up to date.

This was last updated on 31st of January, 2019.

Notes about some choices made in this guide

The official guide recommends rbenv, but I’m more used to rvm. rbenv is likely to be more lightweight. So if you don’t have any preferences, you might want to stick to rbenv and ruby-build when installing ruby.

There are also choices made regarding the firewall. You do not need to follow these exact ones if you already have another firewall on your server or if you want to use a different one. However, do use some firewall. :)

Like the official guide, this guide assumes you’re using Let’s Encrypt for the certificates. If that’s not the case, you can ignore the Let’s Encrypt references and configure your own certificate in Nginx.

Also note that the SSL configurations for nginx are slightly different from the ones in the official guide. They are aimed at being compatible with older Android phones and were generated using the Mozilla SSL Configuration Generator, with the intermediate configuration and HSTS enabled. Keep in mind that HSTS disables non-secure traffic and gets cached on the client side, so if you are unsure, generate a new configuration without HSTS.

Questions are super welcome, you can contact me using any of the methods listed in the about page. Also if you notice that something doesn’t seem right, don’t hesitate to hit me up.

I tested this guide using a digital ocean droplet with 1GB memory and 1vCPU. I had to enable swap to be able to compile the assets. There’s more information about this in the relevant section.


On this page

  1. Before starting the guide
  2. What you should have by the end of this
  3. General suggestions
  4. DNS
  5. Dependencies
  6. Configuring ufw
  7. Configuring Nginx
  8. Intermission: Configuring Let’s Encrypt
  9. Finishing off nginx configuration
  10. Mastodon user setup
  11. Cloning mastodon repository and installing dependencies
  12. PostgreSQL configuration
  13. Redis configuration
  14. Mastodon application configuration
  15. Intermission: Mastodon directory permissions
  16. Mastodon systemd service files
  17. Emails
  18. Monitoring with uptime robot
  19. Remote media attachment cache cleanup
  20. Renew Let’s Encrypt certificates
  21. Updating between Mastodon versions
  22. Upgrading Arch Linux
  23. (Optional) Adding elasticsearch for searching authorized statuses

Before starting the guide

You need:

  • A server running at least a base install of Arch Linux
  • Root access
  • A domain or sub-domain to use for the instance.

What is assumed:

  • There’s no service running on the same ports as Mastodon. If there is, adjustments will need to be made throughout the guide.
  • You’re not using root as your base user. You do have a user configured with sudo access.
  • You already configured NTP or something similar. Some operations, like 2-Factor Authentication, need the correct time on your server.

What you should have by the end of this

You should have an instance running with a basic firewall, a valid https certificate, and prepared to be upgraded when needed. All the services will run on the same machine, with basic monitoring to tell whether your instance is up.


General suggestions

If you have no experience with Linux systems administration, it’s a good idea to read a bit about it. You will need to keep this system up to date, since it will be facing the internet.

Do not reuse passwords, and enforce public-key authentication for your SSH user. Use sudo instead of running everything as root, and disable root login over ssh.

The official guide already recommends this, but I’ll go one step further: always use tmux or screen when doing operations on your server. You will need to learn the basic commands, but it’s well worth it: you avoid losing work if your connection goes down, and for long operations you can disconnect and leave them running.

If you have 1GB of memory, it’s quite likely that asset compilation will fail. Remember to set up a swap partition or use systemd-swap.
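If you’d rather use a swap file than a partition, the sequence is roughly this (the size is just an example, and these commands need root):

```shell
fallocate -l 2G /swapfile       # reserve a 2G file
chmod 600 /swapfile             # only root may read/write it
mkswap /swapfile                # format it as swap
swapon /swapfile                # enable it immediately
# To make it permanent, add this line to /etc/fstab:
# /swapfile none swap defaults 0 0
```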


DNS

The domain you’re planning to use should have DNS records pointing to your server. If your server has an IPv6 address, you should also configure an AAAA record; otherwise, the A record alone is enough.
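As an illustration, for my.instance.com the records would look something like this (the addresses below are documentation placeholders, not real ones):

```
my.instance.com.    300    IN    A       203.0.113.10
my.instance.com.    300    IN    AAAA    2001:db8::10
```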

Now, this guide will not get into serving a different domain. Just keep in mind that:

  • The domain will be part of the identifier of your instance users. Once it’s defined, you cannot change it anymore or you’ll get all kinds of federation weirdness.
  • Because of that, avoid using “temporary” domains, like the ones coming from ngrok or similar.

Dependencies

  • ufw: An easy-to-use firewall
  • certbot and certbot-nginx: used for generating the certificates from Let’s Encrypt.
  • nginx: Frontend web server that will be used in this setup
  • jemalloc: Different memory management library that improves memory usage for this setup.
  • postgresql: The SQL database used by Mastodon
  • redis: Used by mastodon for in-memory data store
  • ffmpeg: Used by mastodon for conversion of GIFs to MP4s.
  • imagemagick: Used by mastodon for image related operations
  • protobuf: Used by mastodon for language detection
  • git: Used for version control.
  • python2: Used by gyp, a node tool that builds native addons modules for node.js
  • libxslt, libyaml: I don’t know. They were in the official guide so I’m installing them, but I have to say: I do not have them installed on other instances and never noticed an issue 🤷🏽‍♂️

Besides those, it’s a good idea to install the base-devel group. It comes with sudo and other tools which might come in handy.

Now, you can install those with:

sudo pacman -S ufw certbot nginx jemalloc postgresql redis ffmpeg imagemagick protobuf git base-devel python2 libxslt libyaml
sudo pacman -S --asdeps certbot-nginx

Configuring ufw

🛑WARNING: Configuring a firewall is quite important, but if something goes wrong you might lose connectivity to your server. Make sure you have other ways of reaching your server if something goes wrong. 🛑

Now, Mastodon runs a couple of different services, and we will be running supporting services too. But since everything lives on the same server, the only ports that should be open to the outside world are the HTTP/HTTPS ports used to connect to the instance, plus the SSH port, so that we can connect remotely to the server.

You should read Arch Linux’s wiki about ufw. For this guide, what you want to do is:

sudo ufw allow SSH # this allows SSH traffic to your server
sudo ufw allow WWW # this allows traffic on port 80 to your server
sudo ufw allow "WWW Secure" # this allows traffic on port 443 to your server

And then you can do:

sudo ufw enable
sudo systemctl enable ufw # Enables ufw to be started at startup
sudo systemctl start ufw # starts ufw

And with this the firewall should be up :)


Configuring Nginx

You should read Arch Linux’s wiki about nginx, but again, what you want to do is something along these lines:

First, you want to edit nginx.conf. To remove the “welcome to nginx” page, you want to change the beginning of your server block to something like this:

    server {
        listen       80 default_server;
        server_name  '';

        return 444;

And at the very end of the http block, add:

    types_hash_max_size 4096; # sets the maximum size of the types hash tables
    include sites-enabled/*; # Includes any configuration located in /etc/nginx/sites-enabled

And then create these two directories:

sudo mkdir /etc/nginx/sites-available # All domain configurations will live here
sudo mkdir /etc/nginx/sites-enabled # The enabled ones will be linked here

Now, let’s say we’re using my.instance.com as the instance domain/sub-domain. You will need to replace this accordingly throughout the next steps.

Create a new file /etc/nginx/sites-available/my.instance.com.conf, replacing my.instance.com with your domain, and then add the following content to it:

map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
  listen 80;
  listen [::]:80;
  server_name my.instance.com;
  root /home/mastodon/live/public;
  # Useful for Let's Encrypt
  location /.well-known/acme-challenge/ { allow all; }
  location / { return 301 https://$host$request_uri; }
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name my.instance.com;

  ssl_certificate     /etc/letsencrypt/live/my.instance.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/my.instance.com/privkey.pem;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;

  ssl_dhparam /etc/ssl/certs/dhparam.pem;

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
  ssl_prefer_server_ciphers on;

  add_header Strict-Transport-Security max-age=15768000;

  ssl_stapling on;
  ssl_stapling_verify on;

  ssl_trusted_certificate /etc/letsencrypt/live/my.instance.com/chain.pem;

  resolver 8.8.8.8 8.8.4.4 valid=300s;
  resolver_timeout 5s;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 8m;

  root /home/mastodon/live/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  add_header Strict-Transport-Security "max-age=31536000";

  location / {
    try_files $uri @proxy;
  }

  location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) {
    add_header Cache-Control "public, max-age=31536000, immutable";
    try_files $uri @proxy;
  }

  location /sw.js {
    add_header Cache-Control "public, max-age=0";
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://127.0.0.1:3000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  location /api/v1/streaming {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";

    proxy_pass http://127.0.0.1:4000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  error_page 500 501 502 503 504 /500.html;
}

⚠️ It’s a good idea to take a look if something changed in relation to the official guide ⚠️

At this point nginx still doesn’t know about our instance (because we’re including files from /etc/nginx/sites-enabled and we created the file in /etc/nginx/sites-available), however, we should be able to start nginx already.

For that, we need to do:

sudo systemctl start nginx # Starts the nginx service
sudo systemctl enable nginx # Makes the service start automatically at boot

If you do curl -v <your server ip> now, you should see something like this:

$ curl -v <your server's ip>
* Rebuilt URL to: <your server's ip>/
*   Trying <your server's ip>...
* TCP_NODELAY set
* Connected to <your server's ip> (<your server's ip>) port 80 (#0)
> GET / HTTP/1.1
> Host: <your server's ip>
> User-Agent: curl/7.60.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host <your server's ip> left intact
curl: (52) Empty reply from server

And that means nginx was correctly started and ufw is allowing connections as expected. We will now get our certificates from Let’s Encrypt before jumping back to the nginx configuration.


Intermission: Configuring Let’s Encrypt

Now, for Let’s Encrypt we will use certbot, which we installed previously. For more information about it you can take a look at Arch Linux’s wiki about Certbot. For this guide, you need to run the following command:

sudo certbot --nginx certonly -d my.instance.com

As usual, remember to replace the URL with your actual instance’s. You will need to follow the instructions on screen.

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel):

-------------------------------------------------------------------------------
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v01.api.letsencrypt.org/directory
-------------------------------------------------------------------------------
(A)gree/(C)ancel:

-------------------------------------------------------------------------------
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about our work
encrypting the web, EFF news, campaigns, and ways to support digital freedom.
-------------------------------------------------------------------------------
(Y)es/(N)o:
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for my.instance.com
Using default address 80 for authentication.
2018/07/13 18:28:47 [notice] 4617#4617: signal process started
Waiting for verification...
Cleaning up challenges
2018/07/13 18:28:53 [notice] 4619#4619: signal process started

If everything goes as expected, you should see something like this:

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/my.instance.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/my.instance.com/privkey.pem
   Your cert will expire on 2018-10-11. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"

This means that now we have a valid certificate and that we can go back to nginx. Double check that the path informed by certbot (in this example /etc/letsencrypt/live/my.instance.com/fullchain.pem) matches the one in your nginx file.

Let’s Encrypt certificates only last for 90 days, so we will still come back to this. But for now, let’s go back to nginx.

If you have an error like

Saving debug log to /var/log/letsencrypt/letsencrypt.log
An unexpected error occurred:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 10453: ordinal not in range(128)
Please see the logfiles in /var/log/letsencrypt for more details.

Check that you have set your locale up correctly! If your locale is set to C, certbot will fail.
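As a preview of the renewal step we’ll come back to: renewing boils down to running certbot renew periodically and reloading nginx when a certificate was actually renewed. For instance, via a root crontab entry like this one (the schedule is illustrative):

```
# Root crontab sketch: try renewal twice a day; certbot only renews
# certificates close to expiry, and the hook reloads nginx on renewal
0 3,15 * * * certbot renew --deploy-hook "systemctl reload nginx"
```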


Finishing off nginx configuration

At this point the certificate should be working. Since this configuration is using HSTS, we also need to generate a dhparam. You can do that with (it might take a little while!):

openssl dhparam -out dhparam.pem 2048
sudo mv dhparam.pem /etc/ssl/certs/dhparam.pem

What’s left is to enable the instance configuration and reload nginx (remember to replace with your instance config!):

sudo ln -s /etc/nginx/sites-available/my.instance.com.conf /etc/nginx/sites-enabled/ # creates a softlink of the configuration we created previously to the enabled sites directory
sudo systemctl reload nginx

Now, if everything went fine, your nginx should reload. Otherwise, it will throw some error like this:

$ sudo systemctl reload nginx
Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xe" for details.

In that case, run one of the suggested commands to see what went wrong.

However, if everything went right until now, doing curl -v my.instance.com (replacing your domain) should show something like this:

$ curl -v my.instance.com
* Rebuilt URL to: my.instance.com/
*   Trying <your server's ip>...
* TCP_NODELAY set
* Connected to my.instance.com (<your server's ip>) port 80 (#0)
> GET / HTTP/1.1
> Host: my.instance.com
> User-Agent: curl/7.60.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.14.0
< Date: Fri, 13 Jul 2018 18:41:17 GMT
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
< Location: https://my.instance.com/
<
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.14.0</center>
</body>
</html>
* Connection #0 to host my.instance.com left intact

And if you curl or visit the https address you should get a 502 Bad Gateway.


Mastodon user setup

We need to create the Mastodon user:

sudo useradd -m mastodon # create the user

Then, we will start using this user for the following commands:

sudo su - mastodon

First step is to install rvm, which will be used for managing ruby versions. For that we’ll follow the instructions at rvm.io. Before running the following command, visit rvm.io and check which keys need to be added with gpg --keyserver hkp://keys.gnupg.net --recv-keys.

\curl -sSL https://get.rvm.io | bash -s stable

After that, we’ll have rvm. To use rvm in the same session, you need to source it:

source /home/mastodon/.rvm/scripts/rvm

With rvm installed, we can then install the ruby version that Mastodon uses:

rvm install 2.6.1 -C --with-jemalloc

Note that the -C --with-jemalloc parameter is there so that we use jemalloc instead of the standard memory allocation library, since it’s more efficient in Mastodon’s case. Now, this will take some time; drink some water, stretch, and come back.

Similarly, we will install nvm for managing which node version we’ll use.

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash

Refer to nvm’s GitHub page for the latest version.

You will also need to run

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion

And add these same lines to ~/.bash_profile.

And then to install the node version we’re using:

nvm install 8.11.4

And to install yarn:

npm install -g yarn

And with that we have our mastodon user ready.


Cloning mastodon repository and installing dependencies

For these next instructions we still need to be logged in as the mastodon user. First, we will clone the repo:

# Return to mastodon user's home directory
cd ~
# Clone the mastodon git repository into ~/live
git clone https://github.com/tootsuite/mastodon.git live

Now, it’s highly recommended to run a stable release. Why? Stable releases are bundles of finished features; if you’re running an instance for day-to-day use, they are recommended for being the least likely to have breaking bugs.

The stable release is the latest among tootsuite’s releases without an “rc” suffix. At the time of writing, the latest one is v2.7.1. With that in mind, we will do:

# Change directory to ~/live
cd ~/live
# Checkout to the latest stable branch
git checkout v2.7.1

And then, we will install the dependencies of the project:

# Install bundler
gem install bundler
# Use bundler to install the rest of the Ruby dependencies
bundle install -j$(getconf _NPROCESSORS_ONLN) --without development test
# Use yarn to install node.js dependencies
yarn install --pure-lockfile

This will also take a while; try to relax a bit, have you listened to your favorite song today? 🎶 After it finishes, you can go back to the user you were using before.


PostgreSQL configuration

Now, once more, check out Arch Linux’s wiki about PostgreSQL. The first thing to do is to initialize the database cluster:

sudo -u postgres initdb --locale en_US.UTF-8 -E UTF8 -D '/var/lib/postgres/data'

If you want to use a different locale, that’s fine.

After this completes, you can then do

sudo systemctl enable postgresql # will enable postgresql to start together with the system
sudo systemctl start postgresql # will start postgresql

Now that postgresql is running, we can create mastodon’s user in postgresql:

# Launch psql as the postgres user
sudo -u postgres psql

In the prompt that opens, create the mastodon user with:

-- Creates mastodon user with CREATEDB permission level
CREATE USER mastodon CREATEDB;

Okay, after this we’re done with postgresql. Let’s move on!


Redis configuration

The last service we need to start is redis. Check out Arch Linux’s wiki about Redis.

We need to start redis and enable it on initialization:

sudo systemctl enable redis
sudo systemctl start redis

Mastodon application configuration

We’re approaching the end, I promise!

We need to go back to Mastodon user:

sudo su - mastodon

Then we change to the live directory and run the setup wizard:

cd ~/live
RAILS_ENV=production bundle exec rake mastodon:setup

This will do the instance setup: ask you about some options, generate needed secrets, setup the database and precompile the assets.

For the PostgreSQL host, port, etc., you can just press enter and it will use the default values. The same goes for redis. For email options, refer to the email section. You will want to allow the setup to prepare the database and compile the assets.

Precompiling the assets will take a little while! Also, pay attention to the output. Even if the setup finishes with:

All done! You can now power on the Mastodon server 🐘

it might have printed this along the way:

That failed! Maybe you need swap space?

This means the asset compilation failed and you will need to try again with more memory. You can retry using RAILS_ENV=production bundle exec rails assets:precompile


Intermission: Mastodon directory permissions

By default, nginx cannot access the mastodon user’s home folder, but it needs to read /home/mastodon/live/public, because that’s where images and css are served from.

You have some options, the one I chose for this guide is:

sudo chmod 751 /home/mastodon/ # Makes mastodon home folder executable by all users in the server and readable and executable by the user group
sudo chmod 755 /home/mastodon/live/public # Makes mastodon public folder readable and executable by all users in the server
sudo chmod 640 /home/mastodon/live/.env.production # Gives read access only to the user/group for the file with production secrets

Keep in mind that other subfolders will also be readable by other users on the server, if they know what to look for.


Mastodon systemd service files

Now, you can go back to your user and we’ll create the service files for Mastodon. Again, you should compare with the official guide to see if something changed, but keep in mind that since we’re using rvm and nvm in this guide, the final result will be a bit different.

This is what our services will look like, first the one in /etc/systemd/system/mastodon-web.service, responsible for Mastodon’s frontend and API:

[Unit]
Description=mastodon-web
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="PORT=3000"
Environment="WEB_CONCURRENCY=3"
ExecStart=/bin/bash -lc "bundle exec puma -C config/puma.rb"
ExecReload=/bin/kill -SIGUSR1 $MAINPID
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target

Then, the one in /etc/systemd/system/mastodon-sidekiq.service, responsible for running background jobs:

[Unit]
Description=mastodon-sidekiq
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="DB_POOL=5"
ExecStart=/bin/bash -lc "bundle exec sidekiq -c 5 -q default -q push -q mailers -q pull"
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target

Lastly, the one in /etc/systemd/system/mastodon-streaming.service, responsible for sending new content to users in real time:

[Unit]
Description=mastodon-streaming
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="NODE_ENV=production"
Environment="PORT=4000"
ExecStart=/bin/bash -lc "npm run start"
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target

Now, you can enable these services using:

sudo systemctl enable /etc/systemd/system/mastodon-*.service

And then start them using:

sudo systemctl start mastodon-*.service

Check that they are running as they should using:

systemctl status mastodon-*.service

At this point, if everything is as it should, going to https://my.instance.com should give you the Mastodon landing page! 🐘


Emails

Now, you'll probably want to be able to send emails, since new users receive one to confirm their address.

You should really follow the official guide on this one, because there’s no difference in this case.


Monitoring with uptime robot

I'm giving an example with Uptime Robot because they have a free tier, but you can use other services if you prefer. The idea is just to be notified if your instance goes down, and also to have an independent page where your users can check whether everything is working as expected.

After creating an Uptime Robot account, you can create an HTTP(s) type monitor pointing to your instance's full URL: https://my.instance.com (don't forget to change it accordingly).

If you have IPv6, you should also create another monitor with the Ping type, in which you should use your server’s IPv6 as the IP.

Now, in the settings page, you can click on "add public status page", then select "for selected monitors" and pick the ones you just created. You can create a CNAME DNS entry so that, for instance, status.my.instance.com shows this new status page. There are more instructions on Uptime Robot's page.

Now if your instance goes down or your IPv6 stops working, you should get an email.


Remote media attachment cache cleanup

Mastodon downloads media from other instances and caches it locally. If you don't clean this up from time to time, it will only keep growing. As the mastodon user, you can add a cron job that cleans it up daily, using crontab -e and adding:

0 2 * * * /bin/bash -lc "cd live; RAILS_ENV=production bundle exec bin/tootctl media remove" > /home/mastodon/remove_media.output 2>&1
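One subtlety in that cron line is shell redirection order: 2>&1 only captures stderr into the file if it comes after the file redirect. A quick demo (the log path is just an example):

```shell
# Redirection order matters: '2>&1' must come AFTER the file redirect,
# otherwise stderr still goes to the terminal instead of the file.
sh -c 'echo ok; echo oops >&2' > /tmp/redirect_demo.log 2>&1
cat /tmp/redirect_demo.log   # both lines end up in the file
```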

If you don't have any cron implementation installed on your server, take a look at Arch Linux's wiki page about cron.


Renew Let’s Encrypt certificates

The best way for this is to follow Arch Linux’s wiki about Certbot automatic renewal, which is:

Create a file /etc/systemd/system/certbot.service:

[Unit]
Description=Let's Encrypt renewal

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet --agree-tos

The nginx plugin should take care of making sure the server is reloaded automatically after renewal.

Then, create a second file /etc/systemd/system/certbot.timer:

[Unit]
Description=Daily renewal of Let's Encrypt's certificates

[Timer]
OnCalendar=03:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target

Now, enable and start the timer service:

sudo systemctl enable certbot.timer
sudo systemctl start certbot.timer

Updating between Mastodon versions

Okay, you set it all up, everything is running and then Mastodon v2.8.0 comes out. What do you do?

Do not despair, dear reader, all is well.

Remember our tip about tmux? When updating, it's always a good idea to be running inside tmux. Database migrations can take some time, and tmux will keep the upgrade running if your connection fails in the meantime.
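If you haven't used tmux this way before, the session commands are roughly as follows (the session name is just an example):

```shell
tmux new -s upgrade      # start a named session and run the upgrade inside it
# if the connection drops, ssh back in and run:
tmux attach -t upgrade   # re-attach, with everything still running
```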

First, we will go to the Mastodon user once again:

sudo su - mastodon

Okay, first things first, let’s go into the live directory and get the new version:

cd ~/live
git fetch origin --tags
git checkout v2.8.0
cd . # This is to force rvm to check if we're in the right ruby version

Now, suppose the ruby version changed since the last time you were here: instead of 2.6.1 it's now 2.6.2. After you do cd ., rvm will complain:

$ cd .
Required ruby-2.6.2 is not installed.
To install do: 'rvm install "ruby-2.6.2"'

In this case, we will need to use rvm to install the new version. The command is the same as last time:

rvm install 2.6.2 -C --with-jemalloc

This will take some time, and at the end you will be ready to carry on. Note that this won't happen very often. Also, once you've made sure everything is running as expected, you can remove the old ruby version with rvm remove <version>. Do wait until you're sure the new version is working, though!

Now, you'll always want to read the release notes for the version you're upgrading to. Sometimes there are special tasks that need to be done first.

If dependencies were updated, you need to run:

bundle install --without development test # if you need to update ruby dependencies or if you installed a new ruby
yarn install --pure-lockfile # if you need to update node dependencies

In most of the updates you will need to update the assets:

RAILS_ENV=production bundle exec rails assets:precompile

For comparison: in the digital ocean droplet I tested this guide on, compiling assets on v2.4.3 took around 5 minutes.

If the update includes database migrations, you'll need to run:

RAILS_ENV=production bundle exec rails db:migrate

Sometimes database migrations will change the database in a way that the instance will stop working for a little bit until you restart the services, that’s why I usually leave them for last to reduce downtime.

⚠️ Backup your database regularly ⚠️
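The warning above deserves a concrete sketch. Assuming the default database name mastodon_production and a backups folder you create yourself, a daily dump in the mastodon user's crontab could look like this (adjust names and paths to your setup, and note that % must be escaped in crontab entries):

```shell
0 3 * * * /usr/bin/pg_dump -Fc mastodon_production > /home/mastodon/backups/mastodon_$(date +\%F).dump 2>> /home/mastodon/backup.log
```

Remember to also copy the dumps off the server; a backup on the same disk is not much of a backup.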

After the migration is finished running, you can leave the mastodon user and then restart the services:

sudo systemctl restart mastodon-sidekiq
sudo systemctl restart mastodon-streaming
sudo systemctl reload mastodon-web

Now, if there were database changes, you need to restart mastodon-web instead of reloading it.

Alright, you should be on the latest version of Mastodon now!


Upgrading Arch Linux

Some special notes about upgrading Arch Linux itself. If you haven't yet, read through Arch Linux's wiki on upgrading the system. Since Arch Linux is a rolling release, there are some differences if you're coming from other distros.

Some of the gems Ruby uses have native extensions, that is, modules compiled against local libraries. This means that if system libraries change significantly from one upgrade to the next, you might have issues starting the services.

However, to make your life a bit easier, you can recompile the native modules by doing (as the mastodon user):

cd ~/live
bundle pristine

This will take a little while but will recompile needed gems. When in doubt, do that after a system upgrade.

I had issues in the past with gems that Mastodon uses which have native extensions and are installed straight from git, namely posix-spawn and http_parser.rb. They were not reinstalled with bundle pristine and I had to manually rebuild them. This seems to be fixed in the most recent rvm, but in case you need to do that, find where they are installed by doing:

bundle show posix-spawn

With the output of that (which will be something like /home/mastodon/.rvm/gems/ruby-2.5.1/bundler/gems/posix-spawn-58465d2e2139), do:

rm -rf /home/mastodon/.rvm/gems/ruby-2.5.1/bundler/gems/posix-spawn-58465d2e2139 /home/mastodon/.rvm/gems/ruby-2.5.1/bundler/gems/extensions/x86_64-linux/2.5.0/posix-spawn-58465d2e2139

This is just an example and you will have to replace this with the output of your bundle show command, and then find the equivalent path in the gems/extensions folder. Do it for both posix-spawn and http_parser.rb (and any other gem that comes from git if it gives you trouble).

And after that you can do bundle install --without development test to install them again.

Now, a second thing to note: PostgreSQL minor versions are compatible with each other. This means that 9.6.8 is compatible with 9.6.9; and since version 10 they use two-number versioning, so 10.3 is compatible with 10.4. However, 9.6 is not compatible with 10, and 10 will not be compatible with 11. When upgrading across major versions (say, from 10 to 11), you need to follow the official documentation and Arch Linux's wiki instructions. With that in mind, be careful when upgrading.
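As an illustration only (version numbers are examples, and you should always follow the wiki before touching real data), a major upgrade on Arch typically uses the postgresql-old-upgrade package to keep the old binaries around:

```shell
sudo pacman -S postgresql-old-upgrade   # old binaries end up under /opt/pgsql-10/
sudo systemctl stop postgresql mastodon-web mastodon-sidekiq mastodon-streaming
sudo -u postgres mv /var/lib/postgres/data /var/lib/postgres/olddata
sudo -u postgres mkdir /var/lib/postgres/data /var/lib/postgres/tmp
sudo -u postgres initdb -D /var/lib/postgres/data
cd /var/lib/postgres/tmp
sudo -u postgres pg_upgrade -b /opt/pgsql-10/bin -B /usr/bin \
  -d /var/lib/postgres/olddata -D /var/lib/postgres/data
```

Only delete the old data directory once the new cluster has been running correctly for a while.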

🛑 Upgrading wrongly may cause data loss. 🛑


(Optional) Adding elasticsearch for searching authorized statuses

Since Mastodon v2.3.0, you can enable full text search for authorized statuses. That means toots you have written, boosted, favourited or were mentioned in. For this functionality, Mastodon uses Elasticsearch. As usual, you should take a look at Arch Linux's wiki page about Elasticsearch.

Note: I was able to run Elasticsearch on my test instance using the 1GB/1vCPU droplet from Digital Ocean with 1GB of swap, by using the memory configuration suggested in Arch Linux's wiki about Elasticsearch, that is, -Xms128m -Xmx512m. However, my test instance has no load, and I don't know how the system would behave under real usage.

To install elasticsearch do:

sudo pacman -S elasticsearch

Pacman will then ask which version of the JDK you want to use. Once installed, you can start Elasticsearch by doing:

sudo systemctl enable elasticsearch # Enables elasticsearch to be started at startup
sudo systemctl start elasticsearch # starts elasticsearch

Then switch to the mastodon user, cd ~/live and edit .env.production to add the Elasticsearch configuration; look for the commented-out settings and change them:

ES_ENABLED=true
ES_HOST=localhost
ES_PORT=9200

Then, you need to build the index. This might take a while if your database is big!

RAILS_ENV=production bundle exec rails chewy:deploy

When this is finished, you need to restart all mastodon services.

The official docs have some tips on how to tune Elasticsearch.
