Shared posts

16 Jul 11:45

Running a Mastodon instance using Arch Linux

I’ve been running masto.donte.com.br on Arch Linux for a while now, and since the official guide recommends Ubuntu 18.04, I figured that describing what I did could help someone out there. If in doubt, follow the official guide instead, since it’s more likely to be up to date.

This was last updated on 31st of January, 2019.

Notes about some choices made in this guide

The official guide recommends rbenv, but I’m more used to rvm. rbenv is likely to be more lightweight, so if you don’t have a preference, you might want to stick to rbenv and ruby-build when installing Ruby.

There are also some choices made regarding the firewall. You don’t need to follow these exact ones if you already have another firewall on your server or if you want to use a different one. However, do use some firewall. :)

Like the official guide, this guide assumes you’re using Let’s Encrypt for the certificates. If that’s not the case, you can ignore the Let’s Encrypt references and configure your own certificate in Nginx.

Also note that the SSL configuration for nginx is slightly different from the one in the official guide. It’s aimed at being compatible with older Android phones and was generated using the Mozilla SSL Configuration Generator with the intermediate configuration and HSTS enabled. Keep in mind that HSTS disables non-secure traffic and gets cached on the client side, so if you are unsure, generate a new configuration without HSTS.

Questions are super welcome; you can contact me using any of the methods listed on the about page. Also, if you notice that something doesn’t seem right, don’t hesitate to hit me up.

I tested this guide using a Digital Ocean droplet with 1GB of memory and 1 vCPU. I had to enable swap to be able to compile the assets. There’s more information about this in the relevant section.


On this page

  1. Before starting the guide
  2. What you should have by the end of this
  3. General suggestions
  4. DNS
  5. Dependencies
  6. Configuring ufw
  7. Configuring Nginx
  8. Intermission: Configuring Let’s Encrypt
  9. Finishing off nginx configuration
  10. Mastodon user setup
  11. Cloning mastodon repository and installing dependencies
  12. PostgreSQL configuration
  13. Redis configuration
  14. Mastodon application configuration
  15. Intermission: Mastodon directory permissions
  16. Mastodon systemd service files
  17. Emails
  18. Monitoring with uptime robot
  19. Remote media attachment cache cleanup
  20. Renew Let’s Encrypt certificates
  21. Updating between Mastodon versions
  22. Upgrading Arch Linux
  23. (Optional) Adding elasticsearch for searching authorized statuses

Before starting the guide

You need:

  • A server running at least a base install of Arch Linux
  • Root access
  • A domain or sub-domain to use for the instance.

What is assumed:

  • There’s no service running on the same ports as Mastodon. If there is, adjustments will need to be made throughout the guide.
  • You’re not using root as your day-to-day user, and you do have a user configured with sudo access.
  • You already configured NTP or something similar. Some operations, like 2-Factor Authentication, need the correct time on your server (see the sketch after this list).
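
On Arch, the simplest way to get time synchronization is usually systemd-timesyncd, which ships with systemd. A minimal sketch:

sudo timedatectl set-ntp true # enables and starts systemd-timesyncd
timedatectl status # "System clock synchronized: yes" means you're good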

What you should have by the end of this

You should have an instance running with a basic firewall, a valid HTTPS certificate, prepared to be upgraded when needed, and with basic monitoring to tell you whether your instance is up. All the needed services will be on the same machine.


General suggestions

If you have no experience with Linux systems administration, it’s a good idea to read a bit about it. You will need to keep this system up to date, since it will be facing the internet.

Do not reuse passwords, and force public key authentication for your SSH user. Use sudo instead of running everything as root, and disable root login over SSH; a sketch of the relevant options is below.
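
For illustration, the SSH hardening bits boil down to something like this in /etc/ssh/sshd_config (make sure your key-based login works before closing your current session):

PermitRootLogin no # no root login over SSH
PasswordAuthentication no # public key authentication only

After editing, apply the changes with sudo systemctl restart sshd.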

The official guide already recommends this, but I’ll go one step further: always use tmux or screen when doing operations on your server. You will need to learn the basic commands, but it’s well worth it to avoid losing work if your connection goes down, and for long operations you can disconnect and leave them running.

If you have 1GB of memory, it’s quite likely that asset compilation will fail. Remember to set up a swap partition or use systemd-swap; one way of doing it is sketched below.
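
If you’d rather use a plain swap file, a minimal sketch (1GB here; size it to your needs):

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile # swap must not be readable by other users
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab # persist across reboots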


DNS

The domain you’re planning to use should have DNS records pointing to your server. If your server has an IPv6 address, you should also configure an AAAA record; otherwise, the A record alone should be enough.
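
For illustration, in zone-file notation the records would look something like this (203.0.113.10 and 2001:db8::10 are placeholder addresses from the documentation ranges; use your server's real ones):

my.instance.com.    300  IN  A     203.0.113.10
my.instance.com.    300  IN  AAAA  2001:db8::10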

Now, this guide will not get into serving from a different domain. Just keep in mind that:

  • The domain will be part of the identifier of your instance’s users. Once it’s defined, you cannot change it anymore, or you’ll get all kinds of federation weirdness.
  • Because of that, avoid using “temporary” domains, like the ones coming from ngrok or similar.

Dependencies

  • ufw: An easy-to-use firewall
  • certbot and certbot-nginx: used for generating the certificates from Let’s Encrypt.
  • nginx: Frontend web server that will be used in this setup
  • jemalloc: Different memory management library that improves memory usage for this setup.
  • postgresql: The SQL database used by Mastodon
  • redis: Used by Mastodon as an in-memory data store
  • ffmpeg: Used by Mastodon for converting GIFs to MP4s.
  • imagemagick: Used by Mastodon for image-related operations
  • protobuf: Used by Mastodon for language detection
  • git: Used for version control.
  • python2: Used by gyp, a node tool that builds native addon modules for node.js
  • libxslt, libyaml: I don’t know. They were in the official guide so I’m installing them, but I have to say: I don’t have them installed on other instances and never noticed an issue 🤷🏽‍♂️

Besides those, it’s a good idea to install the base-devel group. It comes with sudo and other tools which might come in handy.

Now, you can install those with:

sudo pacman -S ufw certbot nginx jemalloc postgresql redis ffmpeg imagemagick protobuf git base-devel python2 libxslt libyaml
sudo pacman -S --asdeps certbot-nginx

Configuring ufw

🛑 WARNING: Configuring a firewall is quite important, but if something goes wrong you might lose connectivity to your server. Make sure you have another way of reaching it (your provider’s console, for example) before proceeding. 🛑

Now, Mastodon runs a couple of different services, and we will be running supporting services too. But since everything lives on the same server, the only ports that should be reachable from the outside world are the HTTP/HTTPS ports used to connect to the instance, plus the SSH port so that we can manage the server remotely.

You should read Arch Linux’s wiki page about ufw. For this guide, what you want to do is:

sudo ufw allow SSH # this allows SSH traffic to your server
sudo ufw allow WWW # this allows traffic on port 80 to your server
sudo ufw allow "WWW Secure" # this allows traffic on port 443 to your server

And then you can do:

sudo ufw enable
sudo systemctl enable ufw # Enables ufw to be started at startup
sudo systemctl start ufw # starts ufw

And with this the firewall should be up :)
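
You can double-check what is exposed with:

sudo ufw status verbose

The output should list SSH, WWW, and "WWW Secure" (ports 22, 80, and 443) as allowed, with a default deny policy for everything else incoming; the exact formatting varies between versions.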


Configuring Nginx

You should read Arch Linux’s wiki page about nginx, but again, what you want to do is something along these lines:

First, you want to edit nginx.conf. To remove the “welcome to nginx” page, change the beginning of the default server block to something like this:

    server {
        listen       80 default_server;
        server_name  '';

        return 444;

And at the very end of the http block, add:

    types_hash_max_size 4096; # sets the maximum size of the types hash tables
    include sites-enabled/*; # Includes any configuration located in /etc/nginx/sites-enabled

And then create these two directories:

sudo mkdir /etc/nginx/sites-available # All domain configurations will live here
sudo mkdir /etc/nginx/sites-enabled # The enabled ones will be linked here

Now, let’s say we’re using my.instance.com as the instance domain/sub-domain. You will need to adjust this accordingly throughout the next steps.

Create a new file /etc/nginx/sites-available/my.instance.com.conf, replacing my.instance.com with your domain, and then add the following content to it:

map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
  listen 80;
  listen [::]:80;
  server_name my.instance.com;
  root /home/mastodon/live/public;
  # Useful for Let's Encrypt
  location /.well-known/acme-challenge/ { allow all; }
  location / { return 301 https://$host$request_uri; }
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name my.instance.com;

  ssl_certificate     /etc/letsencrypt/live/my.instance.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/my.instance.com/privkey.pem;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;

  ssl_dhparam /etc/ssl/certs/dhparam.pem;

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
  ssl_prefer_server_ciphers on;

  ssl_stapling on;
  ssl_stapling_verify on;

  ssl_trusted_certificate /etc/letsencrypt/live/my.instance.com/chain.pem;

  resolver 8.8.8.8 8.8.4.4 valid=300s;
  resolver_timeout 5s;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 8m;

  root /home/mastodon/live/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  add_header Strict-Transport-Security "max-age=31536000";

  location / {
    try_files $uri @proxy;
  }

  location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) {
    add_header Cache-Control "public, max-age=31536000, immutable";
    try_files $uri @proxy;
  }

  location /sw.js {
    add_header Cache-Control "public, max-age=0";
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://127.0.0.1:3000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  location /api/v1/streaming {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";

    proxy_pass http://127.0.0.1:4000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  error_page 500 501 502 503 504 /500.html;
}

⚠️ It’s a good idea to check whether anything has changed in relation to the official guide ⚠️

At this point nginx still doesn’t know about our instance (because we’re including files from /etc/nginx/sites-enabled and we created the file in /etc/nginx/sites-available); however, we should already be able to start nginx.

For that, we need to do:

sudo systemctl start nginx # Starts the nginx service
sudo systemctl enable nginx # Makes the service start automatically at boot

If you do curl -v <your server ip> now, you should see something like this:

$ curl -v <your server's ip>
* Rebuilt URL to: <your server's ip>/
*   Trying <your server's ip>...
* TCP_NODELAY set
* Connected to <your server's ip> (<your server's ip>) port 80 (#0)
> GET / HTTP/1.1
> Host: <your server's ip>
> User-Agent: curl/7.60.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host <your server's ip> left intact
curl: (52) Empty reply from server

And that means nginx was correctly started and that ufw is allowing connections as expected. We will now get our certificates from Let’s Encrypt before jumping back to the nginx configuration.


Intermission: Configuring Let’s Encrypt

Now, for Let’s Encrypt we will use certbot, which we installed previously. For more information about it, take a look at Arch Linux’s wiki page about Certbot. For this guide, you need to run the following command:

sudo certbot --nginx certonly -d my.instance.com

As usual, remember to change the URL to your actual instance’s URL. You will need to follow the instructions on screen.

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel):

-------------------------------------------------------------------------------
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v01.api.letsencrypt.org/directory
-------------------------------------------------------------------------------
(A)gree/(C)ancel:

-------------------------------------------------------------------------------
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about our work
encrypting the web, EFF news, campaigns, and ways to support digital freedom.
-------------------------------------------------------------------------------
(Y)es/(N)o:
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for my.instance.com
Using default address 80 for authentication.
2018/07/13 18:28:47 [notice] 4617#4617: signal process started
Waiting for verification...
Cleaning up challenges
2018/07/13 18:28:53 [notice] 4619#4619: signal process started

If everything goes as expected, you should see something like this:

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/my.instance.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/my.instance.com/privkey.pem
   Your cert will expire on 2018-10-11. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"

This means that we now have a valid certificate and can go back to nginx. Double-check that the path reported by certbot (in this example /etc/letsencrypt/live/my.instance.com/fullchain.pem) matches the one in your nginx file.

Let’s Encrypt certificates only last for 90 days, so we will still come back to this. But for now, let’s go back to nginx.

If you get an error like:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
An unexpected error occurred:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 10453: ordinal not in range(128)
Please see the logfiles in /var/log/letsencrypt for more details.

Check that you have set up your locale correctly! If your locale is set to C, certbot will fail.
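
For reference, fixing that on Arch goes roughly like this, assuming you want en_US.UTF-8 (any UTF-8 locale should do):

sudo sed -i 's/^#en_US.UTF-8/en_US.UTF-8/' /etc/locale.gen # uncomment the locale
sudo locale-gen # generate it
sudo localectl set-locale LANG=en_US.UTF-8 # set it system-wide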


Finishing off nginx configuration

At this point the certificate should be working. Since this configuration references a dhparam file (used for the DHE ciphers), we also need to generate one. You can do that by running (it might take a little while!):

openssl dhparam -out dhparam.pem 2048
sudo mv dhparam.pem /etc/ssl/certs/dhparam.pem

What is left for us to do is to enable the instance configuration and reload nginx (remember to replace with your instance’s config!):

sudo ln -s /etc/nginx/sites-available/my.instance.com.conf /etc/nginx/sites-enabled/ # creates a symlink of the configuration we created previously into the enabled sites directory
sudo systemctl reload nginx

Now, if everything went fine, nginx should reload. Otherwise, it will throw an error like this:

$ sudo systemctl reload nginx
Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xe" for details.

In that case, run the suggested commands to see what went wrong.

However, if everything went right so far, running curl -v my.instance.com (replacing with your domain) should show something like this:

$ curl -v my.instance.com
* Rebuilt URL to: my.instance.com/
*   Trying <your server's ip>...
* TCP_NODELAY set
* Connected to my.instance.com (<your server's ip>) port 80 (#0)
> GET / HTTP/1.1
> Host: my.instance.com
> User-Agent: curl/7.60.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.14.0
< Date: Fri, 13 Jul 2018 18:41:17 GMT
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
< Location: https://my.instance.com/
<
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.14.0</center>
</body>
</html>
* Connection #0 to host my.instance.com left intact

And if you curl or visit the https address, you should get a 502 Bad Gateway. That’s expected for now: nothing is listening on port 3000 yet.


Mastodon user setup

We need to create the Mastodon user:

sudo useradd -m mastodon # create the user

Then, we will start using this user for the following commands:

sudo su - mastodon

The first step is to install rvm, which will be used for managing Ruby. For that, we’ll follow the instructions at rvm.io. Before running the installer below, visit rvm.io and check which GPG keys need to be imported with gpg --keyserver hkp://keys.gnupg.net --recv-keys.
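
For illustration, at the time of writing the keys listed on rvm.io were imported like this (trust the site over this sketch; the key IDs can change):

gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB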

\curl -sSL https://get.rvm.io | bash -s stable

After that, rvm will be installed. As its output will tell you, to use rvm in the same session you need to run:

source /home/mastodon/.rvm/scripts/rvm

With rvm installed, we can then install the ruby version that Mastodon uses:

rvm install 2.6.1 -C --with-jemalloc

Note that the -C --with-jemalloc parameter is there so that we use jemalloc instead of the standard memory allocator, since it’s more efficient in Mastodon’s case. Now, this will take some time; drink some water, stretch, and come back.

Similarly, we will install nvm for managing which node version we’ll use.

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash

Refer to nvm’s GitHub page for the latest version.

You will also need to run

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion

And add these same lines to ~/.bash_profile, so that nvm is loaded automatically on future logins.

And then to install the node version we’re using:

nvm install 8.11.4

And to install yarn:

npm install -g yarn

And with that we have our mastodon user ready.


Cloning mastodon repository and installing dependencies

For these next instructions we still need to be logged in as the mastodon user. First, we will clone the repo:

# Return to mastodon user's home directory
cd ~
# Clone the mastodon git repository into ~/live
git clone https://github.com/tootsuite/mastodon.git live

Now, it’s highly recommended to run a stable release. Why? Stable releases are bundles of finished features; if you’re running an instance for day-to-day use, they are recommended because they’re the least likely to have breaking bugs.

The stable release is the latest one on tootsuite’s releases page without an “rc” suffix. At the time of writing, the latest is v2.7.1. With that in mind, we will do:

# Change directory to ~/live
cd ~/live
# Check out the latest stable tag
git checkout v2.7.1

And then, we will install the dependencies of the project:

# Install bundler
gem install bundler
# Use bundler to install the rest of the Ruby dependencies
bundle install -j$(getconf _NPROCESSORS_ONLN) --without development test
# Use yarn to install node.js dependencies
yarn install --pure-lockfile

After this finishes, you can go back to the user you were using before. This will also take a while; try to relax a bit. Have you listened to your favorite song today? 🎶


PostgreSQL configuration

Now, once more, check out Arch Linux’s wiki page about PostgreSQL. The first thing to do is to initialize the database cluster:

sudo -u postgres initdb --locale en_US.UTF-8 -E UTF8 -D '/var/lib/postgres/data'

If you want to use a different locale, that’s not a problem.

After this completes, you can run:

sudo systemctl enable postgresql # will enable postgresql to start together with the system
sudo systemctl start postgresql # will start postgresql

Now that postgresql is running, we can create mastodon’s user in postgresql:

# Launch psql as the postgres user
sudo -u postgres psql

In the prompt that opens, create the mastodon user with:

-- Creates mastodon user with CREATEDB permission level
CREATE USER mastodon CREATEDB;
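
Once that’s done, you can leave the psql prompt with its quit meta-command:

-- Quit psql
\q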

Okay, after this we’re done with postgresql. Let’s move on!


Redis configuration

The last service we need to start is redis. Check out Arch Linux’s wiki about Redis.

We need to start redis and enable it on initialization:

sudo systemctl enable redis
sudo systemctl start redis

Mastodon application configuration

We’re approaching the end, I promise!

We need to go back to the mastodon user:

sudo su - mastodon

Then we change to the live directory and run the setup wizard:

cd ~/live
RAILS_ENV=production bundle exec rake mastodon:setup

This will do the instance setup: it will ask you about some options, generate the needed secrets, set up the database, and precompile the assets.

For the PostgreSQL host, port, and so on, you can just press Enter and it will use the default values. The same goes for redis. For the email options, refer to the email section. You will want to let the setup prepare the database and compile the assets.

Precompiling the assets will take a little while! Also, pay attention to the output: the wizard keeps going even when a step fails, so you might see

That failed! Maybe you need swap space?

All done! You can now power on the Mastodon server 🐘

which means the asset compilation failed (despite the “All done!”) and you will need to try again with more memory. You can retry with RAILS_ENV=production bundle exec rails assets:precompile


Intermission: Mastodon directory permissions

By default, the mastodon user’s home folder cannot be accessed by nginx. The path /home/mastodon/live/public needs to be accessible by nginx because it’s where images and CSS are served from.

You have some options; the one I chose for this guide is:

sudo chmod 751 /home/mastodon/ # 751: the mastodon group can read and enter the home folder; all other users can only traverse it
sudo chmod 755 /home/mastodon/live/public # Makes mastodon public folder readable and executable by all users in the server
sudo chmod 640 /home/mastodon/live/.env.production # Gives read access only to the user/group for the file with production secrets

Other subfolders will also be readable by other users if they know what to look for.


Mastodon systemd service files

Now you can go back to your own user, and we’ll create the service files for Mastodon. Again, you should compare with the official guide to see if something changed, but keep in mind that since we’re using rvm and nvm in this guide, the final result will be a bit different.

This is what our services will look like, first the one in /etc/systemd/system/mastodon-web.service, responsible for Mastodon’s frontend and API:

[Unit]
Description=mastodon-web
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="PORT=3000"
Environment="WEB_CONCURRENCY=3"
ExecStart=/bin/bash -lc "bundle exec puma -C config/puma.rb"
ExecReload=/bin/kill -SIGUSR1 $MAINPID
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target

Then, the one in /etc/systemd/system/mastodon-sidekiq.service, responsible for running background jobs:

[Unit]
Description=mastodon-sidekiq
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="RAILS_ENV=production"
Environment="DB_POOL=5"
ExecStart=/bin/bash -lc "bundle exec sidekiq -c 5 -q default -q push -q mailers -q pull"
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target

Lastly, the one in /etc/systemd/system/mastodon-streaming.service, responsible for sending new content to users in real time:

[Unit]
Description=mastodon-streaming
After=network.target

[Service]
Type=simple
User=mastodon
WorkingDirectory=/home/mastodon/live
Environment="NODE_ENV=production"
Environment="PORT=4000"
ExecStart=/bin/bash -lc "npm run start"
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target

Now, you can enable these services using:

sudo systemctl enable /etc/systemd/system/mastodon-*.service

And then run them using

sudo systemctl start mastodon-*.service

Check that they are running as they should using:

systemctl status mastodon-*.service

At this point, if everything is as it should, going to https://my.instance.com should give you the Mastodon landing page! 🐘


Emails

Now, you’ll probably want to send emails, since new users get an email to confirm their address.

You should really follow the official guide on this one, because there’s no difference in this case.
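
Just to illustrate the shape of it: the SMTP settings end up in .env.production and look something like this (all values here are placeholders; the full list of options is in the official guide):

SMTP_SERVER=smtp.example.com
SMTP_PORT=587
SMTP_LOGIN=mastodon@example.com
SMTP_PASSWORD=change-me
SMTP_FROM_ADDRESS=notifications@my.instance.com

After changing these, restart the Mastodon services so they pick up the new values.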


Monitoring with uptime robot

I’m giving an example with Uptime Robot because they have a free tier, but you can use other services if you prefer. The idea is just to be pinged if your instance goes down, and also to have an independent page where your users can check whether everything is working as expected.

After creating an Uptime Robot account, you can create an HTTP(s)-type monitor pointing to your instance’s full URL: https://my.instance.com (don’t forget to change it accordingly).

If you have IPv6, you should also create another monitor with the Ping type, in which you should use your server’s IPv6 as the IP.

Now, on the settings page, you can click “add public status page”, then select “for selected monitors” and pick the ones you just created. You can create a CNAME DNS entry so that, for instance, status.my.instance.com shows this new status page. There are more instructions on Uptime Robot’s page.
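
As an illustration, the CNAME record would look roughly like this in zone-file notation (the actual target is whatever Uptime Robot’s instructions tell you; stats.uptimerobot.com was the documented one at the time of writing):

status.my.instance.com.  300  IN  CNAME  stats.uptimerobot.com.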

Now if your instance goes down or your IPv6 stops working, you should get an email.


Remote media attachment cache cleanup

Mastodon downloads media from other instances and caches it locally. If you don’t clean this up from time to time, it will only keep growing. As the mastodon user, you can add a cron job that cleans it up daily, using crontab -e and adding:

0 2 * * * /bin/bash -lc "cd live; RAILS_ENV=production bundle exec bin/tootctl media remove" > /home/mastodon/remove_media.output 2>&1

If you don’t have any cron implementation installed on your server, take a look at Arch Linux’s wiki page about cron.


Renew Let’s Encrypt certificates

The best way to do this is to follow Arch Linux’s wiki about automatic renewal with Certbot, which is:

Create a file /etc/systemd/system/certbot.service:

[Unit]
Description=Let's Encrypt renewal

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew --quiet --agree-tos

The nginx plugin should take care of reloading the server automatically after renewal.
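
If you prefer to make the reload explicit, certbot supports a deploy hook that runs a command only after a successful renewal; a variant of the ExecStart line above would be:

ExecStart=/usr/bin/certbot renew --quiet --agree-tos --deploy-hook "systemctl reload nginx"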

Then, create a second file /etc/systemd/system/certbot.timer:

[Unit]
Description=Daily renewal of Let's Encrypt's certificates

[Timer]
OnCalendar=03:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target

Now, enable and start the timer service:

sudo systemctl start certbot.timer
sudo systemctl enable certbot.timer

Updating between Mastodon versions

Okay, you set it all up, everything is running and then Mastodon v2.8.0 comes out. What do you do?

Do not despair, dear reader, all is well.

Remember our tip about tmux? When updating, it’s always a good idea to be running tmux. Database migrations can take some time, and tmux will help you avoid losing work if your connection fails in the meantime.

First, we will switch to the mastodon user once again:

sudo su - mastodon

Okay, first things first: let’s go into the live directory and get the new version:

cd ~/live
git fetch origin --tags
git checkout v2.8.0
cd . # This is to force rvm to check if we're in the right ruby version

Now, suppose the Ruby version changed since the last time you were here, and instead of 2.6.1 it’s now 2.6.2. After you do cd ., rvm will complain:

$ cd .
Required ruby-2.6.2 is not installed.
To install do: 'rvm install "ruby-2.6.2"'

In this case, we will need to use rvm to install the new version. The command is the same as last time:

rvm install 2.6.2 -C --with-jemalloc

Everything will take some time, and at the end you will be ready to continue. Note that this won’t happen very often. Also, after you make sure everything is running as expected, you can remove the old Ruby version with rvm remove <version>. Wait until you’re sure the new version is running well, though!

Now, you’ll always want to look at the release notes for the version you’re upgrading to. Sometimes there are special tasks that need to be done first.

If dependencies were updated, you need to run:

bundle install --without development test # if you need to update ruby dependencies or if you installed a new ruby
yarn install --pure-lockfile # if you need to update node dependencies

Most updates will require you to rebuild the assets:

RAILS_ENV=production bundle exec rails assets:precompile

For comparison: on the Digital Ocean droplet I tested this guide on, compiling assets on v2.4.3 took around 5 minutes.

If the update includes database migrations, you’ll need to run:

RAILS_ENV=production bundle exec rails db:migrate

Sometimes database migrations change the database in a way that makes the instance stop working until you restart the services; that’s why I usually leave them for last, to reduce downtime.

⚠️ Backup your database regularly ⚠️
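
As a minimal sketch of one way to do it with pg_dump (mastodon_production is the database name the setup wizard creates by default; adjust if yours differs):

sudo -u postgres pg_dump -Fc mastodon_production > mastodon_backup.dump # custom-format dump, restorable with pg_restore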

After the migrations have finished running, you can leave the mastodon user and then restart the services:

sudo systemctl restart mastodon-sidekiq
sudo systemctl restart mastodon-streaming
sudo systemctl reload mastodon-web

Now, if there were database changes, you need to restart mastodon-web instead of reloading it.

Alright, you should be on the latest version of Mastodon now!


Upgrading Arch Linux

Some special notes about upgrading Arch Linux itself. If you haven’t yet, read through Arch Linux’s wiki page on upgrading the system. Since Arch Linux is a rolling release, there are some differences if you’re coming from other distros.

Some of the gems Ruby uses have native modules, that is, code compiled against local libraries. This means that if your system libraries change radically from one upgrade to the next, you might have issues starting the services.

However, to make your life a bit easier, you can recompile the native modules (as the mastodon user) by running:

cd ~/live
bundle pristine

This will take a little while but will recompile the needed gems. When in doubt, do it after a system upgrade.

I had issues in the past with gems that Mastodon uses which have native extensions and are installed straight from git, namely posix-spawn and http_parser.rb. They were not reinstalled with bundle pristine and I had to manually rebuild them. This seems to be fixed in the most recent rvm, but in case you need to do that, find where they are installed by running:

bundle show posix-spawn

With the output of that (which will be something like /home/mastodon/.rvm/gems/ruby-2.5.1/bundler/gems/posix-spawn-58465d2e2139), do:

rm -rf /home/mastodon/.rvm/gems/ruby-2.5.1/bundler/gems/posix-spawn-58465d2e2139 /home/mastodon/.rvm/gems/ruby-2.5.1/bundler/gems/extensions/x86_64-linux/2.5.0/posix-spawn-58465d2e2139

This is just an example; you will have to replace it with the output of your bundle show command, and then find the equivalent path in the gems/extensions folder. Do it for both posix-spawn and http_parser.rb (and any other gem that comes from git, if it gives you trouble).

And after that, you can run bundle install --without development test to install them again.

Now, a second thing to take note of: PostgreSQL minor versions are compatible with each other. This means that 9.6.8 is compatible with 9.6.9, and since version 10 they have adopted a two-number versioning scheme, which means that 10.3 is compatible with 10.4. However, 9.6 is not compatible with 10, and 10 is not compatible with 11. So when upgrading between major versions, you need to follow the official documentation and Arch Linux’s wiki guidance; a rough sketch of what that looks like follows the warning below. With that in mind, be careful when upgrading.

🛑 Upgrading wrongly may cause data loss. 🛑
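
To give a rough idea of the shape of a major-version upgrade, this is approximately what the Arch wiki described for a 10 to 11 jump at the time of writing. Follow the wiki and the official docs rather than this sketch, and back up first; postgresql-old-upgrade is Arch’s package carrying the previous major version’s binaries:

sudo systemctl stop postgresql
sudo pacman -S postgresql postgresql-old-upgrade # new server plus the old binaries
sudo -u postgres mv /var/lib/postgres/data /var/lib/postgres/olddata
sudo -u postgres mkdir /var/lib/postgres/data
sudo -u postgres initdb --locale en_US.UTF-8 -E UTF8 -D /var/lib/postgres/data
sudo -iu postgres pg_upgrade -b /opt/pgsql-10/bin -B /usr/bin -d /var/lib/postgres/olddata -D /var/lib/postgres/data
sudo systemctl start postgresql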


(Optional) Adding elasticsearch for searching authorized statuses

Since Mastodon v2.3.0, you can enable full-text search for authorized statuses, that is, toots you have written, boosted, favourited, or were mentioned in. For this functionality, Mastodon uses Elasticsearch. As usual, you should take a look at Arch Linux’s wiki page about Elasticsearch.

Note: I was able to run Elasticsearch on my test instance, the 1GB/1vCPU droplet from Digital Ocean with 1GB of swap, by using the memory settings suggested in Arch Linux’s wiki page about Elasticsearch, that is, -Xms128m -Xmx512m. However, I don’t have any load, and I don’t know how the system would behave under more realistic loads.

To install elasticsearch do:

sudo pacman -S elasticsearch

Pacman will then ask which version of the JDK you want to use. Once it’s installed, you can start Elasticsearch by doing:

sudo systemctl enable elasticsearch # Enables elasticsearch to be started at startup
sudo systemctl start elasticsearch # starts elasticsearch

Then you need to switch to the mastodon user, cd ~/live, and edit .env.production to add the configuration related to Elasticsearch. Look for the commented-out configs and change them:

ES_ENABLED=true
ES_HOST=localhost
ES_PORT=9200

Then, you need to build the index. This might take a while if your database is big!

RAILS_ENV=production bundle exec rails chewy:deploy

When this is finished, you need to restart all the Mastodon services.
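
Going back to your sudo user, restarting them amounts to:

sudo systemctl restart mastodon-web mastodon-sidekiq mastodon-streaming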

The official docs have some tips on how to tune Elasticsearch.

24 May 07:30

Morning News

Support your local paper, unless it's just been bought by some sinister hedge fund or something, which it probably has.
19 May 16:33

Saturday Morning Breakfast Cereal - Gojirasaurus

by tech@thehiveworks.com


Click here to go see the bonus panel!

Hovertext:
Also it didn't want to destroy the city because it mostly feeds off of aquatic insects.

19 May 16:31

Saturday Morning Breakfast Cereal - Whistle

by tech@thehiveworks.com


Click here to go see the bonus panel!

Hovertext:
The really creepy part is how it requires you to install a tiny mouth.

03 May 12:54

Python Environment

The Python environmental protection agency wants to seal it in a cement chamber, with pictorial messages to future civilizations warning them about the danger of using sudo to install random Python packages.
06 Apr 11:13

Friendly Questions

Just tell me everything you're thinking about in order from most important to last, and then we'll be friends and we can eat apples together.
23 Mar 11:13

#DeleteFacebook

by Eugen Rochko

Perspective from a platform that doesn’t put democracy in peril

Deep down you always knew it. On the edge of your perception, you always heard the people who talked about the erosion of privacy, that there was no such thing as free cheese, that if you don’t pay — then you’re the product. Now you know that it’s true. Cambridge Analytica has sucked the data so kindly and diligently collected by Facebook and used that data to influence the US elections (and who knows what else).

It doesn’t matter if you call it a “data breach” or not. The problem is how much data Facebook collects, stores, and analyzes about us. You now know how Facebook’s platform was used by 3rd parties to meddle in elections. Now imagine how much more effective it would be if it wasn’t 3rd parties, but Facebook itself putting its tools to use. Imagine, for example, if Mark Zuckerberg decided to run for president.

#DeleteFacebook is trending on Twitter. Rightfully so. Some say, “even without an account, Facebook tracks you across the web and builds a shadow profile.” And that is true. So what? Use browser extensions that block Facebook’s domains. Make them work for it. Don’t just hand them the data.

Some say, “I don’t want to stop using Facebook, I want them to change.” And that is wrong. Keeping up with your friends is good. But Facebook’s business and data model is fundamentally flawed. For you, your data is who you are. For Facebook, your data is their money. Taking it from you is their entire business, everything else is fancy decoration.

Others will say, “I need Facebook because that’s where my audience is, and my livelihood depends on that.” And it is true. But depending on Facebook is not safe in the long-term, as others have learned the hard way. Ever changing, opaque algorithms make it harder and harder to reach “your” audience. So even in this case it’s wise to look for other options and have contingency plans.

There are ways to keep up with friends without Facebook. Ways that don’t require selling yourself to Big Data in exchange for a system designed around delivering bursts of dopamine in just the right way to keep you hooked indefinitely.

Mastodon is one of them. There are others, too, like Diaspora, Scuttlebutt, and Hubzilla, but I am, for obvious reasons, more familiar with Mastodon.

Mastodon is not built around data collection. No real name policies, no dates of birth, no locations — it stores only what is necessary for you to talk to and interact with your friends and followers. It does not track you across the web. The data it stores for you is yours — to delete or to download.

Mastodon does not have any investors to please or impress, because it’s not a commercial social network. It’s freely available, crowdfunded software. Its incentives are naturally aligned with its users, so there are no ads, no dark UX patterns. It’s there, growing and growing: Over 130,000 people were active on Mastodon last week.

To make an impact, we must act. It is tempting to wait until others make the switch, because what if others don’t follow? But individual actions definitely add up. One of my favourite stories from a Mastodon user is how they were asked for social media handles at a game developer conference, and when they replied with Mastodon, received understanding nods instead of confused stares. Step by step, with every new person, switching to Mastodon will become easier and easier.

Now is the time to act. Join Mastodon today.


#DeleteFacebook was originally published in Mastodon Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

19 Mar 11:17

My political orientation

by alexcastro

I always vote for, support, and campaign on behalf of movements, parties, and people who set out to fight for groups, classes, and categories that cannot fight for themselves.

Because a State that acts on behalf of the upper class is a redundant State.

The upper class knows how to defend itself with its own resources: the State justifies its existence by defending the rights of those who cannot.


* * *

Demonizing the upper class is childish and counterproductive.

I grew up in the upper class of Barra da Tijuca and studied at the most expensive school in the country. In my formative years, my friends, colleagues, and relatives were all business owners and entrepreneurs, multinational executives, and captains of industry.

I can attest that the proportion of bad people among them is more or less the same as in every other group I have been part of.

Even so, I systematically vote against their interests.

Not because they are bad people. (They are not.)

But because they are people who know how to defend themselves on their own.


* * *

Any tax reform should be designed to simplify life for the individual who files their own income tax return, not for the company that has its own accounting department. And so on.

So, for example, I don’t know the details of the recent labor reform, but I know that the employers’ associations were unanimously in favor and the workers’ organizations against.

So I am against it.

Not because members of the employer class are “scoundrels who lead an easy life.”

(They are hardworking people waging a herculean struggle to run a business in Brazil.)

I am against it because the people who work for them are just as hardworking and face infinitely greater difficulties.

By any metric, if the factory owner’s life is hard, the life of the worker who has to negotiate with her as an equal, without the support of a legal or tax department, without savings in the bank, living month to month, is harder.

So, if they come into conflict (and it is natural that they do, since that is the basis of our democracy), I will always stand with the working person, recognizing that they need all the help they can get just to keep the conflict from being absurdly unequal.

The State exists not to decide who is right, but to ensure that the conflict is as equal as possible.

For that, paradoxically, it always needs to position itself alongside the weaker, more vulnerable, more defenseless party.


* * *

I am a privileged person in every respect: white, straight, upper class, well traveled, urban, postgraduate.

The State has already handed me every advantage possible and imaginable on a platter: I don’t want any more.

The State doesn’t need to do anything for me. I don’t want the State to do anything for me. The State has already done everything for me. The State has already done too much for me.

I vote for, support, and campaign for the vision of a country that promises to do the least for me. That promises to surtax my iGadget and reinvest in healthcare. That promises to surtax my inheritance and reinvest in education. That promises to pay women the same wages as men. That recognizes gay rights as much as straight people’s. Whose police treat black people the same as white people.

All my life, the State prepared me not to need it. I know the ropes, I have the means. If the State turns against me, I can defend myself.

I want a State that defends the people who have no way to defend themselves from it.

I want a State that defends the people who, through that same State’s failings, have a worse education than mine, worse health care than mine, worse prospects than mine.

I want a State that racks its brain to make life easier for those who have little, even at the cost of making life harder for those who have much.

That is my political orientation.


* * *

All of it can be summed up in a dialogue from a movie released the year I turned 18: A Few Good Men (Uma questão de honra in Brazil).

Two marines accused of murdering a fellow marine, Willy, are talking:

“What did we do wrong? We didn’t do anything wrong!”

“Yeah, we did. We were supposed to fight for the people who couldn’t fight for themselves. We should have fought for Willy.”


15 Mar 12:02

Twitter is not a public utility

by Eugen Rochko
Photo by Tobin Rogers on Unsplash

Isn’t it a bit strange that the entire world has to wait on the CEO of Twitter to come around on what constitutes healthy discourse? I am not talking about it being too little, too late. Rather, my issue is with “instant, public, global messaging and conversation” being entirely dependent on one single privately held company’s whims. Perhaps they want to go in the right direction right now for once, but who’s to say how their opinion changes in the future? Who is Twitter really accountable to except their board of directors?

I still find it hard to believe when Jack Dorsey says that Twitter’s actions are not motivated by a drive to increase their share price. Twitter must make their shareholders happy to stay alive, and it just so happens that bots and negative interactions on their platform drive their engagement metrics upwards. Every time someone quote-tweets to highlight something toxic, it gets their followers to interact with it and continue the cycle. It is known that outrage spreads quicker than positive and uplifting content, so from a financial point of view, it makes no sense for Twitter to get rid of the sources of outrage, and their track record is a testament to that.

In my opinion, “instant, public, global messaging and conversation” should, in fact, be global. Distributed between independent organizations and actors who can self-govern. A public utility, without incentives to exploit the conversations for profit. A public utility, to outsurvive all the burn-rate-limited throwaway social networks. This is what motivated me to create Mastodon.

Besides, Twitter is still approaching the issue from the wrong end. It’s fashionable to use machine learning for everything in Silicon Valley, and so Twitter is going to be doing sentiment analysis and whatnot when in reality… You just need human moderators. Someone users can talk to, who can understand context. Unscalable for Twitter, where millions of people are huddled together under one rule, but natural for Mastodon, where servers are small and have their own admins.

Twitter is not a public utility. This will never change. And every tweet complaining about it simply makes their quarterly report look better.

To get started with Mastodon, go to joinmastodon.org and pick a place to call home! Use the drop-down menus to help narrow your search by interest and language, and find a community to call your own! Don’t let the fediverse miss out on what you have to say!


Twitter is not a public utility was originally published in Mastodon Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

27 Dec 10:23

For more info on Kevin A. Patterson’s book,...



For more info on Kevin A. Patterson’s book, “Love’s Not Color Blind”, check out: https://www.generosity.com/community-fundraising/the-love-s-not-color-blind-book-tour

06 Dec 10:53

Amazon Logic

by CommitStrip

04 Dec 18:04

Last.fm Was the Only Music Social Network That Made Sense

by Elia Alovisi

A version of this article originally appeared on Noisey Italy.

My first profile on Last.fm was called "Nergal-Behemoth," in honor of the song by my favorite Polish death metal band. The first two tracks I scrobbled, on February 21, 2006, were "Africa" by Toto and "Electric Crown" by Testament. I didn't know it at the time, but the keyboards—soft as Steve Porcaro's velvet—had broken my faith in the God of Metal. As time passed, I'd start listening to folk music, and then classical, psych, and prog rock; I'd become obsessed with Johnny Cash, I'd go through a phase in which I resembled a fanboy of De André; I would discover emo and electronica and indie and hip-hop, and then more classical music and pop. And since I've always kept my Last.fm account active, today, more than ten years later, I can study how I listened to music throughout a good part of my life. Day for day, song for song.

Between two profiles, the aforementioned Nergal-Behemoth and the subsequent "EliaSingsMiFaMi" (dedicated to that splendid album), I listened to 164,624 songs. I've listened to Sufjan Stevens 1864 times, Drake 1120, Kanye West 1058, and Caneda 985. Forty times—many more than necessary—the notes of "Follow the Reaper" by Children of Bodom entered into my ears, whereas I don't regret the 48 times I listened to the crystalline ambience of "Requiem For The Static King Part One" by A Winged Victory For The Sullen. If I hadn't read the comments and messages that I received on my profile, I would've probably never met a few of my closest friends today. If it hadn't been for the site's diary feature, I wouldn't have a list of all the concerts I attended between 2006 to the present day. But time passes, and today all that remains of Last.fm is the promise of a musical democracy based on exchange and sharing—a promise that wasn't kept and which was obliterated by the evolution of the musical market and by the internet economy.

Last.fm was born shortly after the start of the millennium as the union of two projects. The first was an idea by Richard Jones, an Englishman who developed, for his Bachelor's thesis in Computer Science, a project called Audioscrobbler: A plug-in that tracked all the songs you listened to on your computer once installed. The information gathered—the songs scrobbled—was then uploaded to an online database, one that users of the service could access and create a library of their personal listening history, which they could then compare with that of other users. The second project, Last.fm, was a web radio created by a group of German and Austrian boys who used the same program to gauge the tastes of each individual user, using an algorithm with two buttons that the user could click to express a positive or negative judgment about the track they were listening to. Jones and the boys of Last.fm started collaborating in 2003, and in 2005 they united with a single website. They gave their users the ability to scrobble songs from different players. It was the beginning of a unique, collective musical experience, one that seemed impossible to replicate in the future.

A screenshot of my profile in 2007. Fortunately, Last.fm has immortalized the moment when I discovered Impaled Northern Moonforest, the best band in history.

In the time that the site flourished, the music market of the previous decade wasn't prepared for the foundational revolution that Last.fm brought shortly thereafter. The traditional gatekeepers of content—record labels, print magazines, radio, and television—were always addressing a formless public, and they molded the tastes of their audience through the use of commercial entities and criticism from high to low, which had been consolidated in the preceding decades. Listeners who didn't identify with this top-down approach united in online communities such as forums in order to create, on a smaller scale, a musical democracy that functioned laterally.

Even within forums and messaging boards there were structures of power, defined by admin roles and by the number of posts a user made during the course of a year; a symbol of authority earned through tenure. Instead of enjoying a flux of content on various music-related topics—things that, to listeners who experienced music solely through mainstream means, and fleeting, impalpable moments (a phone call into a radio or TV show, a text message confined the screen of your phone)—forum participants united and created online communities endowed with their own values, communication codes, and musical tastes that were constructed collectively over time. Last.fm captured this spirit, seized upon it to perfection, and made its users feel like they were playing an important role in the creation of a common musical discourse.

The site functioned like a personal musical museum ("Here's everything that I listened to!") based in part on competition ("Look how much I listened to!") and recognition ("You listen to what I listen to, so we're compatible"—there was even a compatibility meter that ranked how much you had in common with other users). The site's structure encouraged such interactions: Everything was clickable, organized, up to date, and accessible in real time. The idea wasn't to apply this structure to a set catalogue of music, but to the unorganized ecosystem of MP3 files on an individual's computer. That way, even if you'd ripped the demo of a local band, you could find other people who'd also listened to them through the artist's dedicated page and talk to them about it.

These exchanges were the driving factor behind the platform's implementation of various communication methods: A comment section on every artist page and on a user's personal profile, a private messaging service, and the ability to create groups. Since it was a site for people who were passionate about music—and in turn easily intrigued by other people who shared that same passion—it wasn't rare that friendships and loves were born between one scrobble and the next. It wasn't all that weird to come across the profile of someone who listened to that very tiny post-punk band that broke up after their first EP, the one you loved so much, and fall head over heels for a 180 x 180 pixelated avatar. What could start as a "Hey, your library is bomb!" could turn into a tangential conversation about your respective message boards, and possibly turn into something more.

Last.fm predicted the shift of online communication towards something hyper-fragmented and specialized. No one chose the music you listened to: You were the person who created a personalized stream beginning with an artist, a tag, or the profile of another user, and then tweaked that algorithm until it produced a track agreeable to your ears. You weren't obligated to insert yourself into a general discussion; instead, you were able to make connections with people who listened to things that interested you, in an online environment designed to foster micro-conversations. There was also a blogging element, which today has disappeared: Each user could create a personal diary, which prompted different forms of posts adopted by other profiles (surveys, lists, advice). "All the concerts I've gone to" was the one most people took to, taking advantage of a function that also stopped being used later on: Events that could be added and updated directly by users, and searched according to geographic criteria.

A screenshot of my profile from 2009. There's also a link to my Netlog, with an attached quote from Vasco Brondi at the beginning of the "About Me" section. I was 18 years old. But below are the GY!BE, come on.

The golden year of Last.fm was 2007, when it was acquired by CBS. The network's investment was poorly timed—a year later, Facebook (which barely resembled what it does today) experienced a popularity boom and started to dominate the internet. The music site's problems started a few years later, when it found itself in the middle of its first major media crisis: In 2009, No Line On The Horizon by U2 prematurely appeared online. TechCrunch accused Last.fm and CBS of having provided the Recording Industry Association of America (RIAA), an organization that safeguards the interests of the music industry (and which fought with peer-to-peer and torrenting services for years), with the personal data of all the users who'd listened to songs from the album before its release date.

Both the website and the network denied it, but different users cancelled their accounts as a gesture of protest. After it was acquired by a major player in the media market, the site had started to devolve into something different and less free. Even in 2007, the radio started charging a membership fee of €3.00 in every country except Germany, the United States, and the United Kingdom. They removed the ability to stream individual tracks in full, swapping in short previews or a few sample songs selected by the artist themselves. The whole thing sawed the legs off of many small, independent bands seeking visibility. In 2013, the radio was resized for the first time, then issued exclusively to several countries, then substituted entirely by a series of embedded YouTube videos and by a now-defunct partnership with Spotify—an admission of surrender from the streaming component of the site, clearly crushed by the weight of competition that was already too strong and too organized for its predecessor to keep up.

All of this was compounded by a series of redesigns that pained the platform's long-standing users. The profiles became more standardized and less personal, which made Last.fm feel more sterile overall. Where there used to be an "About Me" bar on the left side of the page that each user could fill with words and images (it was common to make enormous PNG's with the logo of your favorite band, worn like a badge of pride above quoted lyrics, a link to your blog, or a list of concerts you'd recently attended), today, a user can only upload a profile picture or a link, and up to 200 characters of text without any formatting.

A screenshot of my present day profile. Notice how empty it is. All the white space is due to the fact that I have AdBlock enabled, I think.

Unfortunately, the height of Last.fm's success coincided with the moment that online music fell under stricter regulations. First came the crackdown on peer-to-peer services like eMule, LimeWire, and BearShare (but not Soulseek), then the death knell for file-hosting services like Megaupload, RapidShare, and MediaFire—all of it culminating in attempts to kill torrenting. Before contemporary streaming services like Spotify, Apple Music, and YouTube came along and became the standard—bringing with them the constant presence of a 3G or WiFi signal—discovering music meant downloading it and building a personal trove of files. Last.fm was the service that leveraged this necessity, allowing its users to discover new music and, after a generic search like "[ARTIST NAME] [ALBUM NAME] blogspot megaupload," show it off in their scrobble history.

At present, Last.fm has a lot of difficulty generating a profit, possibly because it no longer serves a purpose aside from logging what its users are listening to. It's no longer a catalyst for discussions and events, given that there's already Facebook and Songkick; nor is there a need for a personalized radio, thanks to algorithm-driven recommendations from various streaming services. In the end, the music industry to which Last.fm was a counterpoint no longer had the power to create renowned musicians from meager local artists, nor to direct public tastes: today, labels only try to acquire, through an artist's name, a preexisting community of fans that the artist garnered themselves. Last.fm didn't play a central role in the changing of this paradigm, maybe because it never figured out how to flourish economically. Investing in the concept of a personalized web radio and deciding to charge a fee for it turned out to be an unwise choice in an environment where music was becoming practically free and accessible, through tenuously legal YouTube uploads and the rise of streaming services.

"The idea of creating such a personalized space on the web acts as a counterpoint to the prevalent 'mass mentality' of the charts and invites the user to orient himself in an autonomous way, distancing himself from the typical consumer mentality," Europrix.org, an entity that awards the best European multimedia products each year, wrote in 2006. "The user decides, criticizes, and therefore selects the music best-adapted to his taste or humor. [Functioning] in this way, Last.fm will always be relevant." Fifteen years after its founding, "relevant" isn't the most suitable word to describe Last.fm's role in the digital media landscape. It's more the relic of a passionate moment of the online musical experience, a miniature era of rebellious freedom in which discovering music wasn't a question of algorithms but a personal undertaking or shared mission.


03 Nov 09:58

Easy and quick vegan chickpea curry

by Yasmine

Indian food has always been among my favorites, and before turning to a plant-based diet, I used to be a big fan of the chicken curry at Indian restaurants.

Naturally, I immediately looked for a plant-based alternative, and although there are different types of plant-based curries, I found that the chickpea version is the one I like most.

Like most of the recipes I’ve shared so far, this curry is very quick and easy to make. You’ll need a couple of key ingredients (coconut milk and curry paste) that you may not have on hand but that you can find pretty much everywhere.

I serve this curry with brown rice or basmati rice and it’s absolutely delicious.

Let me know if you try the recipe by leaving a comment below or by tagging me in your pictures on Instagram (@theveganlifeofyas).

Enjoy!

Easy and quick vegan chickpea curry

Created by Yasmine on August 14, 2017

You can swap the basil for cilantro if you’re not a fan of the basil flavour. You can add a tablespoon of maple syrup if you’d like more sweetness.

Ingredients

  • onions, diced
  • 2 tbsp. tamari or soy sauce
  • 1/2 lime, juiced
  • 1/2 c basil, chopped
  • tomatoes, diced
  • 1 can chickpeas, drained and rinsed
  • 2 tbsp. tikka masala curry paste
  • 1 1/2 c coconut milk
  • tbsp. extra virgin olive oil
  • cloves garlic, minced
  • salt & pepper to taste

Instructions

  1. Heat the olive oil over medium heat. Add the onion and garlic and cook until the onion is translucent or lightly browned. It should take 4-5 minutes.
  2. Stir in the coconut milk and the curry paste and mix until the paste is fully incorporated.
  3. Season with salt and pepper.
  4. Add the chickpeas and the tamari (or soy sauce) and give everything a good stir.
  5. Bring to a boil; it should take about 5 minutes.
  6. Add the tomatoes, basil, and lime juice, mix well, and let it cook for a couple more minutes.
  7. It's ready! Enjoy.
20 Oct 09:33

Mastodon: how to navigate this new social network

by Renato Cerqueira

Update: a more up-to-date version of this guide can be found here; this Medium version will no longer be updated from now on.

On toots, servers, and custom emojis

Maybe you've heard of Mastodon: a few months ago, the social network blew up in the international media as the network that came to shake up Twitter. But maybe you haven't, because coverage in the Brazilian media was apparently quite limited. Even so, the network has just reached version 2.0 and is closing in on 1 million users, along with more than 1,000 active servers.

Mastodon is a microblogging social network, similar to Twitter. It aims to be a place where users can post statuses of up to 500 characters. So far, pretty much the same as Twitter.

The difference starts with the network's model, which is closer to email, with many servers that talk to each other, than to Twitter's model of one big server with everyone inside.

Starting with the hard part: how do the servers work?

Let's borrow the cute image from joinmastodon.org

Mastodon is made up of many servers. There's mastodon.social, which is maintained by the project lead, Eugen Rochko. Or Mastodon(te), maintained by yours truly. The two live in different places and are controlled by different people, but they still talk to each other. If I want to send Eugen a message, I just send it to @gargron@mastodon.social and he'll receive it over there; if he wants to reply, he'll reply to @renatolond@masto.donte.com.br and I'll receive it over here. In other words, instead of being just a handle, you're a handle at an address, just like email.
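For the technically curious: the way one server figures out where a handle like @gargron@mastodon.social lives is a standard called WebFinger; you ask the domain after the second @ about the account and it answers with links to the profile. Here's a minimal sketch of that lookup in Python, using only the standard library (the handle is just the example from above):

    # Minimal sketch: resolving a Mastodon handle via WebFinger (RFC 7033).
    # Given user@domain, the domain is asked who the account is and where
    # its profile lives. Standard library only.
    import json
    import urllib.request

    def webfinger(handle):
        """Ask the domain in 'user@domain' about that account."""
        user, domain = handle.lstrip("@").split("@", 1)
        url = ("https://" + domain + "/.well-known/webfinger"
               "?resource=acct:" + user + "@" + domain)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    info = webfinger("@gargron@mastodon.social")
    print(info["subject"])         # the canonical account name
    for link in info["links"]:     # profile page, actor document, etc.
        print(link.get("rel"), link.get("href"))

This is only to illustrate the email-like addressing; as a regular user you never need to do any of this yourself.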

Just like Twitter in the early days, you can follow a special timeline, the local timeline, which has all the toots…

Hold on. I hadn't mentioned that yet, had I? When someone posts something on Mastodon, it's called a toot (pronounced "toot").

Toot is the English onomatopoeia for a horn. source: toastmonster

So, as I was saying, you can follow a special timeline with all the public toots from the users of your server. It's a really nice way to discover new people and new content.

And then there's the global timeline (also called the federated timeline), which has the toots of every user seen by the server you're on. It can be a bit confusing, because there are people from all over the world posting. There are some tools to filter languages on the local and global timelines to help a bit with that.

Hang on, that only made things more complicated. What's the advantage?

The advantage is that each server is run by different people. You can certainly find a server where you'll be free from content you don't want to see, and see more of what you do want. Looking for a server made for Brazilians? There is one. Want a server geared toward the LGBTQ crowd? There is one. Or maybe you're looking for a server for people interested in books; that exists too. And if you want to overthrow capitalism and talk about kittens, there's a corner for that as well.

What you see on the local timeline will vary a lot from server to server. What you see on the global one will vary because servers can block content from other servers. So if you're on a server that doesn't allow Nazism, fascism, and the like, you probably won't see that kind of content in your timeline (and if it shows up, you can report it to the admins and they'll probably block it).

And at the end of the day, anyone with technical knowledge or a bit of money can put a new server online. So if you want to build a server for fans of the Brazilian football championship, you can do that too. (Just throwing that out there. I don't think one exists yet. Go for it :)

Okay, so how do I pick my server, then?

There's a site with a short questionnaire to help you with exactly that: Mastodon Instances.

Toots and tweets

Toots are similar to tweets, but there are a few differences.

  1. Toots can have up to 500 characters*;
  2. Toots have privacy settings (there's an API sketch of these after this list):
    > You can post publicly (that is, everyone sees your toot and it shows up in the local and global timelines)
    > You can post unlisted (that is, everyone can see your toot, but it doesn't show up in the local and global timelines)
    > You can post privately (in which case your toot only shows up for people who follow you)
    > You can post a toot directly to specific users, in which case it works like a message; only the users you mention will see it.
  3. Spoiler / content warning: This one is wonderful. While posting, you can attach a content warning to a toot. It then shows up like this:
Careful. Contains spoilers!
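These privacy levels and the content warning map directly onto Mastodon's REST API: when a client creates a status via POST /api/v1/statuses, it sets a visibility field (public, unlisted, private, or direct) and an optional spoiler_text. Here's a minimal sketch in Python with just the standard library; the instance URL and token are placeholders to replace with your own, not real credentials:

    # Minimal sketch: posting one unlisted toot with a content warning
    # through Mastodon's REST API (POST /api/v1/statuses).
    import json
    import urllib.parse
    import urllib.request

    INSTANCE = "https://masto.donte.com.br"  # placeholder: your server
    TOKEN = "YOUR_ACCESS_TOKEN"              # placeholder: an OAuth token

    form = urllib.parse.urlencode({
        "status": "Careful. Contains spoilers!",
        "visibility": "unlisted",        # or: public, private, direct
        "spoiler_text": "Spoilers!",     # the warning line readers see first
    }).encode()

    req = urllib.request.Request(
        INSTANCE + "/api/v1/statuses",
        data=form,
        headers={"Authorization": "Bearer " + TOKEN},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["url"])    # link to the freshly posted toot

The apps mentioned later in this post talk to endpoints like this one under the hood.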

Oh, and of course: everything is in chronological order. No out-of-order toots or friends' likes showing up in your timeline.

Emojis

Version 2.0 is fresh out of the oven! And it comes with a new feature I find particularly cool: custom emojis!

Yes, there's party parrot and plenty of flags!

Besides the regular emojis you find on your phone, instance admins can add extra emojis.

Apps

Yes, there are apps for Android, for iOS, for desktop, and even for certain text editors 😉

For example, on Android the most common ones are Tusky, Twidere (which works for both Mastodon and Twitter), Mastalab, Subway Tooter, and 11t.

For iOS, Amaroq and iMast.

Also, both Android and iOS now support PWAs, so you can use your instance's own website as an app on your phone.

For other systems and a more up-to-date list, you can take a look at this list maintained by the project: apps.

Since Mastodon is open-source, most of its apps are too. So you can shop around until you find an app that makes you feel at home.

Tools

Switching social networks is a complicated business, which is why there are tools to try to ease the transition a bit.

Mastodon Bridge: Created by Eugen Rochko himself, the bridge is for finding your Twitter friends on Mastodon and vice versa. After creating your account on one of the servers, just go there and connect your Twitter and Mastodon accounts. It will then show you where you can follow your Twitter friends.

Mastodon Twitter Crossposter: This one is mine. You connect your Twitter and Mastodon accounts and then decide how you want to post between the networks: from Twitter to Mastodon or from Mastodon to Twitter, and which kinds of posts get crossposted. It's open-source and there's plenty to do, if you'd like to contribute.

More information

The project page is a good starting point: The Mastodon Project. There's a Portuguese translation over there. Speaking of translation: both the project page and the Mastodon interface itself were translated into Brazilian Portuguese by Anna 🎉

There's a lot more information, in much more detail, in the project's documentation repository, but most of it hasn't been translated into Portuguese or Brazilian Portuguese yet. (There's an opportunity for you right there.)

A bit older but just as useful is Qina Liu's piece: What I wish I knew before joining Mastodon. Although it's outdated in some places, it's still quite fun and it's what inspired me to write this one :)

* Note that toots have 500 characters by default. In practice, some servers allow more; on the late witches.town, for example, the limit was 666 characters. 😜

14 Sep 08:09

C: \>_ A fear submitted by J. to Deep Dark Fears -...



C: \>_ A fear submitted by J. to Deep Dark Fears - thanks!

My new book “The Creeps” is available now from your local bookstore, Amazon, Barnes & Noble, Book Depository, iBooks, IndieBound, and wherever books are sold. You can find more information here.

01 Sep 12:59

Supervillain Plan

Someday, some big historical event will happen during the DST changeover, and all the tick-tock articles chronicling how it unfolded will have to include a really annoying explanation next to their timelines.
01 Sep 12:57

Eclipse Science

I was thinking of observing stars to verify Einstein's theory of relativity again, but I gotta say, that thing is looking pretty solid at this point.
26 Jun 11:26

Party Time

by CommitStrip

13 May 11:04

Ink In Motion

by Macro Room

Hi All, we are back! :)
It took quite a long time to create this video and we really hope you will like it!
Support us on Patreon: ► https://www.patreon.com/macroroom

This time we dived into the hypnotising beauty of colored ink in water and the interaction of this substance with different elements.

Equipment used:

Super Macro lens:
MPE-65 https://goo.gl/YWwmr1

Secondary Macro lens:
Canon 100mm L https://goo.gl/P6XYUg

Main Camera:
Panasonic GH4 https://goo.gl/cic3Gn

2 Led Panels:
https://goo.gl/zC4Mjb

Instagram: https://goo.gl/eJmu4S
Like us on Facebook: https://goo.gl/QxTTQ1
Twitter: https://twitter.com/macro_room


Thanks to:

Patreon:
1) Angela G. Richard

3D models:

Ehud Morris
Shira Kazula Noy

Main Music:
Emotions by Alexbird
https://goo.gl/yqsYls

End music: https://goo.gl/whzaUM
09 May 10:11

Something just clicked. An anonymous fear submitted to Deep...

16 Mar 15:54

Abandoned GitHub repository Caspar David Friedrich 1810 Oil on...



Abandoned GitHub repository

Caspar David Friedrich

1810

Oil on canvas

16 Mar 15:22

Chat Systems

Renato Cerqueira

If anyone used Telegram it would be much easier 😂

I'm one of the few Instagram users who connects solely through the Unix 'talk' gateway.
12 Mar 16:14

Wonder Woman 'Origin' Trailer (2017) | Movieclips Trailers

by Movieclips Trailers

Wonder Woman Trailer #3 (2017): Check out the trailer starring Gal Gadot, Chris Pine, and Robin Wright! Be the first to watch, comment, and share trailers and movie teasers/clips dropping @MovieclipsTrailers.

► Buy Tickets to Wonder Woman: http://www.fandango.com/wonderwoman_191725/movieoverview?cmp=MCYT_YouTube_Desc

Watch more Trailers:
► HOT New Trailers Playlist: http://bit.ly/2hp08G1
► What to Watch Playlist: http://bit.ly/2ieyw8G
► Epic Action Trailer Playlist: http://bit.ly/2hOtbnD

An Amazon princess leaves her island home to explore the world and, in doing so, becomes one of the world's greatest heroes.

About Movieclips Trailers:
► Subscribe to TRAILERS: http://bit.ly/sxaw6h
► Like us on FACEBOOK: http://bit.ly/1QyRMsE
► Follow us on TWITTER: http://bit.ly/1ghOWmt
► We’re on SNAPCHAT: http://bit.ly/2cOzfcy

The Fandango MOVIECLIPS Trailers channel is your destination for hot new trailers the second they drop. The Fandango MOVIECLIPS Trailers team is here day and night to make sure all the hottest new movie trailers are available whenever, wherever you want them.
15 Feb 15:05

Meeting Points

by Oliver Widder
14 Feb 13:32

Comic: 2016-02-06

New Comic: 2016-02-06
14 Feb 12:54

Where are the tests?

by CommitStrip

06 Feb 11:18

PDF of “POLYSATURATED” vday card and lots of other...



PDF of “POLYSATURATED” vday card and lots of other cards available to $2+ patrons! https://www.patreon.com/kimchicuddles

02 Feb 15:28

Soda Sugar Comparisons

The key is portion control, which is why I've switched to eating smaller cans of frosting instead of full bottles.
31 Jan 18:44

Honest Trailers - Willy Wonka & The Chocolate Factory (Feat. Michael Bolton)

by Screen Junkies

Special thanks to Michael Bolton for guest starring on this Honest Trailer, check out his newest album “Songs of Cinema” available on Frontiers Music at http://www.ScreenJunkies.com/Bolton

Grab your Golden Ticket and join Charlie, Grandpa, and Willy Wonka himself for Honest Trailers - Willy Wonka & The Chocolate Factory!

Thanks to everyone who voted for the Fan Appreciation Month Honest Trailers, we hope you enjoy this month's extra special Honest Trailers!

Got a tip? Email us ► feedback@screenjunkies.com
Follow us on Twitter ► http://twitter.com/screenjunkies
Like us on Facebook ► http://www.fb.com/screenjunkies
Get Screen Junkies Gear! ►► http://bit.ly/SJMerch
Download our iPhone App! ►► http://bit.ly/SJAppPlus
Download our Android App! ►► http://bit.ly/SJPlusGoogleApp

Voiceover Narration by Jon Bailey: http://youtube.com/jon3pnt0
Title design by Robert Holtby
Series Created by Andy Signore - http://twitter.com/andysignore & Brett Weiner
Written by Spencer Gilbert, Joe Starr, Dan Murrell & Andy Signore
Edited by TJ Nordaker & Bruce Guido

LIVE ACTION PORTION:
Director: Andy Signore
Producer: Warren Tessler
Director of Photography: Basil Mironer

MUSIC:
Vocals: Michael Bolton
Music Composition & Backing Vocals: Matt Citron
Vocal Production: Greg Chun
Recording Engineer: Jorge Vivo

Also while we have you, why not check out our Emmy-Nominated HONEST TRAILERS!

Deadpool (Feat. Deadpool)
http://bit.ly/HT_Deadpool

Game of Thrones Vol. 1
http://bit.ly/HT_GOTv1

Frozen
http://bit.ly/HT_Frozen

Harry Potter
http://bit.ly/HT_HarryPotter

Breaking Bad
http://bit.ly/HT_BreakingBad

The Lord Of The Rings
http://bit.ly/HT_LordOfTheRings

Star Wars Force Awakens
http://bit.ly/HT_ForceAwakens

Batman v Superman: Dawn of Justice
http://bit.ly/HT_BvS
31 Jan 13:23

Why, and in what sense, do some feminists not condemn pornography?

by marinafuser

[by Marina Costin Fuser]

Kinoko Hajime

You don't need to be a feminist to see that traditional pornography is a laboratory of sexism. The porn industry at large follows a compendium of angles and positions that ritualize the sexual act through machismo, rendered in images, sounds, and performances that impoverish the act. The woman's pleasure is reduced to giving pleasure to the man.

There are, however, cracks in this script, escape routes through which women's libido finds its way in. We can't even assume that porn divas feel no pleasure within those more standardized shots. However much oppression there is in pornography, pornography is not only oppression. Even conventional porn can be liberating for a libido, and can stir fantasies.

Art by Clube de Garotas

However sexist conventional porn may be, I don't think it falls to feminists to condemn those who show themselves or those who watch. Exhibitionism and voyeurism are already condemned by the Church. I don't think we need to reinforce the castration of desire. I believe it's more interesting to criticize the sexism of the porn industry and to deepen specific critiques in the field of film analysis. That means dissecting pornography and showing how a film subjugates women and their desires. We did just that during the year I spent researching at Berkeley with Professor Linda Williams, a fascinating feminist porn scholar. We would spend Friday afternoons in a room full of academic feminists (and a few brave guys) watching porn. I won't go into detail, but I learned that instead of falling into a prohibitionist logic, it's more effective to encourage filmmakers to shift the angles and create other approaches, ones that can be more interesting even for a broader audience. Many people don't watch porn because they find it dull and repetitive.

Art from Favim.

Today there are groups making pornography in this vein: feminists wanting to put more emphasis on women's pleasure in heterosexual relations, lesbians tired of seeing lesbianism objectified by straight people, as well as queer, gay, and trans folks, and so on. These are marginal cinemas, but there is room, an audience, and a will to expand. There has to be consent, though, between adults of sound mind. Revenge porn is another story entirely, since any non-consensual exposure is violence.

Art by Pierre Schmidt