", + "reblog":null, + "account":{ + "id":"123454321", + "username":"cheeseperson", + "acct":"cheeseperson@someothermastodonsite.com", + "display_name":"cheeseperson", + "locked":false, + "bot":false, + "discoverable":false, + "group":false, + "created_at":"2023-08-20T00:00:00.000Z", + "note":"", + "url":"https://someothermastodonsite.com/@cheeseperson", + "uri":"https://someothermastodonsite.com/users/cheeseperson", + "avatar":"https://someothermastadonsite.com/avatars/original/missing.png", + "avatar_static":"https://someothermastadonsite.com/avatars/original/missing.png", + "header":"locationofheader.com/image.jpg", + "header_static":"locationofheader.com/image.jpg", + "followers_count":2, + "following_count":2, + "statuses_count":95, + "last_status_at":"2023-10-26", + "emojis":[ + + ], + "fields":[ + + ] + }, + "media_attachments":[ + + ], + "mentions":[ + { + "id":"101010101010", + "username":"thirdperson", + "url":"https://thirdpersonsinstance.com/@thirdperson", + "acct":"thirdperson@emailwebsite.com" + } + ], + "tags":[ + + ], + "emojis":[ + + ], + "card":null, + "poll":null + } + ], + "rules":[ + { + "id":"2", + "text":"Don't be a meanie!" + } + ] + } +} + + +``` diff --git a/content/en/admin/optional/object-storage-proxy.md b/content/en/admin/optional/object-storage-proxy.md index 8bc00bf9..66ad68b6 100644 --- a/content/en/admin/optional/object-storage-proxy.md +++ b/content/en/admin/optional/object-storage-proxy.md @@ -3,12 +3,12 @@ title: Proxying object storage through nginx description: Serving user-uploaded files in Mastodon from your own domain --- -When you are using Mastodon with an object storage provider like Amazon S3, Wasabi, Google Cloud or other, by default the URLs of the files go through the storage providers themselves. This has the following downsides: +When you are using Mastodon with an object storage provider like Amazon S3, Wasabi, Google Cloud or others, by default the URLs of the files go through the storage providers themselves. 
This has the following downsides: - Bandwidth is usually metered and very expensive - URLs will be broken if you decide to switch providers later -You can instead serve the files from your own domain, caching them in the process. Access patterns on Mastodon are such that **new files are usually accessed simultaneously by a lot of clients** as new posts stream in through the streaming API or as they get distributed through federation; older content is accessed comparatively rarely. For that reason, caching alone would not reduce bandwidth consumed by your proxy from the actual object storage. To mitigate this, we can use a **cache lock** mechanism that ensures that only one proxy request is made at the same time. +You can choose to serve the files from your own domain, incorporating caching in the process. In Mastodon, access patterns show that new files are often simultaneously accessed by many clients as they appear in new posts via the streaming API or are shared through federation; in contrast, older content is accessed less frequently. Therefore, relying solely on caching won't significantly reduce the bandwidth usage of your proxy from the actual object storage. To address this, we can implement a cache lock mechanism, which ensures that only one proxy request is made at a time. 
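The key ingredient here is nginx's `proxy_cache_lock` directive. As a minimal sketch of the idea only (the cache path, zone name, and upstream URL are illustrative placeholders, not taken from the official configuration):

```nginx
# Illustrative sketch: cache object-storage responses on disk, and let a
# single request populate each cache entry while concurrent requests for
# the same file wait for it (the "cache lock").
proxy_cache_path /var/cache/nginx keys_zone=CACHE:10m max_size=10g inactive=7d;

server {
    location / {
        proxy_cache CACHE;
        proxy_cache_valid 200 7d;
        proxy_cache_lock on;               # only one upstream fetch per cache key
        proxy_cache_use_stale updating;    # serve stale content while refreshing
        proxy_pass https://mastodata.example-object-storage.com;
    }
}
```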
 Here is an example nginx configuration that accomplishes this:
 
@@ -19,6 +19,9 @@ server {
   server_name files.example.com;
   root /var/www/html;
 
+  ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
+  ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
+
   keepalive_timeout 30;
 
   location = / {
@@ -99,7 +102,7 @@ ln -s /etc/nginx/sites-available/files.example.com /etc/nginx/sites-enabled/
 systemctl reload nginx
 ```
 
-You'll also want to get a SSL certificate for it:
+You'll also want to get an SSL certificate for it:
 
 ```bash
 certbot --nginx -d files.example.com
diff --git a/content/en/admin/optional/object-storage.md b/content/en/admin/optional/object-storage.md
index 668a2a28..3a48ae0d 100644
--- a/content/en/admin/optional/object-storage.md
+++ b/content/en/admin/optional/object-storage.md
@@ -27,7 +27,7 @@ The web server must be configured to serve those files but not allow listing the
 
 ## S3-compatible object storage backends {#S3}
 
-Mastodon can use S3-compatible object storage backends. ACL support is recommended as it allows Mastodon to quickly make the content of temporarily suspended users unavailable, or marginally improve security of private data.
+Mastodon can use S3-compatible object storage backends. ACL support is recommended as it allows Mastodon to quickly make the content of temporarily suspended users unavailable, or marginally improve the security of private data.
 On Mastodon's end, you need to configure the following environment variables:
 
 - `S3_ENABLED=true`
@@ -37,7 +37,7 @@ On Mastodon's end, you need to configure the following environment variables:
 - `S3_REGION`
 - `S3_HOSTNAME` (optional if you use Amazon AWS)
 - `S3_PERMISSION` (optional, if you use a provider that does not support ACLs or want to use custom ACLs)
-- `S3_FORCE_SINGLE_REQUEST=true` (optional, if you run in trouble processing large files)
+- `S3_FORCE_SINGLE_REQUEST=true` (optional, if you run into trouble processing large files)
 
 {{< page-ref page="admin/optional/object-storage-proxy.md" >}}
 
@@ -46,16 +46,16 @@ You must serve the files with CORS headers, otherwise some functions of Mastodon
 {{< /hint >}}
 
 {{< hint style="danger" >}}
-In any case, your S3 bucket must be configured so that -- ACL configuration nonwithstanding -- all objects are publicly readable but neither writable or listable, while Mastodon itself can write to it. The configuration should be similar for all S3 providers, but common ones have been highlighted below.
+Regardless of the ACL configuration, your S3 bucket must be set up to ensure that all objects are publicly readable but not writable or listable. At the same time, Mastodon itself should have write access to the bucket. This configuration is generally consistent across all S3 providers, and common ones are highlighted below.
 {{< /hint >}}
 
 ### MinIO
 
-MinIO is an open source implementation of an S3 object provider. This section does not cover how to install it, but how to configure a bucket for use in Mastodon.
+MinIO is an open-source implementation of an S3 object provider. This section does not cover how to install it, but how to configure a bucket for use in Mastodon.
 
 You need to set a policy for anonymous access that allows read-only access to objects contained by the bucket without allowing listing them.
-To do this, you need to set a custom policy (replace `mastodata` by the actual name of your S3 bucket):
+To do this, you need to set a custom policy (replace `mastodata` with the actual name of your S3 bucket):
 
 ```json
 {
     "Version": "2012-10-17",
@@ -72,7 +72,7 @@ To do this, you need to set a custom policy (replace `mastodata` by the actual n
 }
 ```
 
-Mastodon itself needs to be able to write to the bucket, so either use your admin MinIO account (discouraged) or an account specific to Mastodon (recommended) with the following policy attached (replace `mastodata` by the actual name of your S3 bucket):
+Mastodon itself needs to be able to write to the bucket, so either use your admin MinIO account (discouraged) or an account specific to Mastodon (recommended) with the following policy attached (replace `mastodata` with the actual name of your S3 bucket):
 
 ```json
 {
     "Version": "2012-10-17",
@@ -93,7 +93,7 @@ You can set those policies from the MinIO Console (web-based user interface) or
 
 Connect to the MinIO Console web interface and create a new bucket (or navigate to your existing bucket):
 ![](/assets/object-storage/minio-bucket.png)
 
-Then, configure the “Access Policy” to a custom one that allows read access (`s3:GetObject`) without write access or ability to list objects (see above):
+Then, configure the “Access Policy” to a custom one that allows read access (`s3:GetObject`) without write access or the ability to list objects (see above):
 ![](/assets/object-storage/minio-access-policy.png)
 
 {{< hint style="info" >}}
@@ -108,7 +108,7 @@ Finally, create a new `mastodon` user with the `mastodon-readwrite` policy:
 
 #### Using the command-line utility
 
-The same can be achieved using the [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) command-line utility (can be called `mc` or `mcli` depending on where it is installed from).
+The same can be achieved using the [MinIO Client](https://min.io/docs/minio/linux/reference/minio-mc.html) command-line utility (which can be called `mc` or `mcli` depending on where it is installed from).
 
 Create a new bucket:
 
 `mc mb myminio/mastodata`
@@ -183,7 +183,7 @@ In your DigitalOcean Spaces Bucket, make sure that “File Listing” is “Rest
 
 ### Scaleway
 
-If you want to use Scaleway Object Storage, we strongly recommend you to create a Scaleway project dedicaced to your Mastodon instance assets and to use a custom IAM policy.
+If you want to use Scaleway Object Storage, we strongly recommend you create a Scaleway project dedicated to your Mastodon instance assets and use a custom IAM policy.
 
 First, create a new Scaleway project, in which you create your object storage bucket. You need to set your bucket visibility to "Private" to not allow objects to be listed.
 
@@ -191,7 +191,7 @@ First, create a new Scaleway project, in which you create your object storage bu
 
 Now that your bucket is created, you need to create API keys to be used in your Mastodon instance configuration.
 
-Head to the IAM settings (in your organisation menu, top right of the screen), and create a new IAM policy (eg `mastodon-media-access`)
+Head to the IAM settings (in your organization menu, top right of the screen), and create a new IAM policy (e.g. `mastodon-media-access`)
 
 ![](/assets/object-storage/scaleway-policy.jpg)
 
@@ -199,7 +199,7 @@ This policy needs to have one rule, allowing it to read, write and delete object
 
 ![](/assets/object-storage/scaleway-policy-rules.jpg)
 
-Then head to the IAM Applications page, and a create a new one (eg `my-mastodon-instance`) and select the policy you created above.
+Then head to the IAM Applications page, and create a new one (e.g. `my-mastodon-instance`) and select the policy you created above.
 
 Finally, click on the application you just created, then "API Keys", and create a new API key to use in your instance configuration. You should use the "Yes, set up preferred Project" option and select the project you created above as the default project for this key.
@@ -225,7 +225,7 @@ Cloudflare R2 does not support ACLs, so Mastodon needs to be instructed not to t
 
 Without support for ACLs, media files from temporarily-suspended users will remain accessible.
 {{< /hint >}}
 
-To get credentials for use in Mastodon, selecte “Manage R2 API Tokens” and create a new API token with “Edit” permissions.
+To get credentials for use in Mastodon, select “Manage R2 API Tokens” and create a new API token with “Edit” permissions.
 
 {{< hint style="warning" >}}
 This section is currently under construction.
diff --git a/content/en/admin/optional/tor.md b/content/en/admin/optional/tor.md
index 20030bd8..d4e2efc6 100644
--- a/content/en/admin/optional/tor.md
+++ b/content/en/admin/optional/tor.md
@@ -138,9 +138,9 @@ server {
 
 Replace the long hash provided here with your Tor domain located in the file at `/var/lib/tor/hidden_service/hostname`.
 
-Note that the onion hostname has been prefixed with “mastodon.”. Your Tor address acts a wildcard domain. All subdomains will be routed through, and you can configure Nginx to respond to any subdomain you wish. If you do not wish to host any other services on your Tor address you can omit the subdomain, or choose a different subdomain.
+Note that the onion hostname has been prefixed with “mastodon.”. Your Tor address acts as a wildcard domain. All subdomains will be routed through, and you can configure Nginx to respond to any subdomain you wish. If you do not wish to host any other services on your Tor address, you can omit the subdomain or choose a different subdomain.
 
-Here you can see the payoff of moving your mastodon configurations to a different file. Without this all of your configurations would have to be copied to both places. Any change to your configuration would have to be made both places.
+Here you can see the payoff of moving your Mastodon configurations to a different file. Without this, all of your configurations would have to be copied to both places. Any change to your configuration would have to be made in both places.
 
 Restart your web server.
diff --git a/content/en/admin/prerequisites.md b/content/en/admin/prerequisites.md
index d2457599..691d53b0 100644
--- a/content/en/admin/prerequisites.md
+++ b/content/en/admin/prerequisites.md
@@ -10,7 +10,7 @@ If you are setting up a fresh machine, it is recommended that you secure it firs
 
 ## Do not allow password-based SSH login (keys only)
 
-First make sure you are actually logging in to the server using keys and not via a password, otherwise this will lock you out. Many hosting providers support uploading a public key and automatically set up key-based root login on new machines for you.
+First, make sure you are actually logging in to the server using keys and not via a password; otherwise, this will lock you out. Many hosting providers support uploading a public key and automatically set up key-based root login on new machines for you.
 
 Edit `/etc/ssh/sshd_config` and find `PasswordAuthentication`. Make sure it’s uncommented and set to `no`. If you made any changes, restart sshd:
 
@@ -42,13 +42,10 @@ sendername = Fail2Ban
 
 [sshd]
 enabled = true
 port = 22
-
-[sshd-ddos]
-enabled = true
-port = 22
+mode = aggressive
 ```
 
-Finally restart fail2ban:
+Finally, restart fail2ban:
 
 ```bash
 systemctl restart fail2ban
@@ -56,7 +53,7 @@ systemctl restart fail2ban
 
 ## Install a firewall and only allow SSH, HTTP and HTTPS ports
 
-First, install iptables-persistent. During installation it will ask you if you want to keep current rules–decline.
+First, install iptables-persistent. During installation, it will ask you if you want to keep the current rules–decline.
 ```bash
 apt install -y iptables-persistent
 ```
 
@@ -80,6 +77,8 @@ Edit `/etc/iptables/rules.v4` and put this inside:
 # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
 -A INPUT -p tcp --dport 80 -j ACCEPT
 -A INPUT -p tcp --dport 443 -j ACCEPT
+# (optional) Allow HTTP/3 connections from anywhere.
+-A INPUT -p udp --dport 443 -j ACCEPT
 
 # Allow SSH connections
 # The -dport number should be the same port number you set in sshd_config
@@ -124,6 +123,8 @@ If your server is also reachable over IPv6, edit `/etc/iptables/rules.v6` and ad
 # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
 -A INPUT -p tcp --dport 80 -j ACCEPT
 -A INPUT -p tcp --dport 443 -j ACCEPT
+# (optional) Allow HTTP/3 connections from anywhere.
+-A INPUT -p udp --dport 443 -j ACCEPT
 
 # Allow SSH connections
 # The -dport number should be the same port number you set in sshd_config
diff --git a/content/en/admin/roles.md b/content/en/admin/roles.md
new file mode 100644
index 00000000..079ebdfb
--- /dev/null
+++ b/content/en/admin/roles.md
@@ -0,0 +1,98 @@
+---
+title: Roles
+description: Management of roles from the admin dashboard.
+menu:
+  docs:
+    parent: admin
+---
+
+# Roles {#roles}
+When the database is seeded, roles are derived from the values present in [`~/config/roles.yml`](https://github.com/mastodon/mastodon/blob/main/config/roles.yml).
+
+{{< page-ref page="entities/Role" >}}
+
+The resultant [default roles](#default-roles) are `Owner`, `Admin`, and `Moderator`.
+
+A role and its attributes can be created using [Add role](#add-role), present on the *Roles* (`/admin/roles`) page.
+
+![](/assets/admin-roles-ui.png)
+
+An existing role's attributes can be changed using the [edit role](#edit-role) feature.
+
+## Default roles {#default-roles}
+### Base role (*Default permissions*) {#default-base-role}
+
+Affects all users, including users without an assigned role.
+The only permission flag that can be altered for this role is **Invite Users**. Enabling this permission allows all users to send invitations.
+
+The base role has a priority of `0`, and this value cannot be altered.
+
+### Owner {#default-owner-role}
+
+A role that is assigned the **Administrator** permission flag, bypassing all permissions. Users with the owner role have every [permission flag](/entities/Role/#permission-flags) enabled.
+
+The role's *Name*, *Badge color*, and *Display badge* attributes can be changed. No permissions can be edited or revoked from this role.
+
+The owner role has the highest [priority](#role-priority) of any role (`1000`). The owner can modify any other role's attributes. No role can be created which supersedes the owner role, as [role priority](#role-priority) for new and existing roles must be <= `999`.
+
+### Admin {#default-admin-role}
+
+A role that is assigned all **Moderation** and **Administration** permission flags.
+
+The **DevOps** permission flag for this role is disabled, but can be enabled by an **Owner** (or a custom role with a higher priority value).
+
+The role's *Name*, *Badge color*, and *Display badge* attributes can be changed.
+
+The admin role has a priority of `100`.
+
+### Moderator {#default-moderator-role}
+
+A role that is assigned certain **Moderation** permission flags. These include:
+- **View Dashboard**
+- **View Audit Log**
+- **Manage Users**
+- **Manage Reports**
+- **Manage Taxonomies**
+
+The role's *Name*, *Badge color*, and *Display badge* attributes can be changed.
+
+The moderator role has a priority of `10`.
+
+## Add Role {#add-role}
+
+The `admin/roles/new` page allows for the creation of a custom role.
+
+![](/assets/admin-roles-new-ui.png)
+
+### Input Fields {#add-role-input-fields}
+
+{{< page-relref ref="entities/Role#name" caption="Name">}}
+
+Duplicate role names can exist. They are discerned in the database by their `id`, which cannot be set from the web interface.
+{{< page-relref ref="entities/Role#color" caption="Badge color">}}
+
+### Priority {#role-priority}
+
+- Defaults to `0`
+  - Cannot be > `999`
+  - Can be any negative integer value
+- Two roles can have the same priority value
+
+> "Higher role decides conflict resolution in certain situations. Certain actions can only be performed on roles with a lower priority."
+
+{{< page-relref ref="entities/Role#highlighted" caption="Display role as badge on user profiles">}}
+
+{{< page-relref ref="entities/Role#permissions" caption="Permissions">}}
+
+
+## Edit role {#edit-role}
+
+![](/assets/admin-roles-edit-ui.png)
+
+An existing role and its attributes can be edited using *Edit* in the role list. [Input fields](#add-role-input-fields) can be changed and saved, just as they can when creating a new role. The role can also be deleted using this form.
+
+![](/assets/admin-roles-edit-role-ui.png)
+
+A logged-in user with permission to **Manage Roles** will always be able to see every role, but cannot modify roles that exceed or are equal to their assigned role's [priority](#role-priority).
\ No newline at end of file
diff --git a/content/en/admin/scaling.md b/content/en/admin/scaling.md
index 97f09f82..434fa1f6 100644
--- a/content/en/admin/scaling.md
+++ b/content/en/admin/scaling.md
@@ -22,7 +22,7 @@ The web process serves short-lived HTTP requests for most of the application. Th
 
 - `WEB_CONCURRENCY` controls the number of worker processes
 - `MAX_THREADS` controls the number of threads per process
 
-Threads share the memory of their parent process. Different processes allocate their own memory, though they share some memory via copy-on-write. A larger number of threads maxes out your CPU first, a larger number of processes maxes out your RAM first.
+Threads share the memory of their parent process. Different processes allocate their own memory, though they share some memory via copy-on-write. A larger number of threads maxes out your CPU first, and a larger number of processes maxes out your RAM first.
 
 These values affect how many HTTP requests can be served at the same time.
 
@@ -34,9 +34,9 @@ The streaming API handles long-lived HTTP and WebSockets connections, through wh
 
 - `STREAMING_API_BASE_URL` controls the base URL of the streaming API
 - `PORT` controls the port the streaming server will listen on, by default 4000. The `BIND` and `SOCKET` environment variables are also able to be used.
-- Additionally the shared [database](/admin/config#postgresql) and [redis](/admin/config#redis) environment variables are used.
+- Additionally, the shared [database](/admin/config#postgresql) and [redis](/admin/config#redis) environment variables are used.
 
-The streaming API can be use a different subdomain if you want to by setting `STREAMING_API_BASE_URL`, this allows you to have one load balancer for streaming and one for web/API requests. However, this also requires applications to correctly request the streaming URL from the [instance endpoint](/methods/instance/#v2), instead of assuming that it's hosted on the same host as the Web API.
+The streaming API can use a different subdomain if you want by setting `STREAMING_API_BASE_URL`. This allows you to have one load balancer for streaming and one for web/API requests. However, this also requires applications to correctly request the streaming URL from the [instance endpoint](/methods/instance/#v2), instead of assuming that it's hosted on the same host as the Web API.
 
 One process of the streaming server can handle a reasonably high number of connections and throughput, but if you find that a single process isn't handling your instance's load, you can run multiple processes by varying the `PORT` number of each, and then using nginx to load balance traffic to each of those instances.
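As a sketch of that load-balancing arrangement (the port numbers, process count, and the `streaming` upstream name are illustrative assumptions, not taken from the official configuration):

```nginx
# Three streaming processes, started with PORT=4000, 4001 and 4002.
upstream streaming {
    least_conn;
    server 127.0.0.1:4000 fail_timeout=0;
    server 127.0.0.1:4001 fail_timeout=0;
    server 127.0.0.1:4002 fail_timeout=0;
}

# Inside the existing server block: pass streaming traffic to the pool;
# the Upgrade headers keep WebSocket connections working.
location /api/v1/streaming {
    proxy_pass http://streaming;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```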
 For example, a community of about 50,000 accounts with 10,000-20,000 monthly active accounts, you'll typically have an average concurrent load of about 800-1200 streaming connections.
@@ -88,13 +88,13 @@ Many tasks in Mastodon are delegated to background processing to ensure the HTTP
 
 While the amount of threads in the web process affects the responsiveness of the Mastodon instance to the end-user, the amount of threads allocated to background processing affects how quickly posts can be delivered from the author to anyone else, how soon e-mails are sent out, etc.
 
-The amount of threads is not controlled by an environment variable in this case, but a command line argument in the invocation of Sidekiq, e.g.:
+The number of threads is not regulated by an environment variable, but rather through a command line argument when invoking Sidekiq, as shown in the following example:
 
 ```bash
 bundle exec sidekiq -c 15
 ```
 
-Would start the sidekiq process with 15 threads. Please mind that each threads needs to be able to connect to the database, which means that the database pool needs to be large enough to support all the threads. The database pool size is controlled with the `DB_POOL` environment variable and must be at least the same as the number of threads.
+This would initiate the Sidekiq process with 15 threads. It's important to note that each thread requires a database connection, necessitating a sufficiently large database pool. The size of this pool is managed by the `DB_POOL` environment variable, which should be set to a value at least equal to the number of threads.
 
 #### Queues {#sidekiq-queues}
 
@@ -117,19 +117,19 @@ bundle exec sidekiq -q default
 
 To run just the `default` queue.
 
-The way Sidekiq works with queues, it first checks for tasks from the first queue, and if there are none, checks the next queue. This means, if the first queue is overfilled, the other queues will lag behind.
+Sidekiq processes queues by first checking for tasks in the first queue, and if it finds none, it then checks the subsequent queue. Consequently, if the first queue is overfilled, tasks in the other queues may experience delays.
 
 As a solution, it is possible to start different Sidekiq processes for the queues to ensure truly parallel execution, by e.g. creating multiple systemd services for Sidekiq with different arguments.
 
 **Make sure you only have one `scheduler` queue running!!**
 
-## Transaction pooling with pgBouncer {#pgbouncer}
+## Transaction pooling with PgBouncer {#pgbouncer}
 
 ### Why you might need PgBouncer {#pgbouncer-why}
 
-If you start running out of available Postgres connections (the default is 100) then you may find PgBouncer to be a good solution. This document describes some common gotchas as well as good configuration defaults for Mastodon.
+If you start running out of available PostgreSQL connections (the default is 100), then you may find PgBouncer to be a good solution. This document describes some common gotchas as well as good configuration defaults for Mastodon.
 
-Note that you can check “PgHero” in the administration view to see how many Postgres connections are currently being used. Typically Mastodon uses as many connections as there are threads both in Puma, Sidekiq and the streaming API combined.
+User roles with `DevOps` permissions in Mastodon can monitor the current usage of PostgreSQL connections through the PgHero link in the Administration view. Generally, the number of connections open is equal to the total threads in Puma, Sidekiq, and the streaming API combined.
 
 ### Installing PgBouncer {#pgbouncer-install}
 
@@ -143,7 +143,7 @@ sudo apt install pgbouncer
 ```
 
 #### Setting a password {#pgbouncer-password}
 
-First off, if your `mastodon` user in Postgres is set up without a password, you will need to set a password.
+First off, if your `mastodon` user in PostgreSQL is set up without a password, you will need to set a password.
 Here’s how you might reset the password:
 
@@ -187,20 +187,20 @@ You’ll also want to create a `pgbouncer` admin user to log in to the PgBouncer
 "pgbouncer" "md5a45753afaca0db833a6f7c7b2864b9d9"
 ```
 
-In both cases the password is just `password`.
+In both cases, the password is just `password`.
 
 #### Configuring pgbouncer.ini {#pgbouncer-ini}
 
 Edit `/etc/pgbouncer/pgbouncer.ini`
 
-Add a line under `[databases]` listing the Postgres databases you want to connect to. Here we’ll just have PgBouncer use the same username/password and database name to connect to the underlying Postgres database:
+Add a line under `[databases]` listing the PostgreSQL databases you want to connect to. Here we’ll just have PgBouncer use the same username/password and database name to connect to the underlying PostgreSQL database:
 
 ```text
 [databases]
 mastodon_production = host=127.0.0.1 port=5432 dbname=mastodon_production user=mastodon password=password
 ```
 
-The `listen_addr` and `listen_port` tells PgBouncer which address/port to accept connections. The defaults are fine:
+The `listen_addr` and `listen_port` tell PgBouncer which address/port to accept connections on. The defaults are fine:
 
 ```text
 listen_addr = 127.0.0.1
@@ -219,13 +219,13 @@ Make sure the `pgbouncer` user is an admin:
 
 admin_users = pgbouncer
 ```
 
-**This next part is very important!** The default pooling mode is session-based, but for Mastodon we want transaction-based. In other words, a Postgres connection is created when a transaction is created and dropped when the transaction is done. So you’ll want to change the `pool_mode` from `session` to `transaction`:
+Mastodon requires a different pooling mode than the default session-based one. Specifically, it needs a transaction-based pooling mode. This means that a PostgreSQL connection is established at the start of a transaction and terminated upon its completion. Therefore, it's essential to change the `pool_mode` setting from `session` to `transaction`:
 
 ```ini
 pool_mode = transaction
 ```
 
-Next up, `max_client_conn` defines how many connections PgBouncer itself will accept, and `default_pool_size` puts a limit on how many Postgres connections will be opened under the hood. (In PgHero the number of connections reported will correspond to `default_pool_size` because it has no knowledge of PgBouncer.)
+Next up, `max_client_conn` defines how many connections PgBouncer itself will accept, and `default_pool_size` puts a limit on how many PostgreSQL connections will be opened under the hood. (In PgHero the number of connections reported will correspond to `default_pool_size` because it has no knowledge of PgBouncer.)
 
 The defaults are fine to start, and you can always increase them later:
 
 ```text
 max_client_conn = 100
 default_pool_size = 20
 ```
 
-Don’t forget to reload or restart pgbouncer after making your changes:
+Don’t forget to reload or restart PgBouncer after making your changes:
 
 ```bash
 sudo systemctl reload pgbouncer
@@ -242,13 +242,13 @@ sudo systemctl reload pgbouncer
 ```
 
 #### Debugging that it all works {#pgbouncer-debug}
 
-You should be able to connect to PgBouncer just like you would with Postgres:
+You should be able to connect to PgBouncer just like you would with PostgreSQL:
 
 ```bash
 psql -p 6432 -U mastodon mastodon_production
 ```
 
-And then use your password to log in.
+Then use your password to log in.
 
 You can also check the PgBouncer logs like so:
 
@@ -266,7 +266,7 @@ PREPARED_STATEMENTS=false
 ```
 
 Since we’re using transaction-based pooling, we can’t use prepared statements.
-Next up, configure Mastodon to use port 6432 (PgBouncer) instead of 5432 (Postgres) and you should be good to go: +Next up, configure Mastodon to use port 6432 (PgBouncer) instead of 5432 (PostgreSQL) and you should be good to go: ```bash DB_HOST=localhost @@ -277,7 +277,7 @@ DB_PORT=6432 ``` {{< hint style="warning" >}} -You cannot use pgBouncer to perform `db:migrate` tasks. But this is easy to work around. If your postgres and pgbouncer are on the same host, it can be as simple as defining `DB_PORT=5432` together with `RAILS_ENV=production` when calling the task, for example: `RAILS_ENV=production DB_PORT=5432 bundle exec rails db:migrate` (you can specify `DB_HOST` too if it’s different, etc) +You cannot use PgBouncer to perform `db:migrate` tasks. But this is easy to work around. If your PostgreSQL and PgBouncer are on the same host, it can be as simple as defining `DB_PORT=5432` together with `RAILS_ENV=production` when calling the task, for example: `RAILS_ENV=production DB_PORT=5432 bundle exec rails db:migrate` (you can specify `DB_HOST` too if it’s different, etc) {{< /hint >}} #### Administering PgBouncer {#pgbouncer-admin} @@ -304,16 +304,81 @@ Then use `\q` to quit. ## Separate Redis for cache {#redis} -Redis is used widely throughout the application, but some uses are more important than others. Home feeds, list feeds, and Sidekiq queues as well as the streaming API are backed by Redis and that’s important data you wouldn’t want to lose (even though the loss can be survived, unlike the loss of the PostgreSQL database - never lose that!). However, Redis is also used for volatile cache. If you are at a stage of scaling up where you are worried if your Redis can handle everything, you can use a different Redis database for the cache. In the environment, you can specify `CACHE_REDIS_URL` or individual parts like `CACHE_REDIS_HOST`, `CACHE_REDIS_PORT` etc. Unspecified parts fallback to the same values as without the cache prefix. 
+Redis is used widely throughout the application, but some uses are more important than others. Home feeds, list feeds, and Sidekiq queues as well as the streaming API are backed by Redis and that’s important data you wouldn’t want to lose (even though the loss can be survived, unlike the loss of the PostgreSQL database - never lose that!). However, Redis is also used for volatile cache. If you are at a stage of scaling up where you are worried about whether your Redis can handle everything, you can use a different Redis database for the cache. In the environment, you can specify `CACHE_REDIS_URL` or individual parts like `CACHE_REDIS_HOST`, `CACHE_REDIS_PORT` etc. Unspecified parts fallback to the same values as without the cache prefix. -As far as configuring the Redis database goes, basically you can get rid of background saving to disk, since it doesn’t matter if the data gets lost on restart and you can save some disk I/O on that. You can also add a maximum memory limit and a key eviction policy, for that, see this guide: [Using Redis as an LRU cache](https://redis.io/topics/lru-cache) +Additionally, Redis is used for volatile caching. If you're scaling up and concerned about Redis's capacity to handle the load, you can allocate a separate Redis database specifically for caching. To do this, set `CACHE_REDIS_URL` in the environment, or define individual components such as `CACHE_REDIS_HOST`, `CACHE_REDIS_PORT`, etc. + +Unspecified components will default to their values without the cache prefix. + +When configuring the Redis database for caching, it's possible to disable background saving to disk, as data loss on restart is not critical in this context, and this can save some disk I/O. Additionally, consider setting a maximum memory limit and implementing a key eviction policy. 
For more details on these configurations, refer to this guide: [Using Redis as an LRU cache](https://redis.io/topics/lru-cache)
+
+## Separate Redis for Sidekiq {#redis-sidekiq}
+
+Redis is used in Sidekiq to keep track of its locks and queue. Although in general the performance gain is not that big, some instances may benefit from having a separate Redis instance for Sidekiq.
+
+In the environment file, you can specify `SIDEKIQ_REDIS_URL` or individual parts like `SIDEKIQ_REDIS_HOST`, `SIDEKIQ_REDIS_PORT` etc. Unspecified parts fall back to the same values as without the `SIDEKIQ_` prefix.
+
+Creating a separate Redis instance for Sidekiq is relatively simple:
+
+Start by making a copy of the default Redis systemd service:
+```bash
+cp /etc/systemd/system/redis.service /etc/systemd/system/redis-sidekiq.service
+```
+
+In the `redis-sidekiq.service` file, change the following values:
+```bash
+ExecStart=/usr/bin/redis-server /etc/redis/redis-sidekiq.conf --supervised systemd --daemonize no
+PIDFile=/run/redis/redis-server-sidekiq.pid
+ReadWritePaths=-/var/lib/redis-sidekiq
+Alias=redis-sidekiq.service
+```
+
+Make a copy of the Redis configuration file for the new Sidekiq Redis instance:
+
+```bash
+cp /etc/redis/redis.conf /etc/redis/redis-sidekiq.conf
+```
+
+In this `redis-sidekiq.conf` file, change the following values:
+```bash
+port 6479
+pidfile /var/run/redis/redis-server-sidekiq.pid
+logfile /var/log/redis/redis-server-sidekiq.log
+dir /var/lib/redis-sidekiq
+```
+
+Before starting the new Redis instance, create a data directory:
+
+```bash
+mkdir /var/lib/redis-sidekiq
+chown redis /var/lib/redis-sidekiq
+```
+
+Start the new Redis instance:
+
+```bash
+systemctl enable --now redis-sidekiq
+```
+
+Update your environment by adding the following line:
+
+```bash
+SIDEKIQ_REDIS_URL=redis://127.0.0.1:6479/
+```
+
+Restart Mastodon to use the new Redis instance, making sure to restart both web and Sidekiq (otherwise, one of them will still be working from the wrong 
instance):
+
+```bash
+systemctl restart mastodon-web.service
+systemctl restart mastodon-sidekiq.service
+```
 
 ## Read-replicas {#read-replicas}
 
-To reduce the load on your Postgresql server, you may wish to setup hot streaming replication (read replica). [See this guide for an example](https://cloud.google.com/community/tutorials/setting-up-postgres-hot-standby). You can make use of the replica in Mastodon in these ways:
+To reduce the load on your PostgreSQL server, you may wish to set up hot streaming replication (read replica). [See this guide for an example](https://cloud.google.com/community/tutorials/setting-up-postgres-hot-standby). You can make use of the replica in Mastodon in these ways:
 
-- The streaming API server does not issue writes at all, so you can connect it straight to the replica. But it’s not querying the database very often anyway so the impact of this is little.
-- Use the Makara driver in the web processes, so that writes go to the primary database, while reads go to the replica. Let’s talk about that.
+* The streaming API server does not issue writes at all, so you can connect it straight to the replica (it is not querying the database very often anyway, so the impact of this is small).
+* Use the Makara driver in the web processes, so that writes go to the primary database, while reads go to the replica. Let’s talk about that.
 
 {{< hint style="warning" >}}
 Read replicas are currently not supported for the Sidekiq processes, and using them will lead to failing jobs and data loss.
@@ -337,8 +402,25 @@ production:
     url: postgresql://db_user:db_password@db_host:db_port/db_name
 ```
 
-Make sure the URLs point to wherever your PostgreSQL servers are. You can add multiple replicas. You could have a locally installed pgBouncer with configuration to connect to two different servers based on database name, e.g. 
“mastodon” going to the primary, “mastodon_replica” going to the replica, so in the file above both URLs would point to the local pgBouncer with the same user, password, host and port, but different database name. There are many possibilities how this could be setup! For more information on Makara, [see their documentation](https://github.com/taskrabbit/makara#databaseyml).
+Make sure the URLs point to wherever your PostgreSQL servers are. You can add multiple replicas. You could have a locally installed PgBouncer with a configuration to connect to two different servers based on the database name, e.g. “mastodon” going to the primary, “mastodon_replica” going to the replica, so in the file above both URLs would point to the local PgBouncer with the same user, password, host and port, but different database name. There are many possibilities for how this could be set up! For more information on Makara, [see their documentation](https://github.com/taskrabbit/makara#databaseyml).
 
 {{< hint style="warning" >}}
 Make sure the sidekiq processes run with the stock `config/database.yml` to avoid failing jobs and data loss!
 {{< /hint >}}
+
+## Using a web load balancer
+
+Cloud providers like DigitalOcean, AWS, Hetzner, etc., offer virtual load balancing solutions that distribute network traffic across multiple servers, but provide a single public IP address.
+
+Scaling your deployment to provision multiple web/Puma servers behind one of these virtual load balancers can help provide more consistent performance by reducing the risk that a single server may become overwhelmed by user traffic, and decrease downtime when performing maintenance or upgrades. You should consult your provider's documentation on how to set up and configure a load balancer, but bear in mind that you need to configure your load balancer to monitor the health of the backend web/Puma nodes; otherwise, you may send traffic to a service that is not responsive. 
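+Whichever load balancer you use, its health probe boils down to an HTTP `GET` that expects a `200` response with the body `OK`. The sketch below illustrates that check against a local stub server standing in for a Mastodon web node (the stub and the `is_healthy` helper are illustrative, not part of Mastodon):
+
+```python
+import http.server
+import threading
+import urllib.request
+
+class StubHealth(http.server.BaseHTTPRequestHandler):
+    """Stand-in for a Mastodon web node's health endpoint."""
+    def do_GET(self):
+        if self.path == "/health":
+            body = b"OK"
+            self.send_response(200)
+            self.send_header("Content-Length", str(len(body)))
+            self.end_headers()
+            self.wfile.write(body)
+        else:
+            self.send_error(404)
+
+    def log_message(self, *args):  # keep the demo quiet
+        pass
+
+server = http.server.HTTPServer(("127.0.0.1", 0), StubHealth)
+threading.Thread(target=server.serve_forever, daemon=True).start()
+base = f"http://127.0.0.1:{server.server_address[1]}"
+
+def is_healthy(url):
+    # A node counts as healthy only on HTTP 200 with body "OK".
+    try:
+        with urllib.request.urlopen(url, timeout=5) as resp:
+            return resp.status == 200 and resp.read().strip() == b"OK"
+    except OSError:
+        return False
+
+healthy = is_healthy(base + "/health")
+unhealthy = is_healthy(base + "/missing")
+print(healthy, unhealthy)  # True False
+server.shutdown()
+```
+
+A real probe would target the health endpoints exposed by each backend node rather than a stub.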
+ +The following endpoints are available to monitor for this purpose: + +- **Web/Puma:** `/health` +- **Streaming API:** `/api/v1/streaming/health` + +These endpoints should both return an HTTP status code of 200, and the text `OK` as a result. + +{{< hint style="info" >}} +You can also use these endpoints for health checks with a third-party monitoring/alerting utility. +{{< /hint >}} \ No newline at end of file diff --git a/content/en/admin/tootctl.md b/content/en/admin/tootctl.md index cddc70a8..3ae1285e 100644 --- a/content/en/admin/tootctl.md +++ b/content/en/admin/tootctl.md @@ -32,7 +32,7 @@ RAILS_ENV=production bin/tootctl help Erase this server from the federation by broadcasting account Delete activities to all known other servers. This allows a "clean exit" from running a Mastodon server, as it leaves next to no cache behind on other servers. This command is always interactive and requires confirmation twice. -No local data is actually deleted, because emptying the database or deleting the entire VPS is faster. If you run this command then continue to operate the instance anyway, then there will be a state mismatch that might lead to glitches and issues with federation. +No local data is actually deleted because emptying the database or deleting the entire VPS is faster. If you run this command and then continue to operate the instance anyway, then there will be a state mismatch that might lead to glitches and issues with federation. {{< hint style="danger" >}} **Make sure you know exactly what you are doing before running this command.** This operation is NOT reversible, and it can take a long time. The server will be in a BROKEN STATE after this command finishes. A running Sidekiq process is required, so do not shut down the server until the queues are fully cleared. @@ -86,7 +86,7 @@ Generate and broadcast new RSA keys, as part of security maintenance. 
### `tootctl accounts create` {#accounts-create} -Create a new user account with given `USERNAME` and provided `--email`. +Create a new user account with the given `USERNAME` and provided `--email`. `USERNAME` : Local username for the new account. {{Hola mundo
", - "detected_source_language": "en", + "content": "Hello world
",
+  "spoiler_text": "Greetings ahead",
+  "media_attachments": [
+    {
+      "id": 22345792,
+      "description": "Status author waving at the camera"
+    }
+  ],
+  "poll": null,
+  "detected_source_language": "es",
   "provider": "DeepL.com"
 }
 ```
 
+Translation of a status with a poll:
+```json
+{
+  "content": "<p>Should I stay or should I go?</p>
",
+  "spoiler_text": "",
+  "media_attachments": [],
+  "poll": [
+    {
+      "id": 34858,
+      "options": [
+        {
+          "title": "Stay"
+        },
+        {
+          "title": "Go"
+        }
+      ]
+    }
+  ],
+  "detected_source_language": "ja",
+  "provider": "DeepL.com"
+}
+```
+
+
 ## Attributes
 
 ### `content` {#content}
 
-**Description:** The translated text of the status.\
+**Description:** HTML-encoded translated content of the status.\
 **Type:** String (HTML)\
 **Version history:**\
 4.0.0 - added
 
+### `spoiler_text` {#spoiler_text}
+
+**Description:** The translated spoiler warning of the status.\
+**Type:** String\
+**Version history:**\
+4.2.0 - added
+
+### `poll` {#poll}
+
+**Description:** The translated poll options of the status.\
+**Type:** Array\
+**Version history:**\
+4.2.0 - added
+
+### `media_attachments` {#media_attachments}
+
+**Description:** The translated media descriptions of the status.\
+**Type:** Array\
+**Version history:**\
+4.2.0 - added
+
 ### `detected_source_language` {#detected_source_language}
 
 **Description:** The language of the source text, as auto-detected by the machine translation provider.\
diff --git a/content/en/methods/accounts.md b/content/en/methods/accounts.md
index abb1bb30..9882005f 100644
--- a/content/en/methods/accounts.md
+++ b/content/en/methods/accounts.md
@@ -313,7 +313,10 @@ Update the user's display and preferences.
 1.1.1 - added\
 2.3.0 - added `locked` parameter\
 2.4.0 - added `source[privacy,sensitive]` parameters\
-2.7.0 - added `discoverable` parameter
+2.4.2 - added `source[language]` parameter\
+2.7.0 - added `discoverable` parameter\
+4.1.0 - added `hide_collections` parameter\
+4.2.0 - added `indexable` parameter
 
 #### Request
 
@@ -345,6 +348,12 @@ bot
 discoverable
 : Boolean. Whether the account should be shown in the profile directory.
 
+hide_collections
+: Boolean. Whether to hide followers and followed accounts.
+
+indexable
+: Boolean. Whether public posts should be searchable by anyone.
+
 fields_attributes
 : Hash. 
The profile fields to be set. Inside this hash, the key is an integer cast to a string (although the exact integer does not matter), and the value is another hash including `name` and `value`. By default, max 4 fields. diff --git a/content/en/methods/admin/accounts.md b/content/en/methods/admin/accounts.md index 643a3606..e53c73ed 100644 --- a/content/en/methods/admin/accounts.md +++ b/content/en/methods/admin/accounts.md @@ -3,7 +3,7 @@ title: admin/accounts API methods description: Perform moderation actions with accounts. menu: docs: - name: admin/accounts + name: accounts parent: methods-admin identifier: methods-admin-accounts aliases: [ diff --git a/content/en/methods/admin/domain_blocks.md b/content/en/methods/admin/domain_blocks.md index 6034023e..f20ba884 100644 --- a/content/en/methods/admin/domain_blocks.md +++ b/content/en/methods/admin/domain_blocks.md @@ -3,7 +3,7 @@ title: admin/domain_blocks API methods description: Disallow certain domains to federate. menu: docs: - name: admin/domain_blocks + name: domain_blocks parent: methods-admin identifier: methods-admin-domain_blocks aliases: [ diff --git a/content/en/methods/admin/reports.md b/content/en/methods/admin/reports.md index f174ea80..669a5036 100644 --- a/content/en/methods/admin/reports.md +++ b/content/en/methods/admin/reports.md @@ -3,7 +3,7 @@ title: admin/reports API methods description: Perform moderation actions with reports. 
menu: docs: - name: admin/reports + name: reports parent: methods-admin identifier: methods-admin-reports aliases: [ diff --git a/content/en/methods/admin/trends.md b/content/en/methods/admin/trends.md index 868bad58..957fd45c 100644 --- a/content/en/methods/admin/trends.md +++ b/content/en/methods/admin/trends.md @@ -3,7 +3,7 @@ title: admin/trends API methods description: TODO menu: docs: - name: admin/trends + name: trends parent: methods-admin identifier: methods-admin-trends aliases: [ diff --git a/content/en/methods/apps.md b/content/en/methods/apps.md index 1abce816..d1dd5b26 100644 --- a/content/en/methods/apps.md +++ b/content/en/methods/apps.md @@ -84,7 +84,7 @@ GET /api/v1/apps/verify_credentials HTTP/1.1 Confirm that the app's OAuth2 credentials work. **Returns:** [Application]({{< relref "entities/application" >}}), but without `client_id` or `client_secret`\ -**OAuth level:** App token\ +**OAuth level:** App token + `read`\ **Version history:**\ 2.0.0 - added\ 2.7.2 - now returns `vapid_key` @@ -94,7 +94,7 @@ Confirm that the app's OAuth2 credentials work. ##### Headers Authorization -: {{Hola mundo
", - "detected_source_language": "en", + "content": "Hello world
",
+  "spoiler_text": "Greetings ahead",
+  "media_attachments": [
+    {
+      "id": 22345792,
+      "description": "Status author waving at the camera"
+    }
+  ],
+  "poll": null,
+  "detected_source_language": "es",
   "provider": "DeepL.com"
 }
 ```
 
+Translating a status with a poll into English:
+
+```json
+{
+  "content": "<p>Should I stay or should I go?</p>
", + "spoiler_text": null, + "media_attachments": [], + "poll": [ + { + "id": 34858, + "options": [ + { + "title": "Stay" + }, + { + "title": "Go" + } + ] + } + ], + "detected_source_language": "ja", "provider": "DeepL.com" } ``` @@ -1478,6 +1512,9 @@ language media_ids[] : Array of String. Include Attachment IDs to be attached as media. If provided, `status` becomes optional, and `poll` cannot be used. +media_attributes[][] +: Array of String. Each array includes id, description, and focus. + poll[options][] : Array of String. Possible answers to the poll. If provided, `media_ids` cannot be used, and `poll[expires_in]` must be provided. @@ -1852,3 +1889,4 @@ Status does not exist or is private. {{< caption-link url="https://github.com/mastodon/mastodon/blob/main/app/controllers/api/v1/statuses/reblogs_controller.rb" caption="app/controllers/api/v1/statuses/reblogs_controller.rb" >}} {{< caption-link url="https://github.com/mastodon/mastodon/blob/main/app/controllers/api/v1/statuses/sources_controller.rb" caption="app/controllers/api/v1/statuses/sources_controller.rb" >}} + diff --git a/content/en/user/moving.md b/content/en/user/moving.md index b27c5ef6..b2a73ef5 100644 --- a/content/en/user/moving.md +++ b/content/en/user/moving.md @@ -13,7 +13,7 @@ menu: At any time you want, you can go to Settings > Export and download a CSV file for your current followed accounts, your currently created lists, your currently blocked accounts, your currently muted accounts, and your currently blocked domains. Your following, blocking, muting, and domain-blocking lists can be imported at Settings > Import, where they can either be merged or overwritten. -Requesting an archive of your posts and media can be done once every 7 days, and can be downloaded in ActivityPub JSON format. Mastodon currently does not support importing posts or media due to technical limitations, but your archive can be viewed by any software that understands how to parse ActivityPub documents. 
+Requesting an archive of your posts and media can be done once every 7 days, and can be downloaded in Activity Streams 2.0 JSON format. Mastodon currently does not support importing posts or media due to technical limitations, but your archive can be viewed by any software that understands how to parse Activity Streams 2.0 documents. ## Redirecting or moving your profile {#migration} diff --git a/content/zh-cn/_index.md b/content/zh-cn/_index.md index 2ae3d315..240441b3 100644 --- a/content/zh-cn/_index.md +++ b/content/zh-cn/_index.md @@ -22,7 +22,7 @@ menu: Mastodon站点可以独立运作。和传统网站一样,人们可以在上面注册、发布消息、上传图片、互相聊天。但与传统网站*不同*的是,Mastodon网站之间可以互动,让跨站用户互相交流,就好像只要你知道他们的电子邮件地址,你就可以从你的Gmail帐户发送电子邮件给使用Outlook、Fastmail、Protonmail或任何其他电子邮件供应商的用户。在Mastodon里,**你可以对任何人在任何网站上的地址进行“@”或私信**。 -{{< figure src="assets/image%20%289%29.png" caption="上图从左到右依次为:集中式、联邦式、分布式" >}} +{{< figure src="assets/network-models.jpg" caption="上图从左到右依次为:集中式、联邦式、分布式" >}} ## ActivityPub是什么? {#fediverse} diff --git a/content/zh-cn/admin/config.md b/content/zh-cn/admin/config.md index ecc07478..58561bc4 100644 --- a/content/zh-cn/admin/config.md +++ b/content/zh-cn/admin/config.md @@ -148,6 +148,8 @@ Mastodon使用环境变量作为其的配置。 * `S3_HOSTNAME` * `S3_ENDPOINT` * `S3_SIGNATURE_VERSION` +* `S3_BATCH_DELETE_LIMIT` +* `S3_BATCH_DELETE_RETRY` ### Swift {#swift} diff --git a/content/zh-cn/admin/install.md b/content/zh-cn/admin/install.md index 669408e5..6e9f7dc7 100644 --- a/content/zh-cn/admin/install.md +++ b/content/zh-cn/admin/install.md @@ -179,7 +179,17 @@ cp /home/mastodon/live/dist/nginx.conf /etc/nginx/sites-available/mastodon ln -s /etc/nginx/sites-available/mastodon /etc/nginx/sites-enabled/mastodon ``` -编辑 `/etc/nginx/sites-available/mastodon`,替换 `example.com` 为你自己的域名,你可以根据自己的需求做出其它的一些调整。 +编辑 `/etc/nginx/sites-available/mastodon` + +1. 替换 `example.com` 为你自己的域名 +2. 
启用 `ssl_certificate` 和 `ssl_certificate_key` 这两行,并把它们替换成如下两行(如果你使用自己的证书的话则可以忽略这一步) + +``` +ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem; +ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key; +``` + +3. 你还可以根据自己的需求做出其它的一些调整。 重载 nginx 以使变更生效: diff --git a/content/zh-cn/admin/migrating.md b/content/zh-cn/admin/migrating.md index 11732a3b..c1910695 100644 --- a/content/zh-cn/admin/migrating.md +++ b/content/zh-cn/admin/migrating.md @@ -17,7 +17,7 @@ menu: 1. 依照[产品指南]({{< relref "install" >}})安装新的Mastodon服务器(切记,不要运行 `mastodon:setup`)。 2. 停止旧服务器上的Mastodon(`systemctl stop 'mastodon-*.service'`)。 -3. 依照如下指示,导出并导入Postgres数据库。 +3. 依照如下指示,导出并导入PostgreSQL数据库。 4. 依照如下指示,复制 `system/` 目录下文件。(注意:如果你使用S3存储,你可以跳过此步)。 5. 复制 `.env.production` 文件。 6. 运行 `RAILS_ENV=production bundle exec rails assets:precompile` 编译 Mastodon。 @@ -34,18 +34,18 @@ menu: 你必须需要复制如下内容: * `~/live/public/system`目录,里面包含了用户上传的图片与视频(如果使用S3,可跳过此步) -* Postgres数据库(使用[pg_dump](https://www.postgresql.org/docs/9.1/static/backup-dump.html)) +* PostgreSQL数据库(使用[pg_dump](https://www.postgresql.org/docs/9.1/static/backup-dump.html)) * `~/live/.env.production`文件,里面包含了服务器配置与密钥 不太重要的部分,为了方便起见,你也可以复制如下内容: * nginx配置文件(位于`/etc/nginx/sites-available/default`) * systemd配置文件(`/etc/systemd/system/mastodon-*.service`),里面可能包括一些你服务器的调优与个性化 -* pgbouncer配置文件,位于 `/etc/pgbouncer` (如果你使用pgbouncer的话) +* PgBouncer配置文件,位于 `/etc/pgbouncer` (如果你使用PgBouncer的话) -### 导出并导入Postgres数据库 {#dump-and-load-postgres} +### 导出并导入PostgreSQL数据库 {#dump-and-load-postgresql} -不要运行`mastodon:setup`,而是创建一个名为`template0`的空白Postgres数据库(当导入Postgres导出文件时,这是很有用的,参见[pg_dump文档](https://www.postgresql.org/docs/9.1/static/backup-dump.html#BACKUP-DUMP-RESTORE))。 +不要运行`mastodon:setup`,而是创建一个名为`template0`的空白PostgreSQL数据库(当导入PostgreSQL导出文件时,这是很有用的,参见[pg_dump文档](https://www.postgresql.org/docs/9.1/static/backup-dump.html#BACKUP-DUMP-RESTORE))。 在你的旧系统,使用`mastodon`用户运行如下命令: diff --git a/content/zh-cn/admin/scaling.md b/content/zh-cn/admin/scaling.md index 
e9e43d06..fe39a8e9 100644 --- a/content/zh-cn/admin/scaling.md +++ b/content/zh-cn/admin/scaling.md @@ -101,7 +101,7 @@ sudo apt install pgbouncer #### 设置密码 {#pgbouncer-password} 首先,如果你的Postgres中`mastodon`帐户没有设置密码的话,你需要设置一个密码。 -First off, if your `mastodon` user in Postgres is set up without a password, you will need to set a password. +First off, if your `mastodon` user in PostgreSQL is set up without a password, you will need to set a password. 下面是如何重置密码: @@ -212,7 +212,7 @@ PREPARED_STATEMENTS=false 因为我们使用基于事务(transaction-based)的连接池,我们不能使用参数化查询(prepared statement)。 -接下来,配置Mastodon使用6432端口(PgBouncer)而不是5432端口(Postgres)就可以了: +接下来,配置Mastodon使用6432端口(PgBouncer)而不是5432端口(PostgreSQL)就可以了: ```bash DB_HOST=localhost diff --git a/layouts/partials/sidebar.html b/layouts/partials/sidebar.html index f93a3515..67f37a7b 100644 --- a/layouts/partials/sidebar.html +++ b/layouts/partials/sidebar.html @@ -2,6 +2,16 @@ + + +