<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="generator" content="Docutils 0.11: http://docutils.sourceforge.net/" />
<title>libtorrent manual</title>
<meta name="author" content="Arvid Norberg, arvid@rasterbar.com" />
<link rel="stylesheet" type="text/css" href="../../css/base.css" />
<link rel="stylesheet" type="text/css" href="../../css/rst.css" />
<script type="text/javascript">
/* <![CDATA[ */
(function() {
var s = document.createElement('script'), t = document.getElementsByTagName('script')[0];
s.type = 'text/javascript';
s.async = true;
s.src = 'http://api.flattr.com/js/0.6/load.js?mode=auto';
t.parentNode.insertBefore(s, t);
})();
/* ]]> */
</script>
<link rel="stylesheet" href="style.css" type="text/css" />
<style type="text/css">
/* Hides from IE-mac \*/
* html pre { height: 1%; }
/* End hide from IE-mac */
</style>
</head>
<body>
<div class="document" id="libtorrent-manual">
<div id="container">
<div id="headerNav">
<ul>
<li class="first"><a href="/">Home</a></li>
<li><a href="../../products.html">Products</a></li>
<li><a href="../../contact.html">Contact</a></li>
</ul>
</div>
<div id="header">
<div id="orange"></div>
<div id="logo"></div>
</div>
<div id="main">
<h1 class="title">libtorrent manual</h1>
<table class="docinfo" frame="void" rules="none">
<col class="docinfo-name" />
<col class="docinfo-content" />
<tbody valign="top">
<tr><th class="docinfo-name">Author:</th>
<td>Arvid Norberg, <a class="last reference external" href="mailto:arvid@rasterbar.com">arvid@rasterbar.com</a></td></tr>
<tr><th class="docinfo-name">Version:</th>
<td>1.0.0</td></tr>
</tbody>
</table>
<div class="contents topic" id="table-of-contents">
<p class="topic-title first">Table of contents</p>
<ul class="simple">
<li><a class="reference internal" href="#tuning-libtorrent" id="id1">tuning libtorrent</a></li>
<li><a class="reference internal" href="#reducing-memory-footprint" id="id2">reducing memory footprint</a><ul>
<li><a class="reference internal" href="#disable-disk-cache" id="id3">disable disk cache</a></li>
<li><a class="reference internal" href="#remove-torrents" id="id4">remove torrents</a></li>
<li><a class="reference internal" href="#socket-buffer-sizes" id="id5">socket buffer sizes</a></li>
<li><a class="reference internal" href="#peer-list-size" id="id6">peer list size</a></li>
<li><a class="reference internal" href="#send-buffer-watermark" id="id7">send buffer watermark</a></li>
<li><a class="reference internal" href="#optimize-hashing-for-memory-usage" id="id8">optimize hashing for memory usage</a></li>
<li><a class="reference internal" href="#reduce-executable-size" id="id9">reduce executable size</a></li>
<li><a class="reference internal" href="#reduce-statistics" id="id10">reduce statistics</a></li>
</ul>
</li>
<li><a class="reference internal" href="#play-nice-with-the-disk" id="id11">play nice with the disk</a></li>
<li><a class="reference internal" href="#high-performance-seeding" id="id12">high performance seeding</a><ul>
<li><a class="reference internal" href="#file-pool" id="id13">file pool</a></li>
<li><a class="reference internal" href="#disk-cache" id="id14">disk cache</a></li>
<li><a class="reference internal" href="#utp-tcp-mixed-mode" id="id15">uTP-TCP mixed mode</a></li>
<li><a class="reference internal" href="#send-buffer-low-watermark" id="id16">send buffer low watermark</a></li>
<li><a class="reference internal" href="#peers" id="id17">peers</a></li>
<li><a class="reference internal" href="#torrent-limits" id="id18">torrent limits</a></li>
</ul>
</li>
<li><a class="reference internal" href="#scalability" id="id19">scalability</a></li>
<li><a class="reference internal" href="#benchmarking" id="id20">benchmarking</a><ul>
<li><a class="reference internal" href="#disk-metrics" id="id21">disk metrics</a></li>
<li><a class="reference internal" href="#session-stats" id="id22">session stats</a></li>
</ul>
</li>
<li><a class="reference internal" href="#understanding-the-disk-thread" id="id23">understanding the disk thread</a></li>
<li><a class="reference internal" href="#contributions" id="id24">contributions</a></li>
</ul>
</div>
<div class="section" id="tuning-libtorrent">
<h1>tuning libtorrent</h1>
<p>libtorrent exposes most of the constants used in the bittorrent engine for
customization through <tt class="docutils literal">session_settings</tt>. This makes it possible to
test and tweak the parameters of certain algorithms to build a client that
fits a wide range of needs, from low-memory embedded devices to servers
seeding thousands of torrents. The default settings in libtorrent are tuned
for an end-user bittorrent client running on a normal desktop computer.</p>
<p>This document describes techniques for benchmarking libtorrent performance
and how various parameters are likely to affect it.</p>
</div>
<div class="section" id="reducing-memory-footprint">
<h1>reducing memory footprint</h1>
<p>These are things you can do to reduce the memory footprint of libtorrent. You get
some of this by basing your default <tt class="docutils literal">session_settings</tt> on the <tt class="docutils literal">min_memory_usage()</tt>
settings preset function.</p>
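<p>A minimal sketch of applying the preset (the header name and exact call
sequence are assumptions based on the libtorrent 1.0 API):</p>
<pre class="literal-block">
#include &lt;libtorrent/session.hpp&gt;

libtorrent::session ses;
// start from the low-memory preset, then override individual fields as needed
libtorrent::session_settings s = libtorrent::min_memory_usage();
ses.set_settings(s);
</pre>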
<p>Keep in mind that lowering memory usage will affect performance. Always profile
and benchmark your settings to determine whether the trade-off is worth it.</p>
<p>The typical buffer usage of libtorrent, for a single download, with the cache
size set to 256 blocks (256 * 16 kiB = 4 MiB) is:</p>
<pre class="literal-block">
read cache: 128.6 (2058 kiB)
write cache: 103.5 (1656 kiB)
receive buffers: 7.3 (117 kiB)
send buffers: 4.8 (77 kiB)
hash temp: 0.001 (19 Bytes)
</pre>
<p>The receive buffer usage is proportional to the number of connections we make, and is
limited by the total number of connections in the session (the default is 200).</p>
<p>The send buffer usage is proportional to the number of upload slots allowed
in the session. The default is auto-configured based on the observed upload rate.</p>
<p>The read and write cache can be controlled (see the section below).</p>
<p>The size of the &quot;hash temp&quot; entry depends on whether hashing is optimized for
speed or for memory usage. In this test run it was optimized for memory usage.</p>
<div class="section" id="disable-disk-cache">
<h2>disable disk cache</h2>
<p>The bulk of the memory libtorrent uses goes to the disk cache. To save
the most memory possible, you can disable the cache by setting
<tt class="docutils literal"><span class="pre">session_settings::cache_size</span></tt> to 0. You might want to consider keeping the cache
but disabling caching of read operations only. You do this by setting
<tt class="docutils literal"><span class="pre">session_settings::use_read_cache</span></tt> to false. This is the main factor in how much
memory the client will use. Keep in mind that you will degrade performance
by disabling the cache. You should benchmark the disk access in order to make an
informed trade-off.</p>
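<p>For example (a sketch; the surrounding <tt class="docutils literal">session</tt> object <tt class="docutils literal">ses</tt> is assumed):</p>
<pre class="literal-block">
libtorrent::session_settings s = ses.settings();
s.cache_size = 0;           // disable the disk cache entirely, or ...
// s.use_read_cache = false; // ... keep the write cache but skip read caching
ses.set_settings(s);
</pre>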
</div>
<div class="section" id="remove-torrents">
<h2>remove torrents</h2>
<p>Torrents that have been added to libtorrent will inevitably use up memory, even
when they are paused. A paused torrent will not use any peer connection objects or
any send or receive buffers, though. Any added torrent holds the entire .torrent
file in memory, and it also remembers the entire list of peers that it has heard about
(which can be fairly long unless it is capped). It also retains information about
which blocks and pieces we have on disk, which can be significant for torrents
with many pieces.</p>
<p>If you need to minimize the memory footprint, consider removing torrents from
the session rather than pausing them. This will likely only make a difference
when you have a very large number of torrents in a session.</p>
<p>The downside of removing them is that they will no longer be auto-managed. Paused
auto-managed torrents are scraped periodically, to determine which torrents are
in the greatest need of seeding, and libtorrent will prioritize seeding those.</p>
</div>
<div class="section" id="socket-buffer-sizes">
<h2>socket buffer sizes</h2>
<p>You can make libtorrent explicitly set the kernel buffer sizes of all its peer
sockets. If you set this to a low number, you may see reduced throughput, especially
on high-latency connections. It is, however, an opportunity to save memory per
connection, and might be worth considering if you have a very large number of
peer connections. This memory will not be visible in your process; these settings
control the amount of kernel memory used for your sockets.</p>
<p>Change this by setting <tt class="docutils literal"><span class="pre">session_settings::recv_socket_buffer_size</span></tt> and
<tt class="docutils literal"><span class="pre">session_settings::send_socket_buffer_size</span></tt>.</p>
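<p>For instance (sizes in bytes; the values here are illustrative, not recommendations):</p>
<pre class="literal-block">
libtorrent::session_settings s = ses.settings();
s.recv_socket_buffer_size = 16 * 1024; // kernel receive buffer per socket
s.send_socket_buffer_size = 16 * 1024; // kernel send buffer per socket
ses.set_settings(s);
</pre>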
</div>
<div class="section" id="peer-list-size">
<h2>peer list size</h2>
<p>The default maximum for the peer list is 4000 peers. For IPv4 peers, each peer
entry uses 32 bytes, which ends up using 128 kB per torrent. If seeding 4 popular
torrents, the peer lists alone use about half a megabyte.</p>
<p>The default limit is the same for paused torrents as well, so if you have a
large number of paused torrents (that are popular) it will be even more
significant.</p>
<p>If you're short of memory, you should consider lowering the limit. 500 is probably
enough. You can do this by setting <tt class="docutils literal"><span class="pre">session_settings::max_peerlist_size</span></tt> to
the maximum number of peers you want in a torrent's peer list.</p>
<p>You should also lower the corresponding limit for paused torrents. It might even make sense
to set it lower still, since you only need a few peers to start up while waiting
for the tracker and DHT to give you fresh ones. The max peer list size for paused
torrents is set by <tt class="docutils literal"><span class="pre">session_settings::max_paused_peerlist_size</span></tt>.</p>
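<p>Following the numbers above (500 for active torrents, and a lower value for
paused ones, chosen here purely for illustration):</p>
<pre class="literal-block">
libtorrent::session_settings s = ses.settings();
s.max_peerlist_size = 500;        // peers remembered per active torrent
s.max_paused_peerlist_size = 50;  // a few peers are enough to start back up
ses.set_settings(s);
</pre>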
<p>The drawback of lowering this number is that if you end up in a position where
the tracker is down for an extended period of time, your only hope of finding live
peers is to go through the list of all peers you've ever seen. Having a large
peer list will also help performance when starting up, since the torrent
can start connecting to peers in parallel with connecting to the tracker.</p>
</div>
<div class="section" id="send-buffer-watermark">
<h2>send buffer watermark</h2>
<p>The send buffer watermark controls when libtorrent will ask the disk I/O thread
to read blocks from disk and append them to a peer's send buffer.</p>
<p>When the send buffer holds no more bytes than
<tt class="docutils literal"><span class="pre">session_settings::send_buffer_watermark</span></tt>, the peer will ask the disk I/O thread
for more data to send. The trade-off here is between wasting memory by having too
much data in the send buffer, and hurting the send rate by starving the socket
while waiting for the disk read operation to complete.</p>
<p>If your main objective is memory usage and you're not concerned about achieving
high send rates, you can set the watermark to 9 bytes. This will guarantee
that no more than a single (16 kiB) block will be in the send buffer at a time, for
any peer. This is the least amount of memory possible for the send buffer.</p>
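<p>The minimal-memory configuration described above would be:</p>
<pre class="literal-block">
libtorrent::session_settings s = ses.settings();
s.send_buffer_watermark = 9; // at most one 16 kiB block per send buffer
ses.set_settings(s);
</pre>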
<p>You should benchmark your max send rate when adjusting this setting. If you have
a very fast disk, you are less likely to see a performance hit.</p>
</div>
<div class="section" id="optimize-hashing-for-memory-usage">
<h2>optimize hashing for memory usage</h2>
<p>When libtorrent is hash checking a file, or re-reading a piece that
was just completed to verify its hash, there are two options. The default one
is optimized for speed: it allocates buffers for the entire piece, reads
the whole piece in one read call, then hashes it.</p>
<p>The second option is to optimize for memory usage instead: a single buffer
is allocated, and the piece is read one block at a time, hashing each
block as it is read from the file. For low-memory environments, this latter approach
is recommended. Enable it by setting <tt class="docutils literal"><span class="pre">session_settings::optimize_hashing_for_speed</span></tt>
to false. This will significantly reduce peak memory usage, especially for
torrents with very large pieces.</p>
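<p>A sketch of the memory-optimized configuration:</p>
<pre class="literal-block">
libtorrent::session_settings s = ses.settings();
s.optimize_hashing_for_speed = false; // hash one 16 kiB block at a time
ses.set_settings(s);
</pre>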
</div>
<div class="section" id="reduce-executable-size">
<h2>reduce executable size</h2>
<p>Compilers generally add a significant number of bytes to executables that make use
of C++ exceptions. By disabling exceptions (-fno-exceptions on GCC), you can
reduce the executable size by up to 45%. In order to build without exception
support, you need to patch parts of boost.</p>
<p>Also make sure to optimize for size when compiling.</p>
<p>Another way of reducing the executable size is to disable code that isn't used.
There are a number of <tt class="docutils literal">TORRENT_*</tt> macros that control which features are included
in libtorrent. If these macros are used to strip down libtorrent, make sure the same
macros are defined when building libtorrent as when linking against it. If they
differ, the structures will look different on the libtorrent side and on
the client side, and memory corruption will follow.</p>
<p>One probably safe macro to define is <tt class="docutils literal">TORRENT_NO_DEPRECATE</tt>, which removes all
deprecated functions and struct members. As long as no deprecated functions are
relied upon, this should be a simple way to eliminate a little bit of code.</p>
<p>For all available options, see the <a class="reference external" href="building.html">building libtorrent</a> section.</p>
</div>
<div class="section" id="reduce-statistics">
<h2>reduce statistics</h2>
<p>You can save some memory for each connection and each torrent by reducing the
number of separate rates kept track of by libtorrent. If you build with <tt class="docutils literal"><span class="pre">full-stats=off</span></tt>
(or <tt class="docutils literal"><span class="pre">-DTORRENT_DISABLE_FULL_STATS</span></tt>) you will save a few hundred bytes for each
connection and torrent. It might make a difference if you have a very large number
of peers or torrents.</p>
</div>
</div>
<div class="section" id="play-nice-with-the-disk">
<h1>play nice with the disk</h1>
<p>When checking a torrent, libtorrent will try to read from the disk as fast as possible.
The only thing that might hold it back is a CPU that is slow at calculating SHA-1 hashes,
but typically file checking is limited by disk read speed. Most operating systems
today do not prioritize disk access based on the importance of the operation; this means
that checking a torrent might delay other disk accesses, such as virtual memory swapping
or file loading by other (interactive) applications.</p>
<p>In order to play nicer with the disk, and leave some spare time for it to service other
processes that might be of higher importance to the end-user, you can introduce a sleep
between the disk accesses. This is a direct trade-off between how fast you can check a
torrent and how hard you hit the disk.</p>
<p>You control this by setting <tt class="docutils literal"><span class="pre">session_settings::file_checks_delay_per_block</span></tt> to a value greater
than zero. This number is the number of milliseconds to sleep between each read of 16 kiB.</p>
<p>The sleeps are not necessarily in between each 16 kiB block (the data might be read in larger chunks),
but the number will be multiplied by the number of blocks that were read, to maintain the
same semantics.</p>
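<p>For example, to sleep 2 ms per 16 kiB block read during checking (the value is
illustrative; benchmark your own workload):</p>
<pre class="literal-block">
libtorrent::session_settings s = ses.settings();
s.file_checks_delay_per_block = 2; // milliseconds per 16 kiB read
ses.set_settings(s);
</pre>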
</div>
<div class="section" id="high-performance-seeding">
<h1>high performance seeding</h1>
<p>In the case of a high-volume seed, there are two main concerns: performance and scalability.
This translates into high send rates, and low memory and CPU usage per peer connection.</p>
<div class="section" id="file-pool">
<h2>file pool</h2>
<p>libtorrent keeps an LRU file cache. Each file that is opened is stuck in the cache. The main
reason for this is anti-virus software that hooks on file open and file close to
scan the file. Anti-virus software that does this will significantly increase the cost of
opening and closing files. However, for a high-performance seed, opening and closing files might
be so frequent that it becomes a significant cost in itself. It might therefore be a good idea to allow
a large file descriptor cache. Adjust this through <tt class="docutils literal"><span class="pre">session_settings::file_pool_size</span></tt>.</p>
<p>Don't forget to set a high rlimit for file descriptors in your process as well. This limit
must be high enough to keep all connections and files open.</p>
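<p>A sketch of raising both limits (the POSIX <tt class="docutils literal">setrlimit()</tt> call is for
Unix-like systems; the specific numbers are illustrative):</p>
<pre class="literal-block">
#include &lt;sys/resource.h&gt;

// let the process keep many file descriptors open
rlimit rl;
rl.rlim_cur = 8000;
rl.rlim_max = 8000;
setrlimit(RLIMIT_NOFILE, &amp;rl);

libtorrent::session_settings s = ses.settings();
s.file_pool_size = 500; // number of files libtorrent keeps open at once
ses.set_settings(s);
</pre>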
</div>
<div class="section" id="disk-cache">
<h2>disk cache</h2>
<p>You typically want to set the cache size as high as possible. The
<tt class="docutils literal"><span class="pre">session_settings::cache_size</span></tt> is specified in 16 kiB blocks. Since you're seeding,
the cache would be useless unless you also set <tt class="docutils literal"><span class="pre">session_settings::use_read_cache</span></tt>
to true.</p>
<p>In order to increase the possibility of read cache hits, set
<tt class="docutils literal"><span class="pre">session_settings::cache_expiry</span></tt> to a large number. This won't degrade anything as
long as the client is only seeding, and not downloading any torrents.</p>
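<p>A seeding-oriented cache configuration might look like this (the cache size and
expiry are illustrative; remember the cache size is counted in 16 kiB blocks):</p>
<pre class="literal-block">
libtorrent::session_settings s = ses.settings();
s.cache_size = 32768;     // 32768 * 16 kiB = 512 MiB of disk cache
s.use_read_cache = true;  // required for the cache to help a seed
s.cache_expiry = 60 * 60; // keep cached blocks around for a long time
ses.set_settings(s);
</pre>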
<p>In order to increase the disk cache hit rate, you can enable suggest messages based on
what's in the read cache. To do this, set <tt class="docutils literal"><span class="pre">session_settings::suggest_mode</span></tt> to
<tt class="docutils literal"><span class="pre">session_settings::suggest_read_cache</span></tt>. This will send suggest messages to peers
for the most recently used pieces in the read cache. This is especially useful if you
also enable the explicit read cache, by setting <tt class="docutils literal"><span class="pre">session_settings::explicit_read_cache</span></tt>
to the number of pieces to keep in the cache. The explicit read cache makes the
disk read cache sticky, so it is not evicted by cache misses. The explicit read cache
will automatically pull the rarest pieces into the read cache.</p>
<p>Assuming that you seed much more data than you can keep in the cache, to a large
number of peers (so that the read cache wouldn't be useful anyway), this may be a
good idea.</p>
<p>When peers first connect, libtorrent will send them a number of allow-fast messages,
which let the peers download certain pieces even when they are choked. Since peers
are choked by default, this often triggers immediate requests for those pieces. When
using the explicit read cache and suggesting those pieces, allowed-fast pieces
should be disabled, so as not to systematically trigger requests, from all peers, for
pieces that are not cached. You can turn off allow-fast by setting
<tt class="docutils literal"><span class="pre">session_settings::allowed_fast_set_size</span></tt> to 0.</p>
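<p>Putting those three settings together (the explicit cache size is illustrative):</p>
<pre class="literal-block">
libtorrent::session_settings s = ses.settings();
s.suggest_mode = libtorrent::session_settings::suggest_read_cache;
s.explicit_read_cache = 100;  // pieces pinned in the read cache
s.allowed_fast_set_size = 0;  // don't invite requests for uncached pieces
ses.set_settings(s);
</pre>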
<p>As an alternative to the explicit cache and suggest messages, there's a <em>guided cache</em>
mode. In this mode, the size of the read cache line that's stored in the cache is determined
by the upload rate to the peer that triggered the read operation. The idea is
that slow peers shouldn't use up a disproportionate amount of space in the cache. This
is enabled through <tt class="docutils literal"><span class="pre">session_settings::guided_read_cache</span></tt>.</p>
<p>In cases where the assumption is that the cache is only used as a read-ahead, and that no
other peer will ever request the same block while it's still in the cache, the read
cache can be set to be <em>volatile</em>. This means that every block that is requested out of
the read cache is removed immediately. This saves a significant amount of cache space,
which can be used as read-ahead for other peers. This mode should <strong>never</strong> be combined
with either <tt class="docutils literal">explicit_read_cache</tt> or <tt class="docutils literal">suggest_read_cache</tt>, since those use opposite
strategies for the read cache. You don't want to, on the one hand, attract peers to request
the same pieces, and, on the other hand, assume that they won't request the same pieces
and drop them when the first peer requests them. To enable the volatile read cache, set
<tt class="docutils literal"><span class="pre">session_settings::volatile_read_cache</span></tt> to true.</p>
</div>
<div class="section" id="utp-tcp-mixed-mode">
<h2>uTP-TCP mixed mode</h2>
<p>libtorrent supports <a class="reference external" href="utp.html">uTP</a>, which has a delay-based congestion controller. In order to
avoid having a single TCP bittorrent connection completely starve out any uTP connection,
there is a mixed-mode algorithm. It attempts to detect congestion on the uTP peers and
throttles TCP to avoid it taking over all bandwidth. This balances the bandwidth resources
between the two protocols. When running on a network where bandwidth is in such
abundance that it's virtually infinite, this algorithm is no longer necessary, and might
even be harmful to throughput. It is advised to experiment with
<tt class="docutils literal"><span class="pre">session_settings::mixed_mode_algorithm</span></tt>, setting it to <tt class="docutils literal"><span class="pre">session_settings::prefer_tcp</span></tt>.
This setting entirely disables the balancing and unthrottles all connections. On a typical
home connection, this would mean that none of the benefits of uTP would be preserved
(the modem's send buffer would be full at all times) and uTP connections would for the most
part be squashed by the TCP traffic.</p>
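<p>A sketch of disabling the balancing:</p>
<pre class="literal-block">
libtorrent::session_settings s = ses.settings();
// disable uTP/TCP balancing; only sensible on very high-bandwidth links
s.mixed_mode_algorithm = libtorrent::session_settings::prefer_tcp;
ses.set_settings(s);
</pre>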
</div>
<div class="section" id="send-buffer-low-watermark">
<h2>send buffer low watermark</h2>
<p>libtorrent uses a low watermark for send buffers to determine when a new piece should
be requested from the disk I/O subsystem, to be appended to the send buffer. The low
watermark is determined based on the send rate of the socket. It needs to be large
enough that the socket's send buffer doesn't drain before the disk operation completes.</p>
<p>The watermark is bound to a maximum value, to avoid buffer sizes growing out of control.
The default maximum send buffer size might not be enough to sustain very high upload rates,
and you might have to increase it. It's specified in bytes in
<tt class="docutils literal"><span class="pre">session_settings::send_buffer_watermark</span></tt>. The <tt class="docutils literal">high_performance_seed()</tt> preset
sets this value to 5 MB.</p>
</div>
<div class="section" id="peers">
<h2>peers</h2>
<p>First of all, in order to allow many connections, set the global connection limit
high, via <tt class="docutils literal"><span class="pre">session::set_max_connections()</span></tt>. Also set the upload rate limit to
infinite with <tt class="docutils literal"><span class="pre">session::set_upload_rate_limit()</span></tt>; passing 0 means infinite.</p>
<p>When dealing with a large number of peers, it might be a good idea to have slightly
stricter timeouts, to get rid of lingering connections as soon as possible.</p>
<p>There are a couple of relevant settings: <tt class="docutils literal"><span class="pre">session_settings::request_timeout</span></tt>,
<tt class="docutils literal"><span class="pre">session_settings::peer_timeout</span></tt> and <tt class="docutils literal"><span class="pre">session_settings::inactivity_timeout</span></tt>.</p>
<p>For seeds that are critical for a delivery system, you most likely want to allow
multiple connections from the same IP. That way two people behind the same NAT
can use the service simultaneously. This is controlled by
<tt class="docutils literal"><span class="pre">session_settings::allow_multiple_connections_per_ip</span></tt>.</p>
<p>In order to always unchoke peers, turn off automatic unchoking via
<tt class="docutils literal"><span class="pre">session_settings::auto_upload_slots</span></tt> and set the number of upload slots to a large
number via <tt class="docutils literal"><span class="pre">session::set_max_uploads()</span></tt>, or use -1 (which means infinite).</p>
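<p>The settings above might be combined like this (the limits and timeout values are
illustrative; tune them against your own measurements):</p>
<pre class="literal-block">
ses.set_max_connections(10000);  // global connection limit
ses.set_upload_rate_limit(0);    // 0 means unlimited
ses.set_max_uploads(-1);         // -1 means unlimited upload slots

libtorrent::session_settings s = ses.settings();
s.request_timeout = 10;          // stricter than the defaults
s.peer_timeout = 20;
s.inactivity_timeout = 20;
s.allow_multiple_connections_per_ip = true;
s.auto_upload_slots = false;     // always unchoke, don't auto-manage slots
ses.set_settings(s);
</pre>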
</div>
<div class="section" id="torrent-limits">
<h2>torrent limits</h2>
<p>To seed thousands of torrents, you need to increase <tt class="docutils literal"><span class="pre">session_settings::active_limit</span></tt>
and <tt class="docutils literal"><span class="pre">session_settings::active_seeds</span></tt>.</p>
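<p>For instance (the numbers are placeholders):</p>
<pre class="literal-block">
libtorrent::session_settings s = ses.settings();
s.active_limit = 5000; // total torrents considered active
s.active_seeds = 5000; // of which this many may be seeding
ses.set_settings(s);
</pre>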
</div>
</div>
<div class="section" id="scalability">
<h1>scalability</h1>
<p>In order to make more efficient use of the libtorrent interface when running a large
number of torrents simultaneously, one can use the <tt class="docutils literal"><span class="pre">session::get_torrent_status()</span></tt> call
together with <tt class="docutils literal"><span class="pre">session::refresh_torrent_status()</span></tt>. Keep in mind that every call into
libtorrent that returns a value has to block your thread while posting a message to
the main network thread and waiting for a response (calls that don't return any data
will simply post the message and then return immediately). The time this takes might
become significant once you reach a few hundred torrents (depending on how many calls
you make to each torrent and how often). <tt class="docutils literal">get_torrent_status</tt> lets you query the
status of all torrents in a single call. It will loop through all torrents
and run a provided predicate function to determine whether to include each one in
the returned vector. If you have a lot of torrents, you might want to update the status
of only certain torrents. For instance, you might only be interested in torrents that
are being downloaded.</p>
<p>The intended use of these functions is to start off by calling <tt class="docutils literal">get_torrent_status</tt>
to get a list of all torrents that match your criteria, then call <tt class="docutils literal">refresh_torrent_status</tt>
on that list. This will only refresh the status of the torrents in your list, and thus
ignore all other torrents you might be running. This may save a significant amount of
time, especially if the number of torrents you're interested in is small. In order to
keep your list of interesting torrents up to date, you can call <tt class="docutils literal">get_torrent_status</tt>
from time to time, to include torrents you have become interested in since the last
call. In order to stop refreshing a certain torrent, simply remove it from the list.</p>
<p>A more efficient way, however, would be to subscribe to status alert notifications and
update your list based on these alerts. There are alerts for when torrents are added, removed,
paused, resumed, completed etc. Doing this ensures that you only query status for the
minimal set of torrents you are actually interested in.</p>
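<p>A sketch of the polling pattern described above (the predicate signature follows
the 1.0-era API; error handling is omitted):</p>
<pre class="literal-block">
#include &lt;libtorrent/session.hpp&gt;
#include &lt;libtorrent/torrent_handle.hpp&gt;
#include &lt;vector&gt;

bool is_downloading(libtorrent::torrent_status const&amp; st)
{
   return st.state == libtorrent::torrent_status::downloading;
}

// once: collect the torrents we care about
std::vector&lt;libtorrent::torrent_status&gt; status;
ses.get_torrent_status(&amp;status, &amp;is_downloading);

// periodically: refresh only that list, not the whole session
ses.refresh_torrent_status(&amp;status);
</pre>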
</div>
<div class="section" id="benchmarking">
<h1>benchmarking</h1>
<p>There is a fair amount of built-in instrumentation in libtorrent that can be used to gain insight
into what it's doing and how well it performs. This instrumentation is enabled by defining
preprocessor symbols when building.</p>
<p>There are also a number of scripts that parse the log files and generate graphs (they require
gnuplot and python).</p>
< div class = "section" id = "disk-metrics" >
< h2 > disk metrics< / h2 >
2013-01-21 20:13:24 +01:00
< p > To enable disk I/O instrumentation, define < tt class = "docutils literal" > TORRENT_DISK_STATS< / tt > when building. When built
2009-05-27 08:45:27 +02:00
with this configuration libtorrent will create three log files, measuring various aspects of
the disk I/O. The following table is an overview of these files and what they measure.< / p >
< table border = "1" class = "docutils" >
< colgroup >
< col width = "30%" / >
< col width = "70%" / >
< / colgroup >
< thead valign = "bottom" >
< tr > < th class = "head" > filename< / th >
< th class = "head" > description< / th >
< / tr >
< / thead >
< tbody valign = "top" >
< tr > < td > < tt class = "docutils literal" > disk_io_thread.log< / tt > < / td >
< td > This is a log of which operation the disk I/O thread is
engaged in, with timestamps. This tells you what the thread
is spending its time doing.< / td >
< / tr >
< tr > < td > < tt class = "docutils literal" > disk_buffers.log< / tt > < / td >
< td > This log keeps track of what the buffers allocated from the
disk buffer pool are used for. There are five categories:
receive buffer, send buffer, write cache, read cache and
temporary hash storage. This is key when optimizing memory
usage.< / td >
< / tr >
< tr > < td > < tt class = "docutils literal" > disk_access.log< / tt > < / td >
< td > This is a low level log of read and write operations, with
timestamps and file offsets. The file offsets are byte
offsets in the torrent (not in any particular file, in the
case of a multi-file torrent). This can be used as an
estimate of the physical drive location. The purpose of
this log is to identify the amount of seeking the drive has
to do.< / td >
< / tr >
< / tbody >
< / table >
< div class = "section" id = "disk-io-thread-log" >
< h3 > disk_io_thread.log< / h3 >
< p > The structure of this log is simple. Each line has two columns: a timestamp and
the operation that was started. There is a special operation called < tt class = "docutils literal" > idle< / tt > , which means
the thread looped back to the top and started waiting for new jobs. If there are more jobs to
handle immediately, the < tt class = "docutils literal" > idle< / tt > state is still logged, but with the same timestamp as the
next job that is handled.< / p >
2013-01-21 20:13:24 +01:00
< p > Some operations have a third column with an optional parameter. < tt class = "docutils literal" > read< / tt > and < tt class = "docutils literal" > write< / tt > tell
you the number of bytes that were requested to be read or written. < tt class = "docutils literal" > flushing< / tt > tells you
the number of bytes that were flushed from the disk cache.< / p >
< p > This is an example excerpt from a log:< / p >
< pre class = "literal-block" >
3702 idle
3706 check_fastresume
3707 idle
4708 save_resume_data
4708 idle
8230 read 16384
8255 idle
8431 read 16384
< / pre >
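< p > A log in this format is easy to post-process. The following is a minimal, hypothetical sketch (separate from the bundled < tt class = "docutils literal" > parse_disk_log.py< / tt > ) that sums up how much time the thread spent in each state:< / p >

```python
# Hypothetical parser for the two-column disk_io_thread.log format.
# Each line is "<timestamp-ms> <operation> [bytes]"; the time spent in a
# state is the gap until the next line's timestamp.
from collections import defaultdict

def time_per_state(lines):
    """Return a dict mapping each operation to total milliseconds spent in it."""
    events = []
    for line in lines:
        fields = line.split()
        if len(fields) < 2:
            continue
        events.append((int(fields[0]), fields[1]))
    totals = defaultdict(int)
    # each state lasts until the next event's timestamp
    for (start, op), (end, _) in zip(events, events[1:]):
        totals[op] += end - start
    return dict(totals)

log = """\
3702 idle
3706 check_fastresume
3707 idle
4708 save_resume_data
4708 idle
8230 read 16384
8255 idle
8431 read 16384
""".splitlines()

# the excerpt above spends most of its time idle
print(time_per_state(log))
```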
< p > The script to parse this log and generate a graph is called < tt class = "docutils literal" > parse_disk_log.py< / tt > . It takes
the log file as the first command line argument, and produces a file: < tt class = "docutils literal" > disk_io.png< / tt > .
The time stamp is in milliseconds since start.< / p >
< p > You can pass a second, optional argument specifying the window size, in seconds, over
which the time measurements are averaged. The default is 5 seconds. For long test runs, it
might be interesting to increase that number.< / p >
< img alt = "disk_io.png" src = "disk_io.png" / >
< p > This is an example graph generated by the parse script.< / p >
< / div >
< div class = "section" id = "disk-buffers-log" >
< h3 > disk_buffers.log< / h3 >
< p > The disk buffer log tells you where the buffer memory is used. Each line has a time stamp,
the name of the buffer category whose use count changed, a colon, and the new number of blocks
in use for that particular category. For example:< / p >
< pre class = "literal-block" >
23671 write cache: 18
23671 receive buffer: 3
24153 receive buffer: 2
24153 write cache: 19
24154 receive buffer: 3
24198 receive buffer: 2
24198 write cache: 20
24202 receive buffer: 3
24305 send buffer: 0
24305 send buffer: 1
24909 receive buffer: 2
24909 write cache: 21
24910 receive buffer: 3
< / pre >
< p > The time stamp is in milliseconds since start.< / p >
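< p > As an illustration of how this format can be consumed, here is a hypothetical sketch that tracks the most recent count for each buffer category and computes the peak total number of blocks in use:< / p >

```python
# Hypothetical sketch of reading the disk_buffers.log format: each line is
# "<timestamp-ms> <category>: <blocks-in-use>". We keep the latest count
# per category and track the peak total across all categories.
def peak_total_blocks(lines):
    current = {}  # latest block count per buffer category
    peak = 0
    for line in lines:
        stamp_and_key, _, count = line.rpartition(':')
        if not stamp_and_key:
            continue
        timestamp, _, category = stamp_and_key.partition(' ')
        current[category] = int(count)
        peak = max(peak, sum(current.values()))
    return peak

log = """\
23671 write cache: 18
23671 receive buffer: 3
24153 receive buffer: 2
24153 write cache: 19
24154 receive buffer: 3
24198 receive buffer: 2
24198 write cache: 20
24202 receive buffer: 3
24305 send buffer: 0
24305 send buffer: 1
24909 receive buffer: 2
24909 write cache: 21
24910 receive buffer: 3
""".splitlines()

print(peak_total_blocks(log))
```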
< p > To generate a graph, use < tt class = "docutils literal" > parse_disk_buffer_log.py< / tt > . It takes the log file as the first
command line argument. It generates < tt class = "docutils literal" > disk_buffer.png< / tt > .< / p >
< img alt = "disk_buffer_sample.png" src = "disk_buffer_sample.png" / >
< p > This is an example graph generated by the parse script.< / p >
< / div >
< div class = "section" id = "disk-access-log" >
< h3 > disk_access.log< / h3 >
< p > The disk access log has three fields: the timestamp (milliseconds since start), the operation
and the offset. The offset is the absolute offset within the torrent (not within a file). This
log is only useful when downloading a single torrent, otherwise the offsets will not
be unique.< / p >
< p > In order to easily plot this directly in gnuplot, without parsing it, there are two lines
associated with each read or write operation. The first one is the offset where the operation
started, and the second one is where the operation ended.< / p >
< p > Example:< / p >
< pre class = "literal-block" >
15437 read 301187072
15437 read_end 301203456
16651 read 213385216
16680 read_end 213647360
25879 write 249036800
25879 write_end 249298944
26811 read 325582848
26943 read_end 325844992
36736 read 367001600
36766 read_end 367263744
< / pre >
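< p > Since each operation logs both its start and end offset, the distance between one operation's end and the next operation's start approximates a seek. A hypothetical sketch computing those distances:< / p >

```python
# Hypothetical sketch estimating seek distances from the disk_access.log
# format: an operation ends at its "<op>_end" offset, and the distance to
# the next operation's start offset approximates how far the head must move.
def seek_distances(lines):
    entries = [line.split() for line in lines if line.split()]
    distances = []
    for (_, op, off), (_, next_op, next_off) in zip(entries, entries[1:]):
        # measure from an operation's end line to the next operation's start line
        if op.endswith('_end') and not next_op.endswith('_end'):
            distances.append(abs(int(next_off) - int(off)))
    return distances

log = """\
15437 read 301187072
15437 read_end 301203456
16651 read 213385216
16680 read_end 213647360
25879 write 249036800
25879 write_end 249298944
26811 read 325582848
26943 read_end 325844992
36736 read 367001600
36766 read_end 367263744
""".splitlines()

print(seek_distances(log))
```

< p > Large values dominating the output would indicate a heavily seek-bound workload.< / p >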
< p > The disk access log does not have any good visualization tool yet. There is, however, a gnuplot
file, < tt class = "docutils literal" > disk_access.gnuplot< / tt > , which assumes < tt class = "docutils literal" > disk_access.log< / tt > is in the current directory.< / p >
< img alt = "disk_access.png" src = "disk_access.png" / >
< p > The density of the disk seeks tells you how hard the drive has to work.< / p >
< / div >
< / div >
< div class = "section" id = "session-stats" >
< h2 > session stats< / h2 >
< p > By defining < tt class = "docutils literal" > TORRENT_STATS< / tt > libtorrent will write a log file called < tt class = "docutils literal" > < span class = "pre" > session_stats/< pid> .< sequence> .log< / span > < / tt > which
is in a format ready to be passed directly into gnuplot. The parser script < tt class = "docutils literal" > parse_session_stats.py< / tt >
generates a report in < tt class = "docutils literal" > session_stats_report/index.html< / tt > .< / p >
< p > The first line in the log contains all the field names, separated by colons:< / p >
< pre class = "literal-block" >
second:upload rate:download rate:downloading torrents:seeding torrents:peers...
< / pre >
< p > The rest of the log is one line per second with all the fields' values.< / p >
< p > These are the fields:< / p >
< table border = "1" class = "docutils" >
< colgroup >
< col width = "25%" / >
< col width = "75%" / >
< / colgroup >
< thead valign = "bottom" >
< tr > < th class = "head" > field name< / th >
< th class = "head" > description< / th >
< / tr >
< / thead >
< tbody valign = "top" >
< tr > < td > second< / td >
< td > the time, in seconds, for this log line< / td >
< / tr >
< tr > < td > upload rate< / td >
< td > the number of bytes uploaded in the last second< / td >
< / tr >
< tr > < td > download rate< / td >
< td > the number of bytes downloaded in the last second< / td >
< / tr >
< tr > < td > downloading torrents< / td >
< td > the number of torrents that are not seeds< / td >
< / tr >
< tr > < td > seeding torrents< / td >
< td > the number of torrents that are seeds< / td >
< / tr >
< tr > < td > peers< / td >
< td > the total number of connected peers< / td >
< / tr >
< tr > < td > connecting peers< / td >
< td > the total number of peers attempting to connect (half-open)< / td >
< / tr >
< tr > < td > disk block buffers< / td >
< td > the total number of disk buffer blocks that are in use< / td >
< / tr >
< tr > < td > unchoked peers< / td >
< td > the total number of unchoked peers< / td >
< / tr >
< tr > < td > num list peers< / td >
< td > the total number of known peers, but not necessarily connected< / td >
< / tr >
< tr > < td > peer allocations< / td >
< td > the total number of allocations for the peer list pool< / td >
< / tr >
< tr > < td > peer storage bytes< / td >
< td > the total number of bytes allocated for the peer list pool< / td >
< / tr >
< / tbody >
< / table >
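< p > Since the format is plain colon-separated text, it can be loaded without any special tools. A hypothetical sketch that reads the header line and collects each field into a column:< / p >

```python
# Hypothetical sketch loading a session_stats log: the first line names the
# fields, separated by colons, and every following line holds one second's
# values in the same order.
def load_session_stats(lines):
    names = lines[0].split(':')
    columns = {name: [] for name in names}
    for line in lines[1:]:
        for name, value in zip(names, line.split(':')):
            columns[name].append(float(value))
    return columns

# illustrative data with a subset of the fields
log = [
    "second:upload rate:download rate:peers",
    "1:2048:51200:12",
    "2:4096:73728:17",
]
stats = load_session_stats(log)
print(stats["peers"])
```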
< p > This is an example of a graph that can be generated from this log:< / p >
< img alt = "session_stats_peers.png" src = "session_stats_peers.png" / >
< p > It shows statistics about the number of peers and peer states: how, at startup,
there are a lot of half-open connections, which taper off as the total number of
peers approaches the limit (50). It also shows how the total peer list slowly but steadily
grows over time. This list is plotted against the right axis, since it has a different scale
than the other fields.< / p >
< / div >
< / div >
< div class = "section" id = "understanding-the-disk-thread" >
< h1 > understanding the disk thread< / h1 >
< p > All disk operations are funneled through a separate thread, referred to as the disk thread.
The main interface to the disk thread is a queue where disk jobs are posted, and the results
of these jobs are then posted back on the main thread's io_service.< / p >
< p > A disk job is essentially one of:< / p >
< ol class = "arabic simple" >
< li > write this block to disk, i.e. a write job. For the most part this is just a matter of sticking the block in the disk cache, but if we've run out of cache space or completed a whole piece, we'll also flush blocks to disk. This is typically very fast, since the OS just sticks these buffers in its write cache which will be flushed at a later time, presumably when the drive head will pass the place on the platter where the blocks go.< / li >
< li > read this block from disk. The first thing that happens is we look in the cache to see if the block is already in RAM. If it is, we'll return immediately with this block. If it's a cache miss, we'll have to hit the disk. Here we decide to defer this job. We find the physical offset on the drive for this block and insert the job in an ordered queue, sorted by the physical location. At a later time, once we don't have any more non-read jobs left in the queue, we pick one read job out of the ordered queue and service it. We pick jobs out of the queue according to an elevator cursor moving up and down along the ordered queue of read jobs. If we have enough space in the cache we'll read < tt class = "docutils literal" > read_cache_line_size< / tt > blocks and stick those in the cache. This defaults to 32 blocks.< / li >
< / ol >
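< p > The elevator ordering described in step 2 can be sketched as follows. This is an illustration of the scanning strategy only, not libtorrent's actual implementation:< / p >

```python
# A sketch of elevator scheduling: read jobs are kept sorted by physical
# offset, and a cursor sweeps up and down the queue, servicing the nearest
# job in the current direction before reversing.
import bisect

def elevator_order(offsets, start=0):
    """Return the order in which sorted read offsets would be serviced."""
    queue = sorted(offsets)
    cursor = start
    direction = 1  # 1 = sweeping toward higher offsets
    order = []
    while queue:
        i = bisect.bisect_left(queue, cursor)
        if direction == 1:
            if i == len(queue):   # nothing above the cursor: reverse
                direction = -1
                continue
            cursor = queue.pop(i)
        else:
            if i == 0:            # nothing below the cursor: reverse
                direction = 1
                continue
            cursor = queue.pop(i - 1)
        order.append(cursor)
    return order

# sweep up from 30, then back down
print(elevator_order([70, 10, 40, 90, 20], start=30))
```

< p > The point of the sweep is to avoid jumping back and forth across the platter, which is what the density of seeks in the disk access graph measures.< / p >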
< p > Other disk jobs consist of operations that need to be synchronized with the disk I/O, like renaming files, closing files, flushing the cache, updating the settings etc. These are relatively rare, though.< / p >
< / div >
< div class = "section" id = "contributions" >
< h1 > contributions< / h1 >
< p > If you have added instrumentation for some part of libtorrent that is not covered here, or
if you have improved any of the parser scripts, please consider contributing it back to the
project.< / p >
< p > If you have run tests and found that some algorithm or default value in libtorrent is
suboptimal, please contribute that knowledge back as well, to allow us to improve the library.< / p >
< p > If you have additional suggestions on how to tune libtorrent for any specific use case,
please let us know and we'll update this document.< / p >
< / div >
< / div >
< div id = "footer" >
< span > Copyright © 2005-2013 Rasterbar Software.< / span >
< / div >
< / div >
< script src = "http://www.google-analytics.com/urchin.js" type = "text/javascript" >
< / script >
< script type = "text/javascript" >
_uacct = "UA-1599045-1";
urchinTracker();
< / script >
< / div >
< / body >
< / html >