<p>These are things you can do to reduce the memory footprint of libtorrent. You get
some of this by basing your default <tt class="docutils literal"><span class="pre">session_settings</span></tt> on the <tt class="docutils literal"><span class="pre">min_memory_usage()</span></tt>
setting preset function.</p>
<p>Keep in mind that lowering memory usage will affect performance; always profile
and benchmark your settings to determine whether the trade-off is worth it.</p>
<p>The typical buffer usage of libtorrent, for a single download, with the cache
size set to 256 blocks (256 * 16 kiB = 4 MiB) is:</p>
<p>In the case of a high volume seed, there are two main concerns: performance and scalability.
This translates into high send rates, and low memory and CPU usage per peer connection.</p>
<div class="section" id="file-pool">
<h2>file pool</h2>
<p>libtorrent keeps an LRU file cache. Each file that is opened is stuck in the cache. The main
reason for this is anti-virus software that hooks file-open and file-close to
scan the file. Anti-virus software that does this significantly increases the cost of
opening and closing files. However, for a high performance seed, the file open/close might
be so frequent that it becomes a significant cost. It might therefore be a good idea to allow
a large file descriptor cache. Adjust this through <tt class="docutils literal"><span class="pre">session_settings::file_pool_size</span></tt>.</p>
<p>Don't forget to set a high rlimit for file descriptors in your process as well. This limit
must be high enough to keep all connections and files open.</p>
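<p>The rlimit step can be sketched as follows, using Python's standard <tt class="docutils literal"><span class="pre">resource</span></tt> module. This is an illustration only; the target of 8192 descriptors is an example figure, not a recommendation from libtorrent:</p>

```python
import resource

# Query the current file-descriptor limits for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Pick a target below the hard limit. 8192 is only an example value;
# size it to your expected number of open files plus peer connections.
if hard == resource.RLIM_INFINITY:
    target = 8192
else:
    target = min(8192, hard)

# Raise the soft limit if it is below the target. Raising it beyond
# the hard limit would require privileges, so that is never attempted.
if soft < target:
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```

<p>An equivalent one-off adjustment can usually be made from the shell with <tt class="docutils literal"><span class="pre">ulimit -n</span></tt> before starting the process.</p>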
</div>
<div class="section" id="disk-cache">
<h2>disk cache</h2>
<p>You typically want to set the cache size as high as possible. The
<tt class="docutils literal"><span class="pre">session_settings::cache_size</span></tt> is specified in 16 kiB blocks. Since you're seeding,
the cache would be useless unless you also set <tt class="docutils literal"><span class="pre">session_settings::use_read_cache</span></tt>
to true.</p>
<p>In order to increase the likelihood of read cache hits, set
<tt class="docutils literal"><span class="pre">session_settings::cache_expiry</span></tt> to a large number. This won't degrade anything as
long as the client is only seeding, and not downloading any torrents.</p>
</div>
<div class="section" id="peers">
<h2>peers</h2>
<p>First of all, in order to allow many connections, set the global connection limit
high, <tt class="docutils literal"><span class="pre">session::set_max_connections()</span></tt>. Also set the upload rate limit to
infinite with <tt class="docutils literal"><span class="pre">session::set_upload_rate_limit()</span></tt>; passing 0 means infinite.</p>
<p>When dealing with a large number of peers, it might be a good idea to have slightly
stricter timeouts, to get rid of lingering connections as soon as possible.</p>
<p>There are a couple of relevant settings: <tt class="docutils literal"><span class="pre">session_settings::request_timeout</span></tt>,
<tt class="docutils literal"><span class="pre">session_settings::peer_timeout</span></tt> and <tt class="docutils literal"><span class="pre">session_settings::inactivity_timeout</span></tt>.</p>
<p>For seeds that are critical for a delivery system, you most likely want to allow
multiple connections from the same IP. That way two people behind the same NAT
can use the service simultaneously. This is controlled by
<tt class="docutils literal"><span class="pre">session_settings::allow_multiple_connections_per_ip</span></tt>.</p>
<td>This is a low level log of read and write operations, with
timestamps and file offsets. The file offsets are byte
offsets in the torrent (not in any particular file, in the
case of a multi-file torrent). This can be used as an
estimate of the physical drive location. The purpose of
this log is to identify the amount of seeking the drive has
to do.</td>
</tr>
</tbody>
</table>
<div class="section" id="disk-io-thread-log">
<h3>disk_io_thread.log</h3>
<p>The structure of this log is simple. Each line has two columns: a timestamp and
the operation that was started. There is a special operation called <tt class="docutils literal"><span class="pre">idle</span></tt>, which means
the thread looped back to the top and started waiting for new jobs. If there are more jobs to
handle immediately, the <tt class="docutils literal"><span class="pre">idle</span></tt> state is still logged, but with the same timestamp as the
next job that is handled.</p>
<p>Some operations have a third column with an optional parameter. For <tt class="docutils literal"><span class="pre">read</span></tt> and <tt class="docutils literal"><span class="pre">write</span></tt> it is
the number of bytes that were requested to be read or written. For <tt class="docutils literal"><span class="pre">flushing</span></tt> it is
the number of bytes that were flushed from the disk cache.</p>
<p>This is an example excerpt from a log:</p>
<pre class="literal-block">
3702 idle
3706 check_fastresume
3707 idle
4708 save_resume_data
4708 idle
8230 read 16384
8255 idle
8431 read 16384
</pre>
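<p>As a rough illustration of how this format can be interpreted (a simplified sketch, not the actual <tt class="docutils literal"><span class="pre">parse_disk_log.py</span></tt>): the time spent in an operation is the gap to the next line's timestamp, so the excerpt above can be summarized like this:</p>

```python
# Each line is "<timestamp-ms> <operation> [<bytes>]". This is a
# simplified sketch, not the real parse_disk_log.py.
log = """\
3702 idle
3706 check_fastresume
3707 idle
4708 save_resume_data
4708 idle
8230 read 16384
8255 idle
8431 read 16384
"""

entries = []
for line in log.splitlines():
    fields = line.split()
    entries.append((int(fields[0]), fields[1]))

# The time spent in an operation is the gap until the next log line;
# the last entry has no successor and is dropped.
time_in_op = {}
for (ts, op), (next_ts, _) in zip(entries, entries[1:]):
    time_in_op[op] = time_in_op.get(op, 0) + (next_ts - ts)
```

<p>For this excerpt, most of the wall-clock time ends up attributed to <tt class="docutils literal"><span class="pre">idle</span></tt>, which is what you want to see on a lightly loaded disk thread.</p>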
<p>The script to parse this log and generate a graph is called <tt class="docutils literal"><span class="pre">parse_disk_log.py</span></tt>. It takes
the log file as the first command line argument, and produces a file: <tt class="docutils literal"><span class="pre">disk_io.png</span></tt>.
The time stamp is in milliseconds since start.</p>
<p>You can pass in a second, optional argument to specify the window size, in seconds, over which
it averages the time measurements. The default is 5 seconds. For long test runs, it might be
interesting to increase that number.</p>
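<p>The windowed averaging can be sketched as follows. This is an illustrative stand-in, not the script's actual implementation, with the window expressed in milliseconds to match the log's timestamps:</p>

```python
def window_average(samples, window_ms=5000):
    # Group (timestamp_ms, value) samples into fixed-size windows and
    # average each window. 5000 ms mirrors the 5 second default.
    buckets = {}
    for ts, value in samples:
        buckets.setdefault(ts // window_ms, []).append(value)
    return {w * window_ms: sum(v) / len(v) for w, v in buckets.items()}

# Three samples: two fall in the first 5 s window, one in the second.
averages = window_average([(1000, 10), (2000, 20), (6000, 30)])
```

<p>A larger window smooths out short spikes, which is why long test runs tend to read better with a bigger value.</p>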
<img alt="disk_io.png" src="disk_io.png" />
<p>This is an example graph generated by the parse script.</p>
</div>
<div class="section" id="disk-buffers-log">
<h3>disk_buffers.log</h3>
<p>The disk buffer log tells you where the buffer memory is used. Each log line has a time stamp,
the name of the buffer category whose use-count changed, a colon, and the new number of blocks
in use for that particular category. For example:</p>
<pre class="literal-block">
23671 write cache: 18
23671 receive buffer: 3
24153 receive buffer: 2
24153 write cache: 19
24154 receive buffer: 3
24198 receive buffer: 2
24198 write cache: 20
24202 receive buffer: 3
24305 send buffer: 0
24305 send buffer: 1
24909 receive buffer: 2
24909 write cache: 21
24910 receive buffer: 3
</pre>
<p>The time stamp is in milliseconds since start.</p>
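<p>A minimal sketch of reading this format (hypothetical code, not the actual parser): track the most recent count per category, and derive something like the peak usage:</p>

```python
log = """\
23671 write cache: 18
23671 receive buffer: 3
24153 receive buffer: 2
24153 write cache: 19
24154 receive buffer: 3
"""

# Each line is "<timestamp-ms> <category>: <blocks>". Category names
# can contain spaces, so split off the count first, then the timestamp.
current = {}
peak = {}
for line in log.splitlines():
    rest, count = line.rsplit(':', 1)
    ts, key = rest.split(' ', 1)
    current[key] = int(count)
    peak[key] = max(peak.get(key, 0), current[key])
```

<p>Because each line carries the new absolute count rather than a delta, the most recent value per category is all the state that needs to be kept.</p>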
<p>To generate a graph, use <tt class="docutils literal"><span class="pre">parse_disk_buffer_log.py</span></tt>. It takes the log file as the first
command line argument. It generates <tt class="docutils literal"><span class="pre">disk_buffer.png</span></tt>.</p>
<p>This is an example graph generated by the parse script.</p>
</div>
<div class="section" id="disk-access-log">
<h3>disk_access.log</h3>
<p>The disk access log has three fields: the timestamp (milliseconds since start), the operation
and the offset. The offset is the absolute offset within the torrent (not within a file). This
log is only useful when you're downloading a single torrent, otherwise the offsets will not
be unique.</p>
<p>In order to easily plot this directly in gnuplot, without parsing it, there are two lines
associated with each read or write operation. The first one is the offset where the operation
started, and the second one is where the operation ended.</p>
<p>Example:</p>
<pre class="literal-block">
15437 read 301187072
15437 read_end 301203456
16651 read 213385216
16680 read_end 213647360
25879 write 249036800
25879 write_end 249298944
26811 read 325582848
26943 read_end 325844992
36736 read 367001600
36766 read_end 367263744
</pre>
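<p>Since each operation contributes a start line and an end line, the distance between one operation's end offset and the next operation's start offset approximates a seek. A small sketch (not an official libtorrent tool) summing those distances over part of the excerpt above:</p>

```python
log = """\
15437 read 301187072
15437 read_end 301203456
16651 read 213385216
16680 read_end 213647360
25879 write 249036800
25879 write_end 249298944
"""

events = []
for line in log.splitlines():
    ts, op, offset = line.split()
    events.append((op, int(offset)))

# Pair each *_end offset with the next operation's start offset; the
# absolute difference approximates the distance the drive head seeks.
total_seek = 0
for (_, end_off), (_, start_off) in zip(events[1::2], events[2::2]):
    total_seek += abs(start_off - end_off)
```

<p>The smaller this total, the more sequential the access pattern; large values mean the drive spends much of its time seeking rather than transferring data.</p>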
<p>The disk access log does not have any good visualization tool yet. There is however a gnuplot
file, <tt class="docutils literal"><span class="pre">disk_access.gnuplot</span></tt>, which assumes <tt class="docutils literal"><span class="pre">disk_access.log</span></tt> is in the current directory.</p>
<img alt="disk_access.png" src="disk_access.png" />
<p>The density of the disk seeks tells you how hard the drive has to work.</p>
<p>By defining <tt class="docutils literal"><span class="pre">TORRENT_STATS</span></tt>, libtorrent will write a log file called <tt class="docutils literal"><span class="pre">session_stats.log</span></tt> which
is in a format ready to be passed directly into gnuplot. The parser script <tt class="docutils literal"><span class="pre">parse_session_stats.py</span></tt>
will however parse out the field names and generate 3 different views of the data. This script
is easy to modify to generate the particular view you're interested in.</p>
<p>The first line in the log contains all the field names, separated by colons:</p>