diff --git a/docs/stats_counters.rst b/docs/stats_counters.rst
index 9bef6c11e..b6c51da75 100644
--- a/docs/stats_counters.rst
+++ b/docs/stats_counters.rst
@@ -92,6 +92,50 @@ these counters break down the peer errors into more specific
categories. These errors are what the underlying transport
reported (i.e. TCP or uTP)
+.. _peer.piece_requests:
+
+.. _peer.max_piece_requests:
+
+.. _peer.invalid_piece_requests:
+
+.. _peer.choked_piece_requests:
+
+.. _peer.cancelled_piece_requests:
+
+.. _peer.piece_rejects:
+
+.. raw:: html
+
+
+
+
+
+
+
+
++-------------------------------+---------+
+| name | type |
++===============================+=========+
+| peer.piece_requests | counter |
++-------------------------------+---------+
+| peer.max_piece_requests | counter |
++-------------------------------+---------+
+| peer.invalid_piece_requests | counter |
++-------------------------------+---------+
+| peer.choked_piece_requests | counter |
++-------------------------------+---------+
+| peer.cancelled_piece_requests | counter |
++-------------------------------+---------+
+| peer.piece_rejects | counter |
++-------------------------------+---------+
+
+
+the total number of incoming piece requests we've received, followed
+by the number of rejected piece requests for various reasons.
+max_piece_requests means we already had too many outstanding requests
+from this peer, so we rejected it. cancelled_piece_requests are ones
+where the other end explicitly asked for the piece to be rejected.
+
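These counters can be read at runtime through the session-stats mechanism. A
minimal sketch, assuming the libtorrent 1.1+ stats API (``find_metric_idx()``,
``post_session_stats()`` and ``session_stats_alert``; ``counters()`` is the
newer accessor, older releases expose a ``values`` member instead)::

	#include <libtorrent/session.hpp>
	#include <libtorrent/session_stats.hpp>
	#include <libtorrent/alert_types.hpp>
	#include <cinttypes>
	#include <cstdint>
	#include <cstdio>
	#include <vector>

	namespace lt = libtorrent;

	int main()
	{
		// resolve counter indices by name; -1 means the metric is unknown
		int const idx_req = lt::find_metric_idx("peer.piece_requests");
		int const idx_max = lt::find_metric_idx("peer.max_piece_requests");

		lt::session ses;
		// ... add torrents and let the session run for a while ...

		ses.post_session_stats();
		// the stats are delivered asynchronously as an alert
		ses.wait_for_alert(lt::seconds(2));

		std::vector<lt::alert*> alerts;
		ses.pop_alerts(&alerts);
		for (lt::alert* a : alerts)
		{
			auto* st = lt::alert_cast<lt::session_stats_alert>(a);
			if (st == nullptr) continue;
			auto const c = st->counters();
			if (idx_req >= 0) std::printf("piece requests: %" PRId64 "\n"
				, std::int64_t(c[idx_req]));
			if (idx_max >= 0) std::printf("rejected (too many outstanding): %" PRId64 "\n"
				, std::int64_t(c[idx_max]));
		}
		return 0;
	}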
.. _peer.error_incoming_peers:
.. _peer.error_outgoing_peers:
@@ -895,6 +939,47 @@ bittorrent message counters. These counters are incremented
every time a message of the corresponding type is received from
or sent to a bittorrent peer.
+.. _ses.waste_piece_timed_out:
+
+.. _ses.waste_piece_cancelled:
+
+.. _ses.waste_piece_unknown:
+
+.. _ses.waste_piece_seed:
+
+.. _ses.waste_piece_end_game:
+
+.. _ses.waste_piece_closing:
+
+.. raw:: html
+
+
+
+
+
+
+
+
++---------------------------+---------+
+| name | type |
++===========================+=========+
+| ses.waste_piece_timed_out | counter |
++---------------------------+---------+
+| ses.waste_piece_cancelled | counter |
++---------------------------+---------+
+| ses.waste_piece_unknown | counter |
++---------------------------+---------+
+| ses.waste_piece_seed | counter |
++---------------------------+---------+
+| ses.waste_piece_end_game | counter |
++---------------------------+---------+
+| ses.waste_piece_closing | counter |
++---------------------------+---------+
+
+
+the number of downloaded bytes that were wasted, broken down by the
+reason the bytes were wasted.
+
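The per-reason breakdown can also be discovered generically by scanning the
metric list for the ``ses.waste_`` name prefix, rather than hard-coding each
counter. A small sketch against the same (assumed) session-stats API::

	#include <libtorrent/session_stats.hpp>
	#include <cstring>
	#include <cstdio>
	#include <vector>

	namespace lt = libtorrent;

	int main()
	{
		// list every waste-reason counter together with the index its value
		// will have in a session_stats_alert
		std::vector<lt::stats_metric> const metrics = lt::session_stats_metrics();
		for (lt::stats_metric const& m : metrics)
		{
			if (std::strncmp(m.name, "ses.waste_", 10) != 0) continue;
			std::printf("%-28s value index: %d\n", m.name, m.value_index);
		}
		return 0;
	}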
.. _picker.piece_picker_partial_loops:
.. _picker.piece_picker_suggest_loops:
@@ -1236,47 +1321,6 @@ hash a piece (when verifying against the piece hash)
cumulative time spent in various disk jobs, as well
as total for all disk jobs. Measured in microseconds
-.. _ses.waste_piece_timed_out:
-
-.. _ses.waste_piece_cancelled:
-
-.. _ses.waste_piece_unknown:
-
-.. _ses.waste_piece_seed:
-
-.. _ses.waste_piece_end_game:
-
-.. _ses.waste_piece_closing:
-
-.. raw:: html
-
-
-
-
-
-
-
-
-+---------------------------+---------+
-| name | type |
-+===========================+=========+
-| ses.waste_piece_timed_out | counter |
-+---------------------------+---------+
-| ses.waste_piece_cancelled | counter |
-+---------------------------+---------+
-| ses.waste_piece_unknown | counter |
-+---------------------------+---------+
-| ses.waste_piece_seed | counter |
-+---------------------------+---------+
-| ses.waste_piece_end_game | counter |
-+---------------------------+---------+
-| ses.waste_piece_closing | counter |
-+---------------------------+---------+
-
-
-the number of wasted downloaded bytes by reason of the bytes being
-wasted.
-
.. _dht.dht_nodes:
.. raw:: html
@@ -1509,6 +1553,26 @@ the total number of bytes sent and received by the DHT
the number of DHT messages we've sent and received
by kind.
+.. _dht.sent_dht_bytes:
+
+.. _dht.recv_dht_bytes:
+
+.. raw:: html
+
+
+
+
++--------------------+---------+
+| name | type |
++====================+=========+
+| dht.sent_dht_bytes | counter |
++--------------------+---------+
+| dht.recv_dht_bytes | counter |
++--------------------+---------+
+
+
+the number of bytes sent and received by the DHT
+
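Like all stats counters these are cumulative, so a bytes-per-second figure has
to be derived by sampling twice and dividing the difference by the elapsed
time. A rough sketch (same assumed 1.1+ alert API; the helper below would be
called once per received ``session_stats_alert``)::

	#include <libtorrent/alert_types.hpp>
	#include <libtorrent/session_stats.hpp>
	#include <libtorrent/time.hpp>
	#include <chrono>
	#include <cstdint>
	#include <cstdio>

	namespace lt = libtorrent;

	// derive the DHT upload rate from the cumulative dht.sent_dht_bytes counter
	void print_dht_up_rate(lt::session_stats_alert const* st)
	{
		static int const idx = lt::find_metric_idx("dht.sent_dht_bytes");
		static std::int64_t prev_bytes = 0;
		static lt::time_point prev_time = st->timestamp();

		if (idx < 0) return; // this build doesn't know the counter

		auto const c = st->counters();
		double const dt = std::chrono::duration_cast<std::chrono::milliseconds>(
			st->timestamp() - prev_time).count() / 1000.0;
		if (dt > 0)
			std::printf("DHT upload rate: %.1f B/s\n"
				, double(c[idx] - prev_bytes) / dt);

		prev_bytes = c[idx];
		prev_time = st->timestamp();
	}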
.. _utp.utp_packet_loss:
.. _utp.utp_timeout:
diff --git a/docs/todo.html b/docs/todo.html
index f5bded044..f902df147 100644
--- a/docs/todo.html
+++ b/docs/todo.html
@@ -22,7 +22,7 @@
libtorrent todo-list
0 urgent
-21 important
+20 important
28 relevant
8 feasible
134 notes
@@ -496,9 +496,9 @@ namespace libtorrent
#if TORRENT_USE_IPV6
if (!ipv4)
-
+
it would be really nice to update these counters
as they are incremented. This depends on the session
-being ticked, which has a fairly coarse grained resolution
../src/session_impl.cpp:4493
t->status(&alert->status.back(), ~torrent_handle::query_accurate_download_counters);
+being ticked, which has a fairly coarse grained resolution
../src/session_impl.cpp:4486
t->status(&alert->status.back(), ~torrent_handle::query_accurate_download_counters);
t->clear_in_state_update();
}
state_updates.clear();
@@ -524,8 +524,6 @@ being ticked, which has a fairly coarse grained resolution
../src/sessio
, m_stat.total_transfer(stat::upload_payload));
m_stats_counters.set_value(counters::sent_ip_overhead_bytes
, m_stat.total_transfer(stat::upload_ip_protocol));
- m_stats_counters.set_value(counters::sent_tracker_bytes
- , m_stat.total_transfer(stat::upload_tracker_protocol));
m_stats_counters.set_value(counters::recv_bytes
, m_stat.total_download());
@@ -533,8 +531,6 @@ being ticked, which has a fairly coarse grained resolution
../src/sessio
, m_stat.total_transfer(stat::download_payload));
m_stats_counters.set_value(counters::recv_ip_overhead_bytes
, m_stat.total_transfer(stat::download_ip_protocol));
- m_stats_counters.set_value(counters::recv_tracker_bytes
- , m_stat.total_transfer(stat::download_tracker_protocol));
m_stats_counters.set_value(counters::limiter_up_queue
, m_upload_rate.queue_size());
@@ -549,8 +545,64 @@ being ticked, which has a fairly coarse grained resolution
../src/sessio
for (int i = 0; i < counters::num_counters; ++i)
values[i] = m_stats_counters[i];
-
If socket jobs could be higher level, to include RC4 encryption and decryption,
+we would offload the main thread even more
../src/session_impl.cpp:5938
{
int num_threads = m_settings.get_int(settings_pack::network_threads);
int num_pools = num_threads > 0 ? num_threads : 1;
while (num_pools > m_net_thread_pool.size())
@@ -601,7 +653,7 @@ we would offload the main thread even more
../src/session_impl.cpp:5972<
, end(m_connections.end()); i != end; ++i)
{
int type = (*i)->type();
-
if peer is a really good peer, maybe we shouldn't disconnect it
../src/torrent.cpp:7685
#if defined TORRENT_LOGGING || defined TORRENT_ERROR_LOGGING
debug_log("incoming peer (%d)", int(m_connections.size()));
#endif
@@ -703,57 +755,6 @@ we would offload the main thread even more
../src/session_impl.cpp:5972<
if (m_abort) return false;
if (!m_connections.empty()) return true;
-
// if we're also flushing the read cache, this piece
// should be removed as soon as all write jobs finishes
// otherwise it will turn into a read piece
}
@@ -1139,7 +1094,7 @@ should not include internal state.
// from disk_io_thread::do_delete, which is a fence job and should
// have any other jobs active, i.e. there should not be any references
// keeping pieces or blocks alive
if ((flags & flush_delete_cache) && (flags & flush_expect_clear))
@@ -1190,7 +1145,7 @@ should not include internal state.
../include/libtorrent/torrent_info.hp
if (e->num_dirty == 0) continue;
pieces.push_back(std::make_pair(e->storage.get(), int(e->piece)));
}
-
use vm_copy here, if available, and if buffers are aligned
../src/file.cpp:1491
CloseHandle(native_handle());
m_path.clear();
#else
if (m_file_handle != INVALID_HANDLE_VALUE)
@@ -1220,7 +1175,7 @@ should not include internal state.
../include/libtorrent/torrent_info.hp
int offset = 0;
for (int i = 0; i < num_bufs; ++i)
{
-
+
use a deadline_timer for timeouts. Don't rely on second_tick()!
Hook this up to connect timeout as well. This would improve performance
because of less work in second_tick(), and might let use remove ticking
entirely eventually
../src/peer_connection.cpp:4839
if (is_i2p(*m_socket))
@@ -1325,7 +1280,7 @@ entirely eventually
+
the udp socket(s) should be using the same generic
mechanism and not be restricted to a single one
we should open a one listen socket for each entry in the
listen_interfaces list
make a list for torrents that want to be announced on the DHT so we
+don't have to loop over all torrents, just to find the ones that want to announce
../src/session_impl.cpp:3394
if (!m_dht_torrents.empty())
{
boost::shared_ptr<torrent> t;
do
@@ -1579,7 +1534,7 @@ don't have to loop over all torrents, just to find the ones that want to announc
if (m_torrents.empty()) return;
if (m_next_lsd_torrent == m_torrents.end())
-
state_updated();
set_state(torrent_status::downloading);
@@ -1630,7 +1585,7 @@ don't have to loop over all torrents, just to find the ones that want to announc
TORRENT_ASSERT(piece >= 0);
TORRENT_ASSERT(m_verified.get_bit(piece) == false);
++m_num_verified;
-
+
create a mapping of file-index to redirection URLs. Use that to form
URLs instead. Support to reconnect to a new server without destructing this
peer_connection
find_node should write directly to the response entry
../src/kademlia/node.cpp:804
TORRENT_LOG(node) << " values: " << reply["values"].list().size();
}
#endif
}
@@ -1888,7 +1843,7 @@ void nop() {}
// listen port and instead use the source port of the packet?
if (msg_keys[5] && msg_keys[5]->int_value() != 0)
port = m.addr.port();
-
remove this class and transition over to using shared_ptr and
make_shared instead
../include/libtorrent/intrusive_ptr_base.hpp:44
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
@@ -2045,7 +2000,7 @@ namespace libtorrent
intrusive_ptr_base(): m_refs(0) {}
-
this class probably doesn't need to have virtual functions.
../include/libtorrent/tracker_manager.hpp:270
int m_completion_timeout;
typedef mutex mutex_t;
mutable mutex_t m_mutex;
@@ -2302,7 +2257,7 @@ specifically to turn some std::string and std::vector into pointers
// this is only used for SOCKS packets, since
// they may be addressed to hostname
virtual bool incoming_packet(error_code const& e, char const* hostname
, char const* buf, int size);
@@ -2334,13 +2289,14 @@ specifically to turn some std::string and std::vector into pointers
in chunked encoding mode, this assert won't hold.
the chunk headers should be subtracted from the receive_buffer_size
../src/http_seed_connection.cpp:124
boost::optional<piece_block_progress>
http_seed_connection::downloading_piece_progress() const
{
@@ -2444,8 +2400,8 @@ the chunk headers should be subtracted from the receive_buffer_size
report the proper address of the router as the source IP of
+this understanding of our external address, instead of the empty address
../src/session_impl.cpp:5316
void session_impl::on_port_mapping(int mapping, address const& ip, int port
, error_code const& ec, int map_transport)
{
TORRENT_ASSERT(is_single_thread());
@@ -2492,13 +2448,9 @@ this understanding of our external address, instead of the empty address
+
we only need to do this if our global IPv4 address has changed
since the DHT (currently) only supports IPv4. Since restarting the DHT
-is kind of expensive, it would be nice to not do it unnecessarily
../src/session_impl.cpp:6515
#endif
+is kind of expensive, it would be nice to not do it unnecessarily
../src/session_impl.cpp:6481
#endif
if (!m_external_ip.cast_vote(ip, source_type, source)) return;
@@ -2549,7 +2501,7 @@ is kind of expensive, it would be nice to not do it unnecessarily
+
make this depend on the error and on the filesystem the
files are being downloaded to. If the error is no_space_left_on_device
and the filesystem doesn't support sparse files, only zero the priorities
of the pieces that are at the tails of all files, leaving everything
@@ -2604,7 +2556,7 @@ up to the highest written piece in each file
save the send_stats state instead of throwing them away
it may pose an issue when downgrading though
../src/torrent.cpp:6837
for (int k = 0; k < bits; ++k)
v |= (i->info[j*8+k].state == piece_picker::block_info::state_finished)
? (1 << k) : 0;
@@ -2656,7 +2608,7 @@ it may pose an issue when downgrading though
// it's supposed to be a cache hit
TEST_CHECK(ret >= 0);
// return the reference to the buffer we just read
RETURN_BUFFER;
@@ -2812,7 +2764,7 @@ int test_main()
it would be nice to test reversing
which session is making the connection as well
../test/test_metadata_extension.cpp:87
, boost::shared_ptr<libtorrent::torrent_plugin> (*constructor)(libtorrent::torrent*, void*)
, int timeout)
{
@@ -2864,7 +2816,7 @@ which session is making the connection as well
for (int i = 0; i < 100; ++i)
{
torrent_peer* peer = p.add_peer(rand_tcp_ep(), 0, 0, &st);
TEST_EQUAL(st.erased.size(), 0);
@@ -2888,7 +2840,7 @@ which session is making the connection as well
test the case where a sample is lower than the history entry but not lower than the base
../test/test_primitives.cpp:214
TEST_CHECK(!filter.find(k3));
TEST_CHECK(filter.find(k4));
// test timestamp_history
@@ -2939,7 +2891,7 @@ which session is making the connection as well
wait for an alert rather than just waiting 10 seconds. This is kind of silly
../test/test_torrent.cpp:132
TEST_EQUAL(h.file_priorities().size(), info->num_files());
TEST_EQUAL(h.file_priorities()[0], 0);
if (info->num_files() > 1)
TEST_EQUAL(h.file_priorities()[1], 0);
@@ -3128,7 +3080,7 @@ but that differs from the SNI hash
torrent with multiple trackers in multiple tiers, making sure we shuffle them (how do you test shuffling?, load it multiple times and make sure it's in different order at least once)
test all failure paths
invalid bencoding
not a dictionary
no files entry in scrape response
no info-hash entry in scrape response
malformed peers in peer list of dictionaries
uneven number of bytes in peers and peers6 string responses
@@ -3236,7 +3188,7 @@ int test_main()
snprintf(tracker_url, sizeof(tracker_url), "http://127.0.0.1:%d/announce", http_port);
t->add_tracker(tracker_url, 0);
-
file hashes don't work with the new torrent creator reading async
../test/web_seed_suite.cpp:373
// corrupt the files now, so that the web seed will be banned
if (test_url_seed)
{
create_random_files(combine_path(save_path, "torrent_dir"), file_sizes, sizeof(file_sizes)/sizeof(file_sizes[0]));
@@ -3338,7 +3290,7 @@ int run_upnp_test(char const* root_filename, char const* router_model, char cons
, chunked_encoding, test_ban, keepalive);
if (test_url_seed && test_rename)
-
it's somewhat expensive
to iterate over this linked list. Presumably because of the random
access of memory. It would be nice if pieces with no evictable blocks
weren't in this list
instead of doing a lookup each time through the loop, save
cached_piece_entry pointers with piece_refcount incremented to pin them
../src/disk_io_thread.cpp:921
// this is why we pass in 1 as cont_block to the flushing functions
void disk_io_thread::try_flush_write_blocks(int num, tailqueue& completed_jobs
, mutex::scoped_lock& l)
@@ -3648,7 +3600,7 @@ cached_piece_entry pointers with piece_refcount incremented to pin them
cached_piece_entry* pe = m_disk_cache.find_piece(i->first, i->second);
if (pe == NULL) continue;
if (pe->num_dirty == 0) continue;
-
+
instead of doing this. pass in the settings to each storage_interface
call. Each disk thread could hold its most recent understanding of the settings
in a shared_ptr, and update it every time it wakes up from a job. That way
each access to the settings won't require a mutex to be held.
../src/disk_io_thread.cpp:1132
{
@@ -3695,7 +3647,7 @@ each access to the settings won't require a mutex to be held.
../src/dis
// our quanta in case there aren't any other
// jobs to run in between
-
+
a potentially more efficient solution would be to have a special
queue for retry jobs, that's only ever run when a job completes, in
any thread. It would only work if m_outstanding_jobs > 0
../src/disk_io_thread.cpp:1160
ptime start_time = time_now_hires();
@@ -3728,7 +3680,7 @@ any thread. It would only work if m_outstanding_jobs > 0
we should probably just hang the job on the piece and make sure the hasher gets kicked
../src/disk_io_thread.cpp:2387
if (pe == NULL)
{
int cache_state = (j->flags & disk_io_job::volatile_read)
? cached_piece_entry::volatile_read_lru
@@ -3934,7 +3886,7 @@ it would be to have a fence for just this one piece.
../src/disk_io_thre
// increment the refcounts of all
// blocks up front, and then hash them without holding the lock
-
+
what do we do if someone is currently reading from the disk
from this piece? does it matter? Since we won't actively erase the
data from disk, but it may be overwritten soon, it's probably not that
big of a deal
../src/part_file.cpp:252
if (((mode & file::rw_mask) != file::read_only)
@@ -4297,7 +4249,7 @@ big of a deal
int rate = 0;
// if we haven't received any data recently, the current download rate
@@ -4400,7 +4352,7 @@ and flushing it, update the slot entries as we go
../src/part_file.cpp:3
if (m_ignore_stats) return;
boost::shared_ptr<torrent> t = m_torrent.lock();
if (!t) return;
-
// if the peer has the piece and we want
// to download it, request it
if (int(m_have_piece.size()) > index
@@ -4451,7 +4403,7 @@ and flushing it, update the slot entries as we go
../src/part_file.cpp:3
boost::shared_ptr<torrent> t = m_torrent.lock();
TORRENT_ASSERT(t);
TORRENT_ASSERT(t->has_picker());
-
when expanding pieces for cache stripe reasons,
the !downloading condition doesn't make much sense
../src/piece_picker.cpp:2407
TORRENT_ASSERT(index < (int)m_piece_map.size() || m_piece_map.empty());
if (index+1 == (int)m_piece_map.size())
return m_blocks_in_last_piece;
@@ -4503,7 +4455,7 @@ the !downloading condition doesn't make much sense
../src/piece_picker.c
// the second bool is true if this is the only active peer that is requesting
// and downloading blocks from this piece. Active means having a connection.
boost::tuple<bool, bool> requested_from(piece_picker::downloading_piece const& p
-
there's no rule here to make uTP connections not have the global or
local rate limits apply to it. This used to be the default.
../src/session_impl.cpp:532
m_global_class = m_classes.new_peer_class("global");
m_tcp_peer_class = m_classes.new_peer_class("tcp");
m_local_peer_class = m_classes.new_peer_class("local");
@@ -4555,7 +4507,7 @@ local rate limits apply to it. This used to be the default.
../src/sessi
// futexes, shared objects etc.
rl.rlim_cur -= 20;
-
+
instead of having a special case for this, just make the
default listen interfaces be "0.0.0.0:6881,[::1]:6881" and use
the generic path. That would even allow for not listening at all.
../src/session_impl.cpp:1744
// reset the retry counter
@@ -4608,7 +4560,7 @@ retry:
, retries, flags, ec);
if (s.sock)
-
+ m_alerts.post_alert(performance_alert(torrent_handle()
+ , performance_alert::upload_limit_too_low));
+ }
+ }
+
+ m_peak_up_rate = (std::max)(m_stat.upload_rate(), m_peak_up_rate);
+ m_peak_down_rate = (std::max)(m_stat.download_rate(), m_peak_down_rate);
+
these vectors could be copied from m_torrent_lists,
if we would maintain them. That way the first pass over
all torrents could be avoided. It would be especially
efficient if most torrents are not auto-managed
@@ -4769,7 +4721,7 @@ whenever we receive a scrape response (or anything
that may change the rank of a torrent) that one torrent
could re-sort itself in a list that's kept sorted at all
times. That way, this pass over all torrents could be
-avoided alltogether.
../src/session_impl.cpp:3509
#if defined TORRENT_VERBOSE_LOGGING || defined TORRENT_LOGGING
+avoided alltogether.
../src/session_impl.cpp:3502
#if defined TORRENT_VERBOSE_LOGGING || defined TORRENT_LOGGING
if (t->allows_peers())
t->log_to_all_peers("AUTO MANAGER PAUSING TORRENT");
#endif
@@ -4820,7 +4772,7 @@ avoided alltogether.
use a lower limit than m_settings.connections_limit
to allocate the to 10% or so of connection slots for incoming
-connections
../src/session_impl.cpp:3756
// robin fashion, so that every torrent is equally likely to connect to a
+connections
../src/session_impl.cpp:3749
// robin fashion, so that every torrent is equally likely to connect to a
// peer
// boost connections are connections made by torrent connection
@@ -4924,8 +4876,8 @@ connections
post a message to have this happen
+immediately instead of waiting for the next tick
../src/session_impl.cpp:3910
// we've unchoked this peer, and it hasn't reciprocated
// we may want to increase our estimated reciprocation rate
p->increase_est_reciprocation_rate();
}
@@ -4976,7 +4928,7 @@ immediately instead of waiting for the next tick
#ifdef TORRENT_DEBUG
for (std::vector<peer_connection*>::const_iterator i = peers.begin()
, end(peers.end()), prev(peers.end()); i != end; ++i)
@@ -5009,7 +4961,7 @@ immediately instead of waiting for the next tick
// we don't know at what rate we can upload. If we have a
// measurement of the peak, use that + 10kB/s, otherwise
// assume 20 kB/s
upload_capacity_left = (std::max)(20000, m_peak_up_rate + 10000);
@@ -5111,10 +5063,10 @@ immediately instead of waiting for the next tick
+
it might be a nice feature here to limit the number of torrents
to send in a single update. By just posting the first n torrents, they
would nicely be round-robined because the torrent lists are always
-pushed back
../src/session_impl.cpp:4459
t->status(&*i, flags);
+pushed back
../src/session_impl.cpp:4452
t->status(&*i, flags);
}
}
@@ -5164,7 +5116,7 @@ pushed back
+
make this more generic to not just work if files have been
renamed, but also if they have been merged into a single file for instance
maybe use the same format as .torrent files and reuse some code from torrent_info
../src/storage.cpp:710
for (;;)
{
@@ -5217,7 +5169,7 @@ maybe use the same format as .torrent files and reuse some code from torrent_inf
if (file_sizes_ent->list_size() == 0)
{
ec.ec = errors::no_files_in_resume_data;
-
if everything moves OK, except for the partfile
we currently won't update the save path, which breaks things.
it would probably make more sense to give up on the partfile
../src/storage.cpp:1006
if (ec)
{
@@ -5270,7 +5222,7 @@ it would probably make more sense to give up on the partfile
+
is verify_peer_cert called once per certificate in the chain, and
this function just tells us which depth we're at right now? If so, the comment
makes sense.
any certificate that isn't the leaf (i.e. the one presented by the peer)
should be accepted automatically, given preverified is true. The leaf certificate
@@ -5430,7 +5382,7 @@ need to be verified to make sure its DN matches the info-hash
../src/tor
{
#if defined(TORRENT_VERBOSE_LOGGING) || defined(TORRENT_LOGGING)
match = true;
-
+
there may be peer extensions relying on the torrent extension
still being alive. Only do this if there are no peers. And when the last peer
is disconnected, if the torrent is unloaded, clear the extensions
m_extensions.clear();
../src/torrent.cpp:2034
// pinned torrents are not allowed to be swapped out
@@ -5536,7 +5488,7 @@ m_extensions.clear();
really, we should just keep the picker around
in this case to maintain the availability counters
../src/torrent.cpp:4617
pieces.reserve(cs.pieces.size());
// sort in ascending order, to get most recently used first
@@ -5743,7 +5695,7 @@ in this case to maintain the availability counters
+
make this more generic to not just work if files have been
renamed, but also if they have been merged into a single file for instance
maybe use the same format as .torrent files and reuse some code from torrent_info
The mapped_files needs to be read both in the network thread
and in the disk thread, since they both have their own mapped files structures
@@ -5799,7 +5751,7 @@ which are kept in sync
if this is a merkle torrent and we can't
restore the tree, we need to wipe all the
bits in the have array, but not necessarily
we might want to do a full check to see if we have
all the pieces. This is low priority since almost
@@ -5855,7 +5807,7 @@ no one uses merkle torrents
+
add a flag to ignore stats, and only care about resume data for
content. For unchanged files, don't trigger a load of the metadata
just to save an empty resume data file
../src/torrent.cpp:8887
if (m_complete != 0xffffff) seeds = m_complete;
else seeds = m_policy ? m_policy->num_seeds() : 0;
@@ -5961,7 +5913,7 @@ just to save an empty resume data file
+
go through the pieces we have and count the total number
of downloaders we have. Only count peers that are interested in us
since some peers might not send have messages for pieces we have
it num_interested == 0, we need to pick a new piece
../src/torrent.cpp:9849
}
@@ -6015,7 +5967,7 @@ it num_interested == 0, we need to pick a new piece
../src/torrent.cpp:9
if (num_cache_pieces > m_torrent_file->num_pieces())
num_cache_pieces = m_torrent_file->num_pieces();
-
we really need to increment the refcounter on the torrent
while this buffer is still in the peer's send buffer
../src/ut_metadata.cpp:316
if (!m_tp.need_loaded()) return;
metadata = m_tp.metadata().begin + offset;
metadata_piece_size = (std::min)(
@@ -6272,7 +6224,7 @@ while this buffer is still in the peer's send buffer
../src/ut_metadata.
#ifdef TORRENT_VERBOSE_LOGGING
m_pc.peer_log("<== UT_METADATA [ not a dictionary ]");
#endif
-
make this 32 bits and to count seconds since the block cache was created
../include/libtorrent/block_cache.hpp:218
bool operator==(cached_piece_entry const& rhs) const
{ return storage.get() == rhs.storage.get() && piece == rhs.piece; }
// if this is set, we'll be calculating the hash
@@ -6580,7 +6532,7 @@ bool compare_bucket_refresh(routing_table_node const& lhs, routing_table_nod
// this is set to true once we flush blocks past
// the hash cursor. Once this happens, there's
-
try to remove the observers, only using the async_allocate handlers
../include/libtorrent/disk_buffer_pool.hpp:128
// number of bytes per block. The BitTorrent
// protocol defines the block size to 16 KiB.
const int m_block_size;
@@ -6734,7 +6686,7 @@ namespace libtorrent
// the pointer to the block of virtual address space
// making up the mmapped cache space
char* m_cache_pool;
-
make this a raw pointer (to save size in
the first cache line) and make the constructor
take a raw pointer. torrent objects should always
outlive their peers
../include/libtorrent/peer_connection.hpp:216
, m_snubbed(false)
@@ -6788,7 +6740,7 @@ outlive their peers
factor this out into its own class with a virtual interface
torrent and session should implement this interface
../include/libtorrent/peer_connection.hpp:1123
// the local endpoint for this peer, i.e. our address
// and our port. If this is set for outgoing connections
@@ -6840,7 +6792,7 @@ torrent and session should implement this interface
../include/libtorren
// |
// | m_recv_start (logical start of current
// | | receive buffer, as perceived by upper layers)
-
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
@@ -6891,7 +6843,7 @@ namespace libtorrent
virtual tcp::endpoint const& remote() const = 0;
virtual tcp::endpoint local_endpoint() const = 0;
virtual void disconnect(error_code const& ec, operation_t op, int error = 0) = 0;
-
// a connect candidate
connection_attempt_loops,
// successful incoming connections (not rejected for any reason)
@@ -6943,9 +6895,9 @@ how about dont-have, share-mode, upload-only
std::vector<downloading_piece>::const_iterator find_dl_piece(int queue, int index) const;
std::vector<downloading_piece>::iterator find_dl_piece(int queue, int index);
// returns an iterator to the downloading piece, whichever
@@ -7029,7 +6981,7 @@ synchronization points
../include/libtorrent/performance_counters.hpp:40
// and some are still in the requested state
// 2: downloading pieces where every block is
// finished or writing
-
//
// The ``peer_class`` argument cannot be greater than 31. The bitmasks
// representing peer classes in the ``peer_class_filter`` are 32 bits.
//
@@ -7132,7 +7084,7 @@ m_sock.bind(endpoint, ec);
../include/libtorrent/proxy_base.hpp:171
// destructs.
//
// For more information on peer classes, see peer-classes_.
-
deprecate this
``max_rejects`` is the number of piece requests we will reject in a row
while a peer is choked before the peer is considered abusive and is
disconnected.
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
*/
@@ -7212,7 +7164,7 @@ namespace libtorrent
#endif
-
typedef std::list<boost::shared_ptr<torrent_plugin> > extension_list_t;
extension_list_t m_extensions;
#endif
@@ -7263,7 +7215,7 @@ namespace libtorrent
// if this was added from an RSS feed, this is the unique
// identifier in the feed.
-
These two bitfields should probably be coalesced into one
../include/libtorrent/torrent.hpp:1272
// the .torrent file from m_url
// std::vector<char> m_torrent_file_buf;
// this is a list of all pieces that we have announced
@@ -7314,7 +7266,7 @@ namespace libtorrent
// this is the time last any of our peers saw a seed
// in this swarm
time_t m_swarm_last_seen_complete;
-
include the number of peers received from this tracker, at last announce
../include/libtorrent/torrent_info.hpp:124
// if this tracker failed the last time it was contacted
// this error code specifies what error occurred
error_code last_error;
@@ -7365,7 +7317,7 @@ namespace libtorrent
// flags for the source bitmask, each indicating where
// we heard about this tracker
enum tracker_source
-
support using the windows API for UPnP operations as well
../include/libtorrent/upnp.hpp:112
// specific port
external_port_must_be_wildcard = 727
};
@@ -7416,7 +7368,7 @@ public:
// is -1, which means failure. There will not be any error alert notification for
// mappings that fail with a -1 return value.
int add_mapping(protocol_type p, int external_port, int local_port);
-
move the login info into the tracker_request object
../include/libtorrent/aux_/session_impl.hpp:378
void on_lsd_announce(error_code const& e);
// called when a port mapping is successful, or a router returns
@@ -7570,7 +7522,7 @@ public:
#ifndef TORRENT_DISABLE_EXTENSIONS
void add_extensions_to_torrent(
-
// listen socket. For each retry the port number
// is incremented by one
int m_listen_port_retries;
@@ -7621,7 +7573,7 @@ public:
mutable boost::uint8_t m_interface_index;
void open_new_incoming_socks_connection();
-
mutable boost::uint8_t m_interface_index;
void open_new_incoming_socks_connection();
@@ -7645,7 +7597,7 @@ public:
// this is used to decide when to recalculate which
// torrents to keep queued and which to activate
-
void setup_listener(listen_socket_t* s, std::string const& device
, bool ipv4, int port, int& retries, int flags, error_code& ec);
#ifndef TORRENT_DISABLE_DHT
@@ -7671,7 +7623,7 @@ public:
// is only decresed when the unchoke set
// is recomputed, and when it reaches zero,
// the optimistic unchoke is moved to another peer.
-
// the number of unchoked peers as set by the auto-unchoker
// this should always be >= m_max_uploads
int m_allowed_upload_slots;
@@ -7722,7 +7674,7 @@ public:
int m_suggest_timer;
// statistics gathered from all torrents.
-
@@ -224,14 +222,6 @@ deprecated functions and struct members. As long as no deprecated functions are
relied upon, this should be a simple way to eliminate a little bit of code.
You can save some memory for each connection and each torrent by reducing the
-number of separate rates kept track of by libtorrent. If you build with full-stats=off
-(or -DTORRENT_DISABLE_FULL_STATS) you will save a few hundred bytes for each
-connection and torrent. It might make a difference if you have a very large number
-of peers or torrents.
-
play nice with the disk
@@ -514,6 +504,7 @@ command line argument. It generates disk_buffer.png
disk_access.log
+
The disk access log is now binary
The disk access log has three fields: the timestamp (milliseconds since start), operation
and offset. The offset is the absolute offset within the torrent (not within a file). This
log is only useful when you're downloading a single torrent, otherwise the offsets will not
@@ -540,96 +531,61 @@ file, disk_access.gnuplot which assumes
The density of the disk seeks tells you how hard the drive has to work.
-
-
session stats
-
By defining TORRENT_STATS libtorrent will write a log file called session_stats/<pid>.<sequence>.log which
-is in a format ready to be passed directly into gnuplot. The parser script parse_session_stats.py
-generates a report in session_stats_report/index.html.
-
The first line in the log contains all the field names, separated by colon:
The rest of the log is one line per second with all the fields' values.
-
These are the fields:
-
-second
-	the time, in seconds, for this log line
-upload rate
-	the number of bytes uploaded in the last second
-download rate
-	the number of bytes downloaded in the last second
-downloading torrents
-	the number of torrents that are not seeds
-seeding torrents
-	the number of torrents that are seed
-peers
-	the total number of connected peers
-connecting peers
-	the total number of peers attempting to connect (half-open)
-disk block buffers
-	the total number of disk buffer blocks that are in use
-unchoked peers
-	the total number of unchoked peers
-num list peers
-	the total number of known peers, but not necessarily connected
-peer allocations
-	the total number of allocations for the peer list pool
-peer storage bytes
-	the total number of bytes allocated for the peer list pool
-
This is an example of a graph that can be generated from this log:
-
-
It shows statistics about the number of peers and peers states. How at the startup
-there are a lot of half-open connections, which tapers off as the total number of
-peers approaches the limit (50). It also shows how the total peer list slowly but steadily
-grows over time. This list is plotted against the right axis, as it has a different scale
-as the other fields.
-
-
-
understanding the disk thread
-
All disk operations are funneled through a separate thread, referred to as the disk thread.
-The main interface to the disk thread is a queue where disk jobs are posted, and the results
-of these jobs are then posted back on the main thread's io_service.
+
+
understanding the disk threads
+
This section is somewhat outdated; there may be more than one disk
+thread
+
All disk operations are funneled through a separate thread, referred to as the
+disk thread. The main interface to the disk thread is a queue where disk jobs
+are posted, and the results of these jobs are then posted back on the main
+thread's io_service.
A disk job is essentially one of:
-
-
write this block to disk, i.e. a write job. For the most part this is just a matter of sticking the block in the disk cache, but if we've run out of cache space or completed a whole piece, we'll also flush blocks to disk. This is typically very fast, since the OS just sticks these buffers in its write cache which will be flushed at a later time, presumably when the drive head will pass the place on the platter where the blocks go.
-
read this block from disk. The first thing that happens is we look in the cache to see if the block is already in RAM. If it is, we'll return immediately with this block. If it's a cache miss, we'll have to hit the disk. Here we decide to defer this job. We find the physical offset on the drive for this block and insert the job in an ordered queue, sorted by the physical location. At a later time, once we don't have any more non-read jobs left in the queue, we pick one read job out of the ordered queue and service it. The order we pick jobs out of the queue is according to an elevator cursor moving up and down along the ordered queue of read jobs. If we have enough space in the cache we'll read read_cache_line_size number of blocks and stick those in the cache. This defaults to 32 blocks. If the system supports asynchronous I/O (Windows, Linux, Mac OS X, BSD, Solars for instance), jobs will be issued immediately to the OS. This especially increases read throughput, since the OS has a much greater flexibility to reorder the read jobs.
+
+
+
write this block to disk, i.e. a write job. For the most part this is just a
+
matter of sticking the block in the disk cache, but if we've run out of
+cache space or completed a whole piece, we'll also flush blocks to disk.
+This is typically very fast, since the OS just sticks these buffers in its
+write cache which will be flushed at a later time, presumably when the drive
+head will pass the place on the platter where the blocks go.
+
+
+
+
+
read this block from disk. The first thing that happens is we look in the
+
cache to see if the block is already in RAM. If it is, we'll return
+immediately with this block. If it's a cache miss, we'll have to hit the
+disk. Here we decide to defer this job. We find the physical offset on the
+drive for this block and insert the job in an ordered queue, sorted by the
+physical location. At a later time, once we don't have any more non-read
+jobs left in the queue, we pick one read job out of the ordered queue and
+service it. The order we pick jobs out of the queue is according to an
+elevator cursor moving up and down along the ordered queue of read jobs. If
+we have enough space in the cache we'll read read_cache_line_size number of
+blocks and stick those in the cache. This defaults to 32 blocks. If the
+system supports asynchronous I/O (Windows, Linux, Mac OS X, BSD, Solaris for
+instance), jobs will be issued immediately to the OS. This especially
+increases read throughput, since the OS has a much greater flexibility to
+reorder the read jobs.
+
+
+
-
Other disk job consist of operations that needs to be synchronized with the disk I/O, like renaming files, closing files, flushing the cache, updating the settings etc. These are relatively rare though.
+
Other disk jobs consist of operations that need to be synchronized with the
+disk I/O, like renaming files, closing files, flushing the cache, updating the
+settings etc. These are relatively rare though.
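The cache behaviour described above can be tuned through ``settings_pack``. A
minimal sketch, assuming the 1.x block-cache settings ``cache_size``,
``read_cache_line_size`` and ``write_cache_line_size``::

	#include <libtorrent/session.hpp>
	#include <libtorrent/settings_pack.hpp>

	namespace lt = libtorrent;

	int main()
	{
		lt::settings_pack p;
		// cache_size is counted in 16 KiB blocks; 4096 blocks is roughly 64 MiB
		p.set_int(lt::settings_pack::cache_size, 4096);
		// number of blocks read ahead into the cache on a read cache miss
		p.set_int(lt::settings_pack::read_cache_line_size, 32);
		// number of contiguous dirty blocks the disk thread aims to flush at a time
		p.set_int(lt::settings_pack::write_cache_line_size, 16);

		lt::session ses;
		ses.apply_settings(p);
		// ... add torrents ...
		return 0;
	}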
contributions
-
If you have added instrumentation for some part of libtorrent that is not covered here, or
-if you have improved any of the parser scrips, please consider contributing it back to the
-project.
-
If you have run tests and found that some algorithm or default value in libtorrent is
-suboptimal, please contribute that knowledge back as well, to allow us to improve the library.
-
If you have additional suggestions on how to tune libtorrent for any specific use case,
-please let us know and we'll update this document.
+
If you have added instrumentation for some part of libtorrent that is not
+covered here, or if you have improved any of the parser scripts, please consider
+contributing it back to the project.
+
If you have run tests and found that some algorithm or default value in
+libtorrent is suboptimal, please contribute that knowledge back as well, to
+allow us to improve the library.
+
If you have additional suggestions on how to tune libtorrent for any specific
+use case, please let us know and we'll update this document.