diff --git a/docs/dht_sec.html b/docs/dht_sec.html index f94e88888..ce68c6075 100644 --- a/docs/dht_sec.html +++ b/docs/dht_sec.html @@ -171,14 +171,18 @@ random numbers.

bootstrapping

In order to set one's initial node ID, the external IP needs to be known. This -is not a trivial problem. With this extension, all DHT requests whose node -ID does not match its IP address MUST be serviced and MUST also include one -extra result value (inside the r dictionary) called ip. The IP field -contains the raw (big endian) byte representation of the external IP address. -This is the same byte sequence used to verify the node ID.

+is not a trivial problem. With this extension, all DHT responses SHOULD include +a top-level field called ip, containing a compact binary representation of +the requestor's IP and port. That is, the big endian IP address followed by +2 bytes of big endian port.

+

The IP portion is the same byte sequence used to verify the node ID.
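As a sketch (not part of the spec), the compact value can be decoded like this for the IPv4 case; the function name is illustrative:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <string>

// Decode the compact "ip" value described above: 4 bytes of big endian
// IPv4 address followed by 2 bytes of big endian port. Returns
// "a.b.c.d:port", or an empty string on a length mismatch.
// (An IPv6 address would presumably be 16 + 2 bytes; only the IPv4
// case is shown here.)
std::string decode_compact_ip(unsigned char const* buf, int len)
{
	if (len != 6) return "";
	std::uint16_t port = std::uint16_t((buf[4] << 8) | buf[5]);
	char out[32];
	std::snprintf(out, sizeof(out), "%u.%u.%u.%u:%u"
		, buf[0], buf[1], buf[2], buf[3], unsigned(port));
	return out;
}
```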

+

It is important that the ip field is in the top level dictionary. Nodes that +enforce the node-ID will respond with an error message ("y": "e", "e": { ... }), +whereas a node that supports this extension but without enforcing it will respond +with a normal reply ("y": "r", "r": { ... }).

A DHT node which receives an ip result in a response SHOULD consider restarting its DHT node with a new node ID, taking this IP into account. Since a single node -can not be trusted, there should be some mechanism of determining whether or +can not be trusted, there should be some mechanism to determine whether or not the node has a correct understanding of its external IP or not. This could be done by voting, or only restart the DHT once at least a certain number of nodes, from separate searches, tell you your node ID is incorrect.
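One possible shape of that voting mechanism (purely illustrative; the class, names and quorum value are made up and not part of libtorrent's API):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Illustrative sketch of the voting idea above: only restart the DHT
// once at least "quorum" distinct nodes, reached through separate
// searches, report an external IP different from the one our node ID
// was derived from.
struct external_ip_voter
{
	explicit external_ip_voter(int quorum) : m_quorum(quorum) {}

	// record a vote for "ip" cast by "source". Each source may only
	// vote once. Returns true once any single IP reaches quorum.
	bool cast_vote(std::string const& ip, std::string const& source)
	{
		if (!m_sources.insert(source).second) return false;
		return ++m_votes[ip] >= m_quorum;
	}

	int m_quorum;
	std::set<std::string> m_sources;    // sources that already voted
	std::map<std::string, int> m_votes; // votes per candidate IP
};
```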

diff --git a/docs/dht_store.html b/docs/dht_store.html index 4c1d734d2..783ec96a4 100644 --- a/docs/dht_store.html +++ b/docs/dht_store.html @@ -3,7 +3,7 @@ - + BitTorrent extension for arbitrary DHT store @@ -190,7 +190,7 @@ version, the sequence number seq must be monot and a node hosting the list node MUST not downgrade a list head from a higher sequence number to a lower one, only upgrade. The sequence number SHOULD not exceed MAX_INT64, (i.e. 0x7fffffffffffffff. A client MAY reject any message with a sequence number -exceeding this.

+exceeding this. A client MAY also reject any message with a negative sequence number.

The signature is a 64 byte ed25519 signature of the bencoded sequence number concatenated with the v key. e.g. something like this:: 3:seqi4e1:v12:Hello world!.
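For the common case where v is a bencoded string, the buffer to sign can be assembled like this (a sketch; the helper name is made up, and a real implementation would bencode arbitrary values for v):

```cpp
#include <cassert>
#include <string>

// Build the buffer that gets signed: the bencoded sequence number
// ("3:seq" key followed by i<seq>e) concatenated with the bencoded
// "v" key. Only string values for v are handled in this sketch.
std::string signing_buffer(long long seq, std::string const& v)
{
	return "3:seqi" + std::to_string(seq) + "e"
		+ "1:v" + std::to_string(v.size()) + ":" + v;
}
```

For seq=4 and v="Hello world!" this produces the example from the text, 3:seqi4e1:v12:Hello world!.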

@@ -223,12 +223,14 @@ message with code 302 (see error codes below).

Note that this request does not contain a target hash. The target hash under which this blob is stored is implied by the k argument. The target is the SHA-1 hash of the public key (k).

-

The cas field is optional. If present it is interpreted of the sha-1 hash of +

The cas field is optional. If present it is interpreted as the sha-1 hash of the sequence number and v field that is expected to be replaced. The buffer to hash is the same as the one signed when storing. cas is short for compare and swap, it has similar semantics as CAS CPU instructions. If specified as part of the put command, and the current value stored under the public key differs from -the expected value, the store fails. The cas field only applies to mutable puts.

+the expected value, the store fails. The cas field only applies to mutable puts. +If there is no current value, the cas field SHOULD be ignored rather than +prevent the put.
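The resulting rule, including the no-current-value case, can be sketched as follows (hashes treated as opaque strings; the function name is hypothetical):

```cpp
#include <cassert>
#include <string>

// Decide whether a mutable put passes the cas check. "current" is the
// sha-1 hash of the currently stored seq + v (empty when nothing is
// stored yet), "cas" is the hash from the put message (empty when the
// optional field was omitted). Hashes are opaque strings here.
bool cas_allows_put(std::string const& current, std::string const& cas)
{
	if (cas.empty()) return true;      // no cas field: unconditional put
	if (current.empty()) return true;  // no current value: ignore cas
	return cas == current;             // otherwise: compare and swap
}
```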

Response:

 {
diff --git a/docs/dht_store.rst b/docs/dht_store.rst
index 8f8571a29..85065c1c7 100644
--- a/docs/dht_store.rst
+++ b/docs/dht_store.rst
@@ -153,7 +153,7 @@ version, the sequence number ``seq`` must be monotonically increasing for each u
 and a node hosting the list node MUST not downgrade a list head from a higher sequence
 number to a lower one, only upgrade. The sequence number SHOULD not exceed ``MAX_INT64``,
 (i.e. ``0x7fffffffffffffff``. A client MAY reject any message with a sequence number
-exceeding this.
+exceeding this. A client MAY also reject any message with a negative sequence number.
 
 The signature is a 64 byte ed25519 signature of the bencoded sequence
 number concatenated with the ``v`` key. e.g. something like this:: ``3:seqi4e1:v12:Hello world!``.
@@ -200,6 +200,8 @@ to hash is the same as the one signed when storing. ``cas`` is short for *compar
 and swap*, it has similar semantics as CAS CPU instructions. If specified as part
 of the put command, and the current value stored under the public key differs from
 the expected value, the store fails. The ``cas`` field only applies to mutable puts.
+If there is no current value, the ``cas`` field SHOULD be ignored rather than
+prevent the put.
 
 Response:
 
diff --git a/docs/todo.html b/docs/todo.html
index 39526b556..11082713b 100644
--- a/docs/todo.html
+++ b/docs/todo.html
@@ -21,7 +21,7 @@
 
 
 

libtorrent todo-list

-2 important +3 important 4 relevant 15 feasible 36 notes @@ -80,7 +80,7 @@ do as well with NATs)

../src/session_impl.cpp:667

relevance 3../src/torrent.cpp:6177if peer is a really good peer, maybe we shouldn't disconnect it

if peer is a really good peer, maybe we shouldn't disconnect it

../src/torrent.cpp:6177

			return false;
+
relevance 3../src/torrent.cpp:6175if peer is a really good peer, maybe we shouldn't disconnect it

if peer is a really good peer, maybe we shouldn't disconnect it

../src/torrent.cpp:6175

			return false;
 		}
 		TORRENT_ASSERT(m_connections.find(p) == m_connections.end());
 		m_connections.insert(p);
@@ -131,7 +131,58 @@ do as well with NATs)

../src/session_impl.cpp:667

relevance 3../include/libtorrent/kademlia/find_data.hpp:60rename this class to find_peers, since that's what it does find_data is an unnecessarily generic name

rename this class to find_peers, since that's what it does +

relevance 3../src/kademlia/routing_table.cpp:131cache the depth!

cache the depth!

../src/kademlia/routing_table.cpp:131

{
+	int deepest_bucket = 0;
+	int deepest_size = 0;
+	for (table_t::const_iterator i = m_buckets.begin()
+		, end(m_buckets.end()); i != end; ++i)
+	{
+		deepest_size = i->live_nodes.size(); // + i->replacements.size();
+		if (deepest_size < m_bucket_size) break;
+		// this bucket is full
+		++deepest_bucket;
+	}
+
+	if (deepest_bucket == 0) return 1 + deepest_size;
+
+	if (deepest_size < m_bucket_size / 2) return (size_type(1) << deepest_bucket) * m_bucket_size;
+	else return (size_type(2) << deepest_bucket) * deepest_size;
+}
+
+int routing_table::depth() const
+{
+
int deepest_bucket = 0; +
for (table_t::const_iterator i = m_buckets.begin() + , end(m_buckets.end()); i != end; ++i) + { + if (i->live_nodes.size() < m_bucket_size) + break; + // this bucket is full + ++deepest_bucket; + } + return deepest_bucket; +} + +#if (defined TORRENT_DHT_VERBOSE_LOGGING || defined TORRENT_DEBUG) && TORRENT_USE_IOSTREAM + +void routing_table::print_state(std::ostream& os) const +{ + os << "kademlia routing table state\n" + << "bucket_size: " << m_bucket_size << "\n" + << "global node count: " << num_global_nodes() << "\n" + << "node_id: " << m_id << "\n\n"; + + os << "number of nodes per bucket:\n-- live "; + for (int i = 8; i < 160; ++i) + os << "-"; + os << "\n"; + + int max_size = bucket_limit(0); + for (int k = 0; k < max_size; ++k) + { + for (table_t::const_iterator i = m_buckets.begin(), end(m_buckets.end()); + i != end; ++i) +
relevance 3../include/libtorrent/kademlia/find_data.hpp:60rename this class to get_peers, since that's what it does find_data is an unnecessarily generic name

rename this class to get_peers, since that's what it does find_data is an unnecessarily generic name

../include/libtorrent/kademlia/find_data.hpp:60

#include <libtorrent/kademlia/node_id.hpp>
 #include <libtorrent/kademlia/routing_table.hpp>
 #include <libtorrent/kademlia/rpc_manager.hpp>
@@ -183,7 +234,7 @@ protected:
 	nodes_callback m_nodes_callback;
 	std::map<node_id, std::string> m_write_tokens;
 	node_id const m_target;
-
relevance 2../src/torrent.cpp:8347will pick_pieces ever return an empty set?

will pick_pieces ever return an empty set?

../src/torrent.cpp:8347

				if (added_request)
+
relevance 2../src/torrent.cpp:8356will pick_pieces ever return an empty set?

will pick_pieces ever return an empty set?

../src/torrent.cpp:8356

				if (added_request)
 				{
 					peers_with_requests.insert(peers_with_requests.begin(), &c);
 					if (i->first_requested == min_time()) i->first_requested = now;
@@ -234,7 +285,7 @@ protected:
 	void torrent::remove_web_seed(std::string const& url, web_seed_entry::type_t type)
 	{
 		std::list<web_seed_entry>::iterator i = std::find_if(m_web_seeds.begin(), m_web_seeds.end()
-
relevance 2../src/utp_stream.cpp:1862we might want to do something else here as well, to resend the packet immediately without it being an MTU probe

we might want to do something else here +

relevance 2../src/utp_stream.cpp:1862we might want to do something else here as well, to resend the packet immediately without it being an MTU probe

we might want to do something else here as well, to resend the packet immediately without it being an MTU probe

../src/utp_stream.cpp:1862

//	if ((rand() % 100) > 0)
 #endif
@@ -287,7 +338,7 @@ it being an MTU probe

../src/utp_stream.cpp:1862

relevance 2../src/utp_stream.cpp:2505sequence number, source IP and connection ID should be verified before accepting a reset packet

sequence number, source IP and connection ID should be +

relevance 2../src/utp_stream.cpp:2505sequence number, source IP and connection ID should be verified before accepting a reset packet

sequence number, source IP and connection ID should be verified before accepting a reset packet

../src/utp_stream.cpp:2505

		m_reply_micro = boost::uint32_t(total_microseconds(receive_time - min_time()))
 			- ph->timestamp_microseconds;
 		boost::uint32_t prev_base = m_their_delay_hist.initialized() ? m_their_delay_hist.base() : 0;
@@ -339,19 +390,19 @@ verified before accepting a reset packet

../src/utp_stream.cpp:2505

, this, int(ph->ack_nr), m_seq_nr); m_sm->inc_stats_counter(utp_socket_manager::redundant_pkts_in); return true; -
relevance 2../src/kademlia/node.cpp:64make this configurable in dht_settings

make this configurable in dht_settings

../src/kademlia/node.cpp:64

#include "libtorrent/socket.hpp"
-#include "libtorrent/random.hpp"
-#include "libtorrent/aux_/session_impl.hpp"
-#include "libtorrent/kademlia/node_id.hpp"
-#include "libtorrent/kademlia/rpc_manager.hpp"
-#include "libtorrent/kademlia/routing_table.hpp"
+
relevance 2../src/kademlia/node.cpp:69make this configurable in dht_settings

make this configurable in dht_settings

../src/kademlia/node.cpp:69

#include "libtorrent/kademlia/routing_table.hpp"
 #include "libtorrent/kademlia/node.hpp"
+#include <libtorrent/kademlia/dht_observer.hpp>
 
 #include "libtorrent/kademlia/refresh.hpp"
 #include "libtorrent/kademlia/find_data.hpp"
 
 #include "ed25519.h"
 
+#ifdef TORRENT_USE_VALGRIND
+#include <valgrind/memcheck.h>
+#endif
+
 namespace libtorrent { namespace dht
 {
 
@@ -390,8 +441,8 @@ void purge_peers(std::set<peer_entry>& peers)
 
 void nop() {}
 
-
relevance 1../src/http_seed_connection.cpp:120in chunked encoding mode, this assert won't hold. the chunk headers should be subtracted from the receive_buffer_size

in chunked encoding mode, this assert won't hold. -the chunk headers should be subtracted from the receive_buffer_size

../src/http_seed_connection.cpp:120

	boost::optional<piece_block_progress>
+
relevance 1../src/http_seed_connection.cpp:117in chunked encoding mode, this assert won't hold. the chunk headers should be subtracted from the receive_buffer_size

in chunked encoding mode, this assert won't hold. +the chunk headers should be subtracted from the receive_buffer_size

../src/http_seed_connection.cpp:117

	boost::optional<piece_block_progress>
 	http_seed_connection::downloading_piece_progress() const
 	{
 		if (m_requests.empty())
@@ -442,8 +493,8 @@ the chunk headers should be subtracted from the receive_buffer_size

../s std::string request; request.reserve(400); -

relevance 1../src/peer_connection.cpp:2568peers should really be corked/uncorked outside of all completed disk operations

peers should really be corked/uncorked outside of -all completed disk operations

../src/peer_connection.cpp:2568

		}
+
relevance 1../src/peer_connection.cpp:2567peers should really be corked/uncorked outside of all completed disk operations

peers should really be corked/uncorked outside of +all completed disk operations

../src/peer_connection.cpp:2567

		}
 
 		if (is_disconnecting()) return;
 
@@ -494,8 +545,8 @@ all completed disk operations

../src/peer_connection.cpp:2568

relevance 1../src/session_impl.cpp:5694report the proper address of the router as the source IP of this understanding of our external address, instead of the empty address

report the proper address of the router as the source IP of -this understanding of our external address, instead of the empty address

../src/session_impl.cpp:5694

	void session_impl::on_port_mapping(int mapping, address const& ip, int port
+
relevance 1../src/session_impl.cpp:5719report the proper address of the router as the source IP of this understanding of our external address, instead of the empty address

report the proper address of the router as the source IP of +this understanding of our external address, instead of the empty address

../src/session_impl.cpp:5719

	void session_impl::on_port_mapping(int mapping, address const& ip, int port
 		, error_code const& ec, int map_transport)
 	{
 		TORRENT_ASSERT(is_network_thread());
@@ -546,7 +597,7 @@ this understanding of our external address, instead of the empty address

relevance 1../src/session_impl.cpp:5904report errors as alerts

report errors as alerts

../src/session_impl.cpp:5904

	}
+
relevance 1../src/session_impl.cpp:5929report errors as alerts

report errors as alerts

../src/session_impl.cpp:5929

	}
 
 	void session_impl::add_dht_router(std::pair<std::string, int> const& node)
 	{
@@ -597,9 +648,9 @@ this understanding of our external address, instead of the empty address

relevance 1../src/session_impl.cpp:6375we only need to do this if our global IPv4 address has changed since the DHT (currently) only supports IPv4. Since restarting the DHT is kind of expensive, it would be nice to not do it unnecessarily

we only need to do this if our global IPv4 address has changed +

relevance 1../src/session_impl.cpp:6400we only need to do this if our global IPv4 address has changed since the DHT (currently) only supports IPv4. Since restarting the DHT is kind of expensive, it would be nice to not do it unnecessarily

we only need to do this if our global IPv4 address has changed since the DHT (currently) only supports IPv4. Since restarting the DHT -is kind of expensive, it would be nice to not do it unnecessarily

../src/session_impl.cpp:6375

	void session_impl::set_external_address(address const& ip
+is kind of expensive, it would be nice to not do it unnecessarily

../src/session_impl.cpp:6400

	void session_impl::set_external_address(address const& ip
 		, int source_type, address const& source)
 	{
 #if defined TORRENT_VERBOSE_LOGGING
@@ -650,7 +701,7 @@ is kind of expensive, it would be nice to not do it unnecessarily

../src #ifdef TORRENT_DISK_STATS TORRENT_ASSERT(m_buffer_allocations >= 0); -

relevance 1../src/torrent.cpp:1161make this depend on the error and on the filesystem the files are being downloaded to. If the error is no_space_left_on_device and the filesystem doesn't support sparse files, only zero the priorities of the pieces that are at the tails of all files, leaving everything up to the highest written piece in each file

make this depend on the error and on the filesystem the +

relevance 1../src/torrent.cpp:1161make this depend on the error and on the filesystem the files are being downloaded to. If the error is no_space_left_on_device and the filesystem doesn't support sparse files, only zero the priorities of the pieces that are at the tails of all files, leaving everything up to the highest written piece in each file

make this depend on the error and on the filesystem the files are being downloaded to. If the error is no_space_left_on_device and the filesystem doesn't support sparse files, only zero the priorities of the pieces that are at the tails of all files, leaving everything @@ -705,8 +756,8 @@ up to the highest written piece in each file

../src/torrent.cpp:1161

relevance 1../src/torrent.cpp:5439save the send_stats state instead of throwing them away it may pose an issue when downgrading though

save the send_stats state instead of throwing them away -it may pose an issue when downgrading though

../src/torrent.cpp:5439

						? (1 << k) : 0;
+
relevance 1../src/torrent.cpp:5437save the send_stats state instead of throwing them away it may pose an issue when downgrading though

save the send_stats state instead of throwing them away +it may pose an issue when downgrading though

../src/torrent.cpp:5437

						? (1 << k) : 0;
 					bitmask.append(1, v);
 					TORRENT_ASSERT(bits == 8 || j == num_bitmask_bytes - 1);
 				}
@@ -757,9 +808,9 @@ it may pose an issue when downgrading though

../src/torrent.cpp:5439

relevance 1../src/torrent.cpp:6344should disconnect all peers that have the pieces we have not just seeds. It would be pretty expensive to check all pieces for all peers though

should disconnect all peers that have the pieces we have +

relevance 1../src/torrent.cpp:6342should disconnect all peers that have the pieces we have not just seeds. It would be pretty expensive to check all pieces for all peers though

should disconnect all peers that have the pieces we have not just seeds. It would be pretty expensive to check all pieces -for all peers though

../src/torrent.cpp:6344

		TORRENT_ASSERT(is_finished());
+for all peers though

../src/torrent.cpp:6342

		TORRENT_ASSERT(is_finished());
 		TORRENT_ASSERT(m_state != torrent_status::finished && m_state != torrent_status::seeding);
 
 		set_state(torrent_status::finished);
@@ -810,7 +861,7 @@ for all peers though

../src/torrent.cpp:6344

relevance 1../src/torrent_info.cpp:181we might save constructing a std::string if this would take a char const* instead

we might save constructing a std::string if this would take a char const* instead

../src/torrent_info.cpp:181

			{
+
relevance 1../src/torrent_info.cpp:181we might save constructing a std::string if this would take a char const* instead

we might save constructing a std::string if this would take a char const* instead

../src/torrent_info.cpp:181

			{
 				tmp_path += i[0];
 				tmp_path += i[1];
 				tmp_path += i[2];
@@ -861,9 +912,9 @@ for all peers though

../src/torrent.cpp:6344

relevance 1../src/torrent_info.cpp:385this logic should be a separate step done once the torrent is loaded, and the original filenames should be preserved!

this logic should be a separate step +

relevance 1../src/torrent_info.cpp:387this logic should be a separate step done once the torrent is loaded, and the original filenames should be preserved!

this logic should be a separate step done once the torrent is loaded, and the original -filenames should be preserved!

../src/torrent_info.cpp:385

	
+filenames should be preserved!

../src/torrent_info.cpp:387

	
 			while (*s1 != 0 || *s2 != 0)
 			{
 				c1 = to_lower(*s1);
@@ -911,8 +962,8 @@ filenames should be preserved!

../src/torrent_info.cpp:385

relevance 1../src/torrent_info.cpp:416once the filename renaming is removed from here this check can be removed as well

once the filename renaming is removed from here -this check can be removed as well

../src/torrent_info.cpp:416

			if (!extract_single_file(*list.list_at(i), e, root_dir
+
relevance 1../src/torrent_info.cpp:418once the filename renaming is removed from here this check can be removed as well

once the filename renaming is removed from here +this check can be removed as well

../src/torrent_info.cpp:418

			if (!extract_single_file(*list.list_at(i), e, root_dir
 				, &file_hash, &fee, &mtime))
 				return false;
 
@@ -963,7 +1014,7 @@ this check can be removed as well

../src/torrent_info.cpp:416

relevance 1../src/kademlia/node.cpp:739find_node should write directly to the response entry

find_node should write directly to the response entry

../src/kademlia/node.cpp:739

		{
+
relevance 1../src/kademlia/node.cpp:772find_node should write directly to the response entry

find_node should write directly to the response entry

../src/kademlia/node.cpp:772

		{
 			TORRENT_LOG(node) << " values: " << reply["values"].list().size();
 		}
 #endif
@@ -1014,7 +1065,7 @@ this check can be removed as well

../src/torrent_info.cpp:416

relevance 1../include/libtorrent/ip_voter.hpp:100instead, have one instance per possible subnet, global IPv4, global IPv6, loopback, 192.168.x.x, 10.x.x.x, etc.

instead, have one instance per possible subnet, global IPv4, global IPv6, loopback, 192.168.x.x, 10.x.x.x, etc.

../include/libtorrent/ip_voter.hpp:100

		bloom_filter<32> m_external_address_voters;
+
relevance 1../include/libtorrent/ip_voter.hpp:100instead, have one instance per possible subnet, global IPv4, global IPv6, loopback, 192.168.x.x, 10.x.x.x, etc.

instead, have one instance per possible subnet, global IPv4, global IPv6, loopback, 192.168.x.x, 10.x.x.x, etc.

../include/libtorrent/ip_voter.hpp:100

		bloom_filter<32> m_external_address_voters;
 		std::vector<external_ip_t> m_external_addresses;
 		address m_external_address;
 	};
@@ -1041,7 +1092,7 @@ this check can be removed as well

../src/torrent_info.cpp:416

relevance 1../include/libtorrent/utp_stream.hpp:376implement blocking write. Low priority since it's not used (yet)

implement blocking write. Low priority since it's not used (yet)

../include/libtorrent/utp_stream.hpp:376

		for (typename Mutable_Buffers::const_iterator i = buffers.begin()
+
relevance 1../include/libtorrent/utp_stream.hpp:376implement blocking write. Low priority since it's not used (yet)

implement blocking write. Low priority since it's not used (yet)

../include/libtorrent/utp_stream.hpp:376

		for (typename Mutable_Buffers::const_iterator i = buffers.begin()
 			, end(buffers.end()); i != end; ++i)
 		{
 			using asio::buffer_cast;
@@ -1092,11 +1143,9 @@ this check can be removed as well

../src/torrent_info.cpp:416

relevance 1../include/libtorrent/web_peer_connection.hpp:127if we make this be a disk_buffer_holder instead we would save a copy sometimes use allocate_disk_receive_buffer and release_disk_receive_buffer

if we make this be a disk_buffer_holder instead +

relevance 1../include/libtorrent/web_peer_connection.hpp:126if we make this be a disk_buffer_holder instead we would save a copy sometimes use allocate_disk_receive_buffer and release_disk_receive_buffer

if we make this be a disk_buffer_holder instead we would save a copy sometimes -use allocate_disk_receive_buffer and release_disk_receive_buffer

../include/libtorrent/web_peer_connection.hpp:127

-	private:
-
+use allocate_disk_receive_buffer and release_disk_receive_buffer

../include/libtorrent/web_peer_connection.hpp:126

 		bool maybe_harvest_block();
 
 		// returns the block currently being
@@ -1111,6 +1160,8 @@ use allocate_disk_receive_buffer and release_disk_receive_buffer

../incl std::deque<int> m_file_requests; std::string m_url; + + web_seed_entry& m_web; // this is used for intermediate storage of pieces // that are received in more than one HTTP response @@ -1138,12 +1189,14 @@ use allocate_disk_receive_buffer and release_disk_receive_buffer

../incl // this is the number of bytes we've already received // from the next chunk header we're waiting for int m_partial_chunk_header; + + // the number of responses we've received so far on + // this connection + int m_num_responses; }; } -#endif // TORRENT_WEB_PEER_CONNECTION_HPP_INCLUDED - -

relevance 0../src/bt_peer_connection.cpp:655this could be optimized using knuth morris pratt

this could be optimized using knuth morris pratt

../src/bt_peer_connection.cpp:655

		if (m_encrypted && m_rc4_encrypted)
+
relevance 0../src/bt_peer_connection.cpp:662this could be optimized using knuth morris pratt

this could be optimized using knuth morris pratt

../src/bt_peer_connection.cpp:662

		if (m_encrypted && m_rc4_encrypted)
 		{
 			fun = encrypt;
 			userdata = m_enc_handler.get();
@@ -1194,7 +1247,7 @@ use allocate_disk_receive_buffer and release_disk_receive_buffer

../incl // } // no complete sync -

relevance 0../src/bt_peer_connection.cpp:2074if we're finished, send upload_only message

if we're finished, send upload_only message

../src/bt_peer_connection.cpp:2074

			if (msg[5 + k / 8] & (0x80 >> (k % 8))) bitfield_string[k] = '1';
+
relevance 0../src/bt_peer_connection.cpp:2081if we're finished, send upload_only message

if we're finished, send upload_only message

../src/bt_peer_connection.cpp:2081

			if (msg[5 + k / 8] & (0x80 >> (k % 8))) bitfield_string[k] = '1';
 			else bitfield_string[k] = '0';
 		}
 		peer_log("==> BITFIELD [ %s ]", bitfield_string.c_str());
@@ -1245,8 +1298,8 @@ use allocate_disk_receive_buffer and release_disk_receive_buffer

../incl std::back_insert_iterator<std::string> out(remote_address); detail::write_address(remote().address(), out); handshake["yourip"] = remote_address; -

relevance 0../src/bt_peer_connection.cpp:3316move the erasing into the loop above remove all payload ranges that has been sent

move the erasing into the loop above -remove all payload ranges that has been sent

../src/bt_peer_connection.cpp:3316

			for (std::vector<range>::iterator i = m_payloads.begin();
+
relevance 0../src/bt_peer_connection.cpp:3323move the erasing into the loop above remove all payload ranges that has been sent

move the erasing into the loop above +remove all payload ranges that has been sent

../src/bt_peer_connection.cpp:3323

			for (std::vector<range>::iterator i = m_payloads.begin();
 				i != m_payloads.end(); ++i)
 			{
 				i->start -= bytes_transferred;
@@ -1297,7 +1350,7 @@ remove all payload ranges that has been sent

../src/bt_peer_connection.c TORRENT_ASSERT(m_sent_handshake); } -

relevance 0../src/file.cpp:1327is there any way to pre-fetch data from a file on windows?

is there any way to pre-fetch data from a file on windows?

../src/file.cpp:1327

+
relevance 0../src/file.cpp:1344is there any way to pre-fetch data from a file on windows?

is there any way to pre-fetch data from a file on windows?

../src/file.cpp:1344

 	void file::init_file()
 	{
 		if (m_page_size != 0) return;
@@ -1348,7 +1401,7 @@ remove all payload ranges that has been sent

../src/bt_peer_connection.c #ifdef TORRENT_DEBUG if (m_open_mode & no_buffer) { -

relevance 0../src/http_tracker_connection.cpp:99support authentication (i.e. user name and password) in the URL

support authentication (i.e. user name and password) in the URL

../src/http_tracker_connection.cpp:99

		, aux::session_impl const& ses
+
relevance 0../src/http_tracker_connection.cpp:99support authentication (i.e. user name and password) in the URL

support authentication (i.e. user name and password) in the URL

../src/http_tracker_connection.cpp:99

		, aux::session_impl const& ses
 		, proxy_settings const& ps
 		, std::string const& auth
 #if TORRENT_USE_I2P
@@ -1399,39 +1452,39 @@ remove all payload ranges that has been sent

../src/bt_peer_connection.c size_t arguments_start = url.find('?'); if (arguments_start != std::string::npos) url += "&"; -

relevance 0../src/i2p_stream.cpp:181move this to proxy_base and use it in all proxies

move this to proxy_base and use it in all proxies

../src/i2p_stream.cpp:181

	{
-		m_state = sam_idle;
+
relevance 0../src/i2p_stream.cpp:204move this to proxy_base and use it in all proxies

move this to proxy_base and use it in all proxies

../src/i2p_stream.cpp:204

+	i2p_stream::i2p_stream(io_service& io_service)
+		: proxy_base(io_service)
+		, m_id(0)
+		, m_command(cmd_create_session)
+		, m_state(0)
+	{
+#if defined TORRENT_DEBUG || TORRENT_RELEASE_ASSERTS
+		m_magic = 0x1337;
+#endif
+	}
 
-		std::string name = m_sam_socket->name_lookup();
-		if (!m_name_lookup.empty())
-		{
-			std::pair<std::string, name_lookup_handler>& nl = m_name_lookup.front();
-			do_name_lookup(nl.first, nl.second);
-			m_name_lookup.pop_front();
-		}
-
-		if (ec)
-		{
-			handler(ec, 0);
-			return;
-		}
-
-		handler(ec, name.c_str());
+	i2p_stream::~i2p_stream()
+	{
+#if defined TORRENT_DEBUG || TORRENT_RELEASE_ASSERTS
+		TORRENT_ASSERT(m_magic == 0x1337);
+		m_magic = 0;
+#endif
 	}
 
 
bool i2p_stream::handle_error(error_code const& e, boost::shared_ptr<handler_type> const& h)
{ + TORRENT_ASSERT(m_magic == 0x1337); if (!e) return false; // fprintf(stderr, "i2p error \"%s\"\n", e.message().c_str()); (*h)(e); - error_code ec; - close(ec); return true; } void i2p_stream::do_connect(error_code const& e, tcp::resolver::iterator i , boost::shared_ptr<handler_type> h) { + TORRENT_ASSERT(m_magic == 0x1337); if (e || i == tcp::resolver::iterator()) { (*h)(e); @@ -1449,8 +1502,8 @@ remove all payload ranges that has been sent

../src/bt_peer_connection.c void i2p_stream::connected(error_code const& e, boost::shared_ptr<handler_type> h) { -#if defined TORRENT_ASIO_DEBUGGING -

relevance 0../src/packet_buffer.cpp:176use compare_less_wrap for this comparison as well

use compare_less_wrap for this comparison as well

../src/packet_buffer.cpp:176

		while (new_size < size)
+		TORRENT_ASSERT(m_magic == 0x1337);
+
relevance 0../src/packet_buffer.cpp:176use compare_less_wrap for this comparison as well

use compare_less_wrap for this comparison as well

../src/packet_buffer.cpp:176

		while (new_size < size)
 			new_size <<= 1;
 
 		void** new_storage = (void**)malloc(sizeof(void*) * new_size);
@@ -1501,9 +1554,9 @@ remove all payload ranges that has been sent

../src/bt_peer_connection.c if (m_storage[m_last & mask]) break; ++m_last; m_last &= 0xffff; -

relevance 0../src/peer_connection.cpp:2731this might need something more so that once we have the metadata we can construct a full bitfield

this might need something more +

relevance 0../src/peer_connection.cpp:2730this might need something more so that once we have the metadata we can construct a full bitfield

this might need something more so that once we have the metadata -we can construct a full bitfield

../src/peer_connection.cpp:2731

+we can construct a full bitfield

../src/peer_connection.cpp:2730

 #ifdef TORRENT_VERBOSE_LOGGING
 		peer_log("*** THIS IS A SEED [ p: %p ]", m_peer_info);
 #endif
@@ -1554,7 +1607,7 @@ we can construct a full bitfield

../src/peer_connection.cpp:2731

relevance 0../src/peer_connection.cpp:2862sort the allowed fast set in priority order

sort the allowed fast set in priority order

../src/peer_connection.cpp:2862

		// this piece index later
+
relevance 0../src/peer_connection.cpp:2861sort the allowed fast set in priority order

sort the allowed fast set in priority order

../src/peer_connection.cpp:2861

		// this piece index later
 		m_allowed_fast.push_back(index);
 
 		// if the peer has the piece and we want
@@ -1605,8 +1658,8 @@ we can construct a full bitfield

../src/peer_connection.cpp:2731

relevance 0../src/peer_connection.cpp:4575peers should really be corked/uncorked outside of all completed disk operations

peers should really be corked/uncorked outside of -all completed disk operations

../src/peer_connection.cpp:4575

				// this means we're in seed mode and we haven't yet
+
relevance 0../src/peer_connection.cpp:4574peers should really be corked/uncorked outside of all completed disk operations

peers should really be corked/uncorked outside of +all completed disk operations

../src/peer_connection.cpp:4574

				// this means we're in seed mode and we haven't yet
 				// verified this piece (r.piece)
 				t->filesystem().async_read_and_hash(r, boost::bind(&peer_connection::on_disk_read_complete
 					, self(), _1, _2, r), cache.second);
@@ -1657,7 +1710,7 @@ all completed disk operations

../src/peer_connection.cpp:4575

relevance 0../src/policy.cpp:857only allow _one_ connection to use this override at a time

only allow _one_ connection to use this +

relevance 0../src/policy.cpp:857only allow _one_ connection to use this override at a time

only allow _one_ connection to use this override at a time

../src/policy.cpp:857

				" external: " << external.external_address(m_peers[candidate]->address()) <<
 				" t: " << (session_time - m_peers[candidate]->last_connected) <<
 				" ]\n";
@@ -1709,7 +1762,7 @@ override at a time

../src/policy.cpp:857

relevance 0../src/policy.cpp:1897how do we deal with our external address changing? Pass in a force-update maybe? and keep a version number in policy

how do we deal with our external address changing? Pass in a force-update maybe? and keep a version number in policy

../src/policy.cpp:1897

#endif
+
relevance 0../src/policy.cpp:1900how do we deal with our external address changing? Pass in a force-update maybe? and keep a version number in policy

how do we deal with our external address changing? Pass in a force-update maybe? and keep a version number in policy

../src/policy.cpp:1900

#endif
 		, on_parole(false)
 		, banned(false)
 #ifndef TORRENT_DISABLE_DHT
@@ -1760,7 +1813,7 @@ override at a time

../src/policy.cpp:857

relevance 0../src/session_impl.cpp:1942recalculate all connect candidates for all torrents

recalculate all connect candidates for all torrents

../src/session_impl.cpp:1942

		m_upload_rate.close();
+
relevance 0../src/session_impl.cpp:1943recalculate all connect candidates for all torrents

recalculate all connect candidates for all torrents

../src/session_impl.cpp:1943

		m_upload_rate.close();
 
 		// #error closing the udp socket here means that
 		// the uTP connections cannot be closed gracefully
@@ -1811,7 +1864,7 @@ override at a time

../src/policy.cpp:857

relevance 0../src/session_impl.cpp:3372have a separate list for these connections, instead of having to loop through all of them

have a separate list for these connections, instead of having to loop through all of them

../src/session_impl.cpp:3372

		// --------------------------------------------------------------
+
relevance 0../src/session_impl.cpp:3393have a separate list for these connections, instead of having to loop through all of them

have a separate list for these connections, instead of having to loop through all of them

../src/session_impl.cpp:3393

		// --------------------------------------------------------------
 		if (!m_paused) m_auto_manage_time_scaler--;
 		if (m_auto_manage_time_scaler < 0)
 		{
@@ -1862,7 +1915,7 @@ override at a time

../src/policy.cpp:857

relevance 0../src/session_impl.cpp:4459allow extensions to sort torrents for queuing

allow extensions to sort torrents for queuing

../src/session_impl.cpp:4459

			else if (!t->is_paused())
+
relevance 0../src/session_impl.cpp:4483allow extensions to sort torrents for queuing

allow extensions to sort torrents for queuing

../src/session_impl.cpp:4483

			else if (!t->is_paused())
 			{
 				TORRENT_ASSERT(t->m_resume_data_loaded || !t->valid_metadata());
 				--hard_limit;
@@ -1913,9 +1966,9 @@ override at a time

../src/policy.cpp:857

relevance 0../src/session_impl.cpp:4615use a lower limit than m_settings.connections_limit to allocate the to 10% or so of connection slots for incoming connections

use a lower limit than m_settings.connections_limit +

relevance 0../src/session_impl.cpp:4639use a lower limit than m_settings.connections_limit to allocate the to 10% or so of connection slots for incoming connections

use a lower limit than m_settings.connections_limit to allocate the to 10% or so of connection slots for incoming -connections

../src/session_impl.cpp:4615

		{
+connections

../src/session_impl.cpp:4639

		{
 			if (m_boost_connections > max_connections)
 			{
 				m_boost_connections -= max_connections;
@@ -1966,7 +2019,7 @@ connections

../src/session_impl.cpp:4615

relevance 0../src/session_impl.cpp:4649make this bias configurable

make this bias configurable

../src/session_impl.cpp:4649

relevance 0../src/session_impl.cpp:4650also take average_peers into account, to create a bias for downloading torrents with < average peers

also take average_peers into account, to create a bias for downloading torrents with < average peers

../src/session_impl.cpp:4650

				average_peers = num_downloads_peers / num_downloads;
+
relevance 0../src/session_impl.cpp:4673make this bias configurable

make this bias configurable

../src/session_impl.cpp:4673

relevance 0../src/session_impl.cpp:4674also take average_peers into account, to create a bias for downloading torrents with < average peers

also take average_peers into account, to create a bias for downloading torrents with < average peers

../src/session_impl.cpp:4674

				average_peers = num_downloads_peers / num_downloads;
 
 			if (m_next_connect_torrent == m_torrents.end())
 				m_next_connect_torrent = m_torrents.begin();
@@ -2017,7 +2070,7 @@ connections

../src/session_impl.cpp:4615

relevance 0../src/session_impl.cpp:4794make configurable

make configurable

../src/session_impl.cpp:4794

+
relevance 0../src/session_impl.cpp:4818make configurable

make configurable

../src/session_impl.cpp:4818

 #ifdef TORRENT_DEBUG
 			for (std::vector<peer_connection*>::const_iterator i = peers.begin()
 				, end(peers.end()), prev(peers.end()); i != end; ++i)
@@ -2050,7 +2103,7 @@ connections

../src/session_impl.cpp:4615

relevance 0../src/session_impl.cpp:4808make configurable

make configurable

../src/session_impl.cpp:4808

						>= (*i)->uploaded_in_last_round() * 1000
+
relevance 0../src/session_impl.cpp:4832make configurable

make configurable

../src/session_impl.cpp:4832

						>= (*i)->uploaded_in_last_round() * 1000
 						* (1 + t2->priority()) / total_milliseconds(unchoke_interval));
 				}
 				prev = i;
@@ -2101,7 +2154,7 @@ connections

../src/session_impl.cpp:4615

relevance 0../src/storage.cpp:324if the read fails, set error and exit immediately

if the read fails, set error and exit immediately

../src/storage.cpp:324

			if (m_storage->disk_pool()) block_size = m_storage->disk_pool()->block_size();
+
relevance 0../src/storage.cpp:324if the read fails, set error and exit immediately

if the read fails, set error and exit immediately

../src/storage.cpp:324

			if (m_storage->disk_pool()) block_size = m_storage->disk_pool()->block_size();
 			int size = slot_size;
 			int num_blocks = (size + block_size - 1) / block_size;
 
@@ -2152,7 +2205,7 @@ connections

../src/session_impl.cpp:4615

relevance 0../src/storage.cpp:358if the read fails, set error and exit immediately

if the read fails, set error and exit immediately

../src/storage.cpp:358

					{
+
relevance 0../src/storage.cpp:358if the read fails, set error and exit immediately

if the read fails, set error and exit immediately

../src/storage.cpp:358

					{
 						ph.h.update((char const*)bufs[i].iov_base, bufs[i].iov_len);
 						small_piece_size -= bufs[i].iov_len;
 					}
@@ -2203,7 +2256,7 @@ connections

../src/session_impl.cpp:4615

relevance 0../src/storage.cpp:629make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance maybe use the same format as .torrent files and reuse some code from torrent_info

make this more generic to not just work if files have been +

relevance 0../src/storage.cpp:629make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance maybe use the same format as .torrent files and reuse some code from torrent_info

make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance maybe use the same format as .torrent files and reuse some code from torrent_info

../src/storage.cpp:629

		for (;;)
 		{
@@ -2256,9 +2309,9 @@ maybe use the same format as .torrent files and reuse some code from torrent_inf
 		
 		for (int i = 0; i < file_sizes_ent->list_size(); ++i)
 		{
-
relevance 0../src/storage.cpp:1238what if file_base is used to merge several virtual files into a single physical file? We should probably disable this if file_base is used. This is not a widely used feature though

what if file_base is used to merge several virtual files +

relevance 0../src/storage.cpp:1246what if file_base is used to merge several virtual files into a single physical file? We should probably disable this if file_base is used. This is not a widely used feature though

what if file_base is used to merge several virtual files into a single physical file? We should probably disable this -if file_base is used. This is not a widely used feature though

../src/storage.cpp:1238

			int bytes_transferred = 0;
+if file_base is used. This is not a widely used feature though

../src/storage.cpp:1246

			int bytes_transferred = 0;
 			// if the file is opened in no_buffer mode, and the
 			// read is unaligned, we need to fall back on a slow
 			// special read that reads aligned buffers and copies
@@ -2309,7 +2362,7 @@ if file_base is used. This is not a widely used feature though

../src/st // makes unaligned requests (and the disk cache is disabled or fully utilized // for write cache). -

relevance 0../src/torrent.cpp:1362is verify_peer_cert called once per certificate in the chain, and this function just tells us which depth we're at right now? If so, the comment makes sense. any certificate that isn't the leaf (i.e. the one presented by the peer) should be accepted automatically, given preverified is true. The leaf certificate needs to be verified to make sure its DN matches the info-hash

is verify_peer_cert called once per certificate in the chain, and +

relevance 0../src/torrent.cpp:1362is verify_peer_cert called once per certificate in the chain, and this function just tells us which depth we're at right now? If so, the comment makes sense. any certificate that isn't the leaf (i.e. the one presented by the peer) should be accepted automatically, given preverified is true. The leaf certificate needs to be verified to make sure its DN matches the info-hash

is verify_peer_cert called once per certificate in the chain, and this function just tells us which depth we're at right now? If so, the comment makes sense. any certificate that isn't the leaf (i.e. the one presented by the peer)
@@ -2365,12 +2418,12 @@ need to be verified to make sure its DN matches the info-hash

../src/tor { #if defined(TORRENT_VERBOSE_LOGGING) || defined(TORRENT_LOGGING) match = true; -

relevance 0../src/torrent.cpp:5172make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance maybe use the same format as .torrent files and reuse some code from torrent_info The mapped_files needs to be read both in the network thread and in the disk thread, since they both have their own mapped files structures which are kept in sync

make this more generic to not just work if files have been +

relevance 0../src/torrent.cpp:5170make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance maybe use the same format as .torrent files and reuse some code from torrent_info The mapped_files needs to be read both in the network thread and in the disk thread, since they both have their own mapped files structures which are kept in sync

make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance maybe use the same format as .torrent files and reuse some code from torrent_info The mapped_files needs to be read both in the network thread and in the disk thread, since they both have their own mapped files structures -which are kept in sync

../src/torrent.cpp:5172

		if (m_seed_mode) m_verified.resize(m_torrent_file->num_pieces(), false);
+which are kept in sync

../src/torrent.cpp:5170

		if (m_seed_mode) m_verified.resize(m_torrent_file->num_pieces(), false);
 		super_seeding(rd.dict_find_int_value("super_seeding", 0));
 
 		m_last_scrape = rd.dict_find_int_value("last_scrape", 0);
@@ -2421,12 +2474,12 @@ which are kept in sync

../src/torrent.cpp:5172

relevance 0../src/torrent.cpp:5308if this is a merkle torrent and we can't restore the tree, we need to wipe all the bits in the have array, but not necessarily we might want to do a full check to see if we have all the pieces. This is low priority since almost no one uses merkle torrents

if this is a merkle torrent and we can't +

relevance 0../src/torrent.cpp:5306if this is a merkle torrent and we can't restore the tree, we need to wipe all the bits in the have array, but not necessarily we might want to do a full check to see if we have all the pieces. This is low priority since almost no one uses merkle torrents

if this is a merkle torrent and we can't restore the tree, we need to wipe all the bits in the have array, but not necessarily we might want to do a full check to see if we have all the pieces. This is low priority since almost -no one uses merkle torrents

../src/torrent.cpp:5308

				add_web_seed(url, web_seed_entry::http_seed);
+no one uses merkle torrents

../src/torrent.cpp:5306

				add_web_seed(url, web_seed_entry::http_seed);
 			}
 		}
 
@@ -2477,9 +2530,9 @@ no one uses merkle torrents

../src/torrent.cpp:5308

relevance 0../src/torrent.cpp:5496make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance. using file_base

make this more generic to not just work if files have been +

relevance 0../src/torrent.cpp:5494make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance. using file_base

make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance. -using file_base

../src/torrent.cpp:5496

		entry::string_type& pieces = ret["pieces"].string();
+using file_base

../src/torrent.cpp:5494

		entry::string_type& pieces = ret["pieces"].string();
 		pieces.resize(m_torrent_file->num_pieces());
 		if (is_seed())
 		{
@@ -2530,10 +2583,10 @@ using file_base

../src/torrent.cpp:5496

relevance 0../src/torrent.cpp:8026go through the pieces we have and count the total number of downloaders we have. Only count peers that are interested in us since some peers might not send have messages for pieces we have if num_interested == 0, we need to pick a new piece

go through the pieces we have and count the total number +

relevance 0../src/torrent.cpp:8035go through the pieces we have and count the total number of downloaders we have. Only count peers that are interested in us since some peers might not send have messages for pieces we have if num_interested == 0, we need to pick a new piece

go through the pieces we have and count the total number of downloaders we have. Only count peers that are interested in us since some peers might not send have messages for pieces we have -if num_interested == 0, we need to pick a new piece

../src/torrent.cpp:8026

			}
+if num_interested == 0, we need to pick a new piece

../src/torrent.cpp:8035

			}
 
 			rarest_pieces.clear();
 			rarest_rarity = pp.peer_count;
@@ -2584,7 +2637,7 @@ it num_interested == 0, we need to pick a new piece

../src/torrent.cpp:8 { m_picker->get_availability(avail_vec); } -

relevance 0../src/udp_tracker_connection.cpp:552it would be more efficient to not use a string here. however, the problem is that some trackers will respond with actual strings. For example i2p trackers

it would be more efficient to not use a string here. +

relevance 0../src/udp_tracker_connection.cpp:552it would be more efficient to not use a string here. however, the problem is that some trackers will respond with actual strings. For example i2p trackers

it would be more efficient to not use a string here. however, the problem is that some trackers will respond with actual strings. For example i2p trackers

../src/udp_tracker_connection.cpp:552

		}
 
@@ -2637,7 +2690,7 @@ with actual strings. For example i2p trackers

../src/udp_tracker_connect { restart_read_timeout(); int action = detail::read_int32(buf); -

relevance 0../src/utp_stream.cpp:1573this loop may not be very efficient

this loop may not be very efficient

../src/utp_stream.cpp:1573

	TORRENT_ASSERT(p->header_size >= sizeof(utp_header) + sack_size + 2);
+
relevance 0../src/utp_stream.cpp:1573this loop may not be very efficient

this loop may not be very efficient

../src/utp_stream.cpp:1573

	TORRENT_ASSERT(p->header_size >= sizeof(utp_header) + sack_size + 2);
 	memmove(ptr, ptr + sack_size + 2, p->size - p->header_size);
 	p->header_size -= sack_size + 2;
 	p->size -= sack_size + 2;
@@ -2688,8 +2741,8 @@ bool utp_socket_impl::send_pkt(int flags)
 		if (sack > 32) sack = 32;
 	}
 
-
relevance 0../src/kademlia/routing_table.cpp:265instead of refreshing a bucket by using find_nodes, ping each node periodically

instead of refreshing a bucket by using find_nodes, -ping each node periodically

../src/kademlia/routing_table.cpp:265

		os << "]\n";
+
relevance 0../src/kademlia/routing_table.cpp:280instead of refreshing a bucket by using find_nodes, ping each node periodically

instead of refreshing a bucket by using find_nodes, +ping each node periodically

../src/kademlia/routing_table.cpp:280

		os << "]\n";
 	}
 }
 
@@ -2735,12 +2788,12 @@ bool compare_bucket_refresh(routing_table_node const& lhs, routing_table_nod
 	// generate a random node_id within the given bucket
 	target = generate_random_id();
 	int num_bits = std::distance(m_buckets.begin(), i) + 1;
-	node_id mask(0);
-	for (int i = 0; i < num_bits; ++i) mask[i/8] |= 0x80 >> (i&7);
+	node_id mask = generate_prefix_mask(num_bits);
 
 	// target = (target & ~mask) | (root & mask)
 	node_id root = m_id;
-
relevance 0../include/libtorrent/config.hpp:305Make this count Unicode characters instead of bytes on windows

Make this count Unicode characters instead of bytes on windows

../include/libtorrent/config.hpp:305

+	root &= mask;
+
relevance 0../include/libtorrent/config.hpp:305Make this count Unicode characters instead of bytes on windows

Make this count Unicode characters instead of bytes on windows

../include/libtorrent/config.hpp:305

 // ==== eCS(OS/2) ===
 #elif defined __OS2__
 #define TORRENT_OS2
@@ -2791,7 +2844,7 @@ bool compare_bucket_refresh(routing_table_node const& lhs, routing_table_nod
 #include <stdarg.h>
 
 // internal
-
relevance 0../include/libtorrent/proxy_base.hpp:166it would be nice to remember the bind port and bind once we know where the proxy is m_sock.bind(endpoint, ec);

it would be nice to remember the bind port and bind once we know where the proxy is +

relevance 0../include/libtorrent/proxy_base.hpp:166it would be nice to remember the bind port and bind once we know where the proxy is m_sock.bind(endpoint, ec);

it would be nice to remember the bind port and bind once we know where the proxy is m_sock.bind(endpoint, ec);

../include/libtorrent/proxy_base.hpp:166

	{
 		return m_sock.get_option(opt, ec);
 	}
@@ -2843,7 +2896,7 @@ m_sock.bind(endpoint, ec);

../include/libtorrent/proxy_base.hpp:166

m_sock.close(ec); m_resolver.cancel(); } -
relevance 0../include/libtorrent/torrent_info.hpp:123include the number of peers received from this tracker, at last announce

include the number of peers received from this tracker, at last announce

../include/libtorrent/torrent_info.hpp:123

+
relevance 0../include/libtorrent/torrent_info.hpp:123include the number of peers received from this tracker, at last announce

include the number of peers received from this tracker, at last announce

../include/libtorrent/torrent_info.hpp:123

 		// if this tracker failed the last time it was contacted
 		// this error code specifies what error occurred
 		error_code last_error;
@@ -2894,7 +2947,7 @@ m_sock.bind(endpoint, ec);

../include/libtorrent/proxy_base.hpp:166

// flags for the source bitmask, each indicating where // we heard about this tracker enum tracker_source -
relevance 0../include/libtorrent/upnp.hpp:121support using the windows API for UPnP operations as well

support using the windows API for UPnP operations as well

../include/libtorrent/upnp.hpp:121

	{
+
relevance 0../include/libtorrent/upnp.hpp:121support using the windows API for UPnP operations as well

support using the windows API for UPnP operations as well

../include/libtorrent/upnp.hpp:121

	{
 		virtual const char* name() const BOOST_SYSTEM_NOEXCEPT;
 		virtual std::string message(int ev) const BOOST_SYSTEM_NOEXCEPT;
 		virtual boost::system::error_condition default_error_condition(int ev) const BOOST_SYSTEM_NOEXCEPT
diff --git a/docs/tuning.html b/docs/tuning.html
index ea96bf8e0..bcbec6924 100644
--- a/docs/tuning.html
+++ b/docs/tuning.html
@@ -3,7 +3,7 @@
 
 
 
-
+
 libtorrent manual
 
 
@@ -170,7 +170,9 @@ large number of paused torrents (that are popular) it will be even more
 significant.

 If you're short of memory, you should consider lowering the limit. 500 is probably
 enough. You can do this by setting session_settings::max_peerlist_size to
-the max number of peers you want in the torrent's peer list.
+the max number of peers you want in a torrent's peer list. This limit applies per
+torrent. For 5 torrents, the total number of peers in peerlists will be 5 times
+the setting.

You should also lower the same limit but for paused torrents. It might even make
sense to set that even lower, since you only need a few peers to start up while
waiting for the tracker and DHT to give you fresh ones. The max peer list size for
paused

diff --git a/docs/tuning.rst b/docs/tuning.rst
index 4a3fea0c8..cbb2bf594 100644
--- a/docs/tuning.rst
+++ b/docs/tuning.rst
@@ -110,7 +110,9 @@ significant.
 
 If you're short of memory, you should consider lowering the limit. 500 is probably
 enough. You can do this by setting ``session_settings::max_peerlist_size`` to
-the max number of peers you want in the torrent's peer list.
+the max number of peers you want in a torrent's peer list. This limit applies per
+torrent. For 5 torrents, the total number of peers in peerlists will be 5 times
+the setting.
 
 You should also lower the same limit but for paused torrents. It might even make
 sense to set that even lower, since you only need a few peers to start up while
 waiting