diff --git a/docs/reference-Session.html b/docs/reference-Session.html index 15a66e4db..ea68ab14c 100644 --- a/docs/reference-Session.html +++ b/docs/reference-Session.html @@ -3,7 +3,7 @@ - + Session diff --git a/docs/stats_counters.rst b/docs/stats_counters.rst index 9bef6c11e..b6c51da75 100644 --- a/docs/stats_counters.rst +++ b/docs/stats_counters.rst @@ -92,6 +92,50 @@ these counters break down the peer errors into more specific categories. These errors are what the underlying transport reported (i.e. TCP or uTP) +.. _peer.piece_requests: + +.. _peer.max_piece_requests: + +.. _peer.invalid_piece_requests: + +.. _peer.choked_piece_requests: + +.. _peer.cancelled_piece_requests: + +.. _peer.piece_rejects: + +.. raw:: html + + + + + + + + ++-------------------------------+---------+ +| name | type | ++===============================+=========+ +| peer.piece_requests | counter | ++-------------------------------+---------+ +| peer.max_piece_requests | counter | ++-------------------------------+---------+ +| peer.invalid_piece_requests | counter | ++-------------------------------+---------+ +| peer.choked_piece_requests | counter | ++-------------------------------+---------+ +| peer.cancelled_piece_requests | counter | ++-------------------------------+---------+ +| peer.piece_rejects | counter | ++-------------------------------+---------+ + + +the total number of incoming piece requests we've received, followed +by the number of rejected piece requests for various reasons. +max_piece_requests means we already had too many outstanding requests +from this peer, so we rejected it. cancelled_piece_requests are ones +where the other end explicitly asked for the piece to be rejected. + .. _peer.error_incoming_peers: .. _peer.error_outgoing_peers: @@ -895,6 +939,47 @@ bittorrent message counters. These counters are incremented every time a message of the corresponding type is received from or sent to a bittorrent peer. +.. _ses.waste_piece_timed_out: + +.. 
_ses.waste_piece_cancelled: + +.. _ses.waste_piece_unknown: + +.. _ses.waste_piece_seed: + +.. _ses.waste_piece_end_game: + +.. _ses.waste_piece_closing: + +.. raw:: html + + + + + + + + ++---------------------------+---------+ +| name | type | ++===========================+=========+ +| ses.waste_piece_timed_out | counter | ++---------------------------+---------+ +| ses.waste_piece_cancelled | counter | ++---------------------------+---------+ +| ses.waste_piece_unknown | counter | ++---------------------------+---------+ +| ses.waste_piece_seed | counter | ++---------------------------+---------+ +| ses.waste_piece_end_game | counter | ++---------------------------+---------+ +| ses.waste_piece_closing | counter | ++---------------------------+---------+ + + +the number of wasted downloaded bytes by reason of the bytes being +wasted. + .. _picker.piece_picker_partial_loops: .. _picker.piece_picker_suggest_loops: @@ -1236,47 +1321,6 @@ hash a piece (when verifying against the piece hash) cumulative time spent in various disk jobs, as well as total for all disk jobs. Measured in microseconds -.. _ses.waste_piece_timed_out: - -.. _ses.waste_piece_cancelled: - -.. _ses.waste_piece_unknown: - -.. _ses.waste_piece_seed: - -.. _ses.waste_piece_end_game: - -.. _ses.waste_piece_closing: - -.. raw:: html - - - - - - - - -+---------------------------+---------+ -| name | type | -+===========================+=========+ -| ses.waste_piece_timed_out | counter | -+---------------------------+---------+ -| ses.waste_piece_cancelled | counter | -+---------------------------+---------+ -| ses.waste_piece_unknown | counter | -+---------------------------+---------+ -| ses.waste_piece_seed | counter | -+---------------------------+---------+ -| ses.waste_piece_end_game | counter | -+---------------------------+---------+ -| ses.waste_piece_closing | counter | -+---------------------------+---------+ - - -the number of wasted downloaded bytes by reason of the bytes being -wasted. 
- .. _dht.dht_nodes: .. raw:: html @@ -1509,6 +1553,26 @@ the total number of bytes sent and received by the DHT the number of DHT messages we've sent and received by kind. +.. _dht.sent_dht_bytes: + +.. _dht.recv_dht_bytes: + +.. raw:: html + + + + ++--------------------+---------+ +| name | type | ++====================+=========+ +| dht.sent_dht_bytes | counter | ++--------------------+---------+ +| dht.recv_dht_bytes | counter | ++--------------------+---------+ + + +the number of bytes sent and received by the DHT + .. _utp.utp_packet_loss: .. _utp.utp_timeout: diff --git a/docs/todo.html b/docs/todo.html index f5bded044..f902df147 100644 --- a/docs/todo.html +++ b/docs/todo.html @@ -22,7 +22,7 @@

libtorrent todo-list

0 urgent -21 important +20 important 28 relevant 8 feasible 134 notes @@ -496,9 +496,9 @@ namespace libtorrent #if TORRENT_USE_IPV6 if (!ipv4) -relevance 3../src/session_impl.cpp:4493it would be really nice to update these counters as they are incremented. This depends on the session being ticked, which has a fairly coarse grained resolution

it would be really nice to update these counters +relevance 3../src/session_impl.cpp:4486it would be really nice to update these counters as they are incremented. This depends on the session being ticked, which has a fairly coarse grained resolution

it would be really nice to update these counters as they are incremented. This depends on the session -being ticked, which has a fairly coarse grained resolution

../src/session_impl.cpp:4493

			t->status(&alert->status.back(), ~torrent_handle::query_accurate_download_counters);
+being ticked, which has a fairly coarse grained resolution

../src/session_impl.cpp:4486

			t->status(&alert->status.back(), ~torrent_handle::query_accurate_download_counters);
 			t->clear_in_state_update();
 		}
 		state_updates.clear();
@@ -524,8 +524,6 @@ being ticked, which has a fairly coarse grained resolution

../src/sessio , m_stat.total_transfer(stat::upload_payload)); m_stats_counters.set_value(counters::sent_ip_overhead_bytes , m_stat.total_transfer(stat::upload_ip_protocol)); - m_stats_counters.set_value(counters::sent_tracker_bytes - , m_stat.total_transfer(stat::upload_tracker_protocol)); m_stats_counters.set_value(counters::recv_bytes , m_stat.total_download()); @@ -533,8 +531,6 @@ being ticked, which has a fairly coarse grained resolution

../src/sessio , m_stat.total_transfer(stat::download_payload)); m_stats_counters.set_value(counters::recv_ip_overhead_bytes , m_stat.total_transfer(stat::download_ip_protocol)); - m_stats_counters.set_value(counters::recv_tracker_bytes - , m_stat.total_transfer(stat::download_tracker_protocol)); m_stats_counters.set_value(counters::limiter_up_queue , m_upload_rate.queue_size()); @@ -549,8 +545,64 @@ being ticked, which has a fairly coarse grained resolution

../src/sessio for (int i = 0; i < counters::num_counters; ++i) values[i] = m_stats_counters[i]; -

relevance 3../src/session_impl.cpp:5972If socket jobs could be higher level, to include RC4 encryption and decryption, we would offload the main thread even more

If socket jobs could be higher level, to include RC4 encryption and decryption, -we would offload the main thread even more

../src/session_impl.cpp:5972

	{
+		alert->timestamp = total_microseconds(time_now_hires() - m_created);
+
+		m_alerts.post_alert_ptr(alert.release());
+	}
+
relevance 3../src/session_impl.cpp:5345deprecate this function. All of this functionality should be exposed as performance counters

deprecate this function. All of this functionality should be +exposed as performance counters

../src/session_impl.cpp:5345

			if (m_alerts.should_post<portmap_alert>())
+				m_alerts.post_alert(portmap_alert(mapping, port
+					, map_transport));
+			return;
+		}
+
+		if (ec)
+		{
+			if (m_alerts.should_post<portmap_error_alert>())
+				m_alerts.post_alert(portmap_error_alert(mapping
+					, map_transport, ec));
+		}
+		else
+		{
+			if (m_alerts.should_post<portmap_alert>())
+				m_alerts.post_alert(portmap_alert(mapping, port
+					, map_transport));
+		}
+	}
+
+
session_status session_impl::status() const +
{ +// INVARIANT_CHECK; + TORRENT_ASSERT(is_single_thread()); + + session_status s; + + s.optimistic_unchoke_counter = m_optimistic_unchoke_time_scaler; + s.unchoke_counter = m_unchoke_time_scaler; + + s.num_peers = int(m_connections.size()); + s.num_dead_peers = int(m_undead_peers.size()); + s.num_unchoked = m_stats_counters[counters::num_peers_up_unchoked_all]; + s.allowed_upload_slots = m_allowed_upload_slots; + + s.num_torrents = m_torrents.size(); + // only non-paused torrents want tick + s.num_paused_torrents = m_torrents.size() - m_torrent_lists[torrent_want_tick].size(); + + s.total_redundant_bytes = m_stats_counters[counters::recv_redundant_bytes]; + s.total_failed_bytes = m_stats_counters[counters::recv_failed_bytes]; + + s.up_bandwidth_queue = m_upload_rate.queue_size(); + s.down_bandwidth_queue = m_download_rate.queue_size(); + + s.up_bandwidth_bytes_queue = int(m_upload_rate.queued_bytes()); + s.down_bandwidth_bytes_queue = int(m_download_rate.queued_bytes()); + + s.disk_write_queue = m_stats_counters[counters::num_peers_down_disk]; + s.disk_read_queue = m_stats_counters[counters::num_peers_up_disk]; + +
relevance 3../src/session_impl.cpp:5938If socket jobs could be higher level, to include RC4 encryption and decryption, we would offload the main thread even more

If socket jobs could be higher level, to include RC4 encryption and decryption, +we would offload the main thread even more

../src/session_impl.cpp:5938

	{
 		int num_threads = m_settings.get_int(settings_pack::network_threads);
 		int num_pools = num_threads > 0 ? num_threads : 1;
 		while (num_pools > m_net_thread_pool.size())
@@ -601,7 +653,7 @@ we would offload the main thread even more

../src/session_impl.cpp:5972< , end(m_connections.end()); i != end; ++i) { int type = (*i)->type(); -

relevance 3../src/torrent.cpp:1083if any other peer has a busy request to this block, we need to cancel it too

if any other peer has a busy request to this block, we need to cancel it too

../src/torrent.cpp:1083

#endif
+
relevance 3../src/torrent.cpp:1083if any other peer has a busy request to this block, we need to cancel it too

if any other peer has a busy request to this block, we need to cancel it too

../src/torrent.cpp:1083

#endif
 
 		TORRENT_ASSERT(j->piece >= 0);
 
@@ -652,7 +704,7 @@ we would offload the main thread even more

../src/session_impl.cpp:5972< alerts().post_alert(file_error_alert(j->error.ec , resolve_filename(j->error.file), j->error.operation_str(), get_handle())); if (c) c->disconnect(errors::no_memory, peer_connection_interface::op_file); -

relevance 3../src/torrent.cpp:7685if peer is a really good peer, maybe we shouldn't disconnect it

if peer is a really good peer, maybe we shouldn't disconnect it

../src/torrent.cpp:7685

#if defined TORRENT_LOGGING || defined TORRENT_ERROR_LOGGING
+
relevance 3../src/torrent.cpp:7685if peer is a really good peer, maybe we shouldn't disconnect it

if peer is a really good peer, maybe we shouldn't disconnect it

../src/torrent.cpp:7685

#if defined TORRENT_LOGGING || defined TORRENT_ERROR_LOGGING
 		debug_log("incoming peer (%d)", int(m_connections.size()));
 #endif
 
@@ -703,57 +755,6 @@ we would offload the main thread even more

../src/session_impl.cpp:5972< if (m_abort) return false; if (!m_connections.empty()) return true; -

relevance 3../src/tracker_manager.cpp:180replace this with performance counters. remove dependency on session_impl

replace this with performance counters. remove dependency on session_impl

../src/tracker_manager.cpp:180

		return m_requester.lock();
-	}
-
-	void tracker_connection::fail(error_code const& ec, int code
-		, char const* msg, int interval, int min_interval)
-	{
-		// we need to post the error to avoid deadlock
-			get_io_service().post(boost::bind(&tracker_connection::fail_impl
-					, shared_from_this(), ec, code, std::string(msg), interval, min_interval));
-	}
-
-	void tracker_connection::fail_impl(error_code const& ec, int code
-		, std::string msg, int interval, int min_interval)
-	{
-		boost::shared_ptr<request_callback> cb = requester();
-		if (cb) cb->tracker_request_error(m_req, code, ec, msg.c_str()
-			, interval == 0 ? min_interval : interval);
-		close();
-	}
-
-
void tracker_connection::sent_bytes(int bytes) -
{ - m_man.sent_bytes(bytes); - } - - void tracker_connection::received_bytes(int bytes) - { - m_man.received_bytes(bytes); - } - - void tracker_connection::close() - { - cancel(); - m_man.remove_request(this); - } - - tracker_manager::tracker_manager(aux::session_impl& ses) - : m_ses(ses) - , m_ip_filter(ses.m_ip_filter) - , m_udp_socket(ses.m_udp_socket) - , m_host_resolver(ses.m_host_resolver) - , m_settings(ses.settings()) - , m_abort(false) - {} - - tracker_manager::~tracker_manager() - { - TORRENT_ASSERT(m_abort); - abort_all_requests(true); - } -
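The entry above proposes replacing the tracker manager's ad-hoc byte accounting with performance counters. A minimal, self-contained sketch of that pattern follows; the counter names here are illustrative only, not libtorrent's actual enum values (its real counters class lives in performance_counters.hpp and differs in detail):

```cpp
#include <atomic>
#include <cstdint>
#include <cassert>

// sketch of an enum-indexed table of atomic counters, in the spirit of
// the stats counters documented in stats_counters.rst
struct counters
{
	enum counter_t { sent_tracker_bytes, recv_tracker_bytes, num_counters };

	counters() { for (auto& c : m_counters) c.store(0); }

	// relaxed atomics: any thread may bump a counter without a lock
	void inc(counter_t idx, std::int64_t v = 1)
	{ m_counters[idx].fetch_add(v, std::memory_order_relaxed); }

	std::int64_t value(counter_t idx) const
	{ return m_counters[idx].load(std::memory_order_relaxed); }

private:
	std::atomic<std::int64_t> m_counters[num_counters];
};
```

With something like this in place, tracker_connection::sent_bytes() would reduce to a single inc() call and totals could be read without reaching into session_impl.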
relevance 3../src/web_peer_connection.cpp:596just make this peer not have the pieces associated with the file we just requested. Only when it doesn't have any pieces of the file should we do the following

just make this peer not have the pieces associated with the file we just requested. Only when it doesn't have any pieces of the file should we do the following

../src/web_peer_connection.cpp:596

		{
@@ -1042,53 +1043,7 @@ should not include internal state.

../include/libtorrent/torrent_info.hp // The URL of the web seed std::string url; -

relevance 3../include/libtorrent/udp_tracker_connection.hpp:122this should be a vector

this should be a vector

../include/libtorrent/udp_tracker_connection.hpp:122

			, char const* buf, int size);
-		bool on_receive_hostname(error_code const& e, char const* hostname
-			, char const* buf, int size);
-		bool on_connect_response(char const* buf, int size);
-		bool on_announce_response(char const* buf, int size);
-		bool on_scrape_response(char const* buf, int size);
-
-		// wraps tracker_connection::fail
-		void fail(error_code const& ec, int code = -1
-			, char const* msg = "", int interval = 0, int min_interval = 0);
-
-		void send_udp_connect();
-		void send_udp_announce();
-		void send_udp_scrape();
-
-		virtual void on_timeout(error_code const& ec);
-
-		udp::endpoint pick_target_endpoint() const;
-
-		std::string m_hostname;
-
std::list<tcp::endpoint> m_endpoints; -
- struct connection_cache_entry - { - boost::int64_t connection_id; - ptime expires; - }; - - static std::map<address, connection_cache_entry> m_connection_cache; - static mutex m_cache_mutex; - - udp::endpoint m_target; - - boost::uint32_t m_transaction_id; - int m_attempts; - - // action_t - boost::uint8_t m_state; - - bool m_abort; - }; - -} - -#endif // TORRENT_UDP_TRACKER_CONNECTION_HPP_INCLUDED - -
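Regarding "this should be a vector": m_endpoints holds a small set of resolved endpoints that is filled once after hostname resolution and then only iterated, which is the case std::vector handles best (contiguous, cache-friendly storage with O(1) indexing, unlike std::list). A toy illustration, with ints standing in for tcp::endpoint:

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

// stand-in for building the endpoint list once, then only
// iterating and indexing it
std::vector<int> make_endpoints()
{
	std::vector<int> eps = {3, 1, 2};
	std::sort(eps.begin(), eps.end()); // e.g. impose a preference order
	return eps;
}
```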
relevance 2../src/disk_io_thread.cpp:844should this be allocated on the stack?

should this be allocated on the stack?

../src/disk_io_thread.cpp:844

			// if we're also flushing the read cache, this piece
+
relevance 2../src/disk_io_thread.cpp:844should this be allocated on the stack?

should this be allocated on the stack?

../src/disk_io_thread.cpp:844

			// if we're also flushing the read cache, this piece
 			// should be removed as soon as all write jobs finishes
 			// otherwise it will turn into a read piece
 		}
@@ -1139,7 +1094,7 @@ should not include internal state.

../include/libtorrent/torrent_info.hp { cached_piece_entry* pe = m_disk_cache.find_piece(storage, (*i)->piece); TORRENT_PIECE_ASSERT(pe->num_dirty == 0, pe); -

relevance 2../src/disk_io_thread.cpp:885we're not flushing the read cache at all?

we're not flushing the read cache at all?

../src/disk_io_thread.cpp:885

			// from disk_io_thread::do_delete, which is a fence job and should
+
relevance 2../src/disk_io_thread.cpp:885we're not flushing the read cache at all?

we're not flushing the read cache at all?

../src/disk_io_thread.cpp:885

			// from disk_io_thread::do_delete, which is a fence job and should
 			// have any other jobs active, i.e. there should not be any references
 			// keeping pieces or blocks alive
 			if ((flags & flush_delete_cache) && (flags & flush_expect_clear))
@@ -1190,7 +1145,7 @@ should not include internal state.

../include/libtorrent/torrent_info.hp if (e->num_dirty == 0) continue; pieces.push_back(std::make_pair(e->storage.get(), int(e->piece))); } -

relevance 2../src/file.cpp:1491use vm_copy here, if available, and if buffers are aligned

use vm_copy here, if available, and if buffers are aligned

../src/file.cpp:1491

		CloseHandle(native_handle());
+
relevance 2../src/file.cpp:1491use vm_copy here, if available, and if buffers are aligned

use vm_copy here, if available, and if buffers are aligned

../src/file.cpp:1491

		CloseHandle(native_handle());
 		m_path.clear();
 #else
 		if (m_file_handle != INVALID_HANDLE_VALUE)
@@ -1220,7 +1175,7 @@ should not include internal state.

../include/libtorrent/torrent_info.hp int offset = 0; for (int i = 0; i < num_bufs; ++i) { -

relevance 2../src/file.cpp:1502use vm_copy here, if available, and if buffers are aligned

use vm_copy here, if available, and if buffers are aligned

../src/file.cpp:1502

	}
+
relevance 2../src/file.cpp:1502use vm_copy here, if available, and if buffers are aligned

use vm_copy here, if available, and if buffers are aligned

../src/file.cpp:1502

	}
 
 	// defined in storage.cpp
 	int bufs_size(file::iovec_t const* bufs, int num_bufs);
@@ -1271,7 +1226,7 @@ should not include internal state.

../include/libtorrent/torrent_info.hp // issue a single write operation instead of using a vector // operation int buf_size = 0; -

relevance 2../src/peer_connection.cpp:4839use a deadline_timer for timeouts. Don't rely on second_tick()! Hook this up to connect timeout as well. This would improve performance because of less work in second_tick(), and might let us remove ticking entirely eventually

use a deadline_timer for timeouts. Don't rely on second_tick()! +

relevance 2../src/peer_connection.cpp:4839use a deadline_timer for timeouts. Don't rely on second_tick()! Hook this up to connect timeout as well. This would improve performance because of less work in second_tick(), and might let us remove ticking entirely eventually

use a deadline_timer for timeouts. Don't rely on second_tick()! Hook this up to connect timeout as well. This would improve performance because of less work in second_tick(), and might let us remove ticking entirely eventually

../src/peer_connection.cpp:4839

			if (is_i2p(*m_socket))
@@ -1325,7 +1280,7 @@ entirely eventually

../src/peer_connection.cpp:4839

relevance 2../src/session_impl.cpp:226find a better place for this function

find a better place for this function

../src/session_impl.cpp:226

			*j.vec, j.peer->make_write_handler(boost::bind(
+
relevance 2../src/session_impl.cpp:226find a better place for this function

find a better place for this function

../src/session_impl.cpp:226

			*j.vec, j.peer->make_write_handler(boost::bind(
 				&peer_connection::on_send_data, j.peer, _1, _2)));
 	}
 	else
@@ -1376,7 +1331,7 @@ namespace aux {
 		const static class_mapping v4_classes[] =
 		{
 			// everything
-
relevance 2../src/session_impl.cpp:1833the udp socket(s) should be using the same generic mechanism and not be restricted to a single one. We should open one listen socket for each entry in the listen_interfaces list

the udp socket(s) should be using the same generic +

relevance 2../src/session_impl.cpp:1833the udp socket(s) should be using the same generic mechanism and not be restricted to a single one. We should open one listen socket for each entry in the listen_interfaces list

the udp socket(s) should be using the same generic mechanism and not be restricted to a single one. We should open one listen socket for each entry in the listen_interfaces list

../src/session_impl.cpp:1833

				}
@@ -1430,7 +1385,7 @@ listen_interfaces list

../src/session_impl.cpp:1833

relevance 2../src/session_impl.cpp:1933use bind_to_device in udp_socket

use bind_to_device in udp_socket

../src/session_impl.cpp:1933

		{
+
relevance 2../src/session_impl.cpp:1933use bind_to_device in udp_socket

use bind_to_device in udp_socket

../src/session_impl.cpp:1933

		{
 #if defined TORRENT_VERBOSE_LOGGING || defined TORRENT_LOGGING || defined TORRENT_ERROR_LOGGING
 			char msg[200];
 			snprintf(msg, sizeof(msg), "cannot bind TCP listen socket to interface \"%s\": %s"
@@ -1476,7 +1431,7 @@ listen_interfaces list

../src/session_impl.cpp:1833

relevance 2../src/session_impl.cpp:1960use bind_to_device in udp_socket

use bind_to_device in udp_socket

../src/session_impl.cpp:1960

			session_log("SSL: cannot bind to UDP interface \"%s\": %s"
+
relevance 2../src/session_impl.cpp:1960use bind_to_device in udp_socket

use bind_to_device in udp_socket

../src/session_impl.cpp:1960

			session_log("SSL: cannot bind to UDP interface \"%s\": %s"
 				, print_endpoint(m_listen_interface).c_str(), ec.message().c_str());
 #endif
 			if (m_alerts.should_post<listen_failed_alert>())
@@ -1527,8 +1482,8 @@ listen_interfaces list

../src/session_impl.cpp:1833

relevance 2../src/session_impl.cpp:3401make a list for torrents that want to be announced on the DHT so we don't have to loop over all torrents, just to find the ones that want to announce

make a list for torrents that want to be announced on the DHT so we -don't have to loop over all torrents, just to find the ones that want to announce

../src/session_impl.cpp:3401

		if (!m_dht_torrents.empty())
+
relevance 2../src/session_impl.cpp:3394make a list for torrents that want to be announced on the DHT so we don't have to loop over all torrents, just to find the ones that want to announce

make a list for torrents that want to be announced on the DHT so we +don't have to loop over all torrents, just to find the ones that want to announce

../src/session_impl.cpp:3394

		if (!m_dht_torrents.empty())
 		{
 			boost::shared_ptr<torrent> t;
 			do
@@ -1579,7 +1534,7 @@ don't have to loop over all torrents, just to find the ones that want to announc
 		if (m_torrents.empty()) return;
 
 		if (m_next_lsd_torrent == m_torrents.end())
-
relevance 2../src/torrent.cpp:701post alert

post alert

../src/torrent.cpp:701

		state_updated();
+
relevance 2../src/torrent.cpp:701post alert

post alert

../src/torrent.cpp:701

		state_updated();
 
 		set_state(torrent_status::downloading);
 
@@ -1630,7 +1585,7 @@ don't have to loop over all torrents, just to find the ones that want to announc
 		TORRENT_ASSERT(piece >= 0);
 		TORRENT_ASSERT(m_verified.get_bit(piece) == false);
 		++m_num_verified;
-
relevance 2../src/torrent.cpp:4694abort lookups this torrent has made via the session host resolver interface

abort lookups this torrent has made via the +

relevance 2../src/torrent.cpp:4694abort lookups this torrent has made via the session host resolver interface

abort lookups this torrent has made via the session host resolver interface

../src/torrent.cpp:4694

		// files belonging to the torrents
 		disconnect_all(errors::torrent_aborted, peer_connection_interface::op_bittorrent);
 
@@ -1682,7 +1637,7 @@ session host resolver interface

../src/torrent.cpp:4694

relevance 2../src/web_peer_connection.cpp:655create a mapping of file-index to redirection URLs. Use that to form URLs instead. Support to reconnect to a new server without destructing this peer_connection

create a mapping of file-index to redirection URLs. Use that to form +

relevance 2../src/web_peer_connection.cpp:655create a mapping of file-index to redirection URLs. Use that to form URLs instead. Support to reconnect to a new server without destructing this peer_connection

create a mapping of file-index to redirection URLs. Use that to form URLs instead. Support to reconnect to a new server without destructing this peer_connection

../src/web_peer_connection.cpp:655

						== dl_target);
 #endif
@@ -1735,7 +1690,7 @@ peer_connection

../src/web_peer_connection.cpp:655

relevance 2../src/kademlia/dos_blocker.cpp:75make these limits configurable

make these limits configurable

../src/kademlia/dos_blocker.cpp:75

	bool dos_blocker::incoming(address addr, ptime now)
+
relevance 2../src/kademlia/dos_blocker.cpp:75make these limits configurable

make these limits configurable

../src/kademlia/dos_blocker.cpp:75

	bool dos_blocker::incoming(address addr, ptime now)
 	{
 		node_ban_entry* match = 0;
 		node_ban_entry* min = m_ban_nodes;
@@ -1786,7 +1741,7 @@ peer_connection

../src/web_peer_connection.cpp:655

relevance 2../src/kademlia/node.cpp:67make this configurable in dht_settings

make this configurable in dht_settings

../src/kademlia/node.cpp:67

#include "libtorrent/kademlia/routing_table.hpp"
+
relevance 2../src/kademlia/node.cpp:67make this configurable in dht_settings

make this configurable in dht_settings

../src/kademlia/node.cpp:67

#include "libtorrent/kademlia/routing_table.hpp"
 #include "libtorrent/kademlia/node.hpp"
 #include "libtorrent/kademlia/dht_observer.hpp"
 
@@ -1837,7 +1792,7 @@ void purge_peers(std::set<peer_entry>& peers)
 
 void nop() {}
 
-
relevance 2../src/kademlia/node.cpp:804find_node should write directly to the response entry

find_node should write directly to the response entry

../src/kademlia/node.cpp:804

			TORRENT_LOG(node) << " values: " << reply["values"].list().size();
+
relevance 2../src/kademlia/node.cpp:804find_node should write directly to the response entry

find_node should write directly to the response entry

../src/kademlia/node.cpp:804

			TORRENT_LOG(node) << " values: " << reply["values"].list().size();
 		}
 #endif
 	}
@@ -1888,7 +1843,7 @@ void nop() {}
 		// listen port and instead use the source port of the packet?
 		if (msg_keys[5] && msg_keys[5]->int_value() != 0)
 			port = m.addr.port();
-
relevance 2../src/kademlia/node_id.cpp:133this could be optimized if SSE 4.2 is available. It could also be optimized given that we have a fixed length

this could be optimized if SSE 4.2 is +

relevance 2../src/kademlia/node_id.cpp:133this could be optimized if SSE 4.2 is available. It could also be optimized given that we have a fixed length

this could be optimized if SSE 4.2 is available. It could also be optimized given that we have a fixed length

../src/kademlia/node_id.cpp:133

		b6 = ip_.to_v6().to_bytes();
 		ip = &b6[0];
@@ -1941,7 +1896,7 @@ bool verify_id(node_id const& nid, address const& source_ip)
 	if (is_local(source_ip)) return true;
 
 	node_id h = generate_id_impl(source_ip, nid[19]);
-
relevance 2../include/libtorrent/enum_net.hpp:137this could be done more efficiently by just looking up the interface with the given name, maybe even with if_nametoindex()

this could be done more efficiently by just looking up +

relevance 2../include/libtorrent/enum_net.hpp:137this could be done more efficiently by just looking up the interface with the given name, maybe even with if_nametoindex()

this could be done more efficiently by just looking up the interface with the given name, maybe even with if_nametoindex()

../include/libtorrent/enum_net.hpp:137

 		address ip = address::from_string(device_name, ec);
 		if (!ec)
@@ -1993,7 +1948,7 @@ the interface with the given name, maybe even with if_nametoindex()

../i // returns true if the given device exists TORRENT_EXTRA_EXPORT bool has_interface(char const* name, io_service& ios -

relevance 2../include/libtorrent/intrusive_ptr_base.hpp:44remove this class and transition over to using shared_ptr and make_shared instead

remove this class and transition over to using shared_ptr and +

relevance 2../include/libtorrent/intrusive_ptr_base.hpp:44remove this class and transition over to using shared_ptr and make_shared instead

remove this class and transition over to using shared_ptr and make_shared instead

../include/libtorrent/intrusive_ptr_base.hpp:44

CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
@@ -2045,7 +2000,7 @@ namespace libtorrent
 
 		intrusive_ptr_base(): m_refs(0) {}
 
-
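The transition suggested above maps directly onto the standard library. A brief sketch; peer_info_stub is a made-up type standing in for any class that currently derives from intrusive_ptr_base:

```cpp
#include <memory>
#include <cassert>

// no intrusive refcount member needed; shared_ptr carries the count
struct peer_info_stub { int value = 42; };

std::shared_ptr<peer_info_stub> make_peer()
{
	// make_shared does one allocation for the object and its control
	// block, matching the footprint benefit intrusive counting provided
	return std::make_shared<peer_info_stub>();
}
```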
relevance 2../include/libtorrent/session_settings.hpp:55this type is only used internally now. move it to an internal header and make this type properly deprecated.

this type is only used internally now. move it to an internal +

relevance 2../include/libtorrent/session_settings.hpp:55this type is only used internally now. move it to an internal header and make this type properly deprecated.

this type is only used internally now. move it to an internal header and make this type properly deprecated.

../include/libtorrent/session_settings.hpp:55

 #include "libtorrent/version.hpp"
 #include "libtorrent/config.hpp"
@@ -2097,7 +2052,7 @@ namespace libtorrent
 		// proxy_settings::type field.
 		enum proxy_type
 		{
-
relevance 2../include/libtorrent/settings_pack.hpp:70add an API to query a settings_pack as well

add an API to query a settings_pack as well

../include/libtorrent/settings_pack.hpp:70

relevance 2../include/libtorrent/settings_pack.hpp:71maybe convert all bool types into int-types as well

maybe convert all bool types into int-types as well

../include/libtorrent/settings_pack.hpp:71

{
+
relevance 2../include/libtorrent/settings_pack.hpp:70add an API to query a settings_pack as well

add an API to query a settings_pack as well

../include/libtorrent/settings_pack.hpp:70

relevance 2../include/libtorrent/settings_pack.hpp:71maybe convert all bool types into int-types as well

maybe convert all bool types into int-types as well

../include/libtorrent/settings_pack.hpp:71

{
 	namespace aux { struct session_impl; struct session_settings; }
 
 	struct settings_pack;
@@ -2148,7 +2103,7 @@ namespace libtorrent
 		{
 			string_type_base = 0x0000,
 			int_type_base =    0x4000,
-
relevance 2../include/libtorrent/socks5_stream.hpp:129fix error messages to use custom error_code category

fix error messages to use custom error_code category

../include/libtorrent/socks5_stream.hpp:129

relevance 2../include/libtorrent/socks5_stream.hpp:130add async_connect() that takes a hostname and port as well

add async_connect() that takes a hostname and port as well

../include/libtorrent/socks5_stream.hpp:130

		if (m_dst_name.size() > 255)
+
relevance 2../include/libtorrent/socks5_stream.hpp:129fix error messages to use custom error_code category

fix error messages to use custom error_code category

../include/libtorrent/socks5_stream.hpp:129

relevance 2../include/libtorrent/socks5_stream.hpp:130add async_connect() that takes a hostname and port as well

add async_connect() that takes a hostname and port as well

../include/libtorrent/socks5_stream.hpp:130

		if (m_dst_name.size() > 255)
 			m_dst_name.resize(255);
 	}
 
@@ -2199,7 +2154,7 @@ namespace libtorrent
 		m_resolver.async_resolve(q, boost::bind(
 			&socks5_stream::name_lookup, this, _1, _2, h));
 	}
-
relevance 2../include/libtorrent/torrent_info.hpp:306there may be some opportunities to optimize the size of torrent_info, specifically to turn some std::string and std::vector into pointers

there may be some opportunities to optimize the size of torrent_info. +

relevance 2../include/libtorrent/torrent_info.hpp:306there may be some opportunities to optimize the size of torrent_info, specifically to turn some std::string and std::vector into pointers

there may be some opportunities to optimize the size of torrent_info, specifically to turn some std::string and std::vector into pointers

../include/libtorrent/torrent_info.hpp:306

		bool resolving;
 
 		// if the user wanted to remove this while
@@ -2251,7 +2206,7 @@ specifically to turn some std::string and std::vector into pointers

../i #ifndef BOOST_NO_EXCEPTIONS torrent_info(lazy_entry const& torrent_file, int flags = 0); torrent_info(char const* buffer, int size, int flags = 0); -

relevance 2  ../include/libtorrent/tracker_manager.hpp:270  this class probably doesn't need to have virtual functions.

this class probably doesn't need to have virtual functions.

../include/libtorrent/tracker_manager.hpp:270

		int m_completion_timeout;
 
 		typedef mutex mutex_t;
 		mutable mutex_t m_mutex;
		boost::shared_ptr<tracker_connection> shared_from_this()
		{

relevance 2  ../include/libtorrent/tracker_manager.hpp:367  this should be unique_ptr in the future

this should be unique_ptr in the future

../include/libtorrent/tracker_manager.hpp:367

		// this is only used for SOCKS packets, since
 		// they may be addressed to hostname
 		virtual bool incoming_packet(error_code const& e, char const* hostname
 			, char const* buf, int size);
		class udp_socket& m_udp_socket;
		resolver_interface& m_host_resolver;
		aux::session_settings const& m_settings;
		counters& m_stats_counters;
		bool m_abort;
	};
}

#endif // TORRENT_TRACKER_MANAGER_HPP_INCLUDED

relevance 2  ../include/libtorrent/aux_/session_interface.hpp:107  the IP voting mechanism should be factored out to its own class, not part of the session

the IP voting mechanism should be factored out to its own class, not part of the session

../include/libtorrent/aux_/session_interface.hpp:107

	class port_filter;
 	struct settings_pack;
 	struct torrent_peer_allocator_interface;
 		virtual void queue_async_resume_data(boost::shared_ptr<torrent> const& t) = 0;
 		virtual void done_async_resume() = 0;
 		virtual void evict_torrent(torrent* t) = 0;
relevance 1  ../src/http_seed_connection.cpp:124  in chunked encoding mode, this assert won't hold. the chunk headers should be subtracted from the receive_buffer_size

in chunked encoding mode, this assert won't hold. the chunk headers should be subtracted from the receive_buffer_size

../src/http_seed_connection.cpp:124

	boost::optional<piece_block_progress>
 	http_seed_connection::downloading_piece_progress() const
 	{
		std::string request;
		request.reserve(400);

relevance 1  ../src/session_impl.cpp:5316  report the proper address of the router as the source IP of this understanding of our external address, instead of the empty address

report the proper address of the router as the source IP of
this understanding of our external address, instead of the empty address

../src/session_impl.cpp:5316

	void session_impl::on_port_mapping(int mapping, address const& ip, int port
 		, error_code const& ec, int map_transport)
 	{
 		TORRENT_ASSERT(is_single_thread());

relevance 1  ../src/session_impl.cpp:6481  we only need to do this if our global IPv4 address has changed since the DHT (currently) only supports IPv4. Since restarting the DHT is kind of expensive, it would be nice to not do it unnecessarily

we only need to do this if our global IPv4 address has changed
since the DHT (currently) only supports IPv4. Since restarting the DHT
is kind of expensive, it would be nice to not do it unnecessarily

../src/session_impl.cpp:6481

#endif
 
 		if (!m_external_ip.cast_vote(ip, source_type, source)) return;
 
		, boost::function<void(char*)> const& handler)
	{
		return m_disk_thread.async_allocate_disk_buffer(category, handler);

relevance 1  ../src/torrent.cpp:1142  make this depend on the error and on the filesystem the files are being downloaded to. If the error is no_space_left_on_device and the filesystem doesn't support sparse files, only zero the priorities of the pieces that are at the tails of all files, leaving everything up to the highest written piece in each file

make this depend on the error and on the filesystem the
files are being downloaded to. If the error is no_space_left_on_device
and the filesystem doesn't support sparse files, only zero the priorities
of the pieces that are at the tails of all files, leaving everything
up to the highest written piece in each file

../src/torrent.cpp:1142

relevance 1  ../src/torrent.cpp:6837  save the send_stats state instead of throwing them away it may pose an issue when downgrading though

save the send_stats state instead of throwing them away it may pose an issue when downgrading though

../src/torrent.cpp:6837

					for (int k = 0; k < bits; ++k)
 						v |= (i->info[j*8+k].state == piece_picker::block_info::state_finished)
 						? (1 << k) : 0;

relevance 1  ../src/torrent.cpp:7933  should disconnect all peers that have the pieces we have not just seeds. It would be pretty expensive to check all pieces for all peers though

should disconnect all peers that have the pieces we have not just seeds. It would be pretty expensive to check all pieces for all peers though

../src/torrent.cpp:7933

		set_state(torrent_status::finished);
 		set_queue_position(-1);

relevance 1  ../include/libtorrent/ip_voter.hpp:122  instead, have one instance per possible subnet, global IPv4, global IPv6, loopback, 192.168.x.x, 10.x.x.x, etc.

instead, have one instance per possible subnet, global IPv4, global IPv6, loopback, 192.168.x.x, 10.x.x.x, etc.

../include/libtorrent/ip_voter.hpp:122

		// away all the votes and started from scratch, in case
 		// our IP has changed
 		ptime m_last_rotate;
 	};

relevance 1  ../include/libtorrent/web_peer_connection.hpp:121  if we make this be a disk_buffer_holder instead we would save a copy sometimes use allocate_disk_receive_buffer and release_disk_receive_buffer

if we make this be a disk_buffer_holder instead we would save a copy sometimes use allocate_disk_receive_buffer and release_disk_receive_buffer

../include/libtorrent/web_peer_connection.hpp:121

 		// returns the block currently being

relevance 0  ../test/test_block_cache.cpp:472  test try_evict_blocks

test try_evict_blocks

../test/test_block_cache.cpp:472

relevance 0  ../test/test_block_cache.cpp:473  test evicting volatile pieces, to see them be removed

test evicting volatile pieces, to see them be removed

../test/test_block_cache.cpp:473

relevance 0  ../test/test_block_cache.cpp:474  test evicting dirty pieces

test evicting dirty pieces

../test/test_block_cache.cpp:474

relevance 0  ../test/test_block_cache.cpp:475  test free_piece

test free_piece

../test/test_block_cache.cpp:475

relevance 0  ../test/test_block_cache.cpp:476  test abort_dirty

test abort_dirty

../test/test_block_cache.cpp:476

relevance 0  ../test/test_block_cache.cpp:477  test unaligned reads

test unaligned reads

../test/test_block_cache.cpp:477

	// it's supposed to be a cache hit
 	TEST_CHECK(ret >= 0);
 	// return the reference to the buffer we just read
 	RETURN_BUFFER;
 
return 0;
}

relevance 0  ../test/test_metadata_extension.cpp:87  it would be nice to test reversing which session is making the connection as well

it would be nice to test reversing which session is making the connection as well

../test/test_metadata_extension.cpp:87

	, boost::shared_ptr<libtorrent::torrent_plugin> (*constructor)(libtorrent::torrent*, void*)
 	, int timeout)
 {
	ses1.apply_settings(pack);
	ses2.apply_settings(pack);

relevance 0  ../test/test_policy.cpp:419  test applying a port_filter

test applying a port_filter

../test/test_policy.cpp:419

relevance 0  ../test/test_policy.cpp:420  test erasing peers

test erasing peers

../test/test_policy.cpp:420

relevance 0  ../test/test_policy.cpp:421  test using port and ip filter

test using port and ip filter

../test/test_policy.cpp:421

relevance 0  ../test/test_policy.cpp:422  test incrementing failcount (and make sure we no longer consider the peer a connect candidate)

test incrementing failcount (and make sure we no longer consider the peer a connect candidate)

../test/test_policy.cpp:422

relevance 0  ../test/test_policy.cpp:423  test max peerlist size

test max peerlist size

../test/test_policy.cpp:423

relevance 0  ../test/test_policy.cpp:424  test logic for which connection to keep when receiving an incoming connection to the same peer as we just made an outgoing connection to

test logic for which connection to keep when receiving an incoming connection to the same peer as we just made an outgoing connection to

../test/test_policy.cpp:424

relevance 0  ../test/test_policy.cpp:425  test update_peer_port with allow_multiple_connections_per_ip

test update_peer_port with allow_multiple_connections_per_ip

../test/test_policy.cpp:425

relevance 0  ../test/test_policy.cpp:426  test set_seed

test set_seed

../test/test_policy.cpp:426

relevance 0  ../test/test_policy.cpp:427  test has_peer

test has_peer

../test/test_policy.cpp:427

relevance 0  ../test/test_policy.cpp:428  test insert_peer with a full list

test insert_peer with a full list

../test/test_policy.cpp:428

relevance 0  ../test/test_policy.cpp:429  test add i2p peers

test add i2p peers

../test/test_policy.cpp:429

relevance 0  ../test/test_policy.cpp:430  test allow_i2p_mixed

test allow_i2p_mixed

../test/test_policy.cpp:430

relevance 0  ../test/test_policy.cpp:431  test insert_peer failing

test insert_peer failing

../test/test_policy.cpp:431

relevance 0  ../test/test_policy.cpp:432  test IPv6

test IPv6

../test/test_policy.cpp:432

relevance 0  ../test/test_policy.cpp:433  test connect_to_peer() failing

test connect_to_peer() failing

../test/test_policy.cpp:433

relevance 0  ../test/test_policy.cpp:434  test connection_closed

test connection_closed

../test/test_policy.cpp:434

relevance 0  ../test/test_policy.cpp:435  test recalculate connect candidates

test recalculate connect candidates

../test/test_policy.cpp:435

relevance 0  ../test/test_policy.cpp:436  add tests here

add tests here

../test/test_policy.cpp:436

		for (int i = 0; i < 100; ++i)
 		{
 			torrent_peer* peer = p.add_peer(rand_tcp_ep(), 0, 0, &st);
 			TEST_EQUAL(st.erased.size(), 0);

relevance 0  ../test/test_primitives.cpp:213  test the case where we have > 120 samples (and have the base delay actually be updated)

test the case where we have > 120 samples (and have the base delay actually be updated)

../test/test_primitives.cpp:213

relevance 0  ../test/test_primitives.cpp:214  test the case where a sample is lower than the history entry but not lower than the base

test the case where a sample is lower than the history entry but not lower than the base

../test/test_primitives.cpp:214

	TEST_CHECK(!filter.find(k3));
 	TEST_CHECK(filter.find(k4));
 
 	// test timestamp_history
	sanitize_append_path_element(path, "a...b", 5);
	TEST_EQUAL(path, "a...b");

relevance 0  ../test/test_rss.cpp:136  verify some key state is saved in 'state'

verify some key state is saved in 'state'

../test/test_rss.cpp:136

	feed_status st;
 	f->get_feed_status(&st);
 	TEST_CHECK(!st.error);
 
 	return 0;
 }
 
relevance 0  ../test/test_ssl.cpp:377  test using a signed certificate with the wrong info-hash in DN

test using a signed certificate with the wrong info-hash in DN

../test/test_ssl.cpp:377

	// in verifying peers
 	ctx.set_verify_mode(context::verify_none, ec);
 	if (ec)
 	{
 			return false;
 		}
 		fprintf(stderr, "use_tmp_dh_file \"%s\"\n", dh_params.c_str());
relevance 0  ../test/test_ssl.cpp:475  also test using a hash that refers to a valid torrent but that differs from the SNI hash

also test using a hash that refers to a valid torrent but that differs from the SNI hash

../test/test_ssl.cpp:475

	print_alerts(ses1, "ses1", true, true, true, &on_alert);
 	if (ec)
 	{

relevance 0  ../test/test_torrent.cpp:132  wait for an alert rather than just waiting 10 seconds. This is kind of silly

wait for an alert rather than just waiting 10 seconds. This is kind of silly

../test/test_torrent.cpp:132

			TEST_EQUAL(h.file_priorities().size(), info->num_files());
 			TEST_EQUAL(h.file_priorities()[0], 0);
 			if (info->num_files() > 1)
 				TEST_EQUAL(h.file_priorities()[1], 0);

relevance 0  ../test/test_torrent_parse.cpp:114  test remap_files

test remap_files

../test/test_torrent_parse.cpp:114

relevance 0  ../test/test_torrent_parse.cpp:115  merkle torrents. specifically torrent_info::add_merkle_nodes and torrent with "root hash"

merkle torrents. specifically torrent_info::add_merkle_nodes and torrent with "root hash"

../test/test_torrent_parse.cpp:115

relevance 0  ../test/test_torrent_parse.cpp:116  torrent with 'p' (padfile) attribute

torrent with 'p' (padfile) attribute

../test/test_torrent_parse.cpp:116

relevance 0  ../test/test_torrent_parse.cpp:117  torrent with 'h' (hidden) attribute

torrent with 'h' (hidden) attribute

../test/test_torrent_parse.cpp:117

relevance 0  ../test/test_torrent_parse.cpp:118  torrent with 'x' (executable) attribute

torrent with 'x' (executable) attribute

../test/test_torrent_parse.cpp:118

relevance 0  ../test/test_torrent_parse.cpp:119  torrent with 'l' (symlink) attribute

torrent with 'l' (symlink) attribute

../test/test_torrent_parse.cpp:119

relevance 0  ../test/test_torrent_parse.cpp:120  creating a merkle torrent (torrent_info::build_merkle_list)

creating a merkle torrent (torrent_info::build_merkle_list)

../test/test_torrent_parse.cpp:120

relevance 0  ../test/test_torrent_parse.cpp:121  torrent with multiple trackers in multiple tiers, making sure we shuffle them (how do you test shuffling?, load it multiple times and make sure it's in different order at least once)

torrent with multiple trackers in multiple tiers, making sure we shuffle them (how do you test shuffling?, load it multiple times and make sure it's in different order at least once)

../test/test_torrent_parse.cpp:121

	{ "invalid_info.torrent", errors::torrent_missing_info },
 	{ "string.torrent", errors::torrent_is_no_dict },
 	{ "negative_size.torrent", errors::torrent_invalid_length },
 	{ "negative_file_size.torrent", errors::torrent_file_parse_failed },
 	TEST_EQUAL(merkle_num_leafs(15), 16);
 	TEST_EQUAL(merkle_num_leafs(16), 16);
 	TEST_EQUAL(merkle_num_leafs(17), 32);
relevance 0  ../test/test_tracker.cpp:198  test parse peers6

test parse peers6

../test/test_tracker.cpp:198

relevance 0  ../test/test_tracker.cpp:199  test parse tracker-id

test parse tracker-id

../test/test_tracker.cpp:199

relevance 0  ../test/test_tracker.cpp:200  test parse failure-reason

test parse failure-reason

../test/test_tracker.cpp:200

relevance 0  ../test/test_tracker.cpp:201  test all failure paths: invalid bencoding, not a dictionary, no files entry in scrape response, no info-hash entry in scrape response, malformed peers in peer list of dictionaries, uneven number of bytes in peers and peers6 string responses

test all failure paths
invalid bencoding
not a dictionary
no files entry in scrape response
no info-hash entry in scrape response
malformed peers in peer list of dictionaries
uneven number of bytes in peers and peers6 string responses

../test/test_tracker.cpp:201

	snprintf(tracker_url, sizeof(tracker_url), "http://127.0.0.1:%d/announce", http_port);
	t->add_tracker(tracker_url, 0);

relevance 0  ../test/test_upnp.cpp:100  store the log and verify that some key messages are there

store the log and verify that some key messages are there

../test/test_upnp.cpp:100

		"USN:uuid:000f-66d6-7296000099dc::upnp:rootdevice\r\n"
 		"Location: http://127.0.0.1:%d/upnp.xml\r\n"
 		"Server: Custom/1.0 UPnP/1.0 Proc/Ver\r\n"
 		"EXT:\r\n"
 	error_code ec;
 	load_file(root_filename, buf, ec);
 	buf.push_back(0);
relevance 0  ../test/web_seed_suite.cpp:373  file hashes don't work with the new torrent creator reading async

file hashes don't work with the new torrent creator reading async

../test/web_seed_suite.cpp:373

		// corrupt the files now, so that the web seed will be banned
 		if (test_url_seed)
 		{
 			create_random_files(combine_path(save_path, "torrent_dir"), file_sizes, sizeof(file_sizes)/sizeof(file_sizes[0]));
 			, chunked_encoding, test_ban, keepalive);
 		
 		if (test_url_seed && test_rename)
relevance 0  ../src/block_cache.cpp:884  it's somewhat expensive to iterate over this linked list. Presumably because of the random access of memory. It would be nice if pieces with no evictable blocks weren't in this list

it's somewhat expensive to iterate over this linked list. Presumably because of the random access of memory. It would be nice if pieces with no evictable blocks weren't in this list

../src/block_cache.cpp:884

	}

relevance 0  ../src/block_cache.cpp:948  this should probably only be done every n:th time

this should probably only be done every n:th time

../src/block_cache.cpp:948

			}
 
 			if (pe->ok_to_evict())
 			{

relevance 0  ../src/block_cache.cpp:1720  create a holder for refcounts that automatically decrement

create a holder for refcounts that automatically decrement

../src/block_cache.cpp:1720

	}
 
 	j->buffer = allocate_buffer("send buffer");
 	if (j->buffer == 0) return -2;
bool block_cache::maybe_free_piece(cached_piece_entry* pe)
 
 	boost::shared_ptr<piece_manager> s = pe->storage;
 
relevance 0  ../src/bt_peer_connection.cpp:645  this could be optimized using knuth morris pratt

this could be optimized using knuth morris pratt

../src/bt_peer_connection.cpp:645

		{
 			disconnect(errors::no_memory, op_encryption);
 			return;
 		}
 // 		}
 
         // no complete sync
relevance 0  ../src/bt_peer_connection.cpp:2216  if we're finished, send upload_only message

if we're finished, send upload_only message

../src/bt_peer_connection.cpp:2216

			if (msg[5 + k / 8] & (0x80 >> (k % 8))) bitfield_string[k] = '1';
 			else bitfield_string[k] = '0';
 		}
 		peer_log("==> BITFIELD [ %s ]", bitfield_string.c_str());
 				? m_settings.get_str(settings_pack::user_agent)
 				: m_settings.get_str(settings_pack::handshake_client_version);
 		}
relevance 0  ../src/disk_io_thread.cpp:921  instead of doing a lookup each time through the loop, save cached_piece_entry pointers with piece_refcount incremented to pin them

instead of doing a lookup each time through the loop, save cached_piece_entry pointers with piece_refcount incremented to pin them

../src/disk_io_thread.cpp:921

	// this is why we pass in 1 as cont_block to the flushing functions
 	void disk_io_thread::try_flush_write_blocks(int num, tailqueue& completed_jobs
 		, mutex::scoped_lock& l)
			cached_piece_entry* pe = m_disk_cache.find_piece(i->first, i->second);
			if (pe == NULL) continue;
			if (pe->num_dirty == 0) continue;

relevance 0 ../src/disk_io_thread.cpp:1132 instead of doing this, pass in the settings to each storage_interface call. Each disk thread could hold its most recent understanding of the settings in a shared_ptr, and update it every time it wakes up from a job. That way each access to the settings won't require a mutex to be held.

instead of doing this, pass in the settings to each storage_interface call. Each disk thread could hold its most recent understanding of the settings in a shared_ptr, and update it every time it wakes up from a job. That way each access to the settings won't require a mutex to be held.

../src/disk_io_thread.cpp:1132

	{
		// our quanta in case there aren't any other
		// jobs to run in between
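The per-thread snapshot scheme proposed above could be sketched like this (names such as `settings_snapshot` and `settings_store` are hypothetical, not the actual libtorrent types): the network thread publishes a fresh immutable snapshot on every change, and a disk thread grabs one `shared_ptr` per job wake-up, after which all reads are lock-free.

```cpp
#include <memory>
#include <utility>

// hypothetical immutable settings snapshot; the real settings_pack is
// much richer, this only illustrates the sharing scheme
struct settings_snapshot
{
	int cache_size;
	int read_cache_line_size;
};

class settings_store
{
public:
	settings_store() : m_current(std::make_shared<settings_snapshot>()) {}

	// called by the network thread when settings change: publish a
	// fresh snapshot instead of mutating the shared one in place
	void update(settings_snapshot s)
	{
		std::atomic_store(&m_current
			, std::make_shared<settings_snapshot>(std::move(s)));
	}

	// called by a disk thread once per job wake-up; the returned
	// snapshot can then be read without holding any lock
	std::shared_ptr<settings_snapshot const> acquire() const
	{
		return std::atomic_load(&m_current);
	}

private:
	std::shared_ptr<settings_snapshot> m_current;
};
```

Old snapshots stay alive for as long as some disk thread still references them, so a thread in the middle of a job never sees a torn update.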

relevance 0 ../src/disk_io_thread.cpp:1160 a potentially more efficient solution would be to have a special queue for retry jobs, that's only ever run when a job completes, in any thread. It would only work if m_outstanding_jobs > 0

a potentially more efficient solution would be to have a special queue for retry jobs, that's only ever run when a job completes, in any thread. It would only work if m_outstanding_jobs > 0

../src/disk_io_thread.cpp:1160

 		ptime start_time = time_now_hires();
	}

#if TORRENT_USE_ASSERT

relevance 0 ../src/disk_io_thread.cpp:1174 it should clear the hash state even when there's an error, right?

it should clear the hash state even when there's an error, right?

../src/disk_io_thread.cpp:1174

		--m_outstanding_jobs;
 
 		if (ret == retry_job)
 		{
			j->error.operation = storage_error::alloc_cache_piece;
			return -1;
		}

relevance 0 ../src/disk_io_thread.cpp:1871 maybe the tailqueue_iterator should contain a pointer-pointer instead and have an unlink function

maybe the tailqueue_iterator should contain a pointer-pointer instead and have an unlink function

../src/disk_io_thread.cpp:1871

		j->callback = handler;
 
 		add_fence_job(storage, j);
		if (completed_jobs.size())
			add_completed_jobs(completed_jobs);
	}
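The pointer-to-pointer idea can be shown on a minimal singly linked list (illustrative only; the real tailqueue also tracks its tail and size): the iterator stores the address of the link that points at the current node, whether that link is the head pointer or some node's next field, so unlink() needs no predecessor walk.

```cpp
// minimal stand-in for a tailqueue node
struct node
{
	node* next;
	int value;
};

// iterator that stores a pointer to the *link* (the head pointer or a
// next field) rather than a pointer to the node itself
struct link_iterator
{
	node** link;

	node* get() const { return *link; }
	void advance() { link = &(*link)->next; }

	// unlink the current node in O(1) without knowing the predecessor;
	// the iterator then points at the following node
	node* unlink()
	{
		node* n = *link;
		*link = n->next;
		n->next = 0;
		return n;
	}
};
```

The same trick generalizes to a doubly linked tailqueue; the iterator just has to patch the tail pointer when it unlinks the last node.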
relevance 0 ../src/disk_io_thread.cpp:2126 this is potentially very expensive. One way to solve it would be to have a fence for just this one piece.

this is potentially very expensive. One way to solve it would be to have a fence for just this one piece.

../src/disk_io_thread.cpp:2126

	}
 
 	void disk_io_thread::async_clear_piece(piece_manager* storage, int index
		if (!pe->hash) return;
		if (pe->hashing) return;

relevance 0 ../src/disk_io_thread.cpp:2387 we should probably just hang the job on the piece and make sure the hasher gets kicked

we should probably just hang the job on the piece and make sure the hasher gets kicked

../src/disk_io_thread.cpp:2387

		if (pe == NULL)
 		{
 			int cache_state = (j->flags & disk_io_job::volatile_read)
 				? cached_piece_entry::volatile_read_lru
		// increment the refcounts of all
		// blocks up front, and then hash them without holding the lock

relevance 0 ../src/disk_io_thread.cpp:2457 introduce a holder class that automatically increments and decrements the piece_refcount

introduce a holder class that automatically increments and decrements the piece_refcount

../src/disk_io_thread.cpp:2457

		for (int i = ph->offset / block_size; i < blocks_in_piece; ++i)
 		{
 			iov.iov_len = (std::min)(block_size, piece_size - ph->offset);
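The holder class suggested here is a straightforward RAII guard. A sketch (with `cached_piece_entry` reduced to just the refcount; the real one obviously carries much more state, and the real guard would likely also interact with the block cache):

```cpp
// minimal stand-in for the real cached_piece_entry
struct cached_piece_entry
{
	int piece_refcount;
};

// RAII guard: increments piece_refcount on construction and decrements
// it on destruction, so early returns and exceptions can't leak a
// reference and leave the piece pinned forever
class piece_refcount_holder
{
public:
	explicit piece_refcount_holder(cached_piece_entry* p) : m_pe(p)
	{ ++m_pe->piece_refcount; }

	~piece_refcount_holder() { --m_pe->piece_refcount; }

	// non-copyable: exactly one decrement per increment
	piece_refcount_holder(piece_refcount_holder const&) = delete;
	piece_refcount_holder& operator=(piece_refcount_holder const&) = delete;

private:
	cached_piece_entry* m_pe;
};
```

With this in place, the hashing loop can bail out at any point without a matching cleanup site for every exit path.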

relevance 0 ../src/disk_io_thread.cpp:2699 it would be nice to not have to lock the mutex every turn through this loop

it would be nice to not have to lock the mutex every turn through this loop

../src/disk_io_thread.cpp:2699

		{
 			j->error.ec = error::no_memory;
 			j->error.operation = storage_error::alloc_cache_piece;

relevance 0 ../src/http_tracker_connection.cpp:93 support authentication (i.e. user name and password) in the URL

support authentication (i.e. user name and password) in the URL

../src/http_tracker_connection.cpp:93

 	http_tracker_connection::http_tracker_connection(
 		io_service& ios
 		, tracker_manager& man

relevance 0 ../src/http_tracker_connection.cpp:194 support this somehow

support this somehow

../src/http_tracker_connection.cpp:194

				url += escape_string(id.c_str(), id.length());
 			}
 
 #if TORRENT_USE_I2P
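Supporting credentials embedded in the tracker URL (see the entry for http_tracker_connection.cpp:93 above) would mean splitting off the RFC 3986 userinfo part before connecting, and sending it as a basic-auth header instead. A hypothetical helper, not existing libtorrent code:

```cpp
#include <string>

// splits "scheme://user:pass@host/path" into the credentials and a
// credential-free URL. Returns true if a userinfo part was found;
// otherwise `stripped` is just a copy of the input
inline bool split_url_credentials(std::string const& url
	, std::string& auth, std::string& stripped)
{
	std::string::size_type scheme_end = url.find("://");
	if (scheme_end == std::string::npos) { stripped = url; return false; }

	std::string::size_type host_start = scheme_end + 3;
	std::string::size_type path_start = url.find('/', host_start);
	std::string::size_type at = url.find('@', host_start);

	// an '@' only delimits userinfo if it appears before the path
	if (at == std::string::npos
		|| (path_start != std::string::npos && at > path_start))
	{
		stripped = url;
		return false;
	}
	auth = url.substr(host_start, at - host_start);
	stripped = url.substr(0, host_start) + url.substr(at + 1);
	return true;
}
```

The extracted "user:pass" would then be base64-encoded into an `Authorization: Basic` header on the announce request.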

relevance 0 ../src/metadata_transfer.cpp:359 this is not safe. The torrent could be unloaded while we're still sending the metadata

this is not safe. The torrent could be unloaded while we're still sending the metadata

../src/metadata_transfer.cpp:359

				std::pair<int, int> offset
 					= req_to_offset(req, (int)m_tp.metadata().left());
 

relevance 0 ../src/packet_buffer.cpp:176 use compare_less_wrap for this comparison as well

use compare_less_wrap for this comparison as well

../src/packet_buffer.cpp:176

		while (new_size < size)
 			new_size <<= 1;
 
 		void** new_storage = (void**)malloc(sizeof(void*) * new_size);
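compare_less_wrap orders sequence numbers modulo the wrap point: two numbers compare in the intuitive direction when they are within half the sequence space of each other. A sketch consistent with how it's used for 16-bit uTP sequence numbers (mask 0xffff); treat it as illustrative rather than the exact libtorrent definition:

```cpp
#include <cstdint>

// returns true if lhs is "less than" rhs, taking wrapping into
// account. mask is the largest representable value, e.g. 0xffff for
// 16-bit sequence numbers. lhs < rhs iff stepping *backwards* from
// lhs to rhs covers more than half the sequence space
inline bool compare_less_wrap(std::uint32_t lhs, std::uint32_t rhs
	, std::uint32_t mask)
{
	return ((lhs - rhs) & mask) > (mask >> 1);
}
```

With this ordering, 0xffff sorts before 0x0001 (it is two steps behind it after wrapping), which a plain `<` on the raw values gets backwards.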

relevance 0 ../src/part_file.cpp:252 what do we do if someone is currently reading from the disk from this piece? does it matter? Since we won't actively erase the data from disk, but it may be overwritten soon, it's probably not that big of a deal

what do we do if someone is currently reading from the disk from this piece? does it matter? Since we won't actively erase the data from disk, but it may be overwritten soon, it's probably not that big of a deal

../src/part_file.cpp:252

		if (((mode & file::rw_mask) != file::read_only)

relevance 0 ../src/part_file.cpp:344 instead of rebuilding the whole file header and flushing it, update the slot entries as we go

instead of rebuilding the whole file header and flushing it, update the slot entries as we go

../src/part_file.cpp:344

				if (block_to_copy == m_piece_size)
 				{
 					m_free_slots.push_back(i->second);
		for (int piece = 0; piece < m_max_pieces; ++piece)
		{

relevance 0 ../src/peer_connection.cpp:1190 this should be the global download rate

this should be the global download rate

../src/peer_connection.cpp:1190

 		int rate = 0;
 
 		// if we haven't received any data recently, the current download rate
		if (m_ignore_stats) return;
		boost::shared_ptr<torrent> t = m_torrent.lock();
		if (!t) return;

relevance 0 ../src/peer_connection.cpp:3416 sort the allowed fast set in priority order

sort the allowed fast set in priority order

../src/peer_connection.cpp:3416

 		// if the peer has the piece and we want
 		// to download it, request it
 		if (int(m_have_piece.size()) > index
		boost::shared_ptr<torrent> t = m_torrent.lock();
		TORRENT_ASSERT(t);
		TORRENT_ASSERT(t->has_picker());

relevance 0 ../src/piece_picker.cpp:2407 when expanding pieces for cache stripe reasons, the !downloading condition doesn't make much sense

when expanding pieces for cache stripe reasons, the !downloading condition doesn't make much sense

../src/piece_picker.cpp:2407

		TORRENT_ASSERT(index < (int)m_piece_map.size() || m_piece_map.empty());
 		if (index+1 == (int)m_piece_map.size())
 			return m_blocks_in_last_piece;
	// the second bool is true if this is the only active peer that is requesting
	// and downloading blocks from this piece. Active means having a connection.
	boost::tuple<bool, bool> requested_from(piece_picker::downloading_piece const& p

relevance 0 ../src/session_impl.cpp:532 there's no rule here to make uTP connections not have the global or local rate limits apply to them. This used to be the default.

there's no rule here to make uTP connections not have the global or local rate limits apply to them. This used to be the default.

../src/session_impl.cpp:532

		m_global_class = m_classes.new_peer_class("global");
 		m_tcp_peer_class = m_classes.new_peer_class("tcp");
 		m_local_peer_class = m_classes.new_peer_class("local");
		// futexes, shared objects etc.
		rl.rlim_cur -= 20;

relevance 0 ../src/session_impl.cpp:1744 instead of having a special case for this, just make the default listen interfaces be "0.0.0.0:6881,[::1]:6881" and use the generic path. That would even allow for not listening at all.

instead of having a special case for this, just make the default listen interfaces be "0.0.0.0:6881,[::1]:6881" and use the generic path. That would even allow for not listening at all.

../src/session_impl.cpp:1744

 		// reset the retry counter
					, retries, flags, ec);

				if (s.sock)
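The generic path implied above has to parse a comma-separated interface list such as "0.0.0.0:6881,[::1]:6881", where IPv6 literals are bracketed. A hypothetical parser (the function name and exact error handling are my own; the real routine would also validate the addresses):

```cpp
#include <cstdlib>
#include <string>
#include <utility>
#include <vector>

// parses "addr:port,addr:port,..." where IPv6 addresses are written in
// brackets, e.g. "0.0.0.0:6881,[::1]:6881". Malformed entries are
// silently skipped in this sketch
inline std::vector<std::pair<std::string, int> >
parse_listen_interfaces(std::string const& in)
{
	std::vector<std::pair<std::string, int> > out;
	std::string::size_type start = 0;
	while (start < in.size())
	{
		std::string::size_type end = in.find(',', start);
		if (end == std::string::npos) end = in.size();
		std::string item = in.substr(start, end - start);
		start = end + 1;

		std::string addr;
		std::string::size_type port_sep;
		if (!item.empty() && item[0] == '[')
		{
			// bracketed IPv6 literal: [::1]:6881
			std::string::size_type close = item.find(']');
			if (close == std::string::npos) continue;
			addr = item.substr(1, close - 1);
			port_sep = item.find(':', close);
		}
		else
		{
			// rfind, so stray colons in the address don't confuse us
			port_sep = item.rfind(':');
			if (port_sep == std::string::npos) continue;
			addr = item.substr(0, port_sep);
		}
		if (port_sep == std::string::npos) continue;
		out.push_back(std::make_pair(addr
			, std::atoi(item.c_str() + port_sep + 1)));
	}
	return out;
}
```

An empty string then naturally means "don't listen at all", which is exactly the behavior the entry asks to make possible.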
relevance 0 ../src/session_impl.cpp:2621 should this function take a shared_ptr instead?

should this function take a shared_ptr instead?

../src/session_impl.cpp:2621

	{
 #if defined TORRENT_ASIO_DEBUGGING
 		complete_async("session_impl::on_socks_accept");
 #endif
 		TORRENT_ASSERT(sp.use_count() > 0);
 
 		connection_map::iterator i = m_connections.find(sp);
relevance 0 ../src/session_impl.cpp:2975 have a separate list for these connections, instead of having to loop through all of them

have a separate list for these connections, instead of having to loop through all of them

../src/session_impl.cpp:2975

		if (m_auto_manage_time_scaler < 0)
 		{
 			INVARIANT_CHECK;
 			m_auto_manage_time_scaler = settings().get_int(settings_pack::auto_manage_interval);
relevance 0 ../src/session_impl.cpp:3016 this should apply to all bandwidth channels

this should apply to all bandwidth channels

../src/session_impl.cpp:3016

			t.second_tick(tick_interval_ms, m_tick_residual / 1000);
 
 			// if the call to second_tick caused the torrent
 			// to no longer want to be ticked (i.e. it was
		}

#ifndef TORRENT_DISABLE_DHT
		int dht_down = 0;
		int dht_up = 0;
		if (m_dht)
		{
			m_dht->network_stats(dht_up, dht_down);
			m_stats_counters.inc_stats_counter(counters::sent_dht_bytes, dht_up);
			m_stats_counters.inc_stats_counter(counters::recv_dht_bytes, dht_down);
		}
#endif

		{
			peer_class* gpc = m_classes.at(m_global_class);

#ifndef TORRENT_DISABLE_DHT
			gpc->channel[peer_connection::download_channel].use_quota(dht_down);
			gpc->channel[peer_connection::upload_channel].use_quota(dht_up);
#endif

			int up_limit = upload_rate_limit(m_global_class);
			int down_limit = download_rate_limit(m_global_class);

				&& m_stat.upload_ip_overhead() >= up_limit
				&& m_alerts.should_post<performance_alert>())
			{
				m_alerts.post_alert(performance_alert(torrent_handle()
					, performance_alert::upload_limit_too_low));
			}
		}

		m_peak_up_rate = (std::max)(m_stat.upload_rate(), m_peak_up_rate);
		m_peak_down_rate = (std::max)(m_stat.download_rate(), m_peak_down_rate);

relevance 0 ../src/session_impl.cpp:3502 these vectors could be copied from m_torrent_lists, if we would maintain them. That way the first pass over all torrents could be avoided. It would be especially efficient if most torrents are not auto-managed. Whenever we receive a scrape response (or anything that may change the rank of a torrent), that one torrent could re-sort itself in a list that's kept sorted at all times. That way, this pass over all torrents could be avoided altogether.

these vectors could be copied from m_torrent_lists, if we would maintain them. That way the first pass over all torrents could be avoided. It would be especially efficient if most torrents are not auto-managed. Whenever we receive a scrape response (or anything that may change the rank of a torrent), that one torrent could re-sort itself in a list that's kept sorted at all times. That way, this pass over all torrents could be avoided altogether.

../src/session_impl.cpp:3502

#if defined TORRENT_VERBOSE_LOGGING || defined TORRENT_LOGGING
 				if (t->allows_peers())
 					t->log_to_all_peers("AUTO MANAGER PAUSING TORRENT");
 #endif

relevance 0 ../src/session_impl.cpp:3577 allow extensions to sort torrents for queuing

allow extensions to sort torrents for queuing

../src/session_impl.cpp:3577

				if (t->is_finished())
 					seeds.push_back(t);
 				else
 					downloaders.push_back(t);

relevance 0 ../src/session_impl.cpp:3749 use a lower limit than m_settings.connections_limit, to reserve 10% or so of the connection slots for incoming connections

use a lower limit than m_settings.connections_limit, to reserve 10% or so of the connection slots for incoming connections

../src/session_impl.cpp:3749

		// robin fashion, so that every torrent is equally likely to connect to a
 		// peer
 
 		// boost connections are connections made by torrent connection

relevance 0 ../src/session_impl.cpp:3910 post a message to have this happen immediately instead of waiting for the next tick

post a message to have this happen immediately instead of waiting for the next tick

../src/session_impl.cpp:3910

						// we've unchoked this peer, and it hasn't reciprocated
 						// we may want to increase our estimated reciprocation rate
 						p->increase_est_reciprocation_rate();
 					}
				prev = i;
			}
#endif

relevance 0 ../src/session_impl.cpp:3944 make configurable

make configurable

../src/session_impl.cpp:3944

 #ifdef TORRENT_DEBUG
 			for (std::vector<peer_connection*>::const_iterator i = peers.begin()
 				, end(peers.end()), prev(peers.end()); i != end; ++i)
			++m_allowed_upload_slots;

relevance 0 ../src/session_impl.cpp:3958 make configurable

make configurable

../src/session_impl.cpp:3958

						>= (*i)->uploaded_in_last_round() * 1000
 						* (1 + t2->priority()) / total_milliseconds(unchoke_interval));
 				}
 				prev = i;
		{
			// if our current upload rate is less than 90% of our
			// limit

relevance 0 ../src/session_impl.cpp:4037 this should be called for all peers!

this should be called for all peers!

../src/session_impl.cpp:4037

				// we don't know at what rate we can upload. If we have a
 				// measurement of the peak, use that + 10kB/s, otherwise
 				// assume 20 kB/s
 				upload_capacity_left = (std::max)(20000, m_peak_up_rate + 10000);
			--unchoke_set_size;
			TORRENT_ASSERT(p->peer_info_struct());

relevance 0 ../src/session_impl.cpp:4452 it might be a nice feature here to limit the number of torrents to send in a single update. By just posting the first n torrents, they would nicely be round-robined because the torrent lists are always pushed back

it might be a nice feature here to limit the number of torrents to send in a single update. By just posting the first n torrents, they would nicely be round-robined because the torrent lists are always pushed back

../src/session_impl.cpp:4452

			t->status(&*i, flags);
 		}
 	}
 	

relevance 0 ../src/storage.cpp:710 make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance. Maybe use the same format as .torrent files and reuse some code from torrent_info

make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance. Maybe use the same format as .torrent files and reuse some code from torrent_info

../src/storage.cpp:710

		for (;;)
 		{
 		if (file_sizes_ent->list_size() == 0)
 		{
 			ec.ec = errors::no_files_in_resume_data;
relevance 0 ../src/storage.cpp:1006 if everything moves OK, except for the partfile we currently won't update the save path, which breaks things. it would probably make more sense to give up on the partfile

if everything moves OK, except for the partfile we currently won't update the save path, which breaks things. it would probably make more sense to give up on the partfile

../src/storage.cpp:1006

					if (ec)
 					{
	{
		fileop op = { &file::writev, file::read_write | flags };

relevance 0 ../src/torrent.cpp:491 if the existing torrent doesn't have metadata, insert the metadata we just downloaded into it.

if the existing torrent doesn't have metadata, insert the metadata we just downloaded into it.

../src/torrent.cpp:491

 		m_torrent_file = tf;
 

relevance 0 ../src/torrent.cpp:641 if the existing torrent doesn't have metadata, insert the metadata we just downloaded into it.

if the existing torrent doesn't have metadata, insert the metadata we just downloaded into it.

../src/torrent.cpp:641

 		m_torrent_file = tf;
 

relevance 0 ../src/torrent.cpp:1446 is verify_peer_cert called once per certificate in the chain, and this function just tells us which depth we're at right now? If so, the comment makes sense. any certificate that isn't the leaf (i.e. the one presented by the peer) should be accepted automatically, given preverified is true. The leaf certificate needs to be verified to make sure its DN matches the info-hash

is verify_peer_cert called once per certificate in the chain, and this function just tells us which depth we're at right now? If so, the comment makes sense. any certificate that isn't the leaf (i.e. the one presented by the peer) should be accepted automatically, given preverified is true. The leaf certificate needs to be verified to make sure its DN matches the info-hash

../src/torrent.cpp:1446

	{
#if defined(TORRENT_VERBOSE_LOGGING) || defined(TORRENT_LOGGING)
		match = true;

relevance 0 ../src/torrent.cpp:1838 instead of creating the picker up front here, maybe this whole section should move to need_picker()

instead of creating the picker up front here, maybe this whole section should move to need_picker()

../src/torrent.cpp:1838

			else
 			{
 				read_resume_data(m_resume_data->entry);
		// need to consider it finished
		std::vector<piece_picker::downloading_piece> dq

relevance 0 ../src/torrent.cpp:2034 there may be peer extensions relying on the torrent extension still being alive. Only do this if there are no peers. And when the last peer is disconnected, if the torrent is unloaded, clear the extensions m_extensions.clear();

there may be peer extensions relying on the torrent extension still being alive. Only do this if there are no peers. And when the last peer is disconnected, if the torrent is unloaded, clear the extensions m_extensions.clear();

../src/torrent.cpp:2034

		// pinned torrents are not allowed to be swapped out

relevance 0 ../src/torrent.cpp:2709 this pattern is repeated in a few places. Factor this into a function and generalize the concept of a torrent having a dedicated listen port

this pattern is repeated in a few places. Factor this into a function and generalize the concept of a torrent having a dedicated listen port

../src/torrent.cpp:2709

		// if the files haven't been checked yet, we're
 		// not ready for peers. Except, if we don't have metadata,

relevance 0 ../src/torrent.cpp:3483 add one peer per IP the hostname resolves to

add one peer per IP the hostname resolves to

../src/torrent.cpp:3483

#endif
 
 	void torrent::on_peer_name_lookup(error_code const& e
 		, std::vector<address> const& host_list, int port)

relevance 0 ../src/torrent.cpp:4474 update suggest_piece?

update suggest_piece?

../src/torrent.cpp:4474

 	void torrent::peer_has_all(peer_connection const* peer)
 	{
 		if (has_picker())
@@ -5691,7 +5643,7 @@ dedicated listen port

../src/torrent.cpp:2709

relevance 0 ../src/torrent.cpp:4617

really, we should just keep the picker around in this case to maintain the availability counters

		pieces.reserve(cs.pieces.size());

		// sort in ascending order, to get most recently used first

relevance 0 ../src/torrent.cpp:6541

make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance. Maybe use the same format as .torrent files and reuse some code from torrent_info. The mapped_files needs to be read both in the network thread and in the disk thread, since they both have their own mapped files structures which are kept in sync

relevance 0 ../src/torrent.cpp:6704

if this is a merkle torrent and we can't restore the tree, we need to wipe all the bits in the have array, but not necessarily; we might want to do a full check to see if we have all the pieces. This is low priority since almost no one uses merkle torrents

relevance 0 ../src/torrent.cpp:6894

make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance, using file_base

		pieces.resize(m_torrent_file->num_pieces());
		if (!has_picker())

relevance 0 ../src/torrent.cpp:8887

add a flag to ignore stats, and only care about resume data for content. For unchanged files, don't trigger a load of the metadata just to save an empty resume data file

		if (m_complete != 0xffffff) seeds = m_complete;
		else seeds = m_policy ? m_policy->num_seeds() : 0;

relevance 0 ../src/torrent.cpp:9849

go through the pieces we have and count the total number of downloaders we have. Only count peers that are interested in us since some peers might not send have messages for pieces we have. If num_interested == 0, we need to pick a new piece

relevance 0 ../src/torrent.cpp:10495

instead of resorting the whole list, insert the peers directly into the right place

				printf("timed out [average-piece-time: %d ms ]\n"
					, m_average_piece_time);
#endif

relevance 0 ../src/torrent_peer.cpp:176

how do we deal with our external address changing?

		, is_v6_addr(false)
#endif
#if TORRENT_USE_I2P
		, is_i2p_addr(false)

relevance 0 ../src/udp_socket.cpp:286

it would be nice to detect this on posix systems also

		--m_v6_outstanding;
	}
	else
#endif

relevance 0 ../src/upnp.cpp:71

listen_interface is not used. It's meant to bind the broadcast socket

#include <asio/ip/multicast.hpp>
#else
#include <boost/asio/ip/host_name.hpp>
#include <boost/asio/ip/multicast.hpp>

relevance 0 ../src/ut_metadata.cpp:316

we really need to increment the refcounter on the torrent while this buffer is still in the peer's send buffer

				if (!m_tp.need_loaded()) return;
				metadata = m_tp.metadata().begin + offset;
				metadata_piece_size = (std::min)(

relevance 0 ../src/utp_stream.cpp:1627

this loop may not be very efficient

	char* m_buf;
};

relevance 0 ../src/web_connection_base.cpp:73

introduce a web-seed default class which has a low download priority

{
	web_connection_base::web_connection_base(
		peer_connection_args const& pack
		, web_seed_entry& web)

relevance 0 ../src/kademlia/dht_tracker.cpp:428

ideally this function would be called when the put completes

		// since it controls whether we re-put the content
		TORRENT_ASSERT(!it.is_mutable());
		f(it);

relevance 0 ../src/kademlia/routing_table.cpp:316

instead of refreshing a bucket by using find_nodes, ping each node periodically

		os << "]\n";
	}
}

relevance 0 ../include/libtorrent/bitfield.hpp:158

rename to data() ?

				if (m_buf[i] != 0) return false;
			}
			return true;
		}

relevance 0 ../include/libtorrent/block_cache.hpp:218

make this 32 bits and count seconds since the block cache was created

		bool operator==(cached_piece_entry const& rhs) const
		{ return storage.get() == rhs.storage.get() && piece == rhs.piece; }

		// if this is set, we'll be calculating the hash

relevance 0 ../include/libtorrent/config.hpp:339

Make this count Unicode characters instead of bytes on windows

#define TORRENT_USE_WRITEV 0
#define TORRENT_USE_READV 0

#else

relevance 0 ../include/libtorrent/debug.hpp:215

rewrite this class to use FILE* instead and have a printf-like interface

			mutex::scoped_lock l(file_mutex);
			open(!append);

relevance 0 ../include/libtorrent/disk_buffer_pool.hpp:128

try to remove the observers, only using the async_allocate handlers

		// number of bytes per block. The BitTorrent
		// protocol defines the block size to 16 KiB.
		const int m_block_size;

relevance 0 ../include/libtorrent/peer_connection.hpp:216

make this a raw pointer (to save size in the first cache line) and make the constructor take a raw pointer. torrent objects should always outlive their peers

			, m_snubbed(false)

relevance 0 ../include/libtorrent/peer_connection.hpp:1123

factor this out into its own class with a virtual interface. torrent and session should implement this interface

		// the local endpoint for this peer, i.e. our address
		// and our port. If this is set for outgoing connections

relevance 0 ../include/libtorrent/peer_connection_interface.hpp:45

make this interface smaller!

		virtual tcp::endpoint const& remote() const = 0;
		virtual tcp::endpoint local_endpoint() const = 0;
		virtual void disconnect(error_code const& ec, operation_t op, int error = 0) = 0;

relevance 0 ../include/libtorrent/performance_counters.hpp:132

should keepalives be in here too? how about dont-have, share-mode, upload-only?

			// a connect candidate
			connection_attempt_loops,
			// successful incoming connections (not rejected for any reason)

			num_outgoing_cancel,
			num_outgoing_dht_port,
			num_outgoing_suggest,

relevance 0 ../include/libtorrent/performance_counters.hpp:406

some space could be saved here by making gauges 32 bits

relevance 0 ../include/libtorrent/performance_counters.hpp:407

restore these to regular integers. Instead have one copy of the counters per thread and collect them at convenient synchronization points

			limiter_down_bytes,

			num_counters,
			num_gauge_counters = num_counters - num_stats_counters

relevance 0 ../include/libtorrent/piece_picker.hpp:669

should this be allocated lazily?

		std::vector<downloading_piece>::const_iterator find_dl_piece(int queue, int index) const;
		std::vector<downloading_piece>::iterator find_dl_piece(int queue, int index);

		// returns an iterator to the downloading piece, whichever

relevance 0 ../include/libtorrent/proxy_base.hpp:171

it would be nice to remember the bind port and bind once we know where the proxy is: m_sock.bind(endpoint, ec);

	void bind(endpoint_type const& /* endpoint */)
	{
//		m_sock.bind(endpoint);

relevance 0 ../include/libtorrent/session.hpp:861

add get_peer_class_type_filter() as well

		//
		// The ``peer_class`` argument cannot be greater than 31. The bitmasks
		// representing peer classes in the ``peer_class_filter`` are 32 bits.
		//

relevance 0 ../include/libtorrent/settings_pack.hpp:1080

deprecate this. ``max_rejects`` is the number of piece requests we will reject in a row while a peer is choked before the peer is considered abusive and is disconnected.

			auto_manage_startup,

relevance 0 ../include/libtorrent/size_type.hpp:48

remove these and just use boost's types directly

ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

*/

relevance 0 ../include/libtorrent/torrent.hpp:1213

this wastes 5 bits per file

		typedef std::list<boost::shared_ptr<torrent_plugin> > extension_list_t;
		extension_list_t m_extensions;
#endif

relevance 0 ../include/libtorrent/torrent.hpp:1272

These two bitfields should probably be coalesced into one

		// the .torrent file from m_url
//		std::vector<char> m_torrent_file_buf;

		// this is a list of all pieces that we have announced

relevance 0 ../include/libtorrent/torrent_info.hpp:124

include the number of peers received from this tracker, at last announce

		// if this tracker failed the last time it was contacted
		// this error code specifies what error occurred
		error_code last_error;

relevance 0 ../include/libtorrent/upnp.hpp:112

support using the windows API for UPnP operations as well

			// specific port
			external_port_must_be_wildcard = 727
		};

relevance 0 ../include/libtorrent/utp_stream.hpp:395

implement blocking write. Low priority since it's not used (yet)

		for (typename Mutable_Buffers::const_iterator i = buffers.begin()
			, end(buffers.end()); i != end; ++i)
		{
			using asio::buffer_cast;

relevance 0 ../include/libtorrent/kademlia/item.hpp:61

since this is a public function, it should probably be moved out of this header and into one with other public functions.

#include <boost/array.hpp>

namespace libtorrent { namespace dht

relevance 0 ../include/libtorrent/aux_/session_impl.hpp:378

move the login info into the tracker_request object

			void on_lsd_announce(error_code const& e);

			// called when a port mapping is successful, or a router returns

relevance 0 ../include/libtorrent/aux_/session_impl.hpp:841

should this be renamed m_outgoing_interfaces?

			// listen socket. For each retry the port number
			// is incremented by one
			int m_listen_port_retries;

relevance 0 ../include/libtorrent/aux_/session_impl.hpp:890

replace this by a proper asio timer

			mutable boost::uint8_t m_interface_index;

			void open_new_incoming_socks_connection();

relevance 0 ../include/libtorrent/aux_/session_impl.hpp:895

replace this by a proper asio timer

			void setup_listener(listen_socket_t* s, std::string const& device
				, bool ipv4, int port, int& retries, int flags, error_code& ec);

relevance 0 ../include/libtorrent/aux_/session_impl.hpp:902

replace this by a proper asio timer

			// the number of unchoked peers as set by the auto-unchoker
			// this should always be >= m_max_uploads
			int m_allowed_upload_slots;

relevance 0 ../include/libtorrent/aux_/session_interface.hpp:202

it would be nice to not have this be part of session_interface

		virtual boost::uint16_t listen_port() const = 0;
		virtual boost::uint16_t ssl_listen_port() const = 0;

		// used to (potentially) issue socket write calls onto multiple threads

diff --git a/docs/tuning.html b/docs/tuning.html
index 5938763a1..df1166f5d 100644
--- a/docs/tuning.html
+++ b/docs/tuning.html
@@ -56,29 +56,27 @@
 
 • send buffer watermark
 • optimize hashing for memory usage
 • reduce executable size
 • reduce statistics
 • play nice with the disk
 • high performance seeding
     • file pool
     • disk cache
     • SSD as level 2 cache
     • uTP-TCP mixed mode
     • send buffer low watermark
     • peers
     • torrent limits
     • SHA-1 hashing
 • scalability
 • benchmarking
      @@ -224,14 +222,6 @@ deprecated functions and struct members. As long as no deprecated functions are relied upon, this should be a simple way to eliminate a little bit of code.

      For all available options, see the building libtorrent section.

      -reduce statistics
      -
      -You can save some memory for each connection and each torrent by reducing the
      -number of separate rates kept track of by libtorrent. If you build with
      -full-stats=off (or -DTORRENT_DISABLE_FULL_STATS) you will save a few hundred
      -bytes for each connection and torrent. It might make a difference if you have
      -a very large number of peers or torrents.

      play nice with the disk

@@ -514,6 +504,7 @@ command line argument. It generates disk_buffer.png

      disk_access.log

      +The disk access log is now binary

      The disk access log has three fields: the timestamp (milliseconds since
      start), the operation, and the offset. The offset is the absolute offset
      within the torrent (not within a file). This log is only useful when
      you're downloading a single torrent, otherwise the offsets will not

@@ -540,96 +531,61 @@ file, disk_access.gnuplot which assumes

      The density of the disk seeks tells you how hard the drive has to work.

      -session stats
      -
      -By defining TORRENT_STATS libtorrent will write a log file called
      -session_stats/<pid>.<sequence>.log which is in a format ready to be passed
      -directly into gnuplot. The parser script parse_session_stats.py generates
      -a report in session_stats_report/index.html.
      -
      -The first line in the log contains all the field names, separated by colon:
      -
      -   second:upload rate:download rate:downloading torrents:seeding torrents:peers...
      -
      -The rest of the log is one line per second with all the fields' values.
      -
      -These are the fields:
      -
      -+----------------------+----------------------------------------------------------------+
      -| field name           | description                                                    |
      -+======================+================================================================+
      -| second               | the time, in seconds, for this log line                        |
      -+----------------------+----------------------------------------------------------------+
      -| upload rate          | the number of bytes uploaded in the last second                |
      -+----------------------+----------------------------------------------------------------+
      -| download rate        | the number of bytes downloaded in the last second              |
      -+----------------------+----------------------------------------------------------------+
      -| downloading torrents | the number of torrents that are not seeds                      |
      -+----------------------+----------------------------------------------------------------+
      -| seeding torrents     | the number of torrents that are seeds                          |
      -+----------------------+----------------------------------------------------------------+
      -| peers                | the total number of connected peers                            |
      -+----------------------+----------------------------------------------------------------+
      -| connecting peers     | the total number of peers attempting to connect (half-open)    |
      -+----------------------+----------------------------------------------------------------+
      -| disk block buffers   | the total number of disk buffer blocks that are in use         |
      -+----------------------+----------------------------------------------------------------+
      -| unchoked peers       | the total number of unchoked peers                             |
      -+----------------------+----------------------------------------------------------------+
      -| num list peers       | the total number of known peers, but not necessarily connected |
      -+----------------------+----------------------------------------------------------------+
      -| peer allocations     | the total number of allocations for the peer list pool         |
      -+----------------------+----------------------------------------------------------------+
      -| peer storage bytes   | the total number of bytes allocated for the peer list pool     |
      -+----------------------+----------------------------------------------------------------+
      -
      -This is an example of a graph that can be generated from this log:
      -
      -[image: session_stats_peers.png]
      -
      -It shows statistics about the number of peers and peer states: how at
      -startup there are a lot of half-open connections, which taper off as the
      -total number of peers approaches the limit (50). It also shows how the
      -total peer list slowly but steadily grows over time. This list is plotted
      -against the right axis, as it has a different scale than the other fields.
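      The colon-separated format of the removed TORRENT_STATS log is trivial to
      split. A minimal sketch of doing so in standard C++ (split_fields is a
      hypothetical helper name, not part of libtorrent or its parser script):

      ```cpp
      #include <cassert>
      #include <sstream>
      #include <string>
      #include <vector>

      // Split one log line on ':'. The first line of the log yields the field
      // names; each subsequent line yields that second's values in the same order.
      std::vector<std::string> split_fields(std::string const& line)
      {
          std::vector<std::string> out;
          std::istringstream in(line);
          std::string field;
          while (std::getline(in, field, ':'))
              out.push_back(field);
          return out;
      }

      int main()
      {
          auto const names = split_fields("second:upload rate:download rate:peers");
          assert(names.size() == 4);
          assert(names[0] == "second");
          assert(names[3] == "peers");
      }
      ```

      Pairing the header fields with each subsequent line's values by index is
      all parse_session_stats.py needs to do before handing the data to gnuplot.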

      -understanding the disk thread
      +understanding the disk threads

      +This section is somewhat outdated; there may be more than one disk thread.

      All disk operations are funneled through a separate thread, referred to as
      the disk thread. The main interface to the disk thread is a queue where
      disk jobs are posted, and the results of these jobs are then posted back
      on the main thread's io_service.

      A disk job is essentially one of:

      1. write this block to disk, i.e. a write job. For the most part this is
         just a matter of sticking the block in the disk cache, but if we've
         run out of cache space or completed a whole piece, we'll also flush
         blocks to disk. This is typically very fast, since the OS just sticks
         these buffers in its write cache, which will be flushed at a later
         time, presumably when the drive head passes the place on the platter
         where the blocks go.

      2. read this block from disk. The first thing that happens is we look in
         the cache to see if the block is already in RAM. If it is, we return
         immediately with this block. If it's a cache miss, we have to hit the
         disk. Here we decide to defer this job: we find the physical offset
         on the drive for this block and insert the job in a queue ordered by
         physical location. At a later time, once we don't have any more
         non-read jobs left in the queue, we pick one read job out of the
         ordered queue and service it. Jobs are picked out of the queue
         according to an elevator cursor moving up and down along the ordered
         queue of read jobs. If we have enough space in the cache, we read
         read_cache_line_size blocks (32 by default) and stick those in the
         cache. If the system supports asynchronous I/O (Windows, Linux,
         Mac OS X, BSD, Solaris, for instance), jobs are issued immediately to
         the OS. This especially increases read throughput, since the OS has
         much greater flexibility to reorder the read jobs.

      Other disk jobs consist of operations that need to be synchronized with
      the disk I/O, like renaming files, closing files, flushing the cache,
      updating the settings, etc. These are relatively rare, though.
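      The elevator ordering described in step 2 can be sketched as a map keyed
      by physical offset with a cursor that sweeps up and down the key space.
      This is an illustrative sketch only, not libtorrent's implementation;
      read_queue, add, and next are hypothetical names:

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <map>

      // Pending read jobs, ordered by physical drive offset. next() services
      // the nearest job in the current sweep direction, reversing direction
      // only when no jobs remain on that side of the cursor.
      struct read_queue
      {
          std::map<std::int64_t, int> jobs; // physical offset -> job id
          std::int64_t cursor = 0;
          bool ascending = true;

          void add(std::int64_t phys_offset, int job_id)
          { jobs.emplace(phys_offset, job_id); }

          // pick the next job in elevator order, or -1 if the queue is empty
          int next()
          {
              if (jobs.empty()) return -1;
              if (ascending)
              {
                  auto it = jobs.lower_bound(cursor);
                  if (it == jobs.end()) { ascending = false; return next(); }
                  cursor = it->first;
                  int const id = it->second;
                  jobs.erase(it);
                  return id;
              }
              auto it = jobs.upper_bound(cursor);
              if (it == jobs.begin()) { ascending = true; return next(); }
              --it;
              cursor = it->first;
              int const id = it->second;
              jobs.erase(it);
              return id;
          }
      };

      int main()
      {
          read_queue q;
          q.add(100, 1);
          q.add(50, 2);
          q.add(200, 3);
          q.cursor = 60;
          // sweep up from 60 (100, then 200), then reverse and pick up 50
          assert(q.next() == 1);
          assert(q.next() == 3);
          assert(q.next() == 2);
          assert(q.next() == -1);
      }
      ```

      The point of the sweep is that consecutive reads are close together on
      the platter, minimizing seek distance, just as an elevator minimizes
      direction changes.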

      contributions

      If you have added instrumentation for some part of libtorrent that is not
      covered here, or if you have improved any of the parser scripts, please
      consider contributing it back to the project.

      If you have run tests and found that some algorithm or default value in
      libtorrent is suboptimal, please contribute that knowledge back as well,
      to allow us to improve the library.

      If you have additional suggestions on how to tune libtorrent for any
      specific use case, please let us know and we'll update this document.