diff --git a/docs/dht_sec.html b/docs/dht_sec.html
index f94e88888..ce68c6075 100644
--- a/docs/dht_sec.html
+++ b/docs/dht_sec.html
@@ -171,14 +171,18 @@ random numbers.
bootstrapping
In order to set one's initial node ID, the external IP needs to be known. This
-is not a trivial problem. With this extension, all DHT requests whose node
-ID does not match its IP address MUST be serviced and MUST also include one
-extra result value (inside the r dictionary) called ip. The IP field
-contains the raw (big endian) byte representation of the external IP address.
-This is the same byte sequence used to verify the node ID.
+is not a trivial problem. With this extension, all DHT responses SHOULD include
+a top-level field called ip, containing a compact binary representation of
+the requester's IP and port. That is, the big-endian IP address followed by
+2 bytes of big-endian port.
+
The IP portion is the same byte sequence used to verify the node ID.
+
It is important that the ip field is in the top-level dictionary. Nodes that
+enforce the node ID will respond with an error message ("y": "e", "e": { ... }),
+whereas a node that supports this extension but does not enforce it will respond
+with a normal reply ("y": "r", "r": { ... }).
A DHT node which receives an ip result in a response SHOULD consider restarting
its DHT node with a new node ID, taking this IP into account. Since a single node
-can not be trusted, there should be some mechanism of determining whether or
+can not be trusted, there should be some mechanism to determine whether or
not the node has a correct understanding of its external IP. This could
be done by voting, or by only restarting the DHT once at least a certain number of
nodes, from separate searches, tell you your node ID is incorrect.
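For illustration, a minimal sketch of decoding the compact ip value described
above, assuming an IPv4 address (the function name is made up for this example,
it is not part of libtorrent):

```cpp
#include <cstdint>
#include <string>
#include <utility>

// decode the 6-byte compact "ip" value from a DHT response:
// 4 bytes of big-endian IPv4 address followed by 2 bytes of
// big-endian port
std::pair<std::uint32_t, std::uint16_t>
decode_compact_ip(std::string const& buf)
{
	std::uint32_t ip = 0;
	for (int i = 0; i < 4; ++i)
		ip = (ip << 8) | std::uint8_t(buf[i]);
	std::uint16_t port = std::uint16_t(
		(std::uint8_t(buf[4]) << 8) | std::uint8_t(buf[5]));
	return std::make_pair(ip, port);
}
```

The IP portion of this buffer is the byte sequence used to verify the node ID.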
diff --git a/docs/dht_store.html b/docs/dht_store.html
index 4c1d734d2..783ec96a4 100644
--- a/docs/dht_store.html
+++ b/docs/dht_store.html
@@ -3,7 +3,7 @@
-
+
BitTorrent extension for arbitrary DHT store
@@ -190,7 +190,7 @@ version, the sequence number seq must be monot
and a node hosting the list node MUST not downgrade a list head from a higher sequence
number to a lower one, only upgrade. The sequence number SHOULD not exceed MAX_INT64
(i.e. 0x7fffffffffffffff). A client MAY reject any message with a sequence number
-exceeding this.
+exceeding this. A client MAY also reject any message with a negative sequence number.
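As an illustrative sketch (names are made up, not libtorrent API), the
sequence-number checks above might look like:

```cpp
#include <cstdint>

// sketch of the sequence number checks for a mutable put. parsing
// into a signed 64-bit integer already caps values at
// 0x7fffffffffffffff; what remains is rejecting negative numbers
// and never downgrading a list head to a lower sequence number
bool seq_acceptable(std::int64_t stored_seq, std::int64_t new_seq)
{
	if (new_seq < 0) return false;          // MAY reject negative
	if (new_seq < stored_seq) return false; // never downgrade
	return true;
}
```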
The signature is a 64 byte ed25519 signature of the bencoded sequence
number concatenated with the v key, e.g. something like this: 3:seqi4e1:v12:Hello world!.
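A sketch of building that buffer for a string value (the helper name is
illustrative, not part of libtorrent):

```cpp
#include <cstdint>
#include <string>

// the buffer that is signed for a mutable put: the bencoded "seq"
// entry concatenated with the bencoded "v" entry, e.g.
// "3:seqi4e1:v12:Hello world!"
std::string signing_buffer(std::int64_t seq, std::string const& v)
{
	return "3:seqi" + std::to_string(seq) + "e1:v"
		+ std::to_string(v.size()) + ":" + v;
}
```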
@@ -223,12 +223,14 @@ message with code 302 (see error codes below).
Note that this request does not contain a target hash. The target hash under
which this blob is stored is implied by the k argument: the target is
the SHA-1 hash of the key (k).
-
The cas field is optional. If present it is interpreted of the sha-1 hash of
+
The cas field is optional. If present it is interpreted as the sha-1 hash of
the sequence number and v field that is expected to be replaced. The buffer
to hash is the same as the one signed when storing. cas is short for compare
and swap; it has semantics similar to CAS CPU instructions. If specified as part
of the put command, and the current value stored under the public key differs from
-the expected value, the store fails. The cas field only applies to mutable puts.
+the expected value, the store fails. The cas field only applies to mutable puts.
+If there is no current value, the cas field SHOULD be ignored, and SHOULD NOT
+prevent the put.
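A sketch of the compare-and-swap decision described above (names are
illustrative, not libtorrent API):

```cpp
#include <string>

// sketch of the cas check for a mutable put. the cas value carried
// in the put is compared against the sha-1 hash of the currently
// stored bencoded seq and v
bool cas_allows_put(bool have_stored_value
	, std::string const& stored_cas, std::string const& put_cas)
{
	if (put_cas.empty()) return true;    // no cas: unconditional put
	if (!have_stored_value) return true; // nothing stored: ignore cas
	return stored_cas == put_cas;        // otherwise it must match
}
```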
Response:
{
diff --git a/docs/dht_store.rst b/docs/dht_store.rst
index 8f8571a29..85065c1c7 100644
--- a/docs/dht_store.rst
+++ b/docs/dht_store.rst
@@ -153,7 +153,7 @@ version, the sequence number ``seq`` must be monotonically increasing for each u
and a node hosting the list node MUST not downgrade a list head from a higher sequence
number to a lower one, only upgrade. The sequence number SHOULD not exceed ``MAX_INT64``
(i.e. ``0x7fffffffffffffff``). A client MAY reject any message with a sequence number
-exceeding this.
+exceeding this. A client MAY also reject any message with a negative sequence number.
The signature is a 64 byte ed25519 signature of the bencoded sequence
number concatenated with the ``v`` key. e.g. something like this:: ``3:seqi4e1:v12:Hello world!``.
@@ -200,6 +200,8 @@ to hash is the same as the one signed when storing. ``cas`` is short for *compar
and swap*; it has semantics similar to CAS CPU instructions. If specified as part
of the put command, and the current value stored under the public key differs from
the expected value, the store fails. The ``cas`` field only applies to mutable puts.
+If there is no current value, the ``cas`` field SHOULD be ignored, and SHOULD NOT
+prevent the put.
Response:
diff --git a/docs/todo.html b/docs/todo.html
index 39526b556..11082713b 100644
--- a/docs/todo.html
+++ b/docs/todo.html
@@ -21,7 +21,7 @@
libtorrent todo-list
-2 important
+3 important, 4 relevant, 15 feasible, 36 notes
@@ -80,7 +80,7 @@ do as well with NATs)
in chunked encoding mode, this assert won't hold.
+the chunk headers should be subtracted from the receive_buffer_size
../src/http_seed_connection.cpp:117
boost::optional<piece_block_progress>
http_seed_connection::downloading_piece_progress() const
{
if (m_requests.empty())
@@ -442,8 +493,8 @@ the chunk headers should be subtracted from the receive_buffer_size
report the proper address of the router as the source IP of
+this understanding of our external address, instead of the empty address
../src/session_impl.cpp:5719
void session_impl::on_port_mapping(int mapping, address const& ip, int port
, error_code const& ec, int map_transport)
{
TORRENT_ASSERT(is_network_thread());
@@ -546,7 +597,7 @@ this understanding of our external address, instead of the empty address
we only need to do this if our global IPv4 address has changed
since the DHT (currently) only supports IPv4. Since restarting the DHT
-is kind of expensive, it would be nice to not do it unnecessarily
../src/session_impl.cpp:6375
void session_impl::set_external_address(address const& ip
+is kind of expensive, it would be nice to not do it unnecessarily
../src/session_impl.cpp:6400
void session_impl::set_external_address(address const& ip
, int source_type, address const& source)
{
#if defined TORRENT_VERBOSE_LOGGING
@@ -650,7 +701,7 @@ is kind of expensive, it would be nice to not do it unnecessarily
make this depend on the error and on the filesystem the
files are being downloaded to. If the error is no_space_left_on_device
and the filesystem doesn't support sparse files, only zero the priorities
of the pieces that are at the tails of all files, leaving everything
@@ -705,8 +756,8 @@ up to the highest written piece in each file
once the filename renaming is removed from here
+this check can be removed as well
../src/torrent_info.cpp:418
if (!extract_single_file(*list.list_at(i), e, root_dir
, &file_hash, &fee, &mtime))
return false;
@@ -963,7 +1014,7 @@ this check can be removed as well
instead, have one instance per possible subnet, global IPv4, global IPv6, loopback, 192.168.x.x, 10.x.x.x, etc.
../include/libtorrent/ip_voter.hpp:100
bloom_filter<32> m_external_address_voters;
std::vector<external_ip_t> m_external_addresses;
address m_external_address;
};
@@ -1041,7 +1092,7 @@ this check can be removed as well
implement blocking write. Low priority since it's not used (yet)
../include/libtorrent/utp_stream.hpp:376
for (typename Mutable_Buffers::const_iterator i = buffers.begin()
, end(buffers.end()); i != end; ++i)
{
using asio::buffer_cast;
@@ -1092,11 +1143,9 @@ this check can be removed as well
if we make this be a disk_buffer_holder instead
we would save a copy sometimes
-use allocate_disk_receive_buffer and release_disk_receive_buffer
../include/libtorrent/web_peer_connection.hpp:127
- private:
-
+use allocate_disk_receive_buffer and release_disk_receive_buffer
../include/libtorrent/web_peer_connection.hpp:126
bool maybe_harvest_block();
// returns the block currently being
@@ -1111,6 +1160,8 @@ use allocate_disk_receive_buffer and release_disk_receive_buffer
../incl
std::deque<int> m_file_requests;
std::string m_url;
+
+ web_seed_entry& m_web;
// this is used for intermediate storage of pieces
// that are received in more than one HTTP response
@@ -1138,12 +1189,14 @@ use allocate_disk_receive_buffer and release_disk_receive_buffer
../incl
// this is the number of bytes we've already received
// from the next chunk header we're waiting for
int m_partial_chunk_header;
+
+ // the number of responses we've received so far on
+ // this connection
+ int m_num_responses;
};
}
-#endif // TORRENT_WEB_PEER_CONNECTION_HPP_INCLUDED
-
-
if (m_encrypted && m_rc4_encrypted)
{
fun = encrypt;
userdata = m_enc_handler.get();
@@ -1194,7 +1247,7 @@ use allocate_disk_receive_buffer and release_disk_receive_buffer
move the erasing into the loop above
+remove all payload ranges that has been sent
../src/bt_peer_connection.cpp:3323
for (std::vector<range>::iterator i = m_payloads.begin();
i != m_payloads.end(); ++i)
{
i->start -= bytes_transferred;
@@ -1297,7 +1350,7 @@ remove all payload ranges that has been sent
support authentication (i.e. user name and password) in the URL
../src/http_tracker_connection.cpp:99
, aux::session_impl const& ses
, proxy_settings const& ps
, std::string const& auth
#if TORRENT_USE_I2P
@@ -1399,39 +1452,39 @@ remove all payload ranges that has been sent
while (new_size < size)
new_size <<= 1;
void** new_storage = (void**)malloc(sizeof(void*) * new_size);
@@ -1501,9 +1554,9 @@ remove all payload ranges that has been sent
// this piece index later
m_allowed_fast.push_back(index);
// if the peer has the piece and we want
@@ -1605,8 +1658,8 @@ we can construct a full bitfield
peers should really be corked/uncorked outside of
+all completed disk operations
../src/peer_connection.cpp:4574
// this means we're in seed mode and we haven't yet
// verified this piece (r.piece)
t->filesystem().async_read_and_hash(r, boost::bind(&peer_connection::on_disk_read_complete
, self(), _1, _2, r), cache.second);
@@ -1657,7 +1710,7 @@ all completed disk operations
recalculate all connect candidates for all torrents
../src/session_impl.cpp:1943
m_upload_rate.close();
// #error closing the udp socket here means that
// the uTP connections cannot be closed gracefully
@@ -1811,7 +1864,7 @@ override at a time
have a separate list for these connections, instead of having to loop through all of them
../src/session_impl.cpp:3393
// --------------------------------------------------------------
if (!m_paused) m_auto_manage_time_scaler--;
if (m_auto_manage_time_scaler < 0)
{
@@ -1862,7 +1915,7 @@ override at a time
make this more generic to not just work if files have been
renamed, but also if they have been merged into a single file for instance
maybe use the same format as .torrent files and reuse some code from torrent_info
../src/storage.cpp:629
for (;;)
{
@@ -2256,9 +2309,9 @@ maybe use the same format as .torrent files and reuse some code from torrent_inf
for (int i = 0; i < file_sizes_ent->list_size(); ++i)
{
-
what if file_base is used to merge several virtual files
into a single physical file? We should probably disable this
-if file_base is used. This is not a widely used feature though
../src/storage.cpp:1238
int bytes_transferred = 0;
+if file_base is used. This is not a widely used feature though
../src/storage.cpp:1246
int bytes_transferred = 0;
// if the file is opened in no_buffer mode, and the
// read is unaligned, we need to fall back on a slow
// special read that reads aligned buffers and copies
@@ -2309,7 +2362,7 @@ if file_base is used. This is not a widely used feature though
../src/st
// makes unaligned requests (and the disk cache is disabled or fully utilized
// for write cache).
-
is verify_peer_cert called once per certificate in the chain, and
this function just tells us which depth we're at right now? If so, the comment
makes sense.
any certificate that isn't the leaf (i.e. the one presented by the peer)
@@ -2365,12 +2418,12 @@ need to be verified to make sure its DN matches the info-hash
../src/tor
{
#if defined(TORRENT_VERBOSE_LOGGING) || defined(TORRENT_LOGGING)
match = true;
-
make this more generic to not just work if files have been
renamed, but also if they have been merged into a single file for instance
maybe use the same format as .torrent files and reuse some code from torrent_info
The mapped_files needs to be read both in the network thread
and in the disk thread, since they both have their own mapped files structures
-which are kept in sync
../src/torrent.cpp:5172
if (m_seed_mode) m_verified.resize(m_torrent_file->num_pieces(), false);
+which are kept in sync
../src/torrent.cpp:5170
if (m_seed_mode) m_verified.resize(m_torrent_file->num_pieces(), false);
super_seeding(rd.dict_find_int_value("super_seeding", 0));
m_last_scrape = rd.dict_find_int_value("last_scrape", 0);
@@ -2421,12 +2474,12 @@ which are kept in sync
if this is a merkle torrent and we can't
restore the tree, we need to wipe all the
bits in the have array, but not necessarily
we might want to do a full check to see if we have
all the pieces. This is low priority since almost
-no one uses merkle torrents
../src/torrent.cpp:5308
add_web_seed(url, web_seed_entry::http_seed);
+no one uses merkle torrents
../src/torrent.cpp:5306
add_web_seed(url, web_seed_entry::http_seed);
}
}
@@ -2477,9 +2530,9 @@ no one uses merkle torrents
make this more generic to not just work if files have been
renamed, but also if they have been merged into a single file for instance.
-using file_base
go through the pieces we have and count the total number
of downloaders we have. Only count peers that are interested in us
since some peers might not send have messages for pieces we have
-it num_interested == 0, we need to pick a new piece
../src/torrent.cpp:8026
}
+it num_interested == 0, we need to pick a new piece
../src/torrent.cpp:8035
}
rarest_pieces.clear();
rarest_rarity = pp.peer_count;
@@ -2584,7 +2637,7 @@ it num_interested == 0, we need to pick a new piece
it would be more efficient to not use a string here.
however, the problem is that some trackers will respond
with actual strings. For example i2p trackers
../src/udp_tracker_connection.cpp:552
}
@@ -2637,7 +2690,7 @@ with actual strings. For example i2p trackers
../src/udp_tracker_connect
{
restart_read_timeout();
int action = detail::read_int32(buf);
-
include the number of peers received from this tracker, at last announce
../include/libtorrent/torrent_info.hpp:123
// if this tracker failed the last time it was contacted
// this error code specifies what error occurred
error_code last_error;
@@ -2894,7 +2947,7 @@ m_sock.bind(endpoint, ec);
../include/libtorrent/proxy_base.hpp:166
// flags for the source bitmask, each indicating where
// we heard about this tracker
enum tracker_source
-
support using the windows API for UPnP operations as well
../include/libtorrent/upnp.hpp:121
{
virtual const char* name() const BOOST_SYSTEM_NOEXCEPT;
virtual std::string message(int ev) const BOOST_SYSTEM_NOEXCEPT;
virtual boost::system::error_condition default_error_condition(int ev) const BOOST_SYSTEM_NOEXCEPT
diff --git a/docs/tuning.html b/docs/tuning.html
index ea96bf8e0..bcbec6924 100644
--- a/docs/tuning.html
+++ b/docs/tuning.html
@@ -3,7 +3,7 @@
-
+
libtorrent manual
@@ -170,7 +170,9 @@ large number of paused torrents (that are popular) it will be even more
significant.
If you're short of memory, you should consider lowering the limit. 500 is probably
enough. You can do this by setting session_settings::max_peerlist_size to
-the max number of peers you want in the torrent's peer list.
+the max number of peers you want in a torrent's peer list. Note that this limit
+applies per torrent. With 5 torrents, the total number of peers across all peer
+lists may be as much as 5 times the setting.
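Since the limit applies per torrent, the aggregate is a simple multiplication;
a back-of-envelope sketch (the helper name is made up for illustration):

```cpp
// upper bound on peer-list entries kept in memory, given that
// session_settings::max_peerlist_size applies to each torrent
// individually
int peerlist_upper_bound(int num_torrents, int max_peerlist_size)
{
	return num_torrents * max_peerlist_size;
}
```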
You should also lower the same limit but for paused torrents. It might even make sense
to set that even lower, since you only need a few peers to start up while waiting
for the tracker and DHT to give you fresh ones. The max peer list size for paused
diff --git a/docs/tuning.rst b/docs/tuning.rst
index 4a3fea0c8..cbb2bf594 100644
--- a/docs/tuning.rst
+++ b/docs/tuning.rst
@@ -110,7 +110,9 @@ significant.
If you're short of memory, you should consider lowering the limit. 500 is probably
enough. You can do this by setting ``session_settings::max_peerlist_size`` to
-the max number of peers you want in the torrent's peer list.
+the max number of peers you want in a torrent's peer list. Note that this limit
+applies per torrent. With 5 torrents, the total number of peers across all peer
+lists may be as much as 5 times the setting.
You should also lower the same limit but for paused torrents. It might even make sense
to set that even lower, since you only need a few peers to start up while waiting