libtorrent todo-list

0 urgent, 6 important, 52 relevant, 9 feasible, 164 notes
relevance 3 ../src/file.cpp:481 find out what error code is reported when the filesystem does not support hard links.
relevance 3 ../src/upnp.cpp:72 listen_interface is not used. It's meant to bind the broadcast socket
relevance 3 ../src/kademlia/dht_tracker.cpp:521 it would be nice to not have to decode this if logging is not enabled. Maybe there could be a separate log function for incoming and outgoing packets.
relevance 3 ../src/kademlia/get_item.cpp:202 it would be nice to not have to spend so much time rendering the bencoded dict if logging is disabled
relevance 3 ../src/kademlia/get_item.cpp:226 we don't support CAS errors here! we need a custom observer
relevance 3 ../src/kademlia/traversal_algorithm.cpp:357 it would be nice to not have to perform this loop if logging is disabled
relevance 2 ../src/alert.cpp:1444 the salt here is allocated on the heap. It would be nice to allocate it in the stack_allocator
relevance 2 ../src/alert_manager.cpp:97 keep a count of the number of threads waiting. Only if it's > 0 notify them
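
  For the alert_manager item above, a minimal generic sketch (not libtorrent's actual code) of keeping a waiter count so the condition variable is only notified when someone is actually blocked:

      #include <condition_variable>
      #include <mutex>

      struct alert_queue_sketch
      {
          // called from the thread that waits for alerts, with the mutex held
          void wait_for_alert(std::unique_lock<std::mutex>& lock)
          {
              ++m_num_waiters;
              m_cond.wait(lock);
              --m_num_waiters;
          }

          // called (with the same mutex held) when a new alert is posted
          void notify_if_needed()
          {
              // skip the notification entirely when nobody is waiting
              if (m_num_waiters > 0) m_cond.notify_all();
          }

          int m_num_waiters = 0;
          std::condition_variable m_cond;
      };
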
relevance 2 ../src/block_cache.cpp:1690 turn these return values into enums. returns -1: block not in cache, -2: out of memory
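
  A sketch of what the suggested enum could look like (names invented here, not part of the block_cache API):

      // replaces the magic return values -1 and -2 with named constants
      enum class block_cache_result
      {
          ok = 0,
          block_not_in_cache, // previously -1
          out_of_memory       // previously -2
      };
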
relevance 2 ../src/escape_string.cpp:209 this should probably be moved into string_util.cpp
relevance 2 ../src/file.cpp:505 test this on a FAT volume to see what error we get!
relevance 2 ../src/http_tracker_connection.cpp:379 returning a bool here is redundant. Instead this function should return the peer_entry
relevance 2 ../src/peer_connection.cpp:2336 this should probably be based on time instead of number of request messages. For a very high throughput connection, 300 may be a legitimate number of requests to have in flight when getting choked
relevance 2 ../src/peer_connection.cpp:3043 since we throw away the queue entry once we issue the disk job, this may happen. Instead, we should keep the queue entry around, mark it as having been requested from disk and once the disk job comes back, discard it if it has been cancelled. Maybe even be able to cancel disk jobs?
relevance 2 ../src/peer_connection.cpp:4693 use a deadline_timer for timeouts. Don't rely on second_tick()! Hook this up to connect timeout as well. This would improve performance because of less work in second_tick(), and might let us remove ticking entirely eventually
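
  A rough sketch of driving a per-connection timeout from a boost::asio::deadline_timer instead of second_tick() (illustrative only; the real hookup into peer_connection would look different):

      #include <boost/asio.hpp>

      struct timeout_sketch
      {
          explicit timeout_sketch(boost::asio::io_service& ios) : m_timer(ios) {}

          void arm(int seconds)
          {
              m_timer.expires_from_now(boost::posix_time::seconds(seconds));
              m_timer.async_wait([this](boost::system::error_code const& ec)
              {
                  // the timer is cancelled (and re-armed) whenever traffic is seen
                  if (ec == boost::asio::error::operation_aborted) return;
                  on_timeout();
              });
          }

          void cancel() { m_timer.cancel(); }
          void on_timeout() { /* disconnect the peer */ }

          boost::asio::deadline_timer m_timer;
      };
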
relevance 2 ../src/peer_list.cpp:495 it would be nice if there was a way to iterate over these torrent_peer objects in the order they are allocated in the pool instead. It would probably be more efficient
relevance 2 ../src/piece_picker.cpp:1996 make the 2048 limit configurable
relevance 2 ../src/piece_picker.cpp:2605 the first_block returned here is the largest free range, not the first-fit range, which would be better
relevance 2 ../src/piece_picker.cpp:3390 it would be nice if this could be folded into lock_piece(). the main distinction is that this also maintains the m_num_passed counter and the passed_hash_check member. Is there ever a case where we call write_failed() without also locking the piece? Perhaps write_failed() should imply locking it.
relevance 2 ../src/session_impl.cpp:214 find a better place for this function
relevance 2 ../src/session_impl.cpp:821 if the DHT is enabled, it should probably be restarted here. maybe it should even be deferred to not be started until the client has had a chance to pass in the dht state
relevance 2 ../src/session_impl.cpp:1817 the udp socket(s) should be using the same generic mechanism and not be restricted to a single one. we should open one listen socket for each entry in the listen_interfaces list
relevance 2 ../src/session_impl.cpp:1919 use bind_to_device in udp_socket
relevance 2 ../src/session_impl.cpp:1945 use bind_to_device in udp_socket
relevance 2 ../src/session_impl.cpp:3391 make a list for torrents that want to be announced on the DHT so we don't have to loop over all torrents, just to find the ones that want to announce
relevance 2 ../src/storage.cpp:921 is this risky? The upper layer will assume we have the whole file. Perhaps we should verify that at least the size of the file is correct
relevance 2 ../src/torrent.cpp:726 post alert
relevance 2 ../src/torrent.cpp:4807 abort lookups this torrent has made via the session host resolver interface
relevance 2 ../src/torrent.cpp:4951 the tracker login feature should probably be deprecated
relevance 2 ../src/torrent.cpp:7808 if peer is a really good peer, maybe we shouldn't disconnect it
relevance 2 ../src/tracker_manager.cpp:200 some of these arguments could probably be moved to the tracker request itself. like the ip_filter and settings
relevance 2 ../src/udp_tracker_connection.cpp:83 support authentication here. tracker_req().auth
relevance 2 ../src/ut_metadata.cpp:120 if we were to initialize m_metadata_size lazily instead, we would probably be more efficient
relevance 2 ../src/utp_stream.cpp:351 it would be nice if not everything would have to be public here
relevance 2 ../src/web_peer_connection.cpp:632 just make this peer not have the pieces associated with the file we just requested. Only when it doesn't have any of the file do the following
relevance 2 ../src/web_peer_connection.cpp:691 create a mapping of file-index to redirection URLs. Use that to form URLs instead. Support to reconnect to a new server without destructing this peer_connection
relevance 2 ../src/kademlia/node.cpp:71 make this configurable in dht_settings
relevance 2 ../src/kademlia/node.cpp:503 it would be nice to have a bias towards node-id prefixes that are missing in the bucket
relevance 2 ../src/kademlia/node.cpp:593 use the non-deprecated function instead of this one
relevance 2 ../src/kademlia/node.cpp:926 find_node should write directly to the response entry
relevance 2 ../src/kademlia/routing_table.cpp:116 use the non-deprecated function instead of this one
relevance 2 ../src/kademlia/routing_table.cpp:955 move the lowest priority nodes to the replacement bucket
relevance 2 ../include/libtorrent/alert_types.hpp:1428 should the alert baseclass have this object instead?
relevance 2 ../include/libtorrent/build_config.hpp:40 instead of using a dummy function to cause link errors when incompatible build configurations are used, make the namespace name depend on the configuration, and have a using declaration in the headers to pull it into libtorrent.
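
  A sketch of the idea (macro and namespace names invented for illustration): the ABI-relevant options are baked into a namespace name, so linking translation units built with different configurations fails with an undefined symbol rather than misbehaving at runtime:

      // the namespace name encodes the configuration...
      #if defined TORRENT_DEBUG
      #define TORRENT_CFG_NAMESPACE libtorrent_dbg
      #else
      #define TORRENT_CFG_NAMESPACE libtorrent_rel
      #endif

      namespace TORRENT_CFG_NAMESPACE
      {
          // all public types would be declared in here
          struct session_sketch {};
      }

      // ...and a using-directive in the headers pulls it back into libtorrent
      namespace libtorrent { using namespace TORRENT_CFG_NAMESPACE; }
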
relevance 2 ../include/libtorrent/enum_net.hpp:143 this could be done more efficiently by just looking up the interface with the given name, maybe even with if_nametoindex()
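
  A minimal sketch of the if_nametoindex() approach (POSIX; declared in <net/if.h>):

      #include <net/if.h>

      // returns the interface index for e.g. "eth0", or 0 if no such interface exists
      unsigned int interface_index(char const* device_name)
      {
          return if_nametoindex(device_name);
      }
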
relevance 2 ../include/libtorrent/heterogeneous_queue.hpp:56 add emplace_back() version
relevance 2 ../include/libtorrent/piece_picker.hpp:600 having 8 priority levels is probably excessive. It should probably be changed to 3 levels + dont-download
relevance 2 ../include/libtorrent/proxy_base.hpp:259 use the resolver interface that has a built-in cache
relevance 2 ../include/libtorrent/session.hpp:198 the two second constructors here should probably be deprecated in favor of the more generic one that just takes a settings_pack and a string
relevance 2 ../include/libtorrent/session.hpp:249 the ip filter should probably be saved here too
relevance 2 ../include/libtorrent/session_settings.hpp:55 this type is only used internally now. move it to an internal header and make this type properly deprecated.
relevance 2 ../include/libtorrent/socket_type.hpp:321 it would be nice to use aligned_storage here when building on c++11
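
  A sketch of the aligned_storage idea with placeholder variant types (socket_type's real members are not shown here):

      #include <new>
      #include <type_traits>

      struct variant_a { int fd; };
      struct variant_b { long long handle; double timeout; };

      struct socket_storage_sketch
      {
          // storage sized and aligned for the largest variant; in the real code the
          // size and alignment would be the max over all supported socket types
          std::aligned_storage<sizeof(variant_b), alignof(variant_b)>::type m_data;

          variant_a* construct_a() { return new (&m_data) variant_a(); }
      };
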
relevance 2 ../include/libtorrent/socks5_stream.hpp:135 add async_connect() that takes a hostname and port as well
relevance 2 ../include/libtorrent/tracker_manager.hpp:276 this class probably doesn't need to have virtual functions.
relevance 2 ../include/libtorrent/kademlia/observer.hpp:133 make this private and unconditional
relevance 2 ../include/libtorrent/aux_/session_interface.hpp:138 the IP voting mechanism should be factored out to its own class, not part of the session
relevance 2 ../include/libtorrent/aux_/session_interface.hpp:163 remove this. There's already get_resolver()
relevance 2 ../include/libtorrent/aux_/session_interface.hpp:218 factor out the thread pool for socket jobs into a separate class used to (potentially) issue socket write calls onto multiple threads
relevance 1 ../src/disk_io_thread.cpp:206 it would be nice to have the number of threads be set dynamically
relevance 1 ../src/http_seed_connection.cpp:123 in chunked encoding mode, this assert won't hold. the chunk headers should be subtracted from the receive_buffer_size
relevance 1 ../src/session_impl.cpp:5223 report the proper address of the router as the source IP of this understanding of our external address, instead of the empty address
relevance 1 ../src/session_impl.cpp:6506 we only need to do this if our global IPv4 address has changed, since the DHT (currently) only supports IPv4. Since restarting the DHT is kind of expensive, it would be nice to not do it unnecessarily
relevance 1 ../src/torrent.cpp:1168 make this depend on the error and on the filesystem the files are being downloaded to. If the error is no_space_left_on_device and the filesystem doesn't support sparse files, only zero the priorities of the pieces that are at the tails of all files, leaving everything up to the highest written piece in each file
relevance 1 ../src/torrent.cpp:6956 save the send_stats state instead of throwing them away. it may pose an issue when downgrading though
relevance 1 ../src/torrent.cpp:8057 should disconnect all peers that have the pieces we have, not just seeds. It would be pretty expensive to check all pieces for all peers though
relevance 1 ../include/libtorrent/ip_voter.hpp:124 instead, have one instance per possible subnet, global IPv4, global IPv6, loopback, 192.168.x.x, 10.x.x.x, etc.
relevance 1 ../include/libtorrent/web_peer_connection.hpp:120 if we make this be a disk_buffer_holder instead we would save a copy sometimes. use allocate_disk_receive_buffer and release_disk_receive_buffer
relevance 0 ../test/test_block_cache.cpp:469 test try_evict_blocks
relevance 0 ../test/test_block_cache.cpp:470 test evicting volatile pieces, to see them be removed
relevance 0 ../test/test_block_cache.cpp:471 test evicting dirty pieces
relevance 0 ../test/test_block_cache.cpp:472 test free_piece
relevance 0 ../test/test_block_cache.cpp:473 test abort_dirty
relevance 0 ../test/test_block_cache.cpp:474 test unaligned reads
relevance 0 ../test/test_dht.cpp:439 test obfuscated_get_peers
relevance 0 ../test/test_file_storage.cpp:210 test file_storage::optimize too
relevance 0 ../test/test_file_storage.cpp:211 test map_block
relevance 0 ../test/test_file_storage.cpp:212 test piece_size(int piece)
relevance 0 ../test/test_file_storage.cpp:213 test file_index_at_offset
relevance 0 ../test/test_file_storage.cpp:214 test file attributes
relevance 0 ../test/test_file_storage.cpp:215 test symlinks
relevance 0 ../test/test_file_storage.cpp:216 test pad_files
relevance 0 ../test/test_file_storage.cpp:217 test reorder_file (make sure internal_file_entry::swap() is used)
relevance 0 ../test/test_metadata_extension.cpp:93 it would be nice to test reversing which session is making the connection as well
relevance 0 ../test/test_peer_list.cpp:921 test erasing peers
relevance 0 ../test/test_peer_list.cpp:922 test update_peer_port with allow_multiple_connections_per_ip and without
relevance 0 ../test/test_peer_list.cpp:923 test add i2p peers
relevance 0 ../test/test_peer_list.cpp:924 test allow_i2p_mixed
relevance 0 ../test/test_peer_list.cpp:925 test insert_peer failing with all error conditions
relevance 0 ../test/test_peer_list.cpp:926 test IPv6
relevance 0 ../test/test_peer_list.cpp:927 test connect_to_peer() failing
relevance 0 ../test/test_peer_list.cpp:928 test connection_closed
relevance 0 ../test/test_peer_list.cpp:929 connect candidates recalculation when incrementing failcount
relevance 0 ../test/test_primitives.cpp:212 test the case where we have > 120 samples (and have the base delay actually be updated)
relevance 0 ../test/test_primitives.cpp:213 test the case where a sample is lower than the history entry but not lower than the base
relevance 0 ../test/test_resolve_links.cpp:80 test files with different piece size (negative test)
relevance 0 ../test/test_resolve_links.cpp:83 it would be nice to test resolving of more than just 2 files as well. like 3 single file torrents merged into one, resolving all 3 files.
relevance 0 ../test/test_resume.cpp:340 test all other resume flags here too. This would require returning more than just the torrent_status from test_resume_flags. Also http seeds and trackers for instance
relevance 0 ../test/test_ssl.cpp:378 test using a signed certificate with the wrong info-hash in DN
relevance 0 ../test/test_ssl.cpp:476 also test using a hash that refers to a valid torrent but that differs from the SNI hash
relevance 0 ../test/test_torrent.cpp:133 wait for an alert rather than just waiting 10 seconds. This is kind of silly
relevance 0 ../test/test_torrent_info.cpp:160 test remap_files
relevance 0 ../test/test_torrent_info.cpp:161 merkle torrents. specifically torrent_info::add_merkle_nodes and torrent with "root hash"
relevance 0 ../test/test_torrent_info.cpp:162 torrent with 'p' (padfile) attribute
relevance 0 ../test/test_torrent_info.cpp:163 torrent with 'h' (hidden) attribute
relevance 0 ../test/test_torrent_info.cpp:164 torrent with 'x' (executable) attribute
relevance 0 ../test/test_torrent_info.cpp:165 torrent with 'l' (symlink) attribute
relevance 0 ../test/test_torrent_info.cpp:166 creating a merkle torrent (torrent_info::build_merkle_list)
relevance 0 ../test/test_torrent_info.cpp:167 torrent with multiple trackers in multiple tiers, making sure we shuffle them (how do you test shuffling? load it multiple times and make sure it's in different order at least once)
relevance 0 ../test/test_torrent_info.cpp:168 sanitize_append_path_element with all kinds of UTF-8 sequences, including invalid ones
relevance 0 ../test/test_torrent_info.cpp:169 torrents with a missing name
relevance 0 ../test/test_torrent_info.cpp:170 torrents with a zero-length name
relevance 0 ../test/test_torrent_info.cpp:171 torrents with a merkle tree and add_merkle_nodes
relevance 0 ../test/test_torrent_info.cpp:172 torrent with a non-dictionary info-section
relevance 0 ../test/test_torrent_info.cpp:173 torrents with DHT nodes
relevance 0 ../test/test_torrent_info.cpp:174 torrent with url-list as a single string
relevance 0 ../test/test_torrent_info.cpp:175 torrent with http seed as a single string
relevance 0 ../test/test_torrent_info.cpp:176 torrent with a comment
relevance 0 ../test/test_torrent_info.cpp:177 torrent with an SSL cert
relevance 0 ../test/test_torrent_info.cpp:178 torrent with attributes (executable and hidden)
relevance 0 ../test/test_torrent_info.cpp:179 torrent_info::add_tracker
relevance 0 ../test/test_torrent_info.cpp:180 torrent_info::add_url_seed
relevance 0 ../test/test_torrent_info.cpp:181 torrent_info::add_http_seed
relevance 0 ../test/test_torrent_info.cpp:182 torrent_info::unload
relevance 0 ../test/test_torrent_info.cpp:183 torrent_info constructor that takes an invalid bencoded buffer
relevance 0 ../test/test_torrent_info.cpp:184 verify_encoding with a string that triggers character replacement
relevance 0 ../test/test_tracker.cpp:252 test parse peers6
relevance 0 ../test/test_tracker.cpp:253 test parse tracker-id
relevance 0 ../test/test_tracker.cpp:254 test parse failure-reason
relevance 0 ../test/test_tracker.cpp:255 test all failure paths, including: invalid bencoding, not a dictionary, no files entry in scrape response, no info-hash entry in scrape response, malformed peers in peer list of dictionaries, uneven number of bytes in peers and peers6 string responses
relevance 0 ../test/test_transfer.cpp:291 factor out the disk-full test into its own unit test
relevance 0 ../test/test_upnp.cpp:100 store the log and verify that some key messages are there
relevance 0 ../test/web_seed_suite.cpp:366 file hashes don't work with the new torrent creator reading async
relevance 0 ../src/block_cache.cpp:959 it's somewhat expensive to iterate over this linked list. Presumably because of the random access of memory. It would be nice if pieces with no evictable blocks weren't in this list
relevance 0 ../src/block_cache.cpp:1023 this should probably only be done every n:th time
relevance 0 ../src/block_cache.cpp:1775 create a holder for refcounts that automatically decrement
relevance 0 ../src/bt_peer_connection.cpp:676 this could be optimized using knuth morris pratt
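
  For reference, a self-contained Knuth-Morris-Pratt search (the standard algorithm, not code from libtorrent):

      #include <string>
      #include <vector>

      // returns the offset of the first occurrence of needle in haystack, or npos
      std::size_t kmp_find(std::string const& haystack, std::string const& needle)
      {
          if (needle.empty()) return 0;

          // failure function: length of the longest proper prefix of needle[0..i]
          // that is also a suffix of it
          std::vector<std::size_t> fail(needle.size(), 0);
          for (std::size_t i = 1, k = 0; i < needle.size(); ++i)
          {
              while (k > 0 && needle[i] != needle[k]) k = fail[k - 1];
              if (needle[i] == needle[k]) ++k;
              fail[i] = k;
          }

          for (std::size_t i = 0, k = 0; i < haystack.size(); ++i)
          {
              while (k > 0 && haystack[i] != needle[k]) k = fail[k - 1];
              if (haystack[i] == needle[k]) ++k;
              if (k == needle.size()) return i + 1 - needle.size();
          }
          return std::string::npos;
      }
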
relevance 0 ../src/bt_peer_connection.cpp:2245 if we're finished, send upload_only message
relevance 0 ../src/choker.cpp:336 optimize this using partial_sort or something. We don't need to sort the entire list
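
  A sketch of the partial_sort idea (illustrative types; the real choker orders peer_connection pointers with its own comparator):

      #include <algorithm>
      #include <functional>
      #include <vector>

      // only the first `slots` entries need to end up in sorted order, so
      // partial_sort does strictly less work than sorting the whole list
      void order_candidates(std::vector<int>& rates, int slots)
      {
          if (slots >= int(rates.size()))
          {
              std::sort(rates.begin(), rates.end(), std::greater<int>());
              return;
          }
          std::partial_sort(rates.begin(), rates.begin() + slots
              , rates.end(), std::greater<int>());
      }
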
relevance 0 ../src/choker.cpp:339 make the comparison function a free function and move it into this cpp file
relevance 0 ../src/choker.cpp:344 make configurable
relevance 0 ../src/choker.cpp:358 make configurable
relevance 0 ../src/create_torrent.cpp:284 this should probably be optional
relevance 0 ../src/disk_buffer_pool.cpp:319 perhaps we should sort the buffers here?
relevance 0 ../src/disk_io_thread.cpp:857 it would be nice to optimize this by having the cache pieces also ordered by
relevance 0 ../src/disk_io_thread.cpp:900 instead of doing a lookup each time through the loop, save cached_piece_entry pointers with piece_refcount incremented to pin them
relevance 0 ../src/disk_io_thread.cpp:1079 instead of doing this. pass in the settings to each storage_interface call. Each disk thread could hold its most recent understanding of the settings in a shared_ptr, and update it every time it wakes up from a job. That way each access to the settings won't require a mutex to be held.
relevance 0 ../src/disk_io_thread.cpp:1107 a potentially more efficient solution would be to have a special queue for retry jobs, that's only ever run when a job completes, in any thread. It would only work if counters::num_running_disk_jobs > 0
relevance 0 ../src/disk_io_thread.cpp:1121 it should clear the hash state even when there's an error, right?
relevance 0 ../src/disk_io_thread.cpp:1819 maybe the tailqueue_iterator should contain a pointer-pointer instead and have an unlink function
relevance 0 ../src/disk_io_thread.cpp:2081 this is potentially very expensive. One way to solve it would be to have a fence for just this one piece.
relevance 0 ../src/disk_io_thread.cpp:2342 we should probably just hang the job on the piece and make sure the hasher gets kicked
relevance 0 ../src/disk_io_thread.cpp:2409 introduce a holder class that automatically increments and decrements the piece_refcount
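
  A sketch of such a holder (generic; the real one would operate on the piece_refcount mentioned above):

      // increments on construction, decrements on destruction, so early returns
      // and exceptions can't leak a reference
      struct refcount_holder
      {
          explicit refcount_holder(int& count) : m_count(count) { ++m_count; }
          ~refcount_holder() { --m_count; }

          refcount_holder(refcount_holder const&) = delete;
          refcount_holder& operator=(refcount_holder const&) = delete;

      private:
          int& m_count;
      };
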
relevance 0 ../src/disk_io_thread.cpp:2655 it would be nice to not have to lock the mutex every turn through this loop
relevance 0 ../src/http_tracker_connection.cpp:184 support this somehow
relevance 0 ../src/metadata_transfer.cpp:356 this is not safe. The torrent could be unloaded while we're still sending the metadata
relevance 0 ../src/packet_buffer.cpp:176 use compare_less_wrap for this comparison as well
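
  The kind of wrap-aware comparison compare_less_wrap performs, sketched generically (not necessarily libtorrent's exact definition):

      #include <cstdint>

      // "less than" for sequence numbers that wrap around within mask + 1 values:
      // lhs is considered smaller if walking upwards from lhs to rhs is shorter
      // than walking downwards
      bool seq_less_wrap(std::uint32_t lhs, std::uint32_t rhs, std::uint32_t mask)
      {
          std::uint32_t const dist_up = (rhs - lhs) & mask;
          std::uint32_t const dist_down = (lhs - rhs) & mask;
          return dist_up < dist_down;
      }
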
relevance 0 ../src/part_file.cpp:252 what do we do if someone is currently reading from the disk from this piece? does it matter? Since we won't actively erase the data from disk, but it may be overwritten soon, it's probably not that big of a deal
relevance 0 ../src/part_file.cpp:350 instead of rebuilding the whole file header and flushing it, update the slot entries as we go
relevance 0 ../src/peer_connection.cpp:509 it would be neat to be able to print this straight into the alert's stack allocator
relevance 0 ../src/peer_connection.cpp:1009 this should be the global download rate
relevance 0 ../src/peer_connection.cpp:3282 sort the allowed fast set in priority order
relevance 0 ../src/peer_connection.cpp:6048 The stats checks can not be honored when authenticated encryption is in use because we may have encrypted data which we cannot authenticate yet
relevance 0 ../src/piece_picker.cpp:2070 this could probably be optimized by incrementally calling partial_sort to sort one more element in the list. Because chances are that we'll just need a single piece, and once we've picked from it we're done. Sorting the rest of the list in that case is a waste of time.
relevance 0 ../src/piece_picker.cpp:2575 when expanding pieces for cache stripe reasons, the !downloading condition doesn't make much sense
relevance 0 ../src/session_impl.cpp:504 there's no rule here to make uTP connections not have the global or local rate limits apply to it. This used to be the default.
relevance 0 ../src/session_impl.cpp:1731 instead of having a special case for this, just make the default listen interfaces be "0.0.0.0:6881,[::1]:6881" and use the generic path. That would even allow for not listening at all.
relevance 0 ../src/session_impl.cpp:2624 should this function take a shared_ptr instead?
relevance 0 ../src/session_impl.cpp:2983 have a separate list for these connections, instead of having to loop through all of them
relevance 0 ../src/session_impl.cpp:3013 this should apply to all bandwidth channels
relevance 0 ../src/session_impl.cpp:3500 these vectors could be copied from m_torrent_lists, if we would maintain them. That way the first pass over all torrents could be avoided. It would be especially efficient if most torrents are not auto-managed. Whenever we receive a scrape response (or anything that may change the rank of a torrent) that one torrent could re-sort itself in a list that's kept sorted at all times. That way, this pass over all torrents could be avoided altogether.
relevance 0 ../src/session_impl.cpp:3577 allow extensions to sort torrents for queuing
relevance 0 ../src/session_impl.cpp:3750 use a lower limit than m_settings.connections_limit to reserve 10% or so of connection slots for incoming connections
relevance 0 ../src/session_impl.cpp:3893 post a message to have this happen immediately instead of waiting for the next tick
relevance 0 ../src/session_impl.cpp:3940 this should be called for all peers!
relevance 0 ../src/session_impl.cpp:4346 it might be a nice feature here to limit the number of torrents to send in a single update. By just posting the first n torrents, they would nicely be round-robined because the torrent lists are always pushed back. Perhaps the status_update_alert could even have a fixed array of n entries rather than a vector, to further improve memory locality.
relevance 0 ../src/storage.cpp:731 make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance. maybe use the same format as .torrent files and reuse some code from torrent_info
relevance 0 ../src/storage.cpp:1062 if everything moves OK, except for the partfile, we currently won't update the save path, which breaks things. it would probably make more sense to give up on the partfile
relevance 0 ../src/string_util.cpp:60 warning C4146: unary minus operator applied to unsigned type, result still unsigned
relevance 0 ../src/torrent.cpp:515 if the existing torrent doesn't have metadata, insert the metadata we just downloaded into it.
relevance 0 ../src/torrent.cpp:666 if the existing torrent doesn't have metadata, insert the metadata we just downloaded into it.
relevance 0 ../src/torrent.cpp:1475 is verify_peer_cert called once per certificate in the chain, and this function just tells us which depth we're at right now? If so, the comment makes sense. any certificate that isn't the leaf (i.e. the one presented by the peer) should be accepted automatically, given preverified is true. The leaf certificate needs to be verified to make sure its DN matches the info-hash
relevance 0 ../src/torrent.cpp:1882 instead of creating the picker up front here, maybe this whole section should move to need_picker()
relevance 0 ../src/torrent.cpp:1957 this could be optimized by looking up which files are complete and just look at those
relevance 0 ../src/torrent.cpp:1973 this could be optimized by looking up which files are complete and just look at those
relevance 0 ../src/torrent.cpp:2140 there may be peer extensions relying on the torrent extension still being alive. Only do this if there are no peers. And when the last peer is disconnected, if the torrent is unloaded, clear the extensions: m_extensions.clear();
relevance 0 ../src/torrent.cpp:2816 this pattern is repeated in a few places. Factor this into a function and generalize the concept of a torrent having a dedicated listen port
relevance 0 ../src/torrent.cpp:3593 add one peer per IP the hostname resolves to
relevance 0 ../src/torrent.cpp:4587 update suggest_piece?
relevance 0 ../src/torrent.cpp:4730 really, we should just keep the picker around in this case to maintain the availability counters
relevance 0 ../src/torrent.cpp:6704 make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance. maybe use the same format as .torrent files and reuse some code from torrent_info. The mapped_files needs to be read both in the network thread and in the disk thread, since they both have their own mapped files structures which are kept in sync
relevance 0 ../src/torrent.cpp:6822 if this is a merkle torrent and we can't restore the tree, we need to wipe all the bits in the have array, but not necessarily. we might want to do a full check to see if we have all the pieces. This is low priority since almost no one uses merkle torrents
relevance 0 ../src/torrent.cpp:7013 make this more generic to not just work if files have been renamed, but also if they have been merged into a single file for instance. using file_base
relevance 0 ../src/torrent.cpp:9028 add a flag to ignore stats, and only care about resume data for content. For unchanged files, don't trigger a load of the metadata just to save an empty resume data file
relevance 0 ../src/torrent.cpp:10640 instead of resorting the whole list, insert the peers directly into the right place
relevance 0 ../src/torrent_peer.cpp:179 how do we deal with our external address changing?
relevance 0 ../src/udp_socket.cpp:288 it would be nice to detect this on posix systems also
relevance 0 ../src/udp_socket.cpp:779 use the system resolver_interface here
relevance 0 ../src/ut_metadata.cpp:313 we really need to increment the refcounter on the torrent while this buffer is still in the peer's send buffer
relevance 0 ../src/utp_stream.cpp:1709 this loop is not very efficient. It could be fixed by having a separate list of sequence numbers that need resending
relevance 0 ../src/web_connection_base.cpp:73 introduce a web-seed default class which has a low download priority
relevance 0 ../src/kademlia/dht_tracker.cpp:307 ideally this function would be called when the put completes
relevance 0 ../include/libtorrent/block_cache.hpp:219 make this 32 bits and count seconds since the block cache was created
relevance 0 ../include/libtorrent/config.hpp:339 Make this count Unicode characters instead of bytes on windows
relevance 0 ../include/libtorrent/disk_buffer_pool.hpp:137 try to remove the observers, only using the async_allocate handlers
relevance 0 ../include/libtorrent/file.hpp:173 move this into a separate header file, TU pair
relevance 0 ../include/libtorrent/heterogeneous_queue.hpp:185 if this throws, should we do anything?
relevance 0 ../include/libtorrent/peer_connection.hpp:204 make this a raw pointer (to save size in the first cache line) and make the constructor take a raw pointer. torrent objects should always outlive their peers
relevance 0 ../include/libtorrent/peer_connection.hpp:1043 factor this out into its own class with a virtual interface. torrent and session should implement this interface
relevance 0 ../include/libtorrent/peer_connection_interface.hpp:47 make this interface smaller!
relevance 0 ../include/libtorrent/performance_counters.hpp:139 should keepalives be in here too? how about dont-have, share-mode, upload-only
relevance 0 ../include/libtorrent/performance_counters.hpp:451 some space could be saved here by making gauges 32 bits
relevance 0 ../include/libtorrent/performance_counters.hpp:452 restore these to regular integers. Instead have one copy of the counters per thread and collect them at convenient synchronization points
relevance 0 ../include/libtorrent/piece_picker.hpp:762 should this be allocated lazily?
relevance 0 ../include/libtorrent/proxy_base.hpp:173 it would be nice to remember the bind port and bind once we know where the proxy is: m_sock.bind(endpoint, ec);
relevance 0 ../include/libtorrent/receive_buffer.hpp:258 Detect when the start of the next crypto packet is aligned with the start of piece data and the crypto packet is at least as large as the piece data. With a little extra work we could receive directly into a disk buffer in that case.
relevance 0 ../include/libtorrent/session.hpp:844 add get_peer_class_type_filter() as well
relevance 0 ../include/libtorrent/settings_pack.hpp:1097 deprecate this. ``max_rejects`` is the number of piece requests we will reject in a row while a peer is choked before the peer is considered abusive and is disconnected.
relevance 0 ../include/libtorrent/torrent.hpp:1263 this wastes 5 bits per file
relevance 0 ../include/libtorrent/torrent.hpp:1322 These two bitfields should probably be coalesced into one
relevance 0 ../include/libtorrent/torrent_info.hpp:115 include the number of peers received from this tracker, at last announce
relevance 0 ../include/libtorrent/torrent_info.hpp:262 there may be some opportunities to optimize the size of torrent_info. specifically to turn some std::string and std::vector into pointers
relevance 0 ../include/libtorrent/tracker_manager.hpp:380 this should be unique_ptr in the future
relevance 0 ../include/libtorrent/upnp.hpp:108 support using the windows API for UPnP operations as well
relevance 0 ../include/libtorrent/utp_stream.hpp:402 implement blocking write. Low priority since it's not used (yet)
relevance 0 ../include/libtorrent/kademlia/item.hpp:61 since this is a public function, it should probably be moved out of this header and into one with other public functions.
relevance 0 ../include/libtorrent/aux_/session_impl.hpp:851 should this be renamed m_outgoing_interfaces?
relevance 0 ../include/libtorrent/aux_/session_impl.hpp:902 replace this by a proper asio timer
relevance 0 ../include/libtorrent/aux_/session_impl.hpp:907 replace this by a proper asio timer
relevance 0 ../include/libtorrent/aux_/session_impl.hpp:914 replace this by a proper asio timer
relevance 0 ../include/libtorrent/aux_/session_interface.hpp:242 it would be nice to not have this be part of session_interface
relevance 0 ../include/libtorrent/aux_/session_settings.hpp:78 make this a bitfield