fix typos and extend spell checking

This commit is contained in:
arvidn 2019-11-29 16:56:15 +01:00 committed by Arvid Norberg
parent a81bf1f1d7
commit d0f5f08665
6 changed files with 146 additions and 51 deletions

docs/filter-rst.py Normal file

@ -0,0 +1,46 @@
#!/usr/bin/env python
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4

from __future__ import print_function
import sys


def indent(line):
    if line == '':
        return None
    end = 0
    for c in line:
        end += 1
        if c not in " \t":
            return line[:end]
    return line


start_block = False
filter_indent = None

for line in open(sys.argv[1]):
    if line == '\n':
        continue

    if filter_indent:
        if line.startswith(filter_indent):
            continue
        else:
            filter_indent = None

    if line.strip().startswith('.. '):
        start_block = True
        continue

    if line.endswith('::\n'):
        start_block = True
        continue

    if start_block:
        filter_indent = indent(line)
        start_block = False
        continue

    sys.stdout.write(line)
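To illustrate what the new script keeps and drops, the same logic can be exercised on a small in-memory sample instead of a file (a sketch mirroring the script above; the sample lines are hypothetical):

```python
def indent(line):
    # return the leading whitespace of `line` plus the first
    # non-whitespace character; used as a prefix to match block lines
    if line == '':
        return None
    end = 0
    for c in line:
        end += 1
        if c not in " \t":
            return line[:end]
    return line

def filter_rst(lines):
    # drop blank lines, directive lines, literal blocks introduced
    # by `::`, and the indented block content that follows them
    out = []
    start_block = False
    filter_indent = None
    for line in lines:
        if line == '\n':
            continue
        if filter_indent:
            if line.startswith(filter_indent):
                continue
            else:
                filter_indent = None
        if line.strip().startswith('.. '):
            start_block = True
            continue
        if line.endswith('::\n'):
            start_block = True
            continue
        if start_block:
            filter_indent = indent(line)
            start_block = False
            continue
        out.append(line)
    return out

sample = [
    "plain prose line\n",
    "a literal block follows::\n",
    "    int x = 0;\n",
    "    int y = 1;\n",
    "more prose\n",
]
print(filter_rst(sample))  # → ['plain prose line\n', 'more prose\n']
```

Only the prose lines survive; the `::` introducer and the indented code lines are filtered out, which is exactly what makes the output suitable for spell checking.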


@ -4,6 +4,7 @@ from __future__ import print_function
f = open('../include/libtorrent/settings_pack.hpp')
out = open('settings.rst', 'w+')
all_names = set()
def print_field(str, width):
@ -18,6 +19,8 @@ def render_section(names, description, type, default_values):
    # add link targets for the rest of the manual to reference
    for n in names:
        print('.. _%s:\n' % n, file=out)
        for w in n.split('_'):
            all_names.add(w)
    if len(names) > 0:
        print('.. raw:: html\n', file=out)
@ -123,5 +126,9 @@ for line in f:
        names.append(line)

dictionary = open('hunspell/settings.dic', 'w+')
for w in all_names:
    dictionary.write(w + '\n')
dictionary.close()
out.close()
f.close()
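The effect of the new lines can be sketched in isolation: every settings name contributes its underscore-separated components to the generated spell-check dictionary (the setting names below are just sample input):

```python
# collect the individual words making up each settings_pack name,
# as gen_settings_doc.py does when emitting hunspell/settings.dic
all_names = set()
for name in ["active_checking_limit", "send_buffer_watermark"]:
    for w in name.split('_'):
        all_names.add(w)

print(sorted(all_names))
# → ['active', 'buffer', 'checking', 'limit', 'send', 'watermark']
```

Using a set de-duplicates components shared between settings, keeping the dictionary small.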


@ -42,6 +42,7 @@ unchoked
dict
kiB
MiB
GiB
DHT
adler32
LRU
@ -142,6 +143,7 @@ OpenSSL
openssl
libtorrent's
filesystem
filesystems
url
fs
io
@ -208,7 +210,8 @@ unchoking
ep
nid
crypto
uri
URI
URIs
infohashes
rw
holepunch
@ -471,3 +474,34 @@ clang's
prev
Dreik's
ctx
unicode
peers6
DNSName
SubjectAltName
SNI
httpseeds
Base16
lsd
xt
netsh
GUID
NIC
tun0
eth0
eth1
lan
NOATIME
INADDR
supportcrypt
setsockopt
OS
portmap
QBone
SNDBUFFER
RCVBUF
QBSS
DDoS
anonymization
Tribler
gzipped
processes'


@ -79,8 +79,9 @@ all: html pdf
single-page-ref.rst: $(REFERENCE_TARGETS:=.rst)
python join_rst.py $(filter-out reference.rst, $(REFERENCE_TARGETS:=.rst)) >single-page-ref.rst
settings.rst: ../include/libtorrent/settings_pack.hpp
settings.rst hunspell/settings.dic: ../include/libtorrent/settings_pack.hpp hunspell/libtorrent.dic
python gen_settings_doc.py || { rm $@; exit 1; }
cat hunspell/libtorrent.dic >>hunspell/settings.dic
stats_counters.rst: ../src/session_stats.cpp ../include/libtorrent/performance_counters.hpp
python gen_stats_doc.py || { rm $@; exit 1; }
@ -100,11 +101,15 @@ ifneq ($(STAGE),)
cp $@ $(WEB_PATH)/$@
endif
$(REFERENCE_TARGETS:=.rst) plain_text_out.txt:gen_reference_doc.py ../include/libtorrent/*.hpp ../include/libtorrent/kademlia/*.hpp manual.rst settings.rst stats_counters.rst
$(REFERENCE_TARGETS:=.rst) plain_text_out.txt:gen_reference_doc.py ../include/libtorrent/*.hpp ../include/libtorrent/kademlia/*.hpp manual.rst settings.rst stats_counters.rst hunspell/settings.dic
python gen_reference_doc.py --plain-output
spell-check:plain_text_out.txt $(MANUAL_TARGETS:=.html)
spell-check:plain_text_out.txt $(MANUAL_TARGETS:=.html) manual.rst settings.rst
python filter-rst.py manual.rst >manual-plain.txt
python filter-rst.py settings.rst >settings-plain.txt
hunspell -d hunspell/en_US -p hunspell/libtorrent.dic -l plain_text_out.txt >hunspell-report.txt
hunspell -d hunspell/en_US -p hunspell/libtorrent.dic -l manual-plain.txt >hunspell-report.txt
# hunspell -d hunspell/en_US -p hunspell/settings.dic -l settings-plain.txt >hunspell-report.txt
hunspell -d hunspell/en_US -p hunspell/libtorrent.dic -H -l $(MANUAL_TARGETS:=.html) >>hunspell-report.txt
@if [ -s hunspell-report.txt ]; then echo 'spellcheck failed, fix words or add to dictionary:'; cat hunspell-report.txt; false; fi;
@ -133,5 +138,5 @@ ifneq ($(STAGE),)
endif
clean:
rm -f $(TARGETS:=.html) $(TARGETS:=.pdf) $(FIGURES:=.png) $(FIGURES:=.eps) $(REFERENCE_TARGETS:=.rst) settings.rst todo.html reference*.html stats_counters.rst
rm -f $(TARGETS:=.html) $(TARGETS:=.pdf) $(FIGURES:=.png) $(FIGURES:=.eps) $(REFERENCE_TARGETS:=.rst) settings.rst todo.html reference*.html stats_counters.rst hunspell/settings.dic


@ -127,7 +127,7 @@ The error_code::message() function will typically return a localized error strin
for system errors. That is, errors that belong to the generic or system category.
Errors that belong to the libtorrent error category are not localized however, they
are only available in english. In order to translate libtorrent errors, compare the
are only available in English. In order to translate libtorrent errors, compare the
error category of the ``error_code`` object against ``lt::libtorrent_category()``,
and if it matches, you know the error code refers to the list above. You can provide
your own mapping from error code to string, which is localized. In this case, you
@ -221,7 +221,7 @@ parallel. The benefits are:
* your disk I/O load is likely to be more local which may improve I/O
performance and decrease fragmentation.
There are fundamentally 3 seaparate queues:
There are fundamentally 3 separate queues:
* checking torrents
* downloading torrents
@ -269,14 +269,14 @@ torrent_status::allocating state that are auto-managed.
The checking queue will make sure that (of the torrents in its queue) no more than
settings_pack::active_checking_limit torrents are started at any given time.
Once a torrent completes checking and moves into a diffferent state, the next in
Once a torrent completes checking and moves into a different state, the next in
line will be started for checking.
Any torrent added force-started or force-stopped (i.e. the auto managed flag is
*not* set), will not be subject to this limit and they will all check
independently and in parallel.
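The checking-queue policy described above can be sketched as follows (a hypothetical simplification, not the libtorrent implementation): auto-managed torrents are started in queue order up to the limit, while forced torrents bypass it entirely.

```python
# sketch: which torrents in the checking queue are actively checking
def torrents_to_check(queue, active_checking_limit):
    started = []
    running = 0
    for t in queue:  # queue is ordered by queue position
        if not t["auto_managed"]:
            # force-started/stopped torrents are exempt from the limit
            started.append(t["name"])
        elif running < active_checking_limit:
            started.append(t["name"])
            running += 1
    return started

q = [
    {"name": "a", "auto_managed": True},
    {"name": "b", "auto_managed": True},
    {"name": "c", "auto_managed": False},
    {"name": "d", "auto_managed": True},
]
print(torrents_to_check(q, 1))  # → ['a', 'c']
```

When "a" completes checking and leaves this state, re-evaluating the queue would start "b", the next auto-managed torrent in line.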
Once a torrent completes the checking of its files, or fastresume data, it will
Once a torrent completes the checking of its files, or resume data, it will
be put in the queue for downloading and potentially start downloading immediately.
In order to add a torrent and check its files without starting the download, it
can be added in ``stop_when_ready`` mode.
@ -381,7 +381,7 @@ to true.
Since it sometimes may take a few minutes for a newly started torrent to find
peers and be unchoked, or find peers that are interested in requesting data,
torrents are not considered inactive immadiately. There must be an extended
torrents are not considered inactive immediately. There must be an extended
period of no transfers before it is considered inactive and exempt from the
queuing limits.
@ -475,11 +475,11 @@ The file format is a bencoded dictionary containing the following fields:
| | In the same order as in the torrent file. |
+--------------------------+--------------------------------------------------------------+
| ``url-list`` | list of strings. List of url-seed URLs used by this torrent. |
| | The urls are expected to be properly encoded and not contain |
| | The URLs are expected to be properly encoded and not contain |
| | any illegal url characters. |
+--------------------------+--------------------------------------------------------------+
| ``httpseeds`` | list of strings. List of httpseed URLs used by this torrent. |
| | The urls are expected to be properly encoded and not contain |
| ``httpseeds`` | list of strings. List of HTTP seed URLs used by this torrent.|
| | The URLs are expected to be properly encoded and not contain |
| | any illegal url characters. |
+--------------------------+--------------------------------------------------------------+
| ``merkle tree`` | string. In case this torrent is a merkle torrent, this is a |
@ -580,7 +580,7 @@ The benefits of this mode are:
* Downloaded pieces are written directly to their final place in the files and
the total number of disk operations will be fewer and may also play nicer to
filesystems' file allocation, and reduce fragmentation.
the filesystem file allocation, and reduce fragmentation.
* No risk of a download failing because of a full disk during download, once
all files have been created.
@ -592,8 +592,8 @@ There are two kinds of HTTP seeding. One with that assumes a smart (and polite)
client and one that assumes a smart server. These are specified in `BEP 19`_
and `BEP 17`_ respectively.
libtorrent supports both. In the libtorrent source code and API, BEP 19 urls
are typically referred to as *url seeds* and BEP 17 urls are typically referred
libtorrent supports both. In the libtorrent source code and API, BEP 19 URLs
are typically referred to as *url seeds* and BEP 17 URLs are typically referred
to as *HTTP seeds*.
The libtorrent implementation of `BEP 19`_ assumes that, if the URL ends with a
@ -630,7 +630,7 @@ internal representation
It is optimized by, at all times, keeping a list of pieces ordered by rarity,
randomly shuffled within each rarity class. This list is organized as a single
vector of contigous memory in RAM, for optimal memory locality and to eliminate
vector of contiguous memory in RAM, for optimal memory locality and to eliminate
heap allocations and frees when updating rarity of pieces.
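The ordering described above can be sketched like this (a simplified illustration, not the actual contiguous-vector implementation): pieces are grouped by availability, and each rarity class is shuffled before being appended in order of increasing availability.

```python
import random

def rarity_order(availability, rng):
    # availability: piece index -> number of peers that have the piece
    classes = {}
    for piece, avail in enumerate(availability):
        classes.setdefault(avail, []).append(piece)
    ordered = []
    for avail in sorted(classes):
        group = classes[avail]
        rng.shuffle(group)  # random order within the same rarity class
        ordered.extend(group)
    return ordered

# pieces 1 and 3 are rarest (one peer each), so they come first
order = rarity_order([3, 1, 2, 1, 3], random.Random(0))
print(order)
```

Keeping all of this in one flat list means picking the next piece is a scan from the front, with no per-pick sorting or heap allocation.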
Expensive events, like a peer joining or leaving, are evaluated lazily, since
@ -669,7 +669,7 @@ request. The idea behind this is to make all snubbed peers more likely to be
able to download blocks from the same piece, concentrating slow peers on as
few pieces as possible. The reverse order means that the most common pieces are
picked, instead of the rarest pieces (or in the case of sequential download,
the last pieces, intead of the first).
the last pieces, instead of the first).
parole mode
-----------
@ -786,7 +786,7 @@ preventing a client from reconfiguring the peer class ip- and type filters
to disable or customize which peers they apply to. See set_peer_class_filter()
and set_peer_class_type_filter().
A peer class can be considered a more general form of *lables* that some
A peer class can be considered a more general form of *labels* that some
clients have. Peer classes however are not just applied to torrents, but
ultimately the peers.
@ -837,7 +837,7 @@ To make uTP sockets exempt from rate limiting:
ses.set_peer_class_type_filter(flt);
To make all peers on the internal network unthrottled:
To make all peers on the internal network not subject to throttling:
.. code:: c++
@ -862,7 +862,7 @@ SSL. The protocols are layered like this:
During the SSL handshake, both peers need to authenticate by providing a
certificate that is signed by the CA certificate found in the .torrent file.
These peer certificates are expected to be privided to peers through some other
These peer certificates are expected to be provided to peers through some other
means than bittorrent. Typically by a peer generating a certificate request
which is sent to the publisher of the torrent, and the publisher returning a
signed certificate.
@ -885,15 +885,15 @@ This setting is only taken into account when the normal listen socket is opened
socket). To not listen on an SSL socket at all, set ``ssl_listen`` to 0.
This feature is only available if libtorrent is built with openssl support
(``TORRENT_USE_OPENSSL``) and requires at least openSSL version 1.0, since it
(``TORRENT_USE_OPENSSL``) and requires at least OpenSSL version 1.0, since it
needs SNI support.
Peer certificates must have at least one *SubjectAltName* field of type
dNSName. At least one of the fields must *exactly* match the name of the
DNSName. At least one of the fields must *exactly* match the name of the
torrent. This is a byte-by-byte comparison, the UTF-8 encoding must be
identical (i.e. there's no unicode normalization going on). This is the
recommended way of verifying certificates for HTTPS servers according to `RFC
2818`_. Note the difference that for torrents only *dNSName* fields are taken
2818`_. Note the difference that for torrents only *DNSName* fields are taken
into account (not IP address fields). The most specific (i.e. last) *Common
Name* field is also taken into account if no *SubjectAltName* matched.
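The matching rule can be sketched as follows (a hypothetical helper, not the libtorrent API): each dNSName entry is compared byte-for-byte against the torrent name, and only when no SubjectAltName entry matches is the most specific (last) Common Name consulted.

```python
def cert_matches_torrent(dns_names, common_names, torrent_name):
    # exact byte-wise comparison, no unicode normalization or case folding
    if any(n == torrent_name for n in dns_names):
        return True
    # fall back to the most specific (i.e. last) Common Name field
    return bool(common_names) and common_names[-1] == torrent_name

print(cert_matches_torrent(["my-torrent"], [], "my-torrent"))        # → True
print(cert_matches_torrent(["My-Torrent"], [], "my-torrent"))        # → False
print(cert_matches_torrent([], ["ca", "my-torrent"], "my-torrent"))  # → True
```

The second call fails because the comparison is byte-for-byte: a different case means a different UTF-8 encoding.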
@ -923,7 +923,7 @@ libtorrent's point of view, it doesn't matter what it is. libtorrent only makes
sure the peer certificates are signed by the correct root certificate.
One way to create the certificates is to use the ``CA.sh`` script that comes
with openssl, like thisi (don't forget to enter a common Name for the
with openssl, like this (don't forget to enter a common Name for the
certificate)::
CA.sh -newca
@ -952,7 +952,7 @@ socket receives *n* bytes, a counter is incremented by *n*.
*Counters* are the most flexible of metrics. It allows the program to sample
the counter at any interval, and calculate average rates of increments to the
counter. Some events may be rare and need to be sampled over a longer period in
order to get userful rates, where other events may be more frequent and evenly
order to get useful rates, where other events may be more frequent and evenly
distributed, so that sampling frequently yields useful values. Counters also
provide accurate overall counts. For example, converting samples of a download
rate into a total transfer count is not accurate and takes more samples.
@ -960,7 +960,7 @@ Converting an increasing counter into a rate is easy and flexible.
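For instance, two samples of a monotonically increasing counter give the average rate over the sampling interval:

```python
def rate(prev_count, cur_count, interval_seconds):
    # average rate of increase between two counter samples
    return (cur_count - prev_count) / interval_seconds

# e.g. a byte counter sampled 10 seconds apart
print(rate(1000, 6000, 10))  # → 500.0 bytes/s
```

The client is free to pick the interval: a long interval smooths rare events, a short one tracks frequent ones.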
*Gauges* measure the instantaneous state of some kind. This is used for metrics
that are not counting events or flows, but states that can fluctuate. For
example, the number of torrents that are currenly being downloaded.
example, the number of torrents that are currently being downloaded.
It's important to know whether a value is a counter or a gauge in order to
interpret it correctly. In order to query libtorrent for which counters and


@ -149,8 +149,8 @@ namespace aux {
enum string_types
{
// this is the client identification to the tracker. The recommended
// format of this string is: "ClientName/ClientVersion
// libtorrent/libtorrentVersion". This name will not only be used when
// format of this string is: "client-name/client-version
// libtorrent/libtorrent-version". This name will not only be used when
// making HTTP requests, but also when sending extended headers to
// peers that support that extension. It may not contain \r or \n
user_agent = string_type_base,
@ -189,9 +189,9 @@ namespace aux {
// sets the network interface this session will use when it opens
// outgoing connections. An empty string binds outgoing connections to
// INADDR_ANY and port 0 (i.e. let the OS decide). Ths parameter must
// INADDR_ANY and port 0 (i.e. let the OS decide). The parameter must
// be a string containing one or more, comma separated, adapter names.
// Adapter names on unix systems are of the form "eth0", "eth1",
// Adapter names on Unix systems are of the form "eth0", "eth1",
// "tun0", etc. When specifying multiple interfaces, they will be
// assigned in round-robin order. This may be useful for clients that
// are multi-homed. Binding an outgoing connection to a local IP does
@ -232,7 +232,7 @@ namespace aux {
// connections on port 7777 on adapter with this GUID.
listen_interfaces,
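The round-robin assignment described for the comma-separated adapter list above can be sketched like this (hypothetical adapter names, not the libtorrent implementation):

```python
from itertools import cycle

# successive outgoing connections are bound to adapters in rotation
adapters = cycle("eth0,eth1,tun0".split(','))
assigned = [next(adapters) for _ in range(5)]
print(assigned)  # → ['eth0', 'eth1', 'tun0', 'eth0', 'eth1']
```

This spreads outgoing connections evenly across the interfaces of a multi-homed host.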
// when using a poxy, this is the hostname where the proxy is running
// when using a proxy, this is the hostname where the proxy is running
// see proxy_type.
proxy_hostname,
@ -389,7 +389,7 @@ namespace aux {
// ``prefer_udp_trackers``: true means that trackers
// may be rearranged in a way that udp trackers are always tried
// before http trackers for the same hostname. Setting this to false
// means that the trackers' tier is respected and there's no
// means that the tracker's tier is respected and there's no
// preference of one protocol over another.
prefer_udp_trackers,
@ -453,7 +453,7 @@ namespace aux {
deprecated_guided_read_cache,
#endif
// ``no_atime_storage`` this is a linux-only option and passes in the
// ``no_atime_storage`` this is a Linux-only option and passes in the
// ``O_NOATIME`` to ``open()`` when opening files. This may lead to
// some disk performance improvements.
no_atime_storage,
@ -477,7 +477,7 @@ namespace aux {
// ``strict_end_game_mode`` controls when a
// block may be requested twice. If this is ``true``, a block may only
// be requested twice when there's ay least one request to every piece
// be requested twice when there's at least one request to every piece
// that's left to download in the torrent. This may slow down progress
// on some pieces sometimes, but it may also avoid downloading a lot
// of redundant bytes. If this is ``false``, libtorrent attempts to
@ -610,7 +610,7 @@ namespace aux {
#if TORRENT_ABI_VERSION == 1
// ``lock_files`` determines whether or not to lock files which
// libtorrent is downloading to or seeding from. This is implemented
// using ``fcntl(F_SETLK)`` on unix systems and by not passing in
// using ``fcntl(F_SETLK)`` on Unix systems and by not passing in
// ``SHARE_READ`` and ``SHARE_WRITE`` on windows. This might prevent
// 3rd party processes from corrupting the files under libtorrent's
// feet.
@ -719,7 +719,7 @@ namespace aux {
enable_dht,
// if the allowed encryption level is both, setting this to true will
// prefer rc4 if both methods are offered, plaintext otherwise
// prefer rc4 if both methods are offered, plain text otherwise
prefer_rc4,
// if true, hostname lookups are done via the configured proxy (if
@ -759,7 +759,7 @@ namespace aux {
dht_prefer_verified_node_ids,
// when this is true, create an affinity for downloading 4 MiB extents
// of adjecent pieces. This is an attempt to achieve better disk I/O
// of adjacent pieces. This is an attempt to achieve better disk I/O
// throughput by downloading larger extents of bytes, for torrents with
// small piece sizes
piece_extent_affinity,
@ -796,14 +796,14 @@ namespace aux {
// measured on the uncompressed data. So, if you get 20 bytes of gzip
// response that'll expand to 2 megabytes, it will be interrupted
// before the entire response has been uncompressed (assuming the
// limit is lower than 2 megs).
// limit is lower than 2 MiB).
tracker_maximum_response_length,
// the number of seconds from when a request is sent until it times out if
// no piece response is returned.
piece_timeout,
// the number of seconds one block (16kB) is expected to be received
// the number of seconds one block (16 kiB) is expected to be received
// within. If it's not, the block is requested from a different peer
request_timeout,
@ -852,7 +852,8 @@ namespace aux {
urlseed_pipeline_size,
// number of seconds until a new retry of a url-seed takes place.
// Default retry value for http-seeds that don't provide a valid 'retry-after' header.
// Default retry value for http-seeds that don't provide
// a valid ``retry-after`` header.
urlseed_wait_retry,
// sets the upper limit on the total number of files this session will
@ -864,10 +865,12 @@ namespace aux {
// of file descriptors a process may have open.
file_pool_size,
// ``max_failcount`` is the maximum times we try to connect to a peer
// before stop connecting again. If a peer succeeds, the failcounter
// is reset. If a peer is retrieved from a peer source (other than
// DHT) the failcount is decremented by one, allowing another try.
			// ``max_failcount`` is the maximum number of times we try
			// to connect to a peer before we stop trying. If a
// peer succeeds, the failure counter is reset. If a
// peer is retrieved from a peer source (other than DHT)
// the failcount is decremented by one, allowing another
// try.
max_failcount,
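The fail-count policy described in the comment above can be sketched as follows (a hypothetical model in Python, not the C++ implementation):

```python
class PeerEntry:
    def __init__(self):
        self.failcount = 0

def on_connect_failed(peer, max_failcount):
    # returns False once the peer has exhausted its attempts
    peer.failcount += 1
    return peer.failcount < max_failcount

def on_connect_succeeded(peer):
    peer.failcount = 0  # a successful connection resets the counter

def on_seen_from_peer_source(peer):
    # re-learning the peer from a source other than the DHT
    # decrements the counter, allowing another attempt
    peer.failcount = max(0, peer.failcount - 1)

p = PeerEntry()
for _ in range(3):
    on_connect_failed(p, 3)
print(p.failcount)  # → 3
on_seen_from_peer_source(p)
print(p.failcount)  # → 2
```

After the decrement the peer is again below `max_failcount`, so one more connection attempt is allowed.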
// the number of seconds to wait to reconnect to a peer. this time is
@ -948,7 +951,7 @@ namespace aux {
// will determine how fast we can ramp up the send rate
//
// if the send buffer has fewer bytes than ``send_buffer_watermark``,
// we'll read another 16kB block onto it. If set too small, upload
// we'll read another 16 kiB block onto it. If set too small, upload
// rate capacity will suffer. If set too high, memory will be wasted.
// The actual watermark may be lower than this in case the upload rate
// is low, this is the upper limit.
@ -994,7 +997,7 @@ namespace aux {
// The available options are:
//
// * ``round_robin`` which round-robins the peers that are unchoked
// when seeding. This distributes the upload bandwidht uniformly and
// when seeding. This distributes the upload bandwidth uniformly and
// fairly. It minimizes the ability for a peer to download everything
// without redistributing it.
//
@ -1009,7 +1012,7 @@ namespace aux {
seed_choking_algorithm,
// ``cache_size`` is the disk write and read cache. It is specified
// in units of 16 KiB blocks. Buffers that are part of a peer's send
// in units of 16 kiB blocks. Buffers that are part of a peer's send
// or receive buffer also count against this limit. Send and receive
// buffers will never be denied to be allocated, but they will cause
// the actual cached blocks to be flushed or evicted. If this is set
@ -1174,7 +1177,7 @@ namespace aux {
// this is the minimum allowed announce interval for a tracker. This
// is specified in seconds and is used as a sanity check on what is
// returned from a tracker. It mitigates hammering misconfigured
// returned from a tracker. It mitigates hammering mis-configured
// trackers.
min_announce_interval,
@ -1471,7 +1474,7 @@ namespace aux {
// ``alert_queue_size`` is the maximum number of alerts queued up
// internally. If alerts are not popped, the queue will eventually
// fill up to this level. Once the alert queue is full, additional
// alerts will be dropped, and not delievered to the client. Once the
// alerts will be dropped, and not delivered to the client. Once the
// client drains the queue, new alerts may be delivered again. In order
// to know that alerts have been dropped, see
// session_handle::dropped_alerts().
@ -1765,7 +1768,7 @@ namespace aux {
// settings_pack::allowed_enc_level.
enum enc_level : std::uint8_t
{
// use only plaintext encryption
// use only plain text encryption
pe_plaintext = 1,
// use only rc4 encryption
pe_rc4 = 2,