maqp 2019-01-24 04:01:00 +02:00
parent a4e7ba3090
commit 033ee0899b
165 changed files with 27896 additions and 17809 deletions


@@ -4,24 +4,11 @@
# Regexes for lines to exclude from consideration
exclude_lines =
    # Debugging code for third-party modules
    pragma: no cover
    # TYPE_CHECKING is True only during type checking
    if typing.TYPE_CHECKING:
    # Ignore catchers for KeyboardInterrupt (^C) and EOF (^D) signals from the user:
    except EOFError
    except KeyboardInterrupt:
    except \(EOFError, KeyboardInterrupt\):
    except \(FunctionReturn, KeyboardInterrupt\):
    # Ignore errors specific to gateway libraries
    except SerialException:
    except socket.error
    except ConnectionRefusedError:
    # Ignore lines for Settings database testing that
    # cannot be mocked without overwriting user data
    if operation == RX:
omit =
    # Since dbus is not available for Python 3.6, it is currently not possible to test nh/pidgin.py
    src/nh/pidgin.py
    # Ignore Flask server init under standard operation
    else: \# not unittest
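As a minimal sketch of how these exclusion patterns behave (the function names below are hypothetical, not from this repository): any source line matching an `exclude_lines` regex, such as a `pragma: no cover` marker, a `typing.TYPE_CHECKING` guard, or a `^C`/`^D` exception catcher, is dropped from the coverage report rather than counted as untested.

```python
import typing

if typing.TYPE_CHECKING:  # excluded: True only during static type checking
    from collections import OrderedDict  # noqa: F401

def read_input() -> str:
    """Hypothetical helper whose ^C/^D handlers match the regexes above."""
    try:
        return input("> ")
    except (EOFError, KeyboardInterrupt):  # excluded by the patterns above
        return ""

def debug_dump(data: dict) -> None:  # pragma: no cover
    # Third-party-style debugging aid: the pragma excludes the whole function.
    print(data)
```

With this configuration, `debug_dump` never going uncalled in the test suite does not lower the reported coverage percentage.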


@@ -3,13 +3,23 @@ language: python
python:
  - '3.6'
dist: xenial
sudo: required
before_install:
  - sudo apt install python3-tk
  - export TZ=Europe/Helsinki
  - echo "deb https://deb.torproject.org/torproject.org xenial main" | sudo tee -a /etc/apt/sources.list.d/torproject.list
  - echo "deb-src https://deb.torproject.org/torproject.org xenial main" | sudo tee -a /etc/apt/sources.list.d/torproject.list
  - echo "deb https://deb.torproject.org/torproject.org tor-nightly-master-xenial main" | sudo tee -a /etc/apt/sources.list.d/torproject.list
  - echo "deb-src https://deb.torproject.org/torproject.org tor-nightly-master-xenial main" | sudo tee -a /etc/apt/sources.list.d/torproject.list
  - gpg --keyserver hkp://keys.gnupg.net --recv-keys A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
  - gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -
  - sudo apt update
  - sudo apt install python3-setuptools python3-tk tor -y
install:
  - pip install pytest pytest-cov pyyaml coveralls
  - pip install -r requirements.txt --require-hashes
  - pip install -r requirements-relay.txt --require-hashes
script:
  - py.test --cov=src tests/
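The CI steps can be reproduced locally; the following is a minimal sketch assuming Ubuntu 16.04 (xenial), the standard `hkp://` keyserver scheme, and the key ID and repository URLs from the config above.

```shell
#!/bin/sh
set -e

# Add the Tor Project APT repository used by the build:
echo "deb https://deb.torproject.org/torproject.org xenial main" \
    | sudo tee -a /etc/apt/sources.list.d/torproject.list

# Import and trust the Tor Project signing key:
gpg --keyserver hkp://keys.gnupg.net \
    --recv-keys A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -

# Install system and Python dependencies, then run the suite with coverage:
sudo apt update
sudo apt install python3-setuptools python3-tk tor -y
pip install pytest pytest-cov pyyaml coveralls
pip install -r requirements.txt --require-hashes
pip install -r requirements-relay.txt --require-hashes
py.test --cov=src tests/
```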

LICENSE (new file, 674 lines)

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
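For a command-line program, the recommended startup notice above could be printed like this. This is only an illustrative sketch: the program name ("frobnicator"), year, and author are hypothetical placeholders, not values from any real project.

```python
# Hypothetical example of printing the short GPL notice when a program
# starts in interactive mode. Name, year, and author are placeholders.

STARTUP_NOTICE = (
    "frobnicator  Copyright (C) 2019  Jane Doe\n"
    "This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n"
    "This is free software, and you are welcome to redistribute it\n"
    "under certain conditions; type `show c' for details.\n"
)

def print_startup_notice() -> None:
    """Print the short notice the GPL recommends for interactive programs."""
    print(STARTUP_NOTICE, end="")

if __name__ == "__main__":
    print_startup_notice()
```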
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
LICENSE-3RD-PARTY: new file, 2906 lines (diff suppressed because it is too large)

@@ -1,864 +0,0 @@
# Licenses
### TFC
TFC 1.17.08 Copyright (C) 2013-2017 Markus Ottela
This program is free software: you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation, either version 3 of the License, or (at your option) any later
version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
this program. If not, see http://www.gnu.org/licenses/.
(You can find the license at the bottom of this file)
#### TFC documentation, including the white paper and GitHub wiki, is released
under the GNU Free Documentation License 1.3
## Third Party licenses
### TTL Data diode
Copyrights for the schematics of the TTL data diode presented in the
documentation belong to the pseudonym Sancho_P and are published under the GNU
Free Documentation License v1.3.
### RS-232 Data diode
Copyrights for the schematics of the RS-232 data diode presented in the
documentation belong to Douglas W. Jones and are used under the GNU Free
Documentation License. URL to original work:
http://homepage.cs.uiowa.edu/~jones/voting/diode/RS232tech.pdf
### Base58
The Base58 implementation is used and modified under the MIT license
https://github.com/keis/base58
### PySerial
The PySerial library is used under the BSD-3-Clause license
https://github.com/pyserial/pyserial/blob/master/LICENSE.txt
### Reed-Solomon erasure code
The Reed-Solomon erasure code library has been released to the public domain.
License: https://github.com/tomerfiliba/reedsolomon/blob/master/LICENSE
Original Python implementation:
https://github.com/tomerfiliba/reedsolomon/blob/master/reedsolo.py
The implementation is based on the tutorial at
http://en.wikiversity.org/wiki/Reed%E2%80%93Solomon_codes_for_coders
### Argon2_cffi
The Argon2 library is used under the MIT license
https://github.com/hynek/argon2_cffi/blob/master/LICENSE
### PyNaCl
The PyNaCl library is licensed under the Apache License 2.0, which is
compatible with the GNU GPLv3:
Version 2.0, January 2004 http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and
configuration files.
"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object
code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form,
made available under the License, as indicated by a copyright notice that is
included in or attached to the work (an example is provided in the Appendix
below).
"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original
version of the Work and any modifications or additions to that Work or
Derivative Works thereof, that is intentionally submitted to Licensor for
inclusion in the Work by the copyright owner or by an individual or Legal
Entity authorized to submit on behalf of the copyright owner. For the purposes
of this definition, "submitted" means any form of electronic, verbal, or
written communication sent to the Licensor or its representatives, including
but not limited to communication on electronic mailing lists, source code
control systems, and issue tracking systems that are managed by, or on behalf
of, the Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise designated
in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this
License, each Contributor hereby grants to You a perpetual, worldwide,
non-exclusive, no-charge, royalty-free, irrevocable copyright license to
reproduce, prepare Derivative Works of, publicly display, publicly perform,
sublicense, and distribute the Work and such Derivative Works in Source or
Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License,
each Contributor hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section) patent
license to make, have made, use, offer to sell, sell, import, and otherwise
transfer the Work, where such license applies only to those patent claims
licensable by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s) with the Work
to which such Contribution(s) was submitted. If You institute patent litigation
against any entity (including a cross-claim or counterclaim in a lawsuit)
alleging that the Work or a Contribution incorporated within the Work
constitutes direct or contributory patent infringement, then any patent
licenses granted to You under this License for that Work shall terminate as of
the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or
Derivative Works thereof in any medium, with or without modifications, and in
Source or Object form, provided that You meet the following conditions:
(a) You must give any other recipients of the Work or Derivative Works a copy
of this License; and
(b) You must cause any modified files to carry prominent notices stating that
You changed the files; and
(c) You must retain, in the Source form of any Derivative Works that You
distribute, all copyright, patent, trademark, and attribution notices from the
Source form of the Work, excluding those notices that do not pertain to any
part of the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its distribution, then
any Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the
following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative
Works, if and wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and do not modify the
License. You may add Your own attribution notices within Derivative Works that
You distribute, alongside or as an addendum to the NOTICE text from the Work,
provided that such additional attribution notices cannot be construed as
modifying the License.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a
whole, provided Your use, reproduction, and distribution of the Work otherwise
complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any
Contribution intentionally submitted for inclusion in the Work by You to the
Licensor shall be under the terms and conditions of this License, without any
additional terms or conditions. Notwithstanding the above, nothing herein shall
supersede or modify the terms of any separate license agreement you may have
executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names,
trademarks, service marks, or product names of the Licensor, except as required
for reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in
writing, Licensor provides the Work (and each Contributor provides its
Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied, including, without limitation, any warranties
or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any risks
associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort
(including negligence), contract, or otherwise, unless required by applicable
law (such as deliberate and grossly negligent acts) or agreed to in writing,
shall any Contributor be liable to You for damages, including any direct,
indirect, special, incidental, or consequential damages of any character
arising as a result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill, work stoppage,
computer failure or malfunction, or any and all other commercial damages or
losses), even if such Contributor has been advised of the possibility of such
damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or
Derivative Works thereof, You may choose to offer, and charge a fee for,
acceptance of support, warranty, indemnity, or other liability obligations
and/or rights consistent with this License. However, in accepting such
obligations, You may act only on Your own behalf and on Your sole
responsibility, not on behalf of any other Contributor, and only if You agree
to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your
accepting any such warranty or additional liability.
# GNU GENERAL PUBLIC LICENSE
## Version 3, 29 June 2007
Copyright © 2007 Free Software Foundation, Inc. http://fsf.org/
Everyone is permitted to copy and distribute verbatim copies of this license
document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for software and
other kinds of works.
The licenses for most software and other practical works are designed to take
away your freedom to share and change the works. By contrast, the GNU General
Public License is intended to guarantee your freedom to share and change all
versions of a program--to make sure it remains free software for all its users.
We, the Free Software Foundation, use the GNU General Public License for most
of our software; it applies also to any other work released this way by its
authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our
General Public Licenses are designed to make sure that you have the freedom
to distribute copies of free software (and charge for them if you wish), that
you receive source code or can get it if you want it, that you can change the
software or use pieces of it in new free programs, and that you know you can
do these things.
To protect your rights, we need to prevent others from denying you these rights
or asking you to surrender the rights. Therefore, you have certain
responsibilities if you distribute copies of the software, or if you modify it:
responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for
a fee, you must pass on to the recipients the same freedoms that you received.
You must make sure that they, too, receive or can get the source code. And you
must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert
copyright on the software, and (2) offer you this License giving you legal
permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that
there is no warranty for this free software. For both users' and authors' sake,
the GPL requires that modified versions be marked as changed, so that their
problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified
versions of the software inside them, although the manufacturer can do so. This
is fundamentally incompatible with the aim of protecting users' freedom to
change the software. The systematic pattern of such abuse occurs in the area of
products for individuals to use, which is precisely where it is most
unacceptable. Therefore, we have designed this version of the GPL to prohibit
the practice for those products. If such problems arise substantially in other
domains, we stand ready to extend this provision to those domains in future
versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States
should not allow patents to restrict development and use of software on
general-purpose computers, but in those that do, we wish to avoid the special
danger that patents applied to a free program could make it effectively
proprietary. To prevent this, the GPL assures that patents cannot be used to
render the program non-free.
The precise terms and conditions for copying, distribution and modification
follow.
### TERMS AND CONDITIONS
0. Definitions.
“This License” refers to version 3 of the GNU General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of works,
such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this License.
Each licensee is addressed as “you”. “Licensees” and “recipients” may be
individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work in a
fashion requiring copyright permission, other than the making of an exact copy.
The resulting work is called a “modified version” of the earlier work or a work
“based on” the earlier work.
A “covered work” means either the unmodified Program or a work based on the
Program.
To “propagate” a work means to do anything with it that, without permission,
would make you directly or secondarily liable for infringement under applicable
copyright law, except executing it on a computer or modifying a private copy.
Propagation includes copying, distribution (with or without modification),
making available to the public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other parties to
make or receive copies. Mere interaction with a user through a computer
network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices” to the
extent that it includes a convenient and prominently visible feature that (1)
displays an appropriate copyright notice, and (2) tells the user that there is
no warranty for the work (except to the extent that warranties are provided),
that licensees may convey the work under this License, and how to view a copy
of this License. If the interface presents a list of user commands or options,
such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The “source code” for a work means the preferred form of the work for making
modifications to it. “Object code” means any non-source form of a work.
A “Standard Interface” means an interface that either is an official standard
defined by a recognized standards body, or, in the case of interfaces specified
for a particular programming language, one that is widely used among developers
working in that language.
The “System Libraries” of an executable work include anything, other than the
work as a whole, that (a) is included in the normal form of packaging a Major
Component, but which is not part of that Major Component, and (b) serves only
to enable use of the work with that Major Component, or to implement a Standard
Interface for which an implementation is available to the public in source code
form. A “Major Component”, in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system (if any) on
which the executable work runs, or a compiler used to produce the work, or an
object code interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the source
code needed to generate, install, and (for an executable work) run the object
code and to modify the work, including scripts to control those activities.
However, it does not include the work's System Libraries, or general-purpose
tools or generally available free programs which are used unmodified in
performing those activities but which are not part of the work. For example,
Corresponding Source includes interface definition files associated with source
files for the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require, such as
by intimate data communication or control flow between those subprograms and
other parts of the work.
The Corresponding Source need not include anything that users can regenerate
automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on
the Program, and are irrevocable provided the stated conditions are met. This
License explicitly affirms your unlimited permission to run the unmodified
Program. The output from running a covered work is covered by this License only
if the output, given its content, constitutes a covered work. This License
acknowledges your rights of fair use or other equivalent, as provided by
copyright law.
You may make, run and propagate covered works that you do not convey, without
conditions so long as your license otherwise remains in force. You may convey
covered works to others for the sole purpose of having them make modifications
exclusively for you, or provide you with facilities for running those works,
provided that you comply with the terms of this License in conveying all
material for which you do not control copyright. Those thus making or running
the covered works for you must do so exclusively on your behalf, under your
direction and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the
conditions stated below. Sublicensing is not allowed; section 10 makes it
unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure
under any applicable law fulfilling obligations under article 11 of the WIPO
copyright treaty adopted on 20 December 1996, or similar laws prohibiting or
restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention is
effected by exercising rights under this License with respect to the covered
work, and you disclaim any intention to limit operation or modification of the
work as a means of enforcing, against the work's users, your or third parties'
legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it,
in any medium, provided that you conspicuously and appropriately publish on
each copy an appropriate copyright notice; keep intact all notices stating that
this License and any non-permissive terms added in accord with section 7 apply
to the code; keep intact all notices of the absence of any warranty; and give
all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may
offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it
from the Program, in the form of source code under the terms of section 4,
provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and
giving a relevant date.
b) The work must carry prominent notices stating that it is released under this
License and any conditions added under section 7. This requirement modifies
the requirement in section 4 to “keep intact all notices”.
c) You must license the entire work, as a whole, under this License to anyone
who comes into possession of a copy. This License will therefore apply,
along with any applicable section 7 additional terms, to the whole of the
work, and all its parts, regardless of how they are packaged. This License
gives no permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate
Legal Notices; however, if the Program has interactive interfaces that do
not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works,
which are not by their nature extensions of the covered work, and which are not
combined with it such as to form a larger program, in or on a volume of a
storage or distribution medium, is called an “aggregate” if the compilation and
its resulting copyright are not used to limit the access or legal rights of the
compilation's users beyond what the individual works permit. Inclusion of a
covered work in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4
and 5, provided that you also convey the machine-readable Corresponding Source
under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a
physical distribution medium), accompanied by the Corresponding Source fixed
on a durable physical medium customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product (including a
physical distribution medium), accompanied by a written offer, valid for at
least three years and valid for as long as you offer spare parts or customer
support for that product model, to give anyone who possesses the object code
either (1) a copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical medium
customarily used for software interchange, for a price no more than your
reasonable cost of physically performing this conveying of source, or (2)
access to copy the Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the written offer
to provide the Corresponding Source. This alternative is allowed only
occasionally and noncommercially, and only if you received the object code
with such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or
for a charge), and offer equivalent access to the Corresponding Source in
the same way through the same place at no further charge. You need not
require recipients to copy the Corresponding Source along with the object
code. If the place to copy the object code is a network server, the
Corresponding Source may be on a different server (operated by you or a
third party) that supports equivalent copying facilities, provided you
maintain clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the Corresponding
Source, you remain obligated to ensure that it is available for as long as
needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the
Corresponding Source as a System Library, need not be included in conveying the
object code work.
A “User Product” is either (1) a “consumer product”, which means any tangible
personal property which is normally used for personal, family, or household
purposes, or (2) anything designed or sold for incorporation into a dwelling.
In determining whether a product is a consumer product, doubtful cases shall
be resolved in favor of coverage. For a particular product received by a
particular user, “normally used” refers to a typical or common use of that
class of product, regardless of the status of the particular user or of the way
in which the particular user actually uses, or expects or is expected to use,
the product. A product is a consumer product regardless of whether the product
has substantial commercial, industrial or non-consumer uses, unless such uses
represent the only significant mode of use of the product.
“Installation Information” for a User Product means any methods, procedures,
authorization keys, or other information required to install and execute
modified versions of a covered work in that User Product from a modified
version of its Corresponding Source. The information must suffice to ensure
that the continued functioning of the modified object code is in no case
prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as part of a
transaction in which the right of possession and use of the User Product is
transferred to the recipient in perpetuity or for a fixed term (regardless of
how the transaction is characterized), the Corresponding Source conveyed under
this section must be accompanied by the Installation Information. But this
requirement does not apply if neither you nor any third party retains the
ability to install modified object code on the User Product (for example, the
work has been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates for a
work that has been modified or installed by the recipient, or for the User
Product in which it has been modified or installed. Access to a network may be
denied when the modification itself materially and adversely affects the
operation of the network or violates the rules and protocols for communication
across the network.
Corresponding Source conveyed, and Installation Information provided, in accord
with this section must be in a format that is publicly documented (and with an
implementation available to the public in source code form), and must require
no special password or key for unpacking, reading or copying.
7. Additional Terms.
“Additional permissions” are terms that supplement the terms of this License by
making exceptions from one or more of its conditions. Additional permissions
that are applicable to the entire Program shall be treated as though they were
included in this License, to the extent that they are valid under applicable
law. If additional permissions apply only to part of the Program, that part may
be used separately under those permissions, but the entire Program remains
governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any
additional permissions from that copy, or from any part of it. (Additional
permissions may be written to require their own removal in certain cases when
you modify the work.) You may place additional permissions on material, added
by you to a covered work, for which you have or can give appropriate copyright
permission.
Notwithstanding any other provision of this License, for material you add to a
covered work, you may (if authorized by the copyright holders of that material)
supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of
sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author
attributions in that material or in the Appropriate Legal Notices displayed
by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring
that modified versions of such material be marked in reasonable ways as
different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of
the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by
anyone who conveys the material (or modified versions of it) with
contractual assumptions of liability to the recipient, for any liability
that these contractual assumptions directly impose on those licensors and
authors.
All other non-permissive additional terms are considered “further restrictions”
within the meaning of section 10. If the Program as you received it, or any
part of it, contains a notice stating that it is governed by this License along
with a term that is a further restriction, you may remove that term. If a
license document contains a further restriction but permits relicensing or
conveying under this License, you may add to a covered work material governed
by the terms of that license document, provided that the further restriction
does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place,
in the relevant source files, a statement of the additional terms that apply
to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a
separately written license, or stated as exceptions; the above requirements
apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided
under this License. Any attempt otherwise to propagate or modify it is void,
and will automatically terminate your rights under this License (including any
patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a
particular copyright holder is reinstated (a) provisionally, unless and until
the copyright holder explicitly and finally terminates your license, and (b)
permanently, if the copyright holder fails to notify you of the violation by
some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated
permanently if the copyright holder notifies you of the violation by some
reasonable means, this is the first time you have received notice of violation
of this License (for any work) from that copyright holder, and you cure the
violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses
of parties who have received copies or rights from you under this License. If
your rights have been terminated and not permanently reinstated, you do not
qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy
of the Program. Ancillary propagation of a covered work occurring solely as a
consequence of using peer-to-peer transmission to receive a copy likewise does
not require acceptance. However, nothing other than this License grants you
permission to propagate or modify any covered work. These actions infringe
copyright if you do not accept this License. Therefore, by modifying or
propagating a covered work, you indicate your acceptance of this License to do
so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a
license from the original licensors, to run, modify and propagate that work,
subject to this License. You are not responsible for enforcing compliance by
third parties with this License.
An “entity transaction” is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered work
results from an entity transaction, each party to that transaction who receives
a copy of the work also receives whatever licenses to the work the party's
predecessor in interest had or could give under the previous paragraph, plus a
right to possession of the Corresponding Source of the work from the
predecessor in interest, if the predecessor has it or can get it with
reasonable efforts.
You may not impose any further restrictions on the exercise of the rights
granted or affirmed under this License. For example, you may not impose a
license fee, royalty, or other charge for exercise of rights granted under this
License, and you may not initiate litigation (including a cross-claim or
counterclaim in a lawsuit) alleging that any patent claim is infringed by
making, using, selling, offering for sale, or importing the Program or any
portion of it.
11. Patents.
A “contributor” is a copyright holder who authorizes use under this License of
the Program or a work on which the Program is based. The work thus licensed is
called the contributor's “contributor version”.
A contributor's “essential patent claims” are all patent claims owned or
controlled by the contributor, whether already acquired or hereafter acquired,
that would be infringed by some manner, permitted by this License, of making,
using, or selling its contributor version, but do not include claims that would
be infringed only as a consequence of further modification of the contributor
version. For purposes of this definition, “control” includes the right to grant
patent sublicenses in a manner consistent with the requirements of this
License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent
license under the contributor's essential patent claims, to make, use, sell,
offer for sale, import and otherwise run, modify and propagate the contents of
its contributor version.
In the following three paragraphs, a “patent license” is any express agreement
or commitment, however denominated, not to enforce a patent (such as an express
permission to practice a patent or covenant not to sue for patent infringement).
To “grant” such a patent license to a party means to make such an agreement or
commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the
Corresponding Source of the work is not available for anyone to copy, free of
charge and under the terms of this License, through a publicly available
network server or other readily accessible means, then you must either (1)
cause the Corresponding Source to be so available, or (2) arrange to deprive
yourself of the benefit of the patent license for this particular work, or (3)
arrange, in a manner consistent with the requirements of this License, to
extend the patent license to downstream recipients. “Knowingly relying” means
you have actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work in a
country, would infringe one or more identifiable patents in that country that
you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you
convey, or propagate by procuring conveyance of, a covered work, and grant a
patent license to some of the parties receiving the covered work authorizing
them to use, propagate, modify or convey a specific copy of the covered work,
then the patent license you grant is automatically extended to all recipients
of the covered work and works based on it.
A patent license is “discriminatory” if it does not include within the scope of
its coverage, prohibits the exercise of, or is conditioned on the non-exercise
of one or more of the rights that are specifically granted under this License.
You may not convey a covered work if you are a party to an arrangement with a
third party that is in the business of distributing software, under which you
make payment to the third party based on the extent of your activity of
conveying the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory patent
license (a) in connection with copies of the covered work conveyed by you (or
copies made from those copies), or (b) primarily for and in connection with
specific products or compilations that contain the covered work, unless you
entered into that arrangement, or that patent license was granted, prior to
28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied
license or other defenses to infringement that may otherwise be available to
you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not excuse
you from the conditions of this License. If you cannot convey a covered work so
as to satisfy simultaneously your obligations under this License and any other
pertinent obligations, then as a consequence you may not convey it at all. For
example, if you agree to terms that obligate you to collect a royalty for
further conveying from those to whom you convey the Program, the only way you
could satisfy both those terms and this License would be to refrain entirely
from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to
link or combine any covered work with a work licensed under version 3 of the
GNU Affero General Public License into a single combined work, and to convey
the resulting work. The terms of this License will continue to apply to the
part which is the covered work, but the special requirements of the GNU Affero
General Public License, section 13, concerning interaction through a network
will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU
General Public License from time to time. Such new versions will be similar in
spirit to the present version, but may differ in detail to address new problems
or concerns.
Each version is given a distinguishing version number. If the Program specifies
that a certain numbered version of the GNU General Public License “or any later
version” applies to it, you have the option of following the terms and
conditions either of that numbered version or of any later version published by
the Free Software Foundation. If the Program does not specify a version number
of the GNU General Public License, you may choose any version ever published by
the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the
GNU General Public License can be used, that proxy's public statement of
acceptance of a version permanently authorizes you to choose that version for
the Program.
Later license versions may give you additional or different permissions.
However, no additional obligations are imposed on any author or copyright
holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE
LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER
PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE
QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE
DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR
CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY
COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS
PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL,
INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE
THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED
INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE
PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY
HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot
be given local legal effect according to their terms, reviewing courts shall
apply local law that most closely approximates an absolute waiver of all civil
liability in connection with the Program, unless a warranty or assumption of
liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible
use to the public, the best way to achieve this is to make it free software
which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach
them to the start of each source file to most effectively state the exclusion
of warranty; and each file should have at least the “copyright” line and a
pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation, either version 3 of the License, or (at your option) any later
version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with
this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like
this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author> This program comes with
ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and
you are welcome to redistribute it under certain conditions; type `show c' for
details.
The hypothetical commands `show w' and `show c' should show the appropriate parts
of the General Public License. Of course, your program's commands might be
different; for a GUI interface, you would use an “about box”.
You should also get your employer (if you work as a programmer) or school, if
any, to sign a “copyright disclaimer” for the program, if necessary. For more
information on this, and how to apply and follow the GNU GPL, see
http://www.gnu.org/licenses/.
The GNU General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may consider
it more useful to permit linking proprietary applications with the library. If
this is what you want to do, use the GNU Lesser General Public License instead
of this License. But first, please read
http://www.gnu.org/philosophy/why-not-lgpl.html.
README.md
@@ -2,116 +2,138 @@
### Tinfoil Chat
[![Build Status](https://travis-ci.org/maqp/tfc.svg?branch=master)](https://travis-ci.org/maqp/tfc) [![Coverage Status](https://coveralls.io/repos/github/maqp/tfc/badge.svg?branch=master)](https://coveralls.io/github/maqp/tfc?branch=master)
[![Build Status](https://travis-ci.org/maqp/tfc.svg?branch=master)](https://travis-ci.org/maqp/tfc)
[![Coverage Status](https://coveralls.io/repos/github/maqp/tfc/badge.svg?branch=master)](https://coveralls.io/github/maqp/tfc?branch=master)
Tinfoil Chat (TFC) is a high assurance encrypted messaging system that
operates on top of existing IM clients. The
[free and open source software](https://www.gnu.org/philosophy/free-sw.html)
is used together with free hardware to protect users from
Tinfoil Chat (TFC) is a
[FOSS](https://www.gnu.org/philosophy/free-sw.html)+[FHD](https://www.gnu.org/philosophy/free-hardware-designs.en.html)
messaging system that relies on high assurance hardware architecture to protect
users from
[passive eavesdropping](https://en.wikipedia.org/wiki/Upstream_collection),
[active MITM attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack)
and [remote CNE](https://www.youtube.com/watch?v=3euYBPlX9LM) practised by
organized crime and nation state attackers.
[XSalsa20](https://cr.yp.to/snuffle/salsafamily-20071225.pdf)
encryption and
[Poly1305-AES](https://cr.yp.to/mac/poly1305-20050329.pdf)
MACs provide
[end-to-end encrypted](https://en.wikipedia.org/wiki/End-to-end_encryption)
communication with
[deniable authentication](https://en.wikipedia.org/wiki/Deniable_encryption#Deniable_authentication):
Symmetric keys are either pre-shared, or exchanged using
[X25519](https://cr.yp.to/ecdh/curve25519-20060209.pdf),
the base-10 fingerprints of which are verified via out-of-band channel. TFC provides
per-packet forward secrecy with
[hash ratchet](https://en.wikipedia.org/wiki/Double_Ratchet_Algorithm)
the KDF of which chains
[SHA3-256](http://keccak.noekeon.org/Keccak-implementation-3.2.pdf),
[Blake2s](https://blake2.net/blake2_20130129.pdf)
and
[SHA256](http://www.iwar.org.uk/comsec/resources/cipher/sha256-384-512.pdf).
[remote exfiltration](https://www.youtube.com/watch?v=3euYBPlX9LM)
(i.e., hacking) practised by organized crime and nation state actors.
The software is used in hardware configuration that provides strong endpoint
security: Encryption and decryption are separated on two isolated computers.
The split
##### State-of-the-art cryptography
TFC uses
[XChaCha20](https://cr.yp.to/chacha/chacha-20080128.pdf)-[Poly1305](https://cr.yp.to/mac/poly1305-20050329.pdf)
[end-to-end encryption](https://en.wikipedia.org/wiki/End-to-end_encryption)
with
[deniable authentication](https://en.wikipedia.org/wiki/Deniable_encryption#Deniable_authentication).
The symmetric keys are either
[pre-shared](https://en.wikipedia.org/wiki/Pre-shared_key),
or exchanged using
[X448](https://eprint.iacr.org/2015/625.pdf),
the base-10
[fingerprints](https://en.wikipedia.org/wiki/Public_key_fingerprint)
of which are verified via out-of-band channel. TFC provides per-message
[forward secrecy](https://en.wikipedia.org/wiki/Forward_secrecy)
with
[BLAKE2b](https://blake2.net/blake2.pdf)
based
[hash ratchet](https://en.wikipedia.org/wiki/Double_Ratchet_Algorithm).
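The hash ratchet can be sketched in a few lines of Python. This is an illustrative model only: the `person` domain-separation labels and key-handling details below are invented for the example and do not reproduce TFC's actual KDF chain.

```python
import hashlib

def ratchet(chain_key: bytes) -> tuple:
    """One hash ratchet step: derive the next chain key and a one-time
    message key from the current chain key (illustrative sketch)."""
    # Domain-separate the two outputs with BLAKE2b's `person` parameter
    # (labels here are invented for the example).
    next_key = hashlib.blake2b(chain_key, digest_size=32, person=b'chain').digest()
    msg_key  = hashlib.blake2b(chain_key, digest_size=32, person=b'message').digest()
    # Deleting the old chain key after each step is what provides
    # per-message forward secrecy: past message keys cannot be recomputed.
    return next_key, msg_key

ck = bytes(32)          # stand-in initial chain key
ck, mk1 = ratchet(ck)   # key for message 1
ck, mk2 = ratchet(ck)   # key for message 2
```

Because BLAKE2b is one-way, an attacker who compromises the current chain key learns nothing about the keys of messages already delivered.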
All persistent user data is encrypted locally using XChaCha20-Poly1305, the key
of which is derived from password and salt using
[Argon2d](https://github.com/P-H-C/phc-winner-argon2/blob/master/argon2-specs.pdf).
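The password-to-key pattern used for the local databases looks roughly as follows. Argon2d is not in the Python standard library, so `hashlib.scrypt` stands in here purely to illustrate the password + salt to key derivation; the function name and cost parameters are illustrative, not TFC's.

```python
import hashlib
import os

def derive_database_key(password: str, salt: bytes) -> bytes:
    """Derive a 32-byte database encryption key from a password and salt.
    scrypt stands in for Argon2d; cost parameters are illustrative."""
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)                                   # stored beside the database
key  = derive_database_key("correct horse battery staple", salt)
```

The salt is random but not secret; it only ensures that identical passwords yield different keys across databases.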
Key generation of TFC relies on Linux kernel's
[getrandom()](https://manpages.debian.org/testing/manpages-dev/getrandom.2.en.html),
a syscall for its ChaCha20 based CSPRNG.
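On Linux, Python exposes that syscall directly as `os.getrandom()`; the sketch below shows symmetric-key generation from the kernel CSPRNG, with `os.urandom` as a portability fallback (the function name is invented for the example).

```python
import os

def generate_key() -> bytes:
    """Return 32 bytes of kernel-CSPRNG output for use as a symmetric key."""
    try:
        # Direct getrandom() syscall (Linux, Python 3.6+); blocks until
        # the kernel entropy pool has been initialized.
        return os.getrandom(32)
    except AttributeError:
        # Platforms without getrandom(); on Linux this reads the same CSPRNG.
        return os.urandom(32)

key = generate_key()
```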
##### First messaging system with endpoint security
The software is used in hardware configuration that provides strong
[endpoint security](https://en.wikipedia.org/wiki/Endpoint_security):
Encryption and decryption are separated on two isolated computers. The split
[TCB](https://en.wikipedia.org/wiki/Trusted_computing_base)
interacts with a third, networked computer through unidirectional
[serial](https://en.wikipedia.org/wiki/Universal_asynchronous_receiver/transmitter)
interfaces. Direction of data flow is enforced with free hardware design
[data diodes](https://en.wikipedia.org/wiki/Unidirectional_network);
Lack of bidirectional channels to isolated computers prevents insertion of malware
into the encrypting computer and exfiltration of keys and plaintexts from the
decrypting computer -- even with exploits against
[zero-day vulnerabilities](https://en.wikipedia.org/wiki/Zero-day_(computing))
in software and operating systems running on the TCB halves.
interacts with a third, Networked Computer, through unidirectional
[serial](https://en.wikipedia.org/wiki/Universal_asynchronous_receiver/transmitter)
interfaces. The direction of data flow is enforced with free hardware design
[data diodes](https://en.wikipedia.org/wiki/Unidirectional_network),
a technology whose certified implementations are typically found in critical
infrastructure protection and in government networks where the classification
level of data varies.
TFC supports multiple IM accounts per user to hide the social graph of
communicating parties, even during end-to-end encrypted group conversations.
TFC allows a group or two parties to defeat metadata about quantity and
schedule of communication with traffic masking, where messages and background
file transmission is inserted into a constant stream of encrypted noise traffic.
##### Anonymous by design
TFC routes all communication through next generation
[Tor](https://www.torproject.org/about/overview.html.en)
([v3](https://trac.torproject.org/projects/tor/wiki/doc/NextGenOnions))
[Onion Services](https://www.torproject.org/docs/onion-services)
to hide metadata about real-life identity and geolocation of users, when and how
much they communicate, the social graph of the users and the fact TFC is
running. TFC also features a traffic masking mode that hides the type, quantity,
and schedule of communication, even if the Networked Computer is compromised.
### How it works
![](https://cs.helsinki.fi/u/oottela/tfcwiki/tfc_overview.jpg)
![](https://www.cs.helsinki.fi/u/oottela/wiki/readme/how_it_works.png)
[System overview](https://www.cs.helsinki.fi/u/oottela/wiki/readme/how_it_works.png)
TFC uses three computers per endpoint. Alice enters her messages and commands
to Transmitter program running on her transmitter computer (TxM), a TCB
separated from network. The Transmitter program encrypts and signs plaintext
data and relays the ciphertext from TxM to her networked computer (NH) through a
serial interface and a hardware data diode.
TFC uses three computers per endpoint: Source Computer, Networked Computer, and
Destination Computer.
Messages and commands received by the NH are relayed to the IM client (Pidgin or
Finch), and to Alice's receiver computer (RxM) via another serial interface and
data diode. The Receiver program on Alice's RxM authenticates, decrypts and
processes the received messages and commands.
Alice enters messages and commands to Transmitter Program running on her Source
Computer. Transmitter Program encrypts and signs plaintext data and relays the
ciphertexts from Source Computer to her Networked Computer through a serial
interface and a hardware data diode.
The IM client sends the packet either directly or through the Tor network to the
IM server, which then forwards it directly (or again through Tor) to Bob.
Relay Program on Alice's Networked Computer relays commands and copies of
outgoing messages to her Destination Computer via the serial interface and data
diode. Receiver Program on Alice's Destination Computer authenticates, decrypts
and processes the received message/command.
The IM client on Bob's NH forwards the packet to the nh.py plugin program, which
then forwards it to Bob's RxM (again through a serial interface and data diode).
Bob's Receiver program on his RxM then authenticates, decrypts, and processes
the packet.
Alice's Relay Program shares messages and files to Bob over Tor Onion Service.
The web client of Bob's Relay Program fetches the ciphertext from Alice's Onion
Service and forwards it to his Destination Computer (again through a serial
interface and data diode). Bob's Receiver Program then authenticates, decrypts
and processes the received message/file.
When Bob responds, he will type the message to his transmitter computer and in
the end, Alice reads the message from her receiver computer.
When Bob responds, he will type his message to his Source Computer, and after a
mirrored process, Alice reads the message from her Destination Computer.
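The pipeline described above can be modeled as two one-way channels per endpoint. The sketch below is a toy: in-process queues stand in for the diode-enforced serial links, the `enc(...)` string is a placeholder for real encryption, and all names are invented for illustration.

```python
from queue import Queue

# Each Queue stands in for a serial link whose direction is enforced
# in hardware by a data diode.
source_to_networked = Queue()       # Source Computer -> Networked Computer
networked_to_destination = Queue()  # Networked Computer -> Destination Computer

def transmitter_program(plaintext: str) -> None:
    """Encrypt-then-send (encryption elided in this sketch)."""
    ciphertext = f"enc({plaintext})"
    source_to_networked.put(ciphertext)

def relay_program() -> None:
    """Forward ciphertext onward; the relay never sees plaintext or keys."""
    ciphertext = source_to_networked.get()
    networked_to_destination.put(ciphertext)

def receiver_program() -> str:
    """Authenticate-then-decrypt (decryption elided in this sketch)."""
    ciphertext = networked_to_destination.get()
    return ciphertext[len("enc("):-1]
```

Note there is no queue pointing back toward the Source Computer, and none pointing out of the Destination Computer: that asymmetry is the whole security argument.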
### Why keys can not be exfiltrated
### Why keys and plaintexts cannot be exfiltrated
1. Malware that exploits an unknown vulnerability in RxM can infiltrate the
system, but is unable to exfiltrate keys or plaintexts, as data diode prevents
all outbound traffic.
TFC is designed to combine the
[classical and alternative data diode models](https://en.wikipedia.org/wiki/Unidirectional_network#Applications)
to provide hardware enforced endpoint security:
2. Malware cannot infiltrate TxM as the data diode prevents all inbound traffic.
The only data input to TxM is the public key of contact (e.g.
`5J8 C2h AVE Wv2 cGz oSd oQv Nkm 9tu ABP qwt Kz8 ou4 xvA HGx HUh sJC`),
which is manually typed by the user.
1. The Destination Computer uses the classical data diode model. It is designed
to receive data from the insecure Networked Computer while preventing the export
of any data back to the Networked Computer. Not even malware on Destination
Computer can exfiltrate keys or plaintexts as the data diode prevents all
outbound traffic.
3. The NH is assumed to be compromised: all sensitive data that passes through
it is always encrypted and signed.
2. The Source Computer uses the alternative data diode model that is designed to
allow the export of data to the Networked Computer. The data diode protects the
Source Computer from attacks by physically preventing all inbound traffic. To
allow key exchanges, the short elliptic-curve public keys are input manually by
the user.
![](https://cs.helsinki.fi/u/oottela/tfcwiki/tfc_attacks.jpg)
3. The Networked Computer is assumed to be compromised. All sensitive data that
passes through it is encrypted and signed with no exceptions.
![](https://www.cs.helsinki.fi/u/oottela/wiki/readme/attacks.png)
[Exfiltration security](https://www.cs.helsinki.fi/u/oottela/wiki/readme/attacks.png)
#### Data diode
Optical repeater inside the
[optocoupler](https://en.wikipedia.org/wiki/Opto-isolator)
of the data diode (below) enforces direction of data transmission with the laws
of physics.
[optocouplers](https://en.wikipedia.org/wiki/Opto-isolator)
of the data diode (below) enforce direction of data transmission with the
fundamental laws of physics.
![](https://www.cs.helsinki.fi/u/oottela/tfcwiki/ttl_dd_pb/23.jpg)
![](https://www.cs.helsinki.fi/u/oottela/wiki/readme/readme_dd.jpg)
[TFC data diode](https://www.cs.helsinki.fi/u/oottela/wiki/readme/readme_dd.jpg)
### Supported Operating Systems
#### TxM and RxM
- *buntu 17.04 (64-bit)
#### Source/Destination Computer
- *buntu 18.04 (or newer)
#### NH
- Tails 3.1
- *buntu 17.04 (64-bit)
#### Networked Computer
- Tails (Debian Buster or newer)
- *buntu 18.04 (or newer)
### More information
@@ -127,4 +149,4 @@ Software<Br>
&nbsp;&nbsp;&nbsp;&nbsp;[Installation](https://github.com/maqp/tfc/wiki/Installation)<br>
&nbsp;&nbsp;&nbsp;&nbsp;[How to use](https://github.com/maqp/tfc/wiki/How-to-use)<br>
[Update Log](https://github.com/maqp/tfc/wiki/Update-Log)<br>
[Update log](https://github.com/maqp/tfc/wiki/Update-Log)<br>
dd.py
@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,7 +16,7 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import multiprocessing.connection
@@ -26,55 +27,77 @@ import time
from multiprocessing import Process, Queue
from typing import Tuple
from src.common.misc import get_terminal_height, ignored
from src.common.output import c_print, clear_screen
from src.common.misc import get_terminal_height, get_terminal_width, ignored, monitor_processes
from src.common.output import clear_screen
from src.common.statics import *
def draw_frame(argv: str, message: str, high: bool) -> None:
"""Draw data diode animation frame.
def draw_frame(argv: str, # Arguments for simulator position/orientation
message: str, # Status message to print
high: bool = False # Determines the signal's state (high/low)
) -> None:
"""Draw a data diode animation frame."""
l, r, blink, arrow = dict(scnclr=('Tx', 'Rx', '>', ''),
scncrl=('Rx', 'Tx', '<', ''),
ncdclr=('Rx', 'Tx', '<', ''),
ncdcrl=('Tx', 'Rx', '>', ''))[argv]
:param argv: Arguments for simulator position/orientation
:param message: Status message to print
:param high: Determines signal's state (high/low)
:return: None
"""
l, r, symbol, arrow = dict(txnhlr=('Tx', 'Rx', '>', ''),
nhrxrl=('Tx', 'Rx', '>', ''),
txnhrl=('Rx', 'Tx', '<', ''),
nhrxlr=('Rx', 'Tx', '<', ''))[argv]
arrow = ' ' if message == 'Idle' else arrow
blink = symbol if high else ' '
arrow = arrow if message != 'Idle' else ' '
blink = blink if high else ' '
offset_from_center = 4
print(((get_terminal_height() // 2) - offset_from_center) * '\n')
terminal_width = get_terminal_width()
def c_print(msg: str) -> None:
"""Print string in the center of the screen."""
print(msg.center(terminal_width))
c_print(message)
c_print(arrow)
c_print( "─────╮ " + ' ' + " ╭─────" )
c_print(f" {l}" + blink + f"{r} ")
c_print( "─────╯ " + ' ' + " ╰─────" )
c_print( "────" + ' ' + "────" )
c_print(f" {l}" + blink + f"{r} ")
c_print( "────" + ' ' + "────" )
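The nested c_print helper above centers every frame line with str.center, using the width returned by get_terminal_width. A minimal standalone sketch of the same idea (the explicit width parameter and the shutil fallback are illustrative assumptions, not the TFC API):

```python
import shutil


def c_print(msg: str, width: int = 0) -> None:
    """Print a string centered within the given (or detected) terminal width."""
    if not width:
        # Fall back to the current terminal size when no width is given.
        width = shutil.get_terminal_size().columns
    print(msg.center(width))


c_print('Tx Rx', width=11)  # prints '   Tx Rx   '
```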
def animate(argv: str) -> None:
"""Animate the data diode."""
"""Animate the data diode transmission indicator."""
animation_length = 16
for i in range(animation_length):
clear_screen()
draw_frame(argv, 'Data flow', high=(i % 2 == 0))
time.sleep(0.04)
clear_screen()
draw_frame(argv, 'Idle', high=False)
draw_frame(argv, 'Idle')
def tx_loop(io_queue: 'Queue', output_socket: int, argv: str) -> None:
"""Loop that sends packets to receiving program."""
draw_frame(argv, 'Idle', high=False)
def rx_loop(io_queue: 'Queue', # Queue through which to push datagrams
input_socket: int # Socket number for Transmitter/Relay Program
) -> None:
"""Read datagrams from a transmitting program."""
listener = multiprocessing.connection.Listener((LOCALHOST, input_socket))
interface = listener.accept()
while True:
try:
interface = multiprocessing.connection.Client(('localhost', output_socket))
io_queue.put(interface.recv())
except KeyboardInterrupt:
pass
except EOFError:
sys.exit(0)
def tx_loop(io_queue: 'Queue', # Queue through which to push datagrams
output_socket: int, # Socket number for Relay/Receiver Program
argv: str # Arguments for simulator position/orientation
) -> None:
"""Send queued datagrams to a receiving program."""
draw_frame(argv, 'Idle')
while True:
try:
interface = multiprocessing.connection.Client((LOCALHOST, output_socket))
break
except socket.error:
time.sleep(0.01)
@@ -87,29 +110,14 @@ def tx_loop(io_queue: 'Queue', output_socket: int, argv: str) -> None:
interface.send(io_queue.get())
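tx_loop polls Client() against the peer's Listener, sleeping 10 ms between socket.error failures until the connection is accepted. A self-contained sketch of that handshake using the same multiprocessing.connection primitives (the ephemeral port, thread, and helper names here are illustrative; the real programs use fixed socket numbers from src.common.statics):

```python
import multiprocessing.connection
import socket
import threading
import time

# Bind a Listener on an ephemeral port, standing in for the rx side.
listener = multiprocessing.connection.Listener(('127.0.0.1', 0))
address = listener.address


def serve_one() -> None:
    """Accept a single connection and echo one object back."""
    with listener.accept() as conn:
        conn.send(conn.recv())


threading.Thread(target=serve_one, daemon=True).start()


def connect_with_retry(addr):
    """Retry Client() until the Listener accepts, as tx_loop does."""
    while True:
        try:
            return multiprocessing.connection.Client(addr)
        except socket.error:
            time.sleep(0.01)


client = connect_with_retry(address)
client.send('datagram')
result = client.recv()
print(result)  # prints 'datagram'
```

Catching socket.error (an alias of OSError in Python 3) also covers ConnectionRefusedError, which is what Client() raises while the Listener is not yet up.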
def rx_loop(io_queue: 'Queue', input_socket: int) -> None:
"""Loop that reads packets from transmitting program."""
listener = multiprocessing.connection.Listener(('localhost', input_socket))
interface = listener.accept()
while True:
time.sleep(0.01)
try:
io_queue.put(interface.recv())
except KeyboardInterrupt:
pass
except EOFError:
sys.exit(0)
def process_arguments() -> Tuple[str, int, int]:
"""Load simulator settings from command line arguments."""
try:
argv = str(sys.argv[1])
input_socket, output_socket = dict(txnhlr=(TXM_DD_LISTEN_SOCKET, NH_LISTEN_SOCKET),
txnhrl=(TXM_DD_LISTEN_SOCKET, NH_LISTEN_SOCKET),
nhrxlr=(RXM_DD_LISTEN_SOCKET, RXM_LISTEN_SOCKET),
nhrxrl=(RXM_DD_LISTEN_SOCKET, RXM_LISTEN_SOCKET))[argv]
input_socket, output_socket = dict(scnclr=(SRC_DD_LISTEN_SOCKET, RP_LISTEN_SOCKET),
scncrl=(SRC_DD_LISTEN_SOCKET, RP_LISTEN_SOCKET),
ncdclr=(DST_DD_LISTEN_SOCKET, DST_LISTEN_SOCKET),
ncdcrl=(DST_DD_LISTEN_SOCKET, DST_LISTEN_SOCKET))[argv]
return argv, input_socket, output_socket
@@ -117,32 +125,47 @@ def process_arguments() -> Tuple[str, int, int]:
clear_screen()
print("\nUsage: python3.6 dd.py [OPTION]\n\n"
"\nMandatory arguments"
"\n txnhlr Simulate data diode between TxM and NH (left to right)"
"\n txnhrl Simulate data diode between TxM and NH (right to left)"
"\n nhrxlr Simulate data diode between NH and RxM (left to right)"
"\n nhrxrl Simulate data diode between NH and RxM (right to left)")
"\n Argument Simulate data diodes between..."
"\n scnclr Source Computer and Networked Computer (left to right)"
"\n scncrl Source Computer and Networked Computer (right to left)"
"\n ncdclr Networked Computer and Destination Computer (left to right)"
"\n ncdcrl Networked Computer and Destination Computer (right to left)")
sys.exit(1)
def main() -> None:
"""Read argument from command line and launch processes."""
time.sleep(0.5)
"""
Read argument from the command line and launch the data diode simulator.
This application is the data diode simulator used to visualize
data transfer through data diode #1 (between the Source Computer
and the Networked Computer) or through data diode #2 (between the
Networked Computer and the Destination Computer). The local
testing terminal multiplexer configurations that use data diode
simulators run two instances of this program.
The visualization is an indicator ('<' or '>') that blinks when
data passes from one program to another. The simulator provides
none of the security properties to the endpoint that the hardware
data diodes do.
The visualization intentionally makes data transfer between
programs slower than it would be over actual serial interfaces,
so the user can follow the movement of data from one program to
another by eye.
"""
time.sleep(0.5) # Wait for terminal multiplexer size to stabilize
argv, input_socket, output_socket = process_arguments()
io_queue = Queue()
process_list = [Process(target=tx_loop, args=(io_queue, output_socket, argv)),
Process(target=rx_loop, args=(io_queue, input_socket ))]
io_queue = Queue() # type: Queue
process_list = [Process(target=rx_loop, args=(io_queue, input_socket )),
Process(target=tx_loop, args=(io_queue, output_socket, argv))]
for p in process_list:
p.start()
while True:
with ignored(EOFError, KeyboardInterrupt):
time.sleep(0.1)
if not all([p.is_alive() for p in process_list]):
for p in process_list:
p.terminate()
sys.exit(0)
monitor_processes(process_list, NC, {EXIT_QUEUE: Queue()}, error_exit_code=0)
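The removed inline loop above (poll is_alive, terminate all children on the first failure, then exit) was factored into monitor_processes in src/common/misc.py. This sketch reproduces only the inlined logic that the diff deletes; the name, parameters, and exit handling are assumptions, not the helper's real signature:

```python
import sys
import time


def monitor(process_list, poll_interval=0.1, exit_code=0):
    """Poll child processes; when any one dies, terminate them all and exit."""
    while True:
        time.sleep(poll_interval)
        if not all(p.is_alive() for p in process_list):
            for p in process_list:
                p.terminate()
            sys.exit(exit_code)
```

Terminating every sibling as soon as one process dies prevents a half-alive simulator from silently dropping datagrams in one direction.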
if __name__ == '__main__':


@@ -1,6 +1,7 @@
#!/usr/bin/env bash
# Copyright (C) 2013-2017 Markus Ottela
# TFC - Onion-routed, endpoint secure messaging system
# Copyright (C) 2013-2019 Markus Ottela
#
# This file is part of TFC.
#
@@ -13,180 +14,497 @@
# PURPOSE. See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with TFC. If not, see <http://www.gnu.org/licenses/>.
# along with TFC. If not, see <https://www.gnu.org/licenses/>.
dl_verify () {
if ! [ -z "$2" ]; then
mkdir -p $2 2>/dev/null
fi
# Download a TFC file from the GitHub repository and authenticate it
# by comparing its SHA512 hash against the hash pinned in this
# installer file.
wget https://raw.githubusercontent.com/maqp/tfc/master/$2$3 -q -O $2$3
torify wget https://raw.githubusercontent.com/maqp/tfc/master/$2$3 -q
if sha512sum $2$3 | grep -Eo '^\w+' | cmp -s <(echo "$1")
then
echo Valid SHA512 hash for file $2$3
# Check the SHA512 hash of the downloaded file
if sha512sum $3 | grep -Eo '^\w+' | cmp -s <(echo "$1"); then
if [[ ${sudo_pwd} ]]; then
echo ${sudo_pwd} | sudo -S mkdir --parents /opt/tfc/$2
echo ${sudo_pwd} | sudo -S mv $3 /opt/tfc/$2
echo ${sudo_pwd} | sudo -S chown root /opt/tfc/$2$3
echo ${sudo_pwd} | sudo -S chmod 644 /opt/tfc/$2$3
else
echo Error: $2$3 had invalid SHA512 hash
sudo mkdir --parents /opt/tfc/$2
sudo mv $3 /opt/tfc/$2
sudo chown root /opt/tfc/$2$3
sudo chmod 644 /opt/tfc/$2$3
fi
# Check the SHA512 hash of the moved file
if sha512sum /opt/tfc/$2$3 | grep -Eo '^\w+' | cmp -s <(echo "$1"); then
echo OK - Pinned SHA512 hash matched file /opt/tfc/$2$3
else
echo Error: /opt/tfc/$2$3 had invalid SHA512 hash
exit 1
fi
else
echo Error: $3 had invalid SHA512 hash
exit 1
fi
}
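dl_verify authenticates each downloaded file by comparing its SHA512 hex digest (sha512sum output) against the hash pinned in the installer, both before and after moving it into /opt/tfc. The equivalent check in Python, shown here as an illustrative sketch (verify_sha512 and EMPTY are names invented for this example); note that the hash repeated for the empty __init__.py files above is indeed SHA512 of zero bytes:

```python
import hashlib


def verify_sha512(data: bytes, pinned_hex: str) -> bool:
    """Return True iff the data's SHA512 hex digest matches the pinned value."""
    return hashlib.sha512(data).hexdigest() == pinned_hex


# SHA512 of the empty string -- the hash pinned for empty __init__.py files.
EMPTY = ('cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce'
         '47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e')

print(verify_sha512(b'', EMPTY))  # prints True
```

Pinning full SHA512 digests in the installer means a compromised download server (or man-in-the-middle on the wget fetch) cannot substitute files without also modifying the installer the user already has.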
download_common () {
dl_verify f91061cbff71f74b65f3dc1df5420d95a6a0f152e7fbda1aa8be1cccbad37966310b8e89f087a4bb0da8ef3b3e1d0af87c1210b2f930b0a43b90b59e74dfb1ed '' LICENSE.md
dl_verify d361e5e8201481c6346ee6a886592c51265112be550d5224f1a7a6e116255c2f1ab8788df579d9b8372ed7bfd19bac4b6e70e00b472642966ab5b319b99a2686 '' LICENSE
dl_verify 04bc1b0bf748da3f3a69fda001a36b7e8ed36901fa976d6b9a4da0847bb0dcaf20cdeb884065ecb45b80bd520df9a4ebda2c69154696c63d9260a249219ae68a '' LICENSE-3RD-PARTY
dl_verify 6d93d5513f66389778262031cbba95e1e38138edaec66ced278db2c2897573247d1de749cf85362ec715355c5dfa5c276c8a07a394fd5cf9b45c7a7ae6249a66 '' tfc.png
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e src/ __init__.py
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e src/common/ __init__.py
dl_verify 094943d26876c8e494e4ffbdaff57557004150193876b6131010e86ce098f3178bf4b813710ac176f361c42582f9b91b96a6526461b39e9080873dc4f8fd792e src/common/ crypto.py
dl_verify b4407e85a84d6e070b252f2c1c91268005d1ae6f69c9309723d2564d89b585e558fa80b7a8f1f52cc7d40e6595c3395cb5b68e3594af9d3e720a4a31ee8ba592 src/common/ db_contacts.py
dl_verify 1cc269c493969ccf98ef51a89895d0f279efdcf0e5c89c2e2e384e0cc7f1fea425566bc619e02ff0ed5ab3d28c3bd9bad93652f08f088c2915cfc3d28cd00d76 src/common/ db_groups.py
dl_verify 0c27e847aee638883928f4437adb8077de2a9444e7f06f48c45ec17e46bda43d8434934b8a04cfc6cfb4006554b5578cfba402f9a4ef96f7329a33d26fc0ac39 src/common/ db_keys.py
dl_verify a38dd34dd681dc7993623921010d5e50ecee5192cd45e37db25a90ebe1e58c1a44864d95b11a607021773d6fe2578f1ac9eb287bfe6d5004a816f88770ab2b6b src/common/ db_logs.py
dl_verify 1516e939ff34838586389b4f920d310d79d09baa7173ef3a5a844d5982d747f4a120be9ac977189fd94d6b97792bb5e52ec78478781ecaa55d2643226a05fdd0 src/common/ db_masterkey.py
dl_verify c9ddfc92ec0043e3253950dd5d0b551bd5b92bc1c5b12aac14b99274e73d891dc10bc4081b9eae71f50af30a52d31507fef5ca309d9e6043aa93fd1dba5ff441 src/common/ db_settings.py
dl_verify a3911e2e60e31154f40d548edc7470c1ed963f4225e0005eba9499dd7b752879b5fd65fae983b513b0d76523b5a7cd3b9744721213a27f4e844a6c797e7780a0 src/common/ encoding.py
dl_verify f67c414fea948fd9b81bf8a53158b159085a34bae562d74cb2aa56fa317b65323b92a3a2d787377900cdecb65a1af8c224a9c7efd3969c377149284fd8a5882f src/common/ exceptions.py
dl_verify be34431336fb68429a9f6ec8603b9a475104a2e0c15b3c4beac63a50d2c4024863d769c7b8d154872afc80a0b8d82635448c29c89b40edcc74595db28a7364d4 src/common/ gateway.py
dl_verify aa1f94542fc78d4a9dd7212d02e4cf710ecbef1edc31662445e6682469e32059e5c3047fe512f751354c869fe9cb03bb3126ca987d7d1570ca9dacc1870ec759 src/common/ input.py
dl_verify 27b562f0d9083aa906465e9ece1817a3a03cf6980a9262ad1fc855e1989491d331871d41530919ee1cd35db8564f54b3c44492b6ef90f2836a2c3a8404f5b3d2 src/common/ misc.py
dl_verify 87e62112217263d4eda7d0a2a0cfdc0a3a698be136e650f3e32c7ffae7450706d059dc307abc40a1ce2b225c718ef34cca9ceaff1dcb51e28a2eb0972b9122cf src/common/ output.py
dl_verify 20a7ec5b54834c54fdaf889bb6261165b630f0f801a7055cab347d26e58cdde16d27d84ff0b437a318bdc5a12c575ee6e7f1d7d3c3897140f3c5ef1f75019f94 src/common/ path.py
dl_verify adea6b33ff23f9fe34539d38b3eb602b3a1075d92d9b8c5fdb4f12ebdf06fdcf6833edb3d94f91c4c0a2d160e0d152594aed776310cbd7cb5f2baf1579edd21d src/common/ reed_solomon.py
dl_verify 71f9221ad6ac787f1ee391487d5f14a31518c496e164022b83eac293d8e717751f1240449206b8f7cdee06fa625f407a32ba2add823f63d4b5eda073eb141308 src/common/ statics.py
dl_verify 003915a43670bbb3185e045de1d9cede67160d9da0a24a72650862e978106c451d94a2da4aa2e1d161315db7575251933b80881294f33f195531c75462bbcf9c src/common/ crypto.py
dl_verify 0dfae6aa49c399983a990ca672e24eef9aa3ed7782686dd6c78ab8041023650e195304a07d40b934ea6f73bb46189529983de4093144ffdef40e718263232365 src/common/ db_contacts.py
dl_verify 49ebf5dff5f34a373dccfaa0a8152e5bea11e6c3afc997d4c83d45b19351b62e0138555647c2ca796faf3cfc946f16d779af4ef9938b5ebffafa9ab155761696 src/common/ db_groups.py
dl_verify 157bc8b1cfea322118b880d9bcc76b695405668af718276246c334f76226781a55779da4adcea571472bfcc7ced2cdd908d49e181268707b16ef71ff4c8ff833 src/common/ db_keys.py
dl_verify 04cc3f2816b903d82e7baaa0bc9e406d7058c27537e8d07db67882a88deb4289fdff84150eb0dd1806721bf0ae1dd7f2757b916670eff6d1c122c660ac6d4ba2 src/common/ db_logs.py
dl_verify 8d53e7348abf71aa1e054e5e852e171e58ed409c394213d97edc392f016c38ce43ed67090d3623aaa5a3f335992fd5b0681cfb6b3170b639c2fa0e80a62af3a4 src/common/ db_masterkey.py
dl_verify 907c8997158a160b71bb964191848db42260a201e80b61133be1e7c7a650604792164499b85eaa4e84c58a7bc1598aff6ed10fda8442d60eb7f939d9de7f09c8 src/common/ db_onion.py
dl_verify 83b2a6d36de528106202eebccc50ca412fc4f0b6d0e5566c8f5e42e25dd18c67ae1b65cf4c19d3824123c59a23d6258e8af739c3d9147f2be04813c7ede3761d src/common/ db_settings.py
dl_verify 88f628cef1973cf0c9a9c8661a527570e01311efbbb6903760abec2b7ff6f4f42b3ff0e00c020d7b1912d66ac647b59b502942199334a83bb9d9dddc2a70c943 src/common/ encoding.py
dl_verify 0e3e6a40928ab781dbbca03f2378a14d6390444b13e85392ea4bdfb8e58ae63f25d6f55b2637f6749e463844784ea9242db5d18291e891ee88776d4c14498060 src/common/ exceptions.py
dl_verify 77b810f709739543dc40b1d1fbafb2a95d1c1772b929d3a4247c32e20b9bb40039c900ff4967c4b41118567463e59b7523fbbbf993b34251e46c60b8588f34ab src/common/ gateway.py
dl_verify 42742ab0e0f6f61bd6b8d7d32644a98e526fa7fd0fd7ed8e790c25e365874d77a6611849c168649160b84774059675a066dd0711db59ed41ffc449790fb5ffa0 src/common/ input.py
dl_verify 18efc508382167d3259c2eb2b8adcddda280c7dbc73e3b958a10cf4895c6eb8e7d4407bc4dc0ee1d0ab7cc974a609786649491874e72b4c31ad45b34d6e91be3 src/common/ misc.py
dl_verify f47308851d7f239237ed2ae82dd1e7cf92921c83bfb89ad44d976ebc0c78db722203c92a93b8b668c6fab6baeca8db207016ca401d4c548f505972d9aaa76b83 src/common/ output.py
dl_verify dc5fdd0f8262815386896e91e08324cda4aa27b5829d8f114e00128eb8e341c3d648ef2522f8eb5b413907975b1270771f60f9f6cdf0ddfaf01f288ba2768e14 src/common/ path.py
dl_verify f80a9906b7de273cec5ca32df80048a70ea95e7877cd093e50f9a8357c2459e5cffb9257c15bf0b44b5475cdd5aaf94eeec903cc72114210e19ac12f139e87f3 src/common/ reed_solomon.py
dl_verify 421fa2ec82f35a384baf5f5a4000afa4701e814ff28b4e8fa45478226cbf2f9272854ddf171def4ad7a489a77531457b9b6d62b68c4417b26b026e0ee6e521e8 src/common/ statics.py
}
download_nh () {
dl_verify 27a60f6f2c4024c41ae11669d6695662b47aa0b1efb21c6cc0af19a20ad66c6e8a34ac57db1558f1d5e84300d43618b72542bb80c3b0aa309fadeacaae14f339 '' nh.py
dl_verify 569f3baa7ad3589f8c95f9ae1c00f2fe19e4031b04f31e68536fb924b19d433adfeff788a6eeb21a4960e44d2f575eaa7479de268ca2333781d4de618295156f '' requirements-nh.txt
download_relay () {
dl_verify 9ff2e54072e9cd9a87d167961bb5dd299caa035f634c08223262cda562faf9407ec09435c63e9cce7cb4121a6273ae0300835334e03f859df3e7f85b367d685c '' relay.py
dl_verify ddcefcf52d992f9027b530471a213e224382db5fbb516cc8dee73d519e40110f9fcca1de834a34e226c8621a96870f546b9a6b2f0e937b11fd8cd35198589e8b '' requirements-relay.txt
dl_verify 3444adc5cd050351bc975397da22a04becefc49a69234bd9d6b41f2333feb5cf0a31765ad6c832f69280120d159e2792dba3d9ed0fd269e0b8e04ec053c2095d launchers/ TFC-NH.desktop
dl_verify 8138bb15be64281c35310a711a136d6953985a0819bc5e47c1b224a848c70a01a4f60bb56e04724a919b1f84a4adfe5bf090109ace48d294f19349c051d3e443 launchers/ TFC-NH-Tails.desktop
dl_verify f2b23d37a3753a906492fcb3e84df42b62bed660f568a0a5503b188f140fa91f86b6efa733b653fceff650168934e2f3f1174c892e7c28712eda7676b076dab8 launchers/ TFC-RP.desktop
dl_verify a86f3ac28bbd902dfec74451034c68c01e74bbe6b6ec609014329fba17cc1224dc34942b103620109ef19336daa72e50dae1a0b25a1a2720445863427724d544 launchers/ TFC-RP-Tails.desktop
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e src/nh/ __init__.py
dl_verify 5cfc25f56763c4ce96013eb3062ab62646f1a9300a8c596d83e4d7bb4e08754bcee4179301290489ab667ba2229d9a599767e2271f081d0035e4cf0783eddc95 src/nh/ commands.py
dl_verify 98c53fb80482e1941d74ce34b222c9457f4d2a346f352f7624f3e6376843598b3b2a3ad1136c3f6fc9e4df2e42f372d7470dcde2c8ada40b4cef896ae8ed61a5 src/nh/ gateway.py
dl_verify 4c293c3abd62aa0997014423d1b145df144247e834a552a1172a4c06e3dad487ac9c7c0ee56de74c29a4f89a538902206dfda62b8a105e47acb22b842d98f55e src/nh/ misc.py
dl_verify 93c7d4ec6f80e46b5a46a404a5eb676d8efd1700e74fdd06a65bc823fb566a6eee63bccd6da520e56bb54310089aebbffb12483a6c908c66348a4f34c13d600e src/nh/ pidgin.py
dl_verify 97a8d945ebf88708180186f6a7c19cf3bba314da656b46dae2a1fbbeaeda143fd3f31d2ba9ed1981960bd8b04c1143a4b580643595d394f9bdf8ecb560d33d10 src/nh/ settings.py
dl_verify d83d3b0f1157e60589c7428f33091c2239e910e410c94e3254fcbaea8cffbe8a783cc7175dc6230fb10525d17f6056579810100ba0600f0d4a5127bfd4ee0dd2 src/nh/ tcb.py
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e src/relay/ __init__.py
dl_verify d009954abc9fa78350f721458071aeec78b6cd8773db588626a248f0756d1e39b32a8c8c58c370e87e9e4eb63f0ea150a427ad2b92b641c8fd71117933059db8 src/relay/ client.py
dl_verify 02c764d58ef8d02f95050cec41aa41fa90938ea08e0107ed49d3ae73357115b48f23f291dfc238ec3e45b12a705089b5c2ad3a1b30f27abb0a4c7498271161a3 src/relay/ commands.py
dl_verify fa7350a1dafe7e27638cb505a30e43815e157b08fc26b700f15633ab34f8ac3ad782a4396cc6b9aba3b59cd48d2e37b6f72befcafbd14772e135bc40fc080050 src/relay/ onion.py
dl_verify fe666032c2448d87355931bef235085039087b701b7b79a74b23f663d06b78264686c53800729f8a4197bf419076d76d1fe3ae74afa9141180035a6b807f0bb5 src/relay/ server.py
dl_verify 380a78c8c0918e33fb6be39a4c51f51a93aa35b0cf320370d6fb892b5dade920e8ca4e4fe9d319c0a0cdc5b3a97f609fdee392b2b41175379200b1d793b75593 src/relay/ tcb.py
}
download_tcb () {
dl_verify ba9fc6dad29b91a78d58f6a7c430e42eb75363d14de69668d293041bf36bb5eea0666007535c8f5a122e0a72d0da7122ff45d8e6c081c9ccacdaeeb47cb93b44 '' tfc.py
dl_verify c2f6afa281f91b88da85668dcfe0cade4af01927ac748ee1dc76c6f160149742980b3d6996c7d04e7fbbf5abca8f79100fd746e71187990d972f4b1aa2c1bf63 '' requirements.txt
dl_verify cec2bc228cd3ef6190ea5637e95b0d65ea821fc159ebb2441f8420af0cdf440b964bdffd8e0791a77ab48081f5b6345a59134db4b8e2752062d7c7f4348a4f0f '' tfc.py
dl_verify 0711aabf9c0a60f6bd4afec9f272ab1dd7e85f1a92ee03b02395f65ed51f130d594d82565df98888dbf3e0bd6dfa30159f8bd1afed9b5ed3b9c6df2766b99793 '' requirements.txt
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e src/tx/ __init__.py
dl_verify 19c6542e34e58fa8504193d71435c2f06fbb4d5d770342fcc37a27acf401aa05857733a6e364ade4cea1407762fe7340c0e4cd9d3031daf8943a13d14b1e92f1 src/tx/ commands.py
dl_verify 63bf0e11f46d8e5544e091110fd24e1241ddd650daa9cf76c39ed7db43a7062dc252a6b37ef26d55fb875fbc51314b47d23c98176d4fc1bf51fafef7a1f69763 src/tx/ commands_g.py
dl_verify e660fc6368a430a82a8a2d0e38bd4e8aaf94bc0ac5fc6b2c63eceb58f1579ce75ac3cb83382202e929da76fe3617d553732d1798beaded4f52ce0bf7e53b75bc src/tx/ contact.py
dl_verify d215e8983de808526cf9b76b0d299b7cc93a1cb15316113930028fbb0cf66bde51daa57a1e7ef6cfbd9f65e515553631943e142ab78ab89b78571f8612355b51 src/tx/ files.py
dl_verify 4f0fe9684e1aa9caf665fcfa037e7ccba61c9e4385621178912e2875e1a28fed72b9fc48581782dab3c25c29e0cb38bfed2906b2e19179b43a8b35da72656112 src/tx/ input_loop.py
dl_verify 69a90b3e908769821c419ac80779d0b09401103e4b8f79a0bf444fda8f6a20d0c559679f1595869c4bfa569631211f1297141ada7e91b1c3d28ce804961e00f4 src/tx/ key_exchanges.py
dl_verify c782cdeda0faf946a4c97924668697a479d7d60051988e96bb4e62bf0e1ef82bfc982b8fb3465e5371b446d3f042b1c54a32a31393ea64764d281abac95850d9 src/tx/ packet.py
dl_verify 05e76b6d62e694d1f887853ed987a770debf44acf8da12091f9a4f614a8a26c5771593d14f53beeafb7f684d56e0ecaa000f3a73bb69342cb6667f9758b56c9d src/tx/ sender_loop.py
dl_verify afcf71e6d407bc7ef391e795441c3343fd2f172f2636fd1b06ffbadb8d0d38368007be9d8e69916a02679f576407200e836c1eaddf0dd3255d8dc073993d07b1 src/tx/ traffic_masking.py
dl_verify c806320893ecd097ed5f8d14619cb453315fc369d0c081ef40d48cbdce46630fcd3006bd11d8712c0f6d89d7468b674e78b50257048a3a99180093f0a361615f src/tx/ user_input.py
dl_verify 827ecad844d1fb3709b81e59f6f1ad88362a3140517a8a5d36506415e1494d554d00e2dc1dc7cc65db06d09a1182acb1150b939fcffdcd0939e70229de03f3bc src/tx/ windows.py
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e src/transmitter/ __init__.py
dl_verify f91c0f616555725e0d2a4d8e2ee2bf39e1ebc4cbdf0a2547f4e4b5e4f1ee88743273cffb422a43dff98ba42772b18ceb4c270628f933392e27fa5cd6cae991ce src/transmitter/ commands.py
dl_verify f7cf493506a19b9732ae9f780aeb131342a47644632fcf88f0df01f0bda88252fdbad37a4b80e87f97e57feb50079ac2e5194598d745163846e30fdd6d32fe60 src/transmitter/ commands_g.py
dl_verify a1b6af28645df531be3a670375ce3a3da1a48b279d646f04b3c14cfbdf7006060955f33595a2963f98a495ec16dfe969325842495d8fbfae5f93e1459ed047c4 src/transmitter/ contact.py
dl_verify 184c35a32a3858893c67622a21fc7fdbd88bc61f82d4b655ad26ef008563cdb31430a3b713b92c98ea8d983ebadd0db6f9de3f9b1c07ac3dce4cf405aedf21ae src/transmitter/ files.py
dl_verify 019c178982f89b93ba69d26e60625a868380ac102b10351ac42c4d1321a45dd7186694d86028371185a096cce2e2bbe2d68210552439e34c3d5166f67b3578ee src/transmitter/ input_loop.py
dl_verify 742fba91ebd67dca247d03df4cf1820fc6b07e6966449282d7c4019f48cc902dc8dfc4120be9fdd6e61a4f00dd7753a08565a1b04395bc347064631d957c9d82 src/transmitter/ key_exchanges.py
dl_verify a59619b239b747298cc676a53aa6f87a9ef6511f5e84ec9e8a8e323c65ab5e9234cb7878bd25d2e763d5f74b8ff9fe395035637b8340a5fd525c3dc5ccbf7223 src/transmitter/ packet.py
dl_verify c2f77f8d3ebf12c3816c5876cd748dc4d7e9cd11fe8305d247783df510685a9f7a6157762d8c80afda55572dcae5fe60c9f39d5ec599a64d40928a09dd789c35 src/transmitter/ sender_loop.py
dl_verify 5d42f94bf6a6a4b70c3059fd827449af5b0e169095d8c50b37a922d70955bf79058adc10da77ebb79fb565830168dccb774547b6af513b7c866faf786da7c324 src/transmitter/ traffic_masking.py
dl_verify 22e8ba63c1391233612155099f5f9017d33918180f35c2552e31213862c76e3048d552f193f9cd3e4e9a240c0ef9bef4eabefe70b37e911553afeceede1133ca src/transmitter/ user_input.py
dl_verify 39a7b3e4457d9aa6d53cb53d38c3ed9adbd9e3250008b4e79b5a174b9227fd0fac6dad30e6e9b8fe3d635b25b2d4dfc049804df48d04f5dfcc1016b2e0a42577 src/transmitter/ windows.py
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e src/rx/ __init__.py
dl_verify 04f23a236a7f8b5c43a532ef2b3278202a17b026a47b6d1f880a6fb2e775824aff3be78a14167905c955f98a01239bd1c5e63cd08566dc759fe259a4b0c6a74a src/rx/ commands.py
dl_verify eb307d3b780dd90ab2618909707c4cd56db829dc94d49408c4a6b84f46292f395927fde0d36451c90a595fbf948cbcb3f1aa8676ca5658d6b113a3b45f2216db src/rx/ commands_g.py
dl_verify ede3aa62af2b120078f12bbdf7d21364484652c5204817436e30cc5af70ba73fba68a6a7cfd08f43734f6c5778e710508674f7a9653d4b51922460ba1cbec796 src/rx/ files.py
dl_verify 835f6f673b7bc1785b8c311f21aebc7ffab1a4570152f3888d13e00d763c66c81b5a77f602e7488962737c6b675beeda0bb347dfb1d11af51ea036be8932398d src/rx/ key_exchanges.py
dl_verify c06e19c1fc279346d8454eed45fc9d2f6c1b3c561d9b9b45957b145f23ca9ba016cef51d1fad4fadabd9669c6ab4443679ac98630194073294c1ee20afc725de src/rx/ messages.py
dl_verify 425e9bbd17c13f62732687cc798e7fd49159d5f5a291ee4ff292dd45a65bdc8146f2a90c0d4abe7fb28baea855c396335832c484a0c753067db4fa7974cce651 src/rx/ output_loop.py
dl_verify 5f7d66daedb0cf60737a14fe428e3f420b66a08ae7c5b63135d11e17a1f3e11ce43f50d54516249fe7a065b69a17082ee81297f7f4a8c4c9a1f26918575c8dbc src/rx/ packet.py
dl_verify 9f5f9ddf01af12e43cbb7d8423bff2cdaa4a6d3848f1ba9e1e2bbb20da08221b84de4538700c642fdcfa3637db6ad03cd2f7dfe04e67544559b8e4cc96608e61 src/rx/ receiver_loop.py
dl_verify d26e949e7fa57b43a6489e3fe01e2bc26f7e7dfa8ec99915afd2f54f7a3e2a1e86ac16f3d95642e80ae431e35f933a07244d8ca49b3861aad6bcf462dcf2791a src/rx/ windows.py
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e src/receiver/ __init__.py
dl_verify 35b035f2794b5d7618eeafd91781246a0100bac9ff6a1f643b16068d5b2dc2946c799e91beba77d94e4118f99d6d6653974ebd5d4008133131f3bf44a7a190fb src/receiver/ commands.py
dl_verify 09f921aaaeae96ee6e9ff787990864ba491d4f8b10c613ab2a01f74c00b62d570270323ea2f5dc08befd8aa7bf4be0c609f8dca1862e4465e521b8016dff14da src/receiver/ commands_g.py
dl_verify 7b1d45caf3faf28c484d7d8d0c96ff9ba6e840682b002e438eac620904d3ca39483009a079d300489d80e22025ba301fa483f235193de5b55a62e9dedb25967f src/receiver/ files.py
dl_verify eab31c334f09930f1167b15fae4d0126711d6fb0efbe5b8ca9e6e49bdbf0a9ca90279be6d2cd0080d588cf15d83686ba895ee60dc6a2bb2cba0f8ed8005c99eb src/receiver/ key_exchanges.py
dl_verify 2894c847fe3f69a829ed7d8e7933b4c5f97355a0d99df7125cee17fffdca9c8740b17aa512513ae02f8f70443d3143f26baea268ace7a197609f6b47b17360b7 src/receiver/ messages.py
dl_verify 57ebdf412723b5ab4f683afeda55f771ef6ef81fde5a18f05c470bca5262f9ff5eefd04a3648f12f749cec58a25fa62e6dfb1c35e3d03082c3ea464ef98168b1 src/receiver/ output_loop.py
dl_verify 3b84dbe9faffeab8b1d5953619e38aefc278ce4e603fd63beaee878af7b5daff46b8ed053ad56f11db164b1a3f5b694c6704c66588386b06db697281c9f81bbf src/receiver/ packet.py
dl_verify 1e5240d346a016b154faf877199227edf76e027d75e1e921f2024c5dd1d0a40c1de7e9197077786a21474a4bbf2c305d290214aacdea50f5abaeb39963ca08a6 src/receiver/ receiver_loop.py
dl_verify e84a92fa500492af0cc16038fd388c74c387334898b870e57bc599d1b95da85b579d50ba403cdfc82ce8d4d5765fc59e772796d54faa914d0b5874150428d762 src/receiver/ windows.py
}
download_common_tests () {
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e tests/ __init__.py
dl_verify 9cba0c6eb96f5e827a669312c2c8d4d52b24ca5133d294ab946fca8d508b71f898328487ec8213af639a61fcf7fee8fef3102c5f1341cd4c588289a03e820003 tests/ mock_classes.py
dl_verify c6432382c52a7665bf2da5ff4c6e502d46b0d29f7d8eeab2feacd77e4e4bd954227c57f9baf1251feb0f4d6923380fe64a38ca8d12d0d7cbb2b8d34c5b803b5a tests/ utils.py
dl_verify c20421e2293f058df4e03dee49e609b51fc1d39e69b4c44dd7580f88a5b2bf0729261167cb69fb0ff81b3838e3edca0e408c5c6410e4d43d06d6c0aa1ef6f805 tests/ mock_classes.py
dl_verify 2acdcd76d44caa417e9d1b3439816c4f07f763258b8240aa165a1dc0c948d68c4d4d5ac5e0ff7c02a0abc594e3d23883463a9578455749c92769fea8ee81490d tests/ utils.py
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e tests/common/ __init__.py
dl_verify 52c111cc9a956354f5f5a317cff4209003481f4f8bf3c248df529c4925202780c0c2fea3a3fe2289a2651d82c9bcbc8a2801141f2b5b2a8d4ba1b74943de6587 tests/common/ test_crypto.py
dl_verify 8e1b790d9143a7d2decd5dab97826cc3fdf85c071da95340da7a4fdc862d94099408675ad7422c8d105e988aa39eb5b5ef1a39fce9be5a6ae6877fd820e1f899 tests/common/ test_db_contacts.py
dl_verify 8190d1525f5f603293f30a07d2e8e15becad13094458d6b3e75a8f45bf7751019ed9fea8df9b366c09bef083d3eb1b4bf0e3c165912069ddfa862f86107cd420 tests/common/ test_db_groups.py
dl_verify e11f05a0193bfa013c487ff4b646f8f54b5b3ac71e136d69d38d4e572afffd0849ce3f4b0c1639b77f6506c33e6f13c65ca5b4b3f3e8a421a17f89fe2113141f tests/common/ test_db_keys.py
dl_verify 32e6b8562a758eaa29c9e32720434915c7b32a5815203b2b4d11acd81cd9b3669e88ee41d660681d2fb7015f9f4346919e74c901a50bb8202a4f93ba316b0b3d tests/common/ test_db_logs.py
dl_verify e5c0fd0fcff438b92933e81389053b3d5a4440d0b37d5e9744a96c6a8cf5c14169ae90a2714d5490f4f920b0335235d9d5cd6f42e806698333a0ef2821b56e92 tests/common/ test_db_masterkey.py
dl_verify 19233b6f6aa19e50f36d8ca595e93b8a782c20a9f6076e966da8a7c5619ff33a0b8b02a93d16903ecc873930e0a263a79edc4a2c85e39aeaac81279ba1a65d0e tests/common/ test_db_settings.py
dl_verify 4472f5528c6c9c60b4c4dbbc6c41dbe19734710be37b9ffdb27081c84fe308230c4e5b0180c006fdf47e75bb05050e41958df25b6feb752fb7951141bd59c6fa tests/common/ test_encoding.py
dl_verify aad18d42e5366223a88d14e809f8897cf4f989de5e7115b1b5052675b134d9e5bfe30c21bef2cc8d5150385dbb029350f1ce36d388fffbb184b8872014209acb tests/common/ test_exceptions.py
dl_verify 12f791c529dc447c6940049e3b9b44cfd3847c25089864820677e23446ed72d212bdf1dcd849bf80d0ebb1a438337730e5fab395b1f183b98190e49575391038 tests/common/ test_gateway.py
dl_verify 01df5269c6189a55bbed7e5894aa126d5e16d16f6b945160e63c929b397f06ef238b3a4be8fa3d5431567d1b62a0d4eb86faa320cb6df9dcfed971d98df936da tests/common/ test_input.py
dl_verify 029cc1f4cd983c32a4b2ee0b78c0f3f9e40ed3ff417ed323927325a582d5e77c52c2ca48e3ea38471fbe431d87a4e35355de0a6b17e2cb6331d04a25ecda1358 tests/common/ test_misc.py
dl_verify 7ca3a76b69a96e33ce8ef0404bbed696f3c82d63cc8940e25763ec241e7d8be2cf033c54d28a193bed911b3646bf4c111450a30d90f25af347a323e3018da04c tests/common/ test_output.py
dl_verify a17d3bd4fc7b44216a2c59789fb9322a4cdee52c9763dd8f7cc59908c42b500db51aab4681b7372fcfbe6a152055bf823073797b3f94275791b1c56f2a363395 tests/common/ test_path.py
dl_verify bdea73b00b14b8de136112e9c6e1257aca971a704bf0a104e3aefd1014a0d94ce0cd941a2568e058b27202ec595476692c22ac1244d626759965b8242fa3ea74 tests/common/ test_reed_solomon.py
dl_verify 946812a0c4e368b349b31622ddd21ed863cd2feeec1ff145c45a96a5953a47c5865eade0fbe391510cfd116fa35d9f8253e4314187884762e3ae3000dcbc9db3 tests/common/ test_statics.py
dl_verify b62eeed36733c4ddcbb657cf7b2b37737f2a1b0b5d11c7720cb13703f09a99ccb0ead2a379caeff073955a31a5ae123342c925d93bbdd3338cfc8e4efb83fa38 tests/common/ test_crypto.py
dl_verify 7c222cc89248f09992def8fa30c32a9c98a9188c0b30af5f352eeef7b1932bdbf070a87879b47fe09c5cb6f19ad69038f3f8e906479773987e3f47908119f444 tests/common/ test_db_contacts.py
dl_verify cb8e18ba393d05e89c635d9ee22f0a15bc3a2039c68c85cc0e3eafe6d5855601b0c00473d6284bb33c4f88184932f2413793e185e5478e6cb456976bc79ad790 tests/common/ test_db_groups.py
dl_verify b894e5719bbf666b2e86f911b422c857c8e3795b527e346e510ff636c8b9733607c8e4115168584fba3fd6144d64b53b85f65cbba18b21c7dd80ff6e0de2a271 tests/common/ test_db_keys.py
dl_verify ed68245632dcab1a0ff63aa18408514a8c902ffdaa509ee5f9ae6a4f4b57fc11d64d5a4b70cc2884b8f428afb2ee23a586ba0595ad9b921f66b735ae90f257a2 tests/common/ test_db_logs.py
dl_verify 4e7436d7316d56f50f604a900eddc6427bb2fe348073848b1d7845484f51739686c781935118a18bdc52d7848a46f24909ea630306c46f518ec9b72768c3f648 tests/common/ test_db_masterkey.py
dl_verify 9eb4af866f9e5f1561401a3b62f924e8133464dfc3bb06f5e17dc18f2c09b785133ad38cf45d6d218ef7c5eadad4207d53ad6492e82754753ed568884ba4d383 tests/common/ test_db_onion.py
dl_verify 58ed5e733ac373a6c3d69ff7218207a60b9e4138a549da1a9de158d770f5b2514d7042e4ec7feed86966388523ace278797535a77be926f34c406ac3bc4e96ce tests/common/ test_db_settings.py
dl_verify a2036517d264bbaf2db9683e573000fa222067c6a8e3e72337e5b31c6554c1c33259f885540aad73f2cc454f8d0ef289df9557106e43ca4504fbad447c7e4c04 tests/common/ test_encoding.py
dl_verify 3dea267fa9b4361890f374157b137c9f76946f3289f4faf4b293814f26f9769fb202ec98c6fd044891b2a51a3bb69f67fec46022210ebaf27f7270e9dfc779eb tests/common/ test_exceptions.py
dl_verify 3d2d5077bc946a1327c64598a3d7bb30786a6ccb089f5fc67330b05a3d867c46deb0d5cec593927782e1bfbf7efe74678f6aa4b62a3306ba33fa406537ee6499 tests/common/ test_gateway.py
dl_verify dad966ace979c486134dd3146a50eb2d26054984ca8fcad203d61bf9ae804db04664df21e8293e307fbfe9c331cb59a06a46626fb36f445f50ef0fba63b5d93d tests/common/ test_input.py
dl_verify 23d4ddd293defa5ac3dd4eada0e8e9263203c51d9d0260d370a362557f93bb74dbfff75620463e4c046db3350b54ee75889398c58be16df8dcffb928220815a9 tests/common/ test_misc.py
dl_verify d595d7b6c0e05f1c99a89f8dc2e662eff4127f0ad0b807156a4e6f42c9113e33302c00b311e9fdfcfce20e1fea331da02bbeb41a7c44d8e05795317711da8225 tests/common/ test_output.py
dl_verify 4a38809c9afad404b563cbaffe89d9a23b9785ab246c71136b9bb2c802f7b1039ad375580a3076ba671f97beb48bb3f51a6bded4f8179d3c5b8f73899101cd9b tests/common/ test_path.py
dl_verify 1e320f69f236daed5f0fb2e6fda4b5b533dd628fff7db0ee8a6b405efe3c24138a43f24b45693017219cd885779f5ae57d3523d264e077ba9d3b9d2027b95d9c tests/common/ test_reed_solomon.py
dl_verify 223f66cbb3ff0567eba27b66c3be30bd292b6ab1405ea52af79e4adafc87901212998576665bfee5e40e9ece7cc0d369179945be903ae36e5016942cf8c7fd2b tests/common/ test_statics.py
}
download_nh_tests () {
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e tests/nh/ __init__.py
dl_verify 8a3b29d367987feae53c62a08fa3523a2e1fd032d9043f445244a9fd4026f73476daaf5fb9dbfe732b7bbfc5b4b0495f1566bb4cced9d41854a7128ccb802097 tests/nh/ test_commands.py
dl_verify 045f61820b739ad86d475a460788f27a92cfcf651ad4b4d4e798f6f3f4672e3e10fee2941057c919dac23fd1231df06b78f6be3e3a749e7b9d51504ec49044a2 tests/nh/ test_gateway.py
dl_verify 512ad346e350713bd551447e1c305d25d038a6c1a6faaf2a9880c52352255bcf5b057c89148804ec495cd5d996b832f7d139691ef9a3fc3fd65b927a3548aee9 tests/nh/ test_misc.py
dl_verify a32e36680caa2bbcb841369062996d1a1656c13c5eca6bdd75f15841a5123c6a90bf65b85acfc3d8536a888b4e41a1b591a2b44b3b871cb3f0ebe50b63509b1d tests/nh/ test_settings.py
dl_verify 825f26a6baf24fc650a9e3dfc09a2361b1000e48b754273c2b0321b7c01f08f71ebb40bf1617f948ba13bec925158b8f1db974003aa8ef3363ad69f4fd88e843 tests/nh/ test_tcb.py
download_relay_tests () {
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e tests/relay/ __init__.py
dl_verify 9d132ad47baca57c5ce8d7f07222b6c778aec697c190c48b82c86c4eb8588de1935f2309994c05bcdfd44fe2d8d85d20980520aa22771f3846e5ce89ac68a232 tests/relay/ test_client.py
dl_verify 2431fd853a9a0089a3837f1e20455c2d58d96722d5b803fe9e3dc9aa09a3e5fbffa3b0fa9e3e723d81a2aa2abd6b19275777ba6eb541ec1b403854260dd14591 tests/relay/ test_commands.py
dl_verify b64b8cef7f1c4699e34344b6c6ba255d6ead3e8f4765dfd5fb88d2a676962a7d8231d261f68d3399d9eb65196ea0cefb31e6800aa6cc6662dcf0fd927be8c1a4 tests/relay/ test_onion.py
dl_verify 42e494245869a5e652fe6bdcf5e21d1a0299c9ad7485d075fe7cf1d2d53118b444d8563bbea837316f00cbfea31117d569cf4e8694443ab5b50f606369aec987 tests/relay/ test_server.py
dl_verify 54c3026e797e75c46ca1d1493f6a396643948f707f1bc8ad377b7c625fda39d4e0fa6b0ec0fe39149ef0250568caf954e22ae8ebe7e7ac00ca8802ffbc6ae324 tests/relay/ test_tcb.py
}
download_tcb_tests () {
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e tests/tx/ __init__.py
dl_verify 6eed2a31a017772c767907be7b5c825207d25d394d4517c856a882265de84c50f56ae60020e8983c2269d356fc73286bffe414c28d894d851d05bb45c3ef79f5 tests/tx/ test_commands.py
dl_verify 8be45e9c005d6ddb89d0d8a1dc3477c39e13e5b95dfac1d38f94f45a886ee0af64f9b95bf25ee26b1ad2085fbd285237b68145dba916fc56844dbb740ba0d52c tests/tx/ test_commands_g.py
dl_verify b9a27910eba3f09b09c5d88c41ec95629ec0a8cfae8cd393bbabe5ffb699b5a1db98bca825fbf320eae48c8fd9125a7d2dc64e94c992dbd4799d7f00ad0a34b0 tests/tx/ test_contact.py
dl_verify 2b15f293950ce0961e2975a20b60e7dc7e5668507941ce01bcb9147799c2b4f72a1ee35206e58f4e9d3f40f6ff758e0206c3bd6eb428c2d504aded8c254792f7 tests/tx/ test_files.py
dl_verify a6e64b203c0c0b5a7d09e1a41e2faccaa6eeaadfd108117f1895c7120e833b79ac73166cd516c13fa9a2cf31d0196e4e2215a3d9100e26255eb57be738478efd tests/tx/ test_input_loop.py
dl_verify 783a0d0b6fc3b04abfe474b4e5829dce333bc727fe9a2dd570b37ac63dfaa0426e71b24d0b02a5254a1e2711943bb0d61516297cf3a872bd55d57728fcaf6d84 tests/tx/ test_key_exchanges.py
dl_verify 485f6ea31486b6aeceb7c6359bfb46c4a107f2f971b84c3bc36eeddf6cbec0dbbe730ca5109673d66dda61bf1ccb24dfb3f15575dfc0279b6adb6a1c504a2ce4 tests/tx/ test_packet.py
dl_verify 3967b417f32779187a9dff95187a63dc02a7c8dc314f92c029351c9be180344e560574007566050dac58b4c3f066ac9e3e11ea8047b61801f8530808d4d55ed8 tests/tx/ test_sender_loop.py
dl_verify dc783f22c8e0e48430269ef5001c7e4c361a3b555b5e48a9cff136007534f4c093f1d1cfe2b55751adc1c9145d6de08e2cd21332c75e2533d50c2fda70060d21 tests/tx/ test_traffic_masking.py
dl_verify 35774f4d935ba91600b11b73b75aa12605a64297914cfd2eba793d3ebaaf4cc6ad48d8e8ffed43a37d3dd5054bf134b9e7cae693ef7d7232d02c9a0e5b54386d tests/tx/ test_user_input.py
dl_verify ba9abe1222c4bf409c00e5cbbcdcfb28753f3c0b85e52aa89e45c81a2831a461cff6ec16d1ebc7690419b6d02bf220de0ac6b30b7eabd0c040fa571fc4e61f9f tests/tx/ test_windows.py
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e tests/transmitter/ __init__.py
dl_verify 3bdb8fd64bb2b4070da025e0187e434b5178b645fb08ec822bdd732bac3824316a8d13ded95e5e7bf754dddda5ea1f5805b6c2a3b46e8100509d3f5b32d18278 tests/transmitter/ test_commands.py
dl_verify c2429b5ffc32aa4a6377fef726553d7c731672367cb4eaa338c0a2099b3fe0455fa8a79c4b86afd9077a53422403649bc1fcf7540e4f996dc0890819c34d9135 tests/transmitter/ test_commands_g.py
dl_verify 3baaa1dc6dff7771f6167d699a81c6cb14f7b0ea307b83797d342a95b21f89d9f2c21e54feac0474f61174a1c708b3f02bc0e3a6b0b504bda8c03cdd16e5fefe tests/transmitter/ test_contact.py
dl_verify 3d86131dfd775aea2ea7c0500759befac8a5d7fe35f590974b2af56da42929db927c0bd86a352a38412fbb79c2bff09d33271b26ebd9aead1bf2b702918cc02a tests/transmitter/ test_files.py
dl_verify 3bc9c3275353f49516fdb2bc9d9a86286c121f085d5382980e118b0ea123da9b9829edeb172448416f30955c9a1c1c3704f36cfa4700ced86c33009e362d0b69 tests/transmitter/ test_input_loop.py
dl_verify 284fefc2a4986948a5ee4de1f935482b43011347b5454ab685f4a79a1036d1bf0518db536381dfddf706318bb44b584db37cfbf8fa07aac1b631a278dfe298d7 tests/transmitter/ test_key_exchanges.py
dl_verify 0c16f45ad9fda006b58a45a7c9a4b9777cf05d08f59c9207addbc27936c29a6aa2aa59146f0ef32fb883a5e24211c5dbdfbf5ad9cf9b72e999e599e9eda0d2ef tests/transmitter/ test_packet.py
dl_verify 49aa0e761771893e8bc057c8e305eb8b5e7103df9a31c80eba333db739f0b2c521eca59901f35bf2e319360902c8be12b112a29948461b73662554bdf55bf6d4 tests/transmitter/ test_sender_loop.py
dl_verify fd4d6cf68a4e555a60caf8efc6ebc6747990ed1c582036c6cc92012c5af82b49b32c42398bf822fda8257e84c822bdb8158260164a8774aea72723ddbe99e639 tests/transmitter/ test_traffic_masking.py
dl_verify b71f7d8e3ce943dca2516f730c9919633f40568af905ac32e05b126e06f2c968c9b0b795cfad81a696511cd07534a0593ef1c9b5d5299ab88b2aff32b9059b64 tests/transmitter/ test_user_input.py
dl_verify 5be56563cab2c9007b6be7ff767778e3fb0df1d3374174d6b6ef7dc6d66b0c692cd798a0a77f156c3eb1ad979a3b532b681db97c4d1948ff8f85cd4a1fa2d51d tests/transmitter/ test_windows.py
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e tests/rx/ __init__.py
dl_verify 8c4aa1d4e7df0228172c38e375682f5cdd32fd918168620a61a98d17d0ce79f30215e7793be29390e0a6a51c5daf26a2d80db56458b5d02524f7878a2849c5bd tests/rx/ test_commands.py
dl_verify 467a91fa2161c172506036ba36f8f31cbcf1b9aa1a91f1e7aef2727e3113edae8b24b26488b82b1ba1d4d00411e79944568b8d9c9e2d7e22c3b30ce759ab0137 tests/rx/ test_commands_g.py
dl_verify 081ff658de5c46327ea840038e44d1d1dd5682d31950145affc8f2536e2c06ab779f672db779a555a75a2bed9a1e323117e07bf89d20d5f2ba06a09dedd87e8f tests/rx/ test_files.py
dl_verify 7c0d97bfd5dca727ee36573cdc1b5683077524ff28236e01d8b011da8d51c09988985b76e054c2cdebf6a95fd2e68a14d7a976f1c03a1a39ab9d2a3672e89143 tests/rx/ test_key_exchanges.py
dl_verify aef0fe0e208ce91002924ec2d103c4575079ca3c72544774ba904e44f99ae78aa13cb242a61f2b1fa7c5e7ab8095b0836d17ce276e888792dcdc2b34b8603339 tests/rx/ test_messages.py
dl_verify b6a33ed791e6daab20ee10f304390a8bc890a984c1bf1bec4a57d04741797cfc242d1f1067a0a2854f4daf35fb1302d652fc5ed17749884b5424d700ffb32642 tests/rx/ test_output_loop.py
dl_verify 8dbd77abca3bdab031f5a2e16d5789c2359088c9817a53188a4d6b6b45d4bce087e0ec872810401f35d6cdb170b3052dc27f826e4906ab3f41bb71e49fcfb29e tests/rx/ test_packet.py
dl_verify 6b87bc6c6beaf421c8f9f27ec6ced2d3248efb7b7cd966646b41a486d82d7665f7d2bb2879e1b6baf84fdf77dbef1eba565adcafd8228e7dde5919f8a12e47d1 tests/rx/ test_receiver_loop.py
dl_verify 96e8ad84c9cce083d8a5a85b928a2c78d4b336739a894fdfb69abdef880dbe0fc72f05515393ad576d86250d32f4fc93b65f657c5f7dd7d4aa4c7c2e8b24b62f tests/rx/ test_windows.py
dl_verify cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e tests/receiver/ __init__.py
dl_verify d80af580f76c3c58d72828ab190a055a03f7e74ae17ccbaa2f70dd94e01b7efd85888ac51eefed94d6671027660a8080600f2e1e908bd77622c36ba258a8936e tests/receiver/ test_commands.py
dl_verify dce0fe6cd05915f1a0450259a08e9935b077f9b3af61f315812834811a5c82095b72bea5e4b283fd2b8285e86f8ee4897d43f42a99261767b77841deb471d980 tests/receiver/ test_commands_g.py
dl_verify eb86007ca9b0cfeb4d364b1fb53409443c8b9f95770979c471b8462c1c41205b96afd357670a9cd5949e8360b738d9284a9e726ee6ab89e09a0306b105f1a720 tests/receiver/ test_files.py
dl_verify 01bf3274c675b8cbe6379f8fb1883e0d4ed6c69d164b2c6a44794786d21f2604efc262b34372dfb581607655e6e1e73c178660d3e97f4f2c9bdfb11e4166b2fd tests/receiver/ test_key_exchanges.py
dl_verify 7b9d27497d5765739ee435c02a379e792ad510dd893ff0d3871a7d3f97d196274921a2d26fa656edb5e7974a390155e7c1914135d3e1b6a82ed8f94d46263b66 tests/receiver/ test_messages.py
dl_verify affbd5bccd0fcd87bb50e13b497b1ba3c29ccec954fa53f62bff1a28baa7b35376f614fb54c922ed4605a37f6aa1463efff43a6267619b04a605a2181222e873 tests/receiver/ test_output_loop.py
dl_verify da34f5bdcd8b108b45e955d545954de32c9d8959c26e9d2e3104106139fb2fec69aabd6d5d127beacef7a09ee4f16aab0a92ee7d76b0fa6cd199e56032c12257 tests/receiver/ test_packet.py
dl_verify 717722763a41267929b6038abe859eececee20e68497d0f3c04268b6b8274a04e39e3f8d37d0928c8459c7ef52478176c933d8ec8b2bd0b93ff952a9b92b86f4 tests/receiver/ test_receiver_loop.py
dl_verify e6df26dc7b829b8536e454b99c6c448330fc5cff3ff12a5ebc70103a5fb15ab4fcb8fcb785e27201228b6f50ec610ef214bee4f2d5ff35995b4f00ae23217bc0 tests/receiver/ test_windows.py
}
download_local_test_specific () {
dl_verify dec90e113335d3274d87c3e12dda5a3205df57bd10c1e0532ecad34409520ce0596db21e989478836d4a0ea44da8c42902d2d8f05c9ad027a5560b4d0d5b9f13 '' dd.py
dl_verify 2f426d4d971d67ebf2f59b54fb31cff1a3e2567e343bfa1b3e638b8e0dffed5d0c3cac1f33229b98c302fee0cca3cc43567c2c615b5249a2db6d444e89e5fc70 launchers/ config
dl_verify 5d5351dd24d7afd4dc717835cfffee718fca707133127d1826ae099c66b0bddd878d104c1ad43546c8157807c984bd26b562e455fe219c1a00cf49df6bb73009 launchers/ TFC-local-test.desktop
}
download_tcb_specific () {
dl_verify f4f46d0d44234c094f566e88cc257d07399ee9552ff203181ca415ea2265b091bf14adf570122be7253b3d7fe22cac71f476b2d1fce5a6263f3c3cc7aaa2e8dc launchers/ TFC-TxM.desktop
dl_verify f3c0f471e8046cda7e66c153403c76ea55558bc06e2ee574f300b7507fa81bd2f8e5542ef342b4329f9cb6aee0d050ef4cad43170fbb2f36ac69358e74c035f5 launchers/ TFC-RxM.desktop
dl_verify 883d8df82240d840a215a4a946ba3a15def11b9c50f659e84bdb3543e484fed3e520c471cc10301743d38a7560c2672f1cfd22efa99de495685a90b8559db4ee launchers/ TFC-TxP.desktop
dl_verify c10fb76486ada483cfdd9e351b6d9b89907ae6ccccb32cf4299bc4e67ba565aac7b05a2d62a89c0146a1783c9d0616ee3c9a9660173a98ca6b03f72c3fbe6202 launchers/ TFC-RxP.desktop
}
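Each `dl_verify` call above pairs a SHA-512 digest with a path and a file name. The helper itself is defined earlier in the installer; a minimal sketch of its verification half (the function name `verify_sha512` is illustrative, the real helper also fetches the file first) could look like:

```shell
#!/usr/bin/env bash
# Sketch of the verification step behind dl_verify: compare a file's
# SHA-512 digest against the expected value and fail loudly on mismatch.
verify_sha512 () {
    local expected="$1" file="$2"
    local actual
    actual=$(sha512sum "${file}" | awk '{print $1}')
    if [[ "${actual}" != "${expected}" ]]; then
        echo "Error: ${file} failed hash verification." 1>&2
        return 1
    fi
}
```

The digest `cf83e135…927da3e` that recurs in front of every `__init__.py` above is simply the SHA-512 of the empty string, i.e. what any empty file hashes to.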
download_dev_specific () {
dl_verify 2865708ab24c3ceeaf0a6ec382fb7c331fdee52af55a111c1afb862a336dd757d597f91b94267da009eb74bbc77d01bf78824474fa6f0aa820cd8c62ddb72138 '' requirements-dev.txt
}
download_venv () {
dl_verify f74b9aeb3a17ef86782afb8c2f621709801631430423d13025310809e6d14ffecb3805ee600cd3740287105b7a0e0726f8ced202e7b55be7bf5b79240e34d35d '' requirements-venv.txt
}
install_tcb () {
create_install_dir
dpkg_check
sudo torify apt update
sudo torify apt install libssl-dev python3-pip python3-setuptools python3-tk net-tools -y
download_venv
download_common
download_tcb
download_tcb_specific
#download_common_tests
#download_tcb_tests
create_user_data_dir
cd $HOME/tfc/
torify pip3 download -r /opt/tfc/requirements-venv.txt --require-hashes
torify pip3 download -r /opt/tfc/requirements.txt --require-hashes
kill_network
pip3 install setuptools-40.6.3-py2.py3-none-any.whl
pip3 install virtualenv-16.2.0-py2.py3-none-any.whl
sudo python3 -m virtualenv /opt/tfc/venv_tcb --system-site-packages --never-download
. /opt/tfc/venv_tcb/bin/activate
sudo pip3 install six-1.12.0-py2.py3-none-any.whl
sudo pip3 install pycparser-2.19.tar.gz
sudo pip3 install cffi-1.11.5-cp36-cp36m-manylinux1_x86_64.whl
sudo pip3 install argon2_cffi-19.1.0-cp34-abi3-manylinux1_x86_64.whl
sudo pip3 install PyNaCl-1.3.0-cp34-abi3-manylinux1_x86_64.whl
sudo pip3 install pyserial-3.4-py2.py3-none-any.whl
sudo pip3 install asn1crypto-0.24.0-py2.py3-none-any.whl
sudo pip3 install cryptography-2.5-cp34-abi3-manylinux1_x86_64.whl
deactivate
sudo mv /opt/tfc/tfc.png /usr/share/pixmaps/
sudo mv /opt/tfc/launchers/TFC-TxP.desktop /usr/share/applications/
sudo mv /opt/tfc/launchers/TFC-RxP.desktop /usr/share/applications/
sudo rm -r /opt/tfc/launchers/
sudo rm /opt/tfc/requirements.txt
sudo rm /opt/tfc/requirements-venv.txt
rm $HOME/tfc/setuptools-40.6.3-py2.py3-none-any.whl
rm $HOME/tfc/virtualenv-16.2.0-py2.py3-none-any.whl
rm $HOME/tfc/six-1.12.0-py2.py3-none-any.whl
rm $HOME/tfc/pycparser-2.19.tar.gz
rm $HOME/tfc/cffi-1.11.5-cp36-cp36m-manylinux1_x86_64.whl
rm $HOME/tfc/argon2_cffi-19.1.0-cp34-abi3-manylinux1_x86_64.whl
rm $HOME/tfc/PyNaCl-1.3.0-cp34-abi3-manylinux1_x86_64.whl
rm $HOME/tfc/pyserial-3.4-py2.py3-none-any.whl
rm $HOME/tfc/asn1crypto-0.24.0-py2.py3-none-any.whl
rm $HOME/tfc/cryptography-2.5-cp34-abi3-manylinux1_x86_64.whl
add_serial_permissions
install_complete "Installation of TFC on this device is now complete."
}
install_local_test () {
create_install_dir
dpkg_check
tor_dependencies
sudo torify apt update
sudo torify apt install libssl-dev python3-pip python3-setuptools python3-tk tor deb.torproject.org-keyring terminator -y
download_venv
download_common
download_tcb
download_relay
download_local_test_specific
#download_common_tests
#download_tcb_tests
#download_relay_tests
torify pip3 install -r /opt/tfc/requirements-venv.txt --require-hashes
sudo python3 -m virtualenv /opt/tfc/venv_tfc --system-site-packages
. /opt/tfc/venv_tfc/bin/activate
sudo torify pip3 install -r /opt/tfc/requirements.txt --require-hashes
sudo torify pip3 install -r /opt/tfc/requirements-relay.txt --require-hashes
deactivate
sudo mv /opt/tfc/tfc.png /usr/share/pixmaps/
sudo mv /opt/tfc/launchers/TFC-local-test.desktop /usr/share/applications/
create_terminator_config "/opt/tfc/launchers/config"
sudo rm -r /opt/tfc/launchers/
sudo rm /opt/tfc/requirements.txt
sudo rm /opt/tfc/requirements-relay.txt
sudo rm /opt/tfc/requirements-venv.txt
install_complete "Installation of TFC for local testing is now complete."
}
install_developer () {
dpkg_check
tor_dependencies
sudo torify apt update
sudo torify apt install git libssl-dev python3-pip python3-setuptools python3-tk tor deb.torproject.org-keyring terminator -y
cd $HOME
torify git clone https://github.com/maqp/tfc.git
cd $HOME/tfc/
torify pip3 install -r requirements-venv.txt --require-hashes
python3.6 -m virtualenv venv_tfc --system-site-packages
. /$HOME/tfc/venv_tfc/bin/activate
torify pip3 install -r requirements.txt --require-hashes
torify pip3 install -r requirements-relay.txt --require-hashes
torify pip3 install -r requirements-dev.txt
deactivate
sudo cp $HOME/tfc/launchers/TFC-local-test.desktop /usr/share/applications/
sudo cp $HOME/tfc/tfc.png /usr/share/pixmaps/
create_terminator_config "$HOME/tfc/launchers/config"
chmod a+rwx -R $HOME/tfc/
add_serial_permissions
install_complete "Installation of the TFC dev environment is now complete."
}
install_relay_ubuntu () {
create_install_dir
dpkg_check
tor_dependencies
sudo torify apt update
sudo torify apt install libssl-dev python3-pip python3-setuptools tor deb.torproject.org-keyring -y
download_venv
download_common
download_relay
#download_common_tests
#download_relay_tests
torify pip3 install -r /opt/tfc/requirements-venv.txt --require-hashes
sudo python3.6 -m virtualenv /opt/tfc/venv_relay --system-site-packages
. /opt/tfc/venv_relay/bin/activate
sudo torify pip3 install -r /opt/tfc/requirements-relay.txt --require-hashes
deactivate
sudo mv /opt/tfc/tfc.png /usr/share/pixmaps/
sudo mv /opt/tfc/launchers/TFC-RP.desktop /usr/share/applications/
sudo rm -r /opt/tfc/launchers/
sudo rm /opt/tfc/requirements-venv.txt
sudo rm /opt/tfc/requirements-relay.txt
add_serial_permissions
install_complete "Installation of the TFC Relay configuration is now complete."
}
install_relay_tails () {
check_tails_tor_version
# Cache password so that Debian doesn't keep asking
# for it during install (it won't be stored on disk).
read_sudo_pwd
create_install_dir
echo ${sudo_pwd} | sudo -S apt update
echo ${sudo_pwd} | sudo -S apt install libssl-dev python3-pip python3-setuptools -y
download_common
download_relay
#download_common_tests
#download_relay_tests
create_user_data_dir
cd $HOME/tfc/
torify pip3 download -r /opt/tfc/requirements-relay.txt --require-hashes
# Pyserial
echo ${sudo_pwd} | sudo -S python3.6 -m pip install pyserial-3.4-py2.py3-none-any.whl
# Stem
echo ${sudo_pwd} | sudo -S python3.6 -m pip install stem-1.7.1.tar.gz
# PySocks
echo ${sudo_pwd} | sudo -S python3.6 -m pip install PySocks-1.6.8.tar.gz
# Requests
echo ${sudo_pwd} | sudo -S python3.6 -m pip install urllib3-1.24.1-py2.py3-none-any.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install idna-2.8-py2.py3-none-any.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install chardet-3.0.4-py2.py3-none-any.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install certifi-2018.11.29-py2.py3-none-any.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install requests-2.21.0-py2.py3-none-any.whl
# Flask
echo ${sudo_pwd} | sudo -S python3.6 -m pip install Werkzeug-0.14.1-py2.py3-none-any.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install MarkupSafe-1.1.0-cp36-cp36m-manylinux1_x86_64.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install Jinja2-2.10-py2.py3-none-any.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install itsdangerous-1.1.0-py2.py3-none-any.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install Click-7.0-py2.py3-none-any.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install Flask-1.0.2-py2.py3-none-any.whl
# Cryptography
echo ${sudo_pwd} | sudo -S python3.6 -m pip install six-1.12.0-py2.py3-none-any.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install asn1crypto-0.24.0-py2.py3-none-any.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install pycparser-2.19.tar.gz
echo ${sudo_pwd} | sudo -S python3.6 -m pip install cffi-1.11.5-cp36-cp36m-manylinux1_x86_64.whl
echo ${sudo_pwd} | sudo -S python3.6 -m pip install cryptography-2.5-cp34-abi3-manylinux1_x86_64.whl
cd $HOME
rm -r $HOME/tfc
echo ${sudo_pwd} | sudo -S mv /opt/tfc/tfc.png /usr/share/pixmaps/
echo ${sudo_pwd} | sudo -S mv /opt/tfc/launchers/TFC-RP-Tails.desktop /usr/share/applications/
echo ${sudo_pwd} | sudo -S rm -r /opt/tfc/launchers/
echo ${sudo_pwd} | sudo -S rm /opt/tfc/requirements-relay.txt
install_complete "Installation of the TFC Relay configuration is now complete."
}
install_relay () {
if [[ "$(lsb_release -a 2>/dev/null | grep Tails)" ]]; then
install_relay_tails
else
install_relay_ubuntu
fi
}
read_sudo_pwd () {
read -s -p "[sudo] password for ${USER}: " sudo_pwd
until (echo ${sudo_pwd} | sudo -S echo '' 2>/dev/null)
do
echo -e '\nSorry, try again.'
read -s -p "[sudo] password for ${USER}: " sudo_pwd
done
echo
}
check_tails_tor_version () {
included=($(tor --version |awk '{print $3}' |head -c 5))
required="0.3.5"
if ! [[ "$(printf '%s\n' "$required" "$included" | sort -V | head -n1)" = "$required" ]]; then
clear
echo -e "\nError: This Tails includes Tor $included but Tor $required is required. Exiting.\n" 1>&2
exit 1
fi
}
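The comparison above uses the standard `sort -V` minimum-version idiom: version-sort the required and candidate strings, and if the smallest of the two is the required version, the candidate is new enough. Extracted into a standalone helper (the function name is illustrative):

```shell
#!/usr/bin/env bash
# version_at_least REQUIRED CANDIDATE
# Succeeds when CANDIDATE >= REQUIRED under GNU version-sort ordering,
# mirroring the check in check_tails_tor_version above.
version_at_least () {
    local required="$1" candidate="$2"
    [[ "$(printf '%s\n' "${required}" "${candidate}" | sort -V | head -n1)" == "${required}" ]]
}
```

Note that an exactly equal version passes, which matches the installer: it only exits when the included Tor sorts strictly below the requirement.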
tor_dependencies () {
available=($(apt-cache policy tor |grep Candidate | awk '{print $2}' |head -c 5))
required="0.3.5"
if ! [[ "$(printf '%s\n' "$required" "$available" | sort -V | head -n1)" = "$required" ]]; then
# If repository does not provide 0.3.5, default to 0.3.5 experimental.
sudo rm /etc/apt/sources.list.d/torproject.list 2>/dev/null || true
if [[ -f /etc/upstream-release/lsb-release ]]; then
# Linux Mint etc.
codename=($(cat /etc/upstream-release/lsb-release |grep DISTRIB_CODENAME |cut -c 18-))
else
# *buntu
codename=($(lsb_release -a 2>/dev/null |grep Codename |awk '{print $2}'))
fi
url="https://deb.torproject.org/torproject.org"
echo "deb ${url} ${codename} main" | sudo tee -a /etc/apt/sources.list.d/torproject.list
echo "deb-src ${url} ${codename} main" | sudo tee -a /etc/apt/sources.list.d/torproject.list
# SKS Keyservers' Onion Service URL is verifiable via https://sks-keyservers.net/overview-of-pools.php
gpg --keyserver hkp://jirk5u4osbsr34t5.onion --recv-keys A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -
fi
}
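The codename extraction above relies on fixed text positions: `DISTRIB_CODENAME=` is exactly 17 characters, so `cut -c 18-` returns everything after the equals sign, while the \*buntu branch takes the second whitespace-separated field of `lsb_release` output. A dry run of both parsing branches (the sample strings are assumptions about typical output):

```shell
#!/usr/bin/env bash
# Mint-style branch: fixed-column cut on /etc/upstream-release/lsb-release.
mint_line='DISTRIB_CODENAME=bionic'
codename=$(echo "${mint_line}" | cut -c 18-)

# *buntu branch: second field of the lsb_release "Codename:" line.
ubuntu_line=$(printf 'Codename:\tbionic')
codename=$(echo "${ubuntu_line}" | awk '{print $2}')
```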
@ -196,188 +514,126 @@ kill_network () {
done
clear
c_echo ''
c_echo " This computer needs to be air gapped. The installer has "
c_echo "disabled network interfaces as the first line of defense."
c_echo ''
c_echo "Disconnect the Ethernet cable and press any key to continue."
read -n 1 -s -p ''
echo -e '\n'
}
add_serial_permissions () {
clear
c_echo ''
c_echo "Setting serial permissions. If available, please connect the"
c_echo "USB-to-serial/TTL adapter now and press any key to continue."
read -n 1 -s -p ''
echo -e '\n'
sleep 3 # Wait for USB serial interfaces to register
# Add user to the dialout group to allow serial access after reboot
sudo adduser ${USER} dialout
# Add temporary permissions for serial interfaces until reboot
arr=($(ls /sys/class/tty | grep USB)) || true
for i in "${arr[@]}"; do
sudo chmod 666 /dev/${i}
done
if [[ -e /dev/ttyS0 ]]; then
sudo chmod 666 /dev/ttyS0
fi
}
c_echo () {
# Justify printed text to center of terminal
printf "%*s\n" $(( ( $(echo $1 | wc -c ) + 80 ) / 2 )) "$1"
}
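`c_echo` centers its argument by right-padding with `printf "%*s"`. Because `wc -c` counts the trailing newline, the field width works out to (length + 1 + 80) / 2, placing the text in the middle of an assumed 80-column terminal:

```shell
#!/usr/bin/env bash
# For the 6-character string "NOTICE", wc -c reports 7 (it counts the
# newline), so the printf field width is (7 + 80) / 2 = 43: the text ends
# at column 43, roughly centering it in 80 columns.
line="NOTICE"
width=$(( ( $(echo ${line} | wc -c) + 80 ) / 2 ))
printf "%*s\n" ${width} "${line}"
```

The width is hard-coded to 80 columns rather than queried from the terminal, so output on wider terminals is centered relative to the left 80 columns only.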
create_install_dir () {
if [[ ${sudo_pwd} ]]; then
# Tails
if [[ -d "/opt/tfc" ]]; then
echo ${sudo_pwd} | sudo -S rm -r /opt/tfc
fi
echo ${sudo_pwd} | sudo -S mkdir -p /opt/tfc 2>/dev/null
else
# *buntu
if [[ -d "/opt/tfc" ]]; then
sudo rm -r /opt/tfc
fi
sudo mkdir -p /opt/tfc 2>/dev/null
fi
}
create_user_data_dir () {
if [[ -d "$HOME/tfc" ]]; then
mv $HOME/tfc tfc_backup_at_$(date +%Y-%m-%d_%H-%M-%S)
fi
mkdir -p $HOME/tfc 2>/dev/null
}
create_terminator_config () {
mkdir -p $HOME/.config/terminator 2>/dev/null
if [[ -f $HOME/.config/terminator/config ]]; then
backup_file="$HOME/.config/terminator/config_backup_at_$(date +%Y-%m-%d_%H-%M-%S)"
mv $HOME/.config/terminator/config ${backup_file} 2>/dev/null
clear
c_echo ''
c_echo "NOTICE"
c_echo "An existing configuration file for the Terminator"
c_echo "application was found and backed up into"
c_echo ''
c_echo "${backup_file}"
c_echo ''
c_echo "Press any key to continue."
read -n 1 -s -p ''
echo ''
fi
cp $1 $HOME/.config/terminator/config
sudo chown ${USER} -R $HOME/.config/terminator/
modify_terminator_font_size
}
modify_terminator_font_size () {
width=$(get_screen_width)
# Defaults in terminator config file are for 1920 pixels wide screens
if (( $width < 1600 )); then
sed -i -e 's/font = Monospace 11/font = Monospace 8/g' $HOME/.config/terminator/config # Normal config
sed -i -e 's/font = Monospace 10.5/font = Monospace 7/g' $HOME/.config/terminator/config # Data Diode config
elif (( $width < 1920 )); then
sed -i -e 's/font = Monospace 11/font = Monospace 9/g' $HOME/.config/terminator/config # Normal config
sed -i -e 's/font = Monospace 10.5/font = Monospace 8.5/g' $HOME/.config/terminator/config # Data Diode config
fi
}
get_screen_width () {
xdpyinfo | grep dimensions | sed -r 's/^[^0-9]*([0-9]+).*$/\1/'
}
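`get_screen_width` keeps only the leading number of xdpyinfo's `dimensions:` line: the `sed` expression strips everything up to the first digit, captures the first run of digits, and discards the rest. A check against a sample line (the exact spacing of real xdpyinfo output is an assumption):

```shell
#!/usr/bin/env bash
# Extract the horizontal resolution from a sample xdpyinfo dimensions line.
sample='  dimensions:    1920x1080 pixels (508x285 millimeters)'
width=$(echo "${sample}" | sed -r 's/^[^0-9]*([0-9]+).*$/\1/')
echo "${width}"
```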
install_complete () {
clear
c_echo ''
c_echo "$*"
c_echo ''
c_echo "Press any key to close the installer."
read -n 1 -s -p ''
echo ''
kill -9 $PPID
}
@ -386,16 +642,17 @@ dpkg_check () {
tput sc
while sudo fuser /var/lib/dpkg/lock >/dev/null 2>&1 ; do
case $(($i % 4)) in
0 ) j="." ;;
1 ) j="o" ;;
2 ) j="O" ;;
3 ) j="o" ;;
esac
tput rc
echo -en "\rWaiting for other software managers to finish..$j"
sleep 0.5
((i=i+1))
done
echo ''
}
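The busy-wait animation above picks its frame with `i % 4`. Extracting the frame selection into a function (the name is illustrative) makes the cycle easy to check in isolation:

```shell
#!/usr/bin/env bash
# Frame selection for the dpkg lock-wait animation: a ". o O o" cycle
# indexed by iteration count modulo 4.
spinner_frame () {
    case $(($1 % 4)) in
        0 ) printf '.' ;;
        1 ) printf 'o' ;;
        2 ) printf 'O' ;;
        3 ) printf 'o' ;;
    esac
}
```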
@ -403,32 +660,40 @@ arg_error () {
clear
echo -e "\nUsage: bash install.sh [OPTION]\n"
echo "Mandatory arguments"
echo " tcb Install Transmitter/Receiver Program (*buntu 18.04+)"
echo " relay Install Relay Program (*buntu 18.04+ / Tails (Debian Buster+))"
echo -e " local Install insecure local testing mode (*buntu 18.04+)\n"
exit 1
}
root_check() {
if [[ $EUID -eq 0 ]]; then
clear
echo -e "\nError: This installer must not be run as root. Exiting.\n" 1>&2
exit 1
fi
}
architecture_check () {
if ! [[ "$(uname -m 2>/dev/null | grep x86_64)" ]]; then
clear
echo -e "\nError: Invalid system architecture. Exiting.\n" 1>&2
exit 1
fi
}
set -e
architecture_check
root_check
sudo_pwd='';
case $1 in
tcb ) install_tcb;;
relay ) install_relay;;
local ) install_local_test;;
dev ) install_developer;;
* ) arg_error;;
esac


@ -1,17 +0,0 @@
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIcBAABAgAGBQJX431gAAoJENJrq8gPhjL09I8QAK23lNDZvRrWiqipHTV8+RIB
n7MYk69FjgWnbBwLBlqfBrlGiNu5sE0j7yLGZrPUmKJv5s4exKY9Aw8iz+IDK85r
z6a4Ag84hnBwbkGwf/4qVmHFZUfvPFUgRUbPPH/PvB+N8pJbhF90UgaWcNGEJQCi
+jBMUcP9MEUcnUOA5oPwa7U9SfNim9daQEBcwvHiAJwM6kfVqv1ZY8IlhqwpT43x
6IQhhzJSzIwyZR/v1ZVNsGtGd/V33iELaJNscS81dvt8zuv8t3hPc34ea7UCu4Kp
16mdzpzApawN4cwH2CGQomBSkECI7Lo9MMl969w39LXxpq3Y8lvkFyULy1Mi34Fu
BBDzvdsOH4uLFnUML7Y0jn72xU+nsSzN7YYxRqdd+pSkNvv0jSvc/nzocCkPinBU
50toZu0fco21wAjcRaqQ487jfLBNdXvqJ6Shnb0FYl3t4YyKqLWSXnQLnQschEww
tFQ1AlnK1hG7kvdYOMhdFt/02E8/+ANuyavLixDSrOdyAwSeKdG3f6qKyI638izN
P4yF3FNdswxjXHaf1skVN0d27OUc1lezAinOWKbj0PtTQtH/tWccOvVqKStV1xiz
MUP4AX7g4M8V2QgBhDgMZFlqj9fUuqo94ZdmGGoNXeRgKybRmm32GPqll/4M2c0M
2UwA3ijKZWN3fji1jzSt
=OB/+
-----END PGP SIGNATURE-----


@ -1,8 +0,0 @@
[Desktop Entry]
Version=1.17.08
Name=TFC-NH
Exec=gnome-terminal -x bash -c "cd $HOME/tfc && python3.5 'nh.py' || bash"
Icon=tfc.png
Terminal=false
Type=Application
Categories=Network;Messaging;Security;


@ -1,8 +0,0 @@
[Desktop Entry]
Version=1.17.08
Name=TFC-NH
Exec=gnome-terminal --disable-factory -x bash -c "cd $HOME/tfc && source venv_nh/bin/activate && python3.5 'nh.py' && deactivate || bash"
Icon=tfc.png
Terminal=false
Type=Application
Categories=Network;Messaging;Security;

launchers/TFC-RP-Tails.desktop Executable file

@ -0,0 +1,8 @@
[Desktop Entry]
Version=1.19.01
Name=TFC-Relay
Exec=gnome-terminal -x bash -c "cd /opt/tfc && python3.5 'relay.py' || bash"
Icon=tfc.png
Terminal=false
Type=Application
Categories=Network;Messaging;Security;

launchers/TFC-RP.desktop Executable file

@ -0,0 +1,8 @@
[Desktop Entry]
Version=1.19.01
Name=TFC-Relay
Exec=gnome-terminal --disable-factory -x bash -c "cd /opt/tfc && source venv_relay/bin/activate && python3.6 'relay.py' && deactivate || bash"
Icon=tfc.png
Terminal=false
Type=Application
Categories=Network;Messaging;Security;


@ -1,8 +0,0 @@
[Desktop Entry]
Version=1.17.08
Name=TFC-RxM
Exec=gnome-terminal --disable-factory --maximize -x bash -c "cd $HOME/tfc && source venv_tfc/bin/activate && python3.6 'tfc.py' -rx && deactivate || bash"
Icon=tfc.png
Terminal=false
Type=Application
Categories=Network;Messaging;Security;

launchers/TFC-RxP.desktop Executable file

@ -0,0 +1,8 @@
[Desktop Entry]
Version=1.19.01
Name=TFC-Receiver
Exec=gnome-terminal --disable-factory --maximize -x bash -c "cd /opt/tfc && source venv_tcb/bin/activate && python3.6 'tfc.py' -r && deactivate || bash"
Icon=tfc.png
Terminal=false
Type=Application
Categories=Network;Messaging;Security;

@@ -1,8 +0,0 @@
[Desktop Entry]
Version=1.17.08
Name=TFC-TxM
Exec=gnome-terminal --disable-factory --maximize -x bash -c "cd $HOME/tfc && source venv_tfc/bin/activate && python3.6 'tfc.py' && deactivate || bash"
Icon=tfc.png
Terminal=false
Type=Application
Categories=Network;Messaging;Security;

8
launchers/TFC-TxP.desktop Executable file
@@ -0,0 +1,8 @@
[Desktop Entry]
Version=1.19.01
Name=TFC-Transmitter
Exec=gnome-terminal --disable-factory --maximize -x bash -c "cd /opt/tfc && source venv_tcb/bin/activate && python3.6 'tfc.py' && deactivate || bash"
Icon=tfc.png
Terminal=false
Type=Application
Categories=Network;Messaging;Security;

@@ -1,8 +1,8 @@
[Desktop Entry]
Version=1.17.08
Version=1.19.01
Name=TFC-LR
Comment=Local testing
Exec=terminator -m -p tfc -l tfc-lr
Exec=terminator -m -u -p tfc -l tfc-lr
Icon=tfc.png
Terminal=false
Type=Application
@@ -11,15 +11,15 @@ Actions=TFC-RL;TFC-DD-LR;TFC-DD-RL
[Desktop Action TFC-RL]
Name=TFC-RL
Exec=terminator -m -p tfc -l tfc-rl
Exec=terminator -m -u -p tfc -l tfc-rl
OnlyShowIn=Unity;
[Desktop Action TFC-DD-LR]
Name=TFC-DD-LR
Exec=terminator -m -p tfc -l tfc-dd-lr
Exec=terminator -m -u -p tfc-dd -l tfc-dd-lr
OnlyShowIn=Unity;
[Desktop Action TFC-DD-RL]
Name=TFC-DD-RL
Exec=terminator -m -p tfc -l tfc-dd-rl
Exec=terminator -m -u -p tfc-dd -l tfc-dd-rl
OnlyShowIn=Unity;

@@ -24,7 +24,7 @@
[[[child1]]]
order = 0
parent = root
ratio = 0.5
ratio = 0.585
type = HPaned
[[[child2]]]
order = 0
@@ -32,22 +32,22 @@
ratio = 0.5
type = VPaned
[[[txm_emulator]]]
command = cd $HOME/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l && deactivate || bash
[[[source_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l && deactivate || bash
directory = ""
order = 1
parent = child2
profile = tfc
type = Terminal
[[[rxm_emulator]]]
command = cd $HOME/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -rx && deactivate || bash
[[[destination_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -r && deactivate || bash
directory = ""
order = 0
parent = child2
profile = tfc
type = Terminal
[[[nh_emulator]]]
command = cd $HOME/tfc/ && source venv_nh/bin/activate && python3.5 nh.py -l && deactivate || bash
[[[networked_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 relay.py -l && deactivate || bash
directory = ""
order = 1
parent = child1
@@ -67,7 +67,7 @@
[[[child1]]]
order = 0
parent = root
ratio = 0.5
ratio = 0.415
type = HPaned
[[[child2]]]
order = 1
@@ -75,22 +75,22 @@
ratio = 0.5
type = VPaned
[[[txm_emulator]]]
command = cd $HOME/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l && deactivate || bash
[[[source_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l && deactivate || bash
directory = ""
order = 1
parent = child2
profile = tfc
type = Terminal
[[[rxm_emulator]]]
command = cd $HOME/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -rx && deactivate || bash
[[[destination_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -r && deactivate || bash
directory = ""
order = 0
parent = child2
profile = tfc
type = Terminal
[[[nh_emulator]]]
command = cd $HOME/tfc/ && source venv_nh/bin/activate && python3.5 nh.py -l && deactivate || bash
[[[networked_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 relay.py -l && deactivate || bash
directory = ""
order = 0
parent = child1
@@ -110,7 +110,7 @@
[[[child1]]]
order = 0
parent = root
ratio = 0.45
ratio = 0.545
type = HPaned
[[[child2]]]
order = 0
@@ -120,7 +120,7 @@
[[[child3]]]
order = 1
parent = child1
ratio = 0.18
ratio = 0.14
type = HPaned
[[[child4]]]
order = 0
@@ -128,44 +128,44 @@
ratio = 0.5
type = VPaned
[[[txm_emulator]]]
command = cd $HOME/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -d && deactivate || bash
[[[source_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -d && deactivate || bash
directory = ""
order = 1
parent = child2
profile = tfc
profile = tfc-dd
type = Terminal
[[[rxm_emulator]]]
command = cd $HOME/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -rx && deactivate || bash
[[[destination_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -r && deactivate || bash
directory = ""
order = 0
parent = child2
profile = tfc
profile = tfc-dd
type = Terminal
[[[nh_emulator]]]
command = cd $HOME/tfc/ && source venv_nh/bin/activate && python3.5 nh.py -l -d && deactivate || bash
[[[networked_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 relay.py -l -d && deactivate || bash
directory = ""
order = 1
parent = child3
profile = tfc
profile = tfc-dd
type = Terminal
[[[txm_dd_emulator]]]
command = cd $HOME/tfc/ && python3.6 dd.py txnhlr
[[[source_computer_dd_emulator]]]
command = cd /opt/tfc/ && python3.6 dd.py scnclr
directory = ""
order = 1
parent = child4
profile = tfc
profile = tfc-dd
type = Terminal
[[[rxm_dd_emulator]]]
command = cd $HOME/tfc/ && python3.6 dd.py nhrxlr
[[[destination_computer_dd_emulator]]]
command = cd /opt/tfc/ && python3.6 dd.py ncdclr
directory = ""
order = 0
parent = child4
profile = tfc
profile = tfc-dd
type = Terminal
[[tfc-dd-rl]]
[[[root]]]
fullscreen = False
@@ -178,12 +178,12 @@
[[[child1]]]
order = 0
parent = root
ratio = 0.55
ratio = 0.451
type = HPaned
[[[child2]]]
order = 0
parent = child1
ratio = 0.82
ratio = 0.867
type = HPaned
[[[child3]]]
order = 1
@@ -196,41 +196,41 @@
ratio = 0.5
type = VPaned
[[[txm_emulator]]]
command = cd $HOME/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -d && deactivate || bash
[[[source_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -d && deactivate || bash
directory = ""
order = 1
parent = child4
profile = tfc
profile = tfc-dd
type = Terminal
[[[rxm_emulator]]]
command = cd $HOME/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -rx && deactivate || bash
[[[destination_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 tfc.py -l -r && deactivate || bash
directory = ""
order = 0
parent = child4
profile = tfc
profile = tfc-dd
type = Terminal
[[[nh_emulator]]]
command = cd $HOME/tfc/ && source venv_nh/bin/activate && python3.5 nh.py -l -d && deactivate || bash
[[[networked_computer_emulator]]]
command = cd /opt/tfc/ && source venv_tfc/bin/activate && python3.6 relay.py -l -d && deactivate || bash
directory = ""
order = 0
parent = child2
profile = tfc
profile = tfc-dd
type = Terminal
[[[txm_dd_emulator]]]
command = cd $HOME/tfc/ && python3.6 dd.py txnhrl
[[[source_computer_dd_emulator]]]
command = cd /opt/tfc/ && python3.6 dd.py scncrl
directory = ""
order = 1
parent = child3
profile = tfc
profile = tfc-dd
type = Terminal
[[[rxm_dd_emulator]]]
command = cd $HOME/tfc/ && python3.6 dd.py nhrxrl
[[[destination_computer_dd_emulator]]]
command = cd /opt/tfc/ && python3.6 dd.py ncdcrl
directory = ""
order = 0
parent = child3
profile = tfc
profile = tfc-dd
type = Terminal
@@ -246,5 +246,19 @@
[[tfc]]
background_color = "#3c3f41"
background_image = None
use_system_font = False
font = Monospace 11
foreground_color = "#a1b6bd"
show_titlebar = False
scrollback_infinite = True
show_titlebar = False
scrollbar_position = hidden
[[tfc-dd]]
background_color = "#3c3f41"
background_image = None
use_system_font = False
font = Monospace 10.5
foreground_color = "#a1b6bd"
scrollback_infinite = True
show_titlebar = False
scrollbar_position = hidden

93
nh.py
@@ -1,93 +0,0 @@
#!/usr/bin/env python3.5
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import os
import subprocess
import sys
import time
from multiprocessing import Process, Queue
from src.common.misc import ignored
from src.common.output import c_print, clear_screen
from src.common.statics import *
from src.nh.commands import nh_command
from src.nh.gateway import Gateway, gateway_loop
from src.nh.misc import process_arguments
from src.nh.pidgin import ensure_im_connection, im_command, im_incoming, im_outgoing
from src.nh.settings import Settings
from src.nh.tcb import rxm_outgoing, txm_incoming
def main() -> None:
"""Load settings, establish gateway and initialize processes."""
settings = Settings(*process_arguments())
gateway = Gateway(settings)
clear_screen()
c_print(TFC, head=1, tail=1)
ensure_im_connection()
queues = {TXM_INCOMING_QUEUE: Queue(), # Packets from gateway to 'txm_incoming' process
RXM_OUTGOING_QUEUE: Queue(), # Packets from TxM/IM client to RxM
TXM_TO_IM_QUEUE: Queue(), # Packets from TxM to IM client
TXM_TO_NH_QUEUE: Queue(), # Commands from TxM to NH
TXM_TO_RXM_QUEUE: Queue(), # Commands from TxM to RxM
NH_TO_IM_QUEUE: Queue(), # Commands from NH to IM client
EXIT_QUEUE: Queue()} # Signal for normal exit
process_list = [Process(target=gateway_loop, args=(queues, gateway )),
Process(target=txm_incoming, args=(queues, settings )),
Process(target=rxm_outgoing, args=(queues, settings, gateway )),
Process(target=im_incoming, args=(queues, )),
Process(target=im_outgoing, args=(queues, settings )),
Process(target=im_command, args=(queues, )),
Process(target=nh_command, args=(queues, settings, sys.stdin.fileno()))]
for p in process_list:
p.start()
while True:
with ignored(EOFError, KeyboardInterrupt):
time.sleep(0.1)
if not all([p.is_alive() for p in process_list]):
for p in process_list:
p.terminate()
sys.exit(1)
if not queues[EXIT_QUEUE].empty():
command = queues[EXIT_QUEUE].get()
for p in process_list:
p.terminate()
if command == WIPE:
if TAILS in subprocess.check_output('lsb_release -a', shell=True):
os.system('sudo poweroff')
else:
subprocess.Popen("find {} -name '{}*' -type f -exec shred -n 3 -z -u {{}} \;".format(DIR_USER_DATA, NH), shell=True).wait()
subprocess.Popen("find {} -type f -exec shred -n 3 -z -u {{}} \;".format('$HOME/.purple/'), shell=True).wait()
os.system('poweroff')
else:
sys.exit(0)
if __name__ == '__main__':
main()
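The `main()` function above starts its worker processes and then acts as a watchdog: it polls process liveness and the exit queue, terminating everything when either fires. A minimal, self-contained sketch of that supervision pattern (the names `worker` and `monitor_processes` are illustrative here, not TFC's implementation):

```python
import time
from multiprocessing import Process, Queue


def worker() -> None:
    """Illustrative long-running worker process."""
    time.sleep(60)


def monitor_processes(process_list, exit_queue):
    """Terminate all workers; return 1 if one died, 0 on an exit signal."""
    while True:
        time.sleep(0.01)
        if not all(p.is_alive() for p in process_list):
            for p in process_list:
                p.terminate()
            return 1
        if not exit_queue.empty():
            for p in process_list:
                p.terminate()
            return 0


if __name__ == '__main__':
    exit_queue = Queue()
    processes = [Process(target=worker) for _ in range(2)]
    for p in processes:
        p.start()
    exit_queue.put(b'EXIT')  # simulate a clean exit command
    print(monitor_processes(processes, exit_queue))  # prints 0
```

Polling with short sleeps keeps the parent responsive to both failure and shutdown without blocking on any single queue.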

182
relay.py Normal file
@@ -0,0 +1,182 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import sys
from multiprocessing import Process, Queue
from typing import Dict
from cryptography.hazmat.primitives.asymmetric.x448 import X448PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from src.common.gateway import Gateway, gateway_loop
from src.common.misc import ensure_dir, monitor_processes, process_arguments
from src.common.output import print_title
from src.common.statics import *
from src.relay.client import c_req_manager, client_manager, g_msg_manager
from src.relay.commands import relay_command
from src.relay.onion import onion_service
from src.relay.server import flask_server
from src.relay.tcb import dst_outgoing, src_incoming
def main() -> None:
"""Load persistent settings and launch the Relay Program.
This function loads settings from the settings database and launches
processes for the Relay Program. It then monitors the EXIT_QUEUE for
EXIT/WIPE signals and each process in case one of them dies.
If you're reading this code to get the big picture on how TFC works,
start by looking at `tfc.py` for Transmitter Program functionality.
After you have reviewed the Transmitter Program's code, revisit the
code of this program.
The Relay Program operates multiple processes to enable real time IO
between multiple data sources and destinations.
    [ASCII process diagram of the Relay Program (Networked Computer):
    `relay_command` routes contact management commands to `client_manager`
    and `c_req_manager`, and the Onion Service private key to
    `onion_service`; datagrams from the Source Computer flow through
    `gateway_loop` and `src_incoming` to `flask_server` and `dst_outgoing`
    (Destination Computer); `client` processes fetch incoming
    messages/files/public keys, URL tokens and group management messages
    from contacts' Flask servers, feeding `dst_outgoing` and
    `g_msg_manager`; data flow at the Source and Destination Computer
    gateways is enforced with hardware data diodes.]
The image above gives a rough overview of the structure of the Relay
Program. The Relay Program acts as a protocol converter that reads
datagrams from the Source Computer. Outgoing message/file/public key
datagrams are made available in the user's Tor v3 Onion Service.
Copies of sent message datagrams as well as datagrams from contacts'
Onion Services are forwarded to the Destination Computer.
The Relay-to-Relay encrypted datagrams from contacts such as contact
requests, public keys and group management messages are displayed by
the Relay Program.
Outgoing message datagrams are loaded by contacts from the user's
Flask web server. To request messages intended for them, each
contact uses a contact-specific URL token to load the messages.
The URL token is the X448 shared secret derived from the per-session
ephemeral X448 values of the two conversing parties. The private
value stays on the Relay Program -- the public value is obtained by
    connecting to the root domain of the contact's Onion Service.
"""
working_dir = f'{os.getenv("HOME")}/{DIR_TFC}'
ensure_dir(working_dir)
os.chdir(working_dir)
_, local_test, data_diode_sockets = process_arguments()
gateway = Gateway(NC, local_test, data_diode_sockets)
print_title(NC)
url_token_private_key = X448PrivateKey.generate()
url_token_public_key = url_token_private_key.public_key().public_bytes(encoding=Encoding.Raw,
format=PublicFormat.Raw).hex()
queues = \
{GATEWAY_QUEUE: Queue(), # All datagrams from `gateway_loop` to `src_incoming`
DST_MESSAGE_QUEUE: Queue(), # Message datagrams from `src_incoming`/`client` to `dst_outgoing`
M_TO_FLASK_QUEUE: Queue(), # Message/pubkey datagrams from `src_incoming` to `flask_server`
F_TO_FLASK_QUEUE: Queue(), # File datagrams from `src_incoming` to `flask_server`
SRC_TO_RELAY_QUEUE: Queue(), # Command datagrams from `src_incoming` to `relay_command`
DST_COMMAND_QUEUE: Queue(), # Command datagrams from `src_incoming` to `dst_outgoing`
CONTACT_KEY_QUEUE: Queue(), # Contact management commands from `relay_command` to `client_manager`
C_REQ_MGR_QUEUE: Queue(), # Contact requests management from `relay_command` to `c_req_manager`
URL_TOKEN_QUEUE: Queue(), # URL tokens from `client` to `flask_server`
GROUP_MSG_QUEUE: Queue(), # Group management messages from `client` to `g_msg_manager`
CONTACT_REQ_QUEUE: Queue(), # Contact requests from `flask_server` to `c_req_manager`
F_REQ_MGMT_QUEUE: Queue(), # Contact list management from `relay_command` to `c_req_manager`
GROUP_MGMT_QUEUE: Queue(), # Contact list management from `relay_command` to `g_msg_manager`
ONION_CLOSE_QUEUE: Queue(), # Onion Service close command from `relay_command` to `onion_service`
ONION_KEY_QUEUE: Queue(), # Onion Service private key from `relay_command` to `onion_service`
TOR_DATA_QUEUE: Queue(), # Open port for Tor from `onion_service` to `client_manager`
EXIT_QUEUE: Queue() # EXIT/WIPE signal from `relay_command` to `main`
} # type: Dict[bytes, Queue]
process_list = [Process(target=gateway_loop, args=(queues, gateway )),
Process(target=src_incoming, args=(queues, gateway )),
Process(target=dst_outgoing, args=(queues, gateway )),
Process(target=client_manager, args=(queues, gateway, url_token_private_key)),
Process(target=g_msg_manager, args=(queues, )),
Process(target=c_req_manager, args=(queues, )),
Process(target=flask_server, args=(queues, url_token_public_key)),
Process(target=onion_service, args=(queues, )),
Process(target=relay_command, args=(queues, gateway, sys.stdin.fileno()) )]
for p in process_list:
p.start()
monitor_processes(process_list, NC, queues)
if __name__ == '__main__':
main()
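The URL-token scheme described in the docstring above can be sketched with the same `cryptography` primitives `relay.py` imports (the sketch assumes `cryptography` ≥ 2.5 with X448 support; `derive_url_token` is an illustrative helper, not a function in TFC):

```python
from cryptography.hazmat.primitives.asymmetric.x448 import (X448PrivateKey,
                                                            X448PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def derive_url_token(private_key: X448PrivateKey, peer_public_bytes: bytes) -> str:
    """Illustrative helper: hex-encoded X448 shared secret with the peer."""
    peer_public_key = X448PublicKey.from_public_bytes(peer_public_bytes)
    return private_key.exchange(peer_public_key).hex()


# Each party generates a per-session ephemeral X448 value...
user_private = X448PrivateKey.generate()
contact_private = X448PrivateKey.generate()

# ...and serves the raw public value from its Onion Service root domain.
user_public = user_private.public_key().public_bytes(encoding=Encoding.Raw,
                                                     format=PublicFormat.Raw)
contact_public = contact_private.public_key().public_bytes(encoding=Encoding.Raw,
                                                           format=PublicFormat.Raw)

# Both sides derive the same token without ever transmitting it.
assert derive_url_token(user_private, contact_public) == \
       derive_url_token(contact_private, user_public)
```

Because the token is a Diffie-Hellman shared secret, only the two conversing parties can compute the URL under which messages intended for that contact are served.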

7
requirements-dev.txt Normal file
@@ -0,0 +1,7 @@
# Static type checking tool
mypy
# Unit test tools
pytest
pytest-cov
pytest-xdist

@@ -1 +0,0 @@
pyserial==3.4 --hash=sha512:8333ac2843fd136d5d0d63b527b37866f7d18afc3bb33c4938b63af077492aeb118eb32a89ac78547f14d59a2adb1e5d00728728275de62317da48dadf6cdff9

32
requirements-relay.txt Normal file
@@ -0,0 +1,32 @@
# Sub-dependencies are listed below dependencies
# Pyserial (Connects the Source/Destination Computer to the Networked Computer)
pyserial==3.4 --hash=sha512:8333ac2843fd136d5d0d63b527b37866f7d18afc3bb33c4938b63af077492aeb118eb32a89ac78547f14d59a2adb1e5d00728728275de62317da48dadf6cdff9
# Stem (Connects to Tor and manages Onion Services)
stem==1.7.1 --hash=sha512:a275f59bba650cb5bb151cf53fb1dd820334f9abbeae1a25e64502adc854c7f54c51bc3d6c1656b595d142fc0695ffad53aab3c57bc285421c1f4f10c9c3db4c
# PySocks (Routes requests library through SOCKS5 proxy making Onion Service connections possible)
pysocks==1.6.8 --hash=sha512:9b544cf11464142a5f347cd5688b48422249363a425ccf3887117152f2f1969713674c4bba714242432ae85f3d62e03edeb9cb7b73ebd225ed3b47b3da6896d5
# Requests (Connects to the contact's Tor Onion Service)
requests==2.21.0 --hash=sha512:f5db1cc049948a8cc38d1c3c2de9f997bc99b65b88bd2e052be62a8c2934773d33f471ce86d8cdcacc2e651b1545d88cc571ace62154a6ccb285a19c83836483
certifi==2018.11.29 --hash=sha512:6f6cb73ec56d85ffc62eddd506c44fa597dfd3a7b74bad7f301482cad47c79d0ab7a3a390905ae46fe2a49f1007f6a1c33c41987ce769f9b5a1ea5fa773ea4eb
chardet==3.0.4 --hash=sha512:bfae58c8ea19c87cc9c9bf3d0b6146bfdb3630346bd954fe8e9f7da1f09da1fc0d6943ff04802798a665ea3b610ee2d65658ce84fe5a89f9e93625ea396a17f4
idna==2.8 --hash=sha512:fb07dbec1de86efbad82a4f73d98123c59b083c1f1277445204bef75de99ca200377ad2f1db8924ae79b31b3dd984891c87d0a6344ec4d07a0ddbbbc655821a3
urllib3==1.24.1 --hash=sha512:fdba3d58539eb31dff22cdfad91536587db3ce575af4f4c803758211dbec46944e6cf9d5459d22da620c49a36fe3ca1ae2067c741bb3f643e7b548c4abfb0d7f
# Flask (Onion Service web server that serves TFC public keys and ciphertexts to contacts)
flask==1.0.2 --hash=sha512:0cca42400dc1019eb8c9fae32460967f64880f05627bdcb06c8df0ef0f7cc2d791c2a96ab6313bca10120a6f785aa0ccdad093e6ab3d7e997ed354fd432257e7
click==7.0 --hash=sha512:6b30987349df7c45c5f41cff9076ed45b178b444fca1ab1965f4ae33d1631522ce0a2868392c736666e83672b8b20e9503ae9ce5016dce3fa8f77bc8a3674130
itsdangerous==1.1.0 --hash=sha512:891c294867f705eb9c66274bd04ac5d93140d6e9beea6cbf9a44e7f9c13c0e2efa3554bdf56620712759a5cd579e112a782d25f3f91ba9419d60b2b4d2bc5b7c
jinja2==2.10 --hash=sha512:672c1a112f76f399600a069c5ee882d5fdf065ff25f6b729ec12a266d7ef6f638c26d5cc680db7b3a375d9e1ae7323aed3c2a49eb03fc39dd1a1ca8b0d658b63
markupsafe==1.1.0 --hash=sha512:103e80f9307ebb46178aad44d8d0fe36cfc019656ecb0249767d2cd249e8fbfc48ee9b2a5d7f25845312662ccf8b09dbee0a93f5ff573883eb40ec4511c89959
werkzeug==0.14.1 --hash=sha512:0fa694cd71fa83d4a178e9f831fa9784c26e42feb5987e390ed88eb60ea2f829da5795206983236e3442ee1479dd4ca587d26dcb074a881d6d1b055bfc493c56
# Cryptography (Handles URL token derivation)
cryptography==2.5 --hash=sha512:820b591f3c838f86ee59e027986511abd3eb537bf8f5f4d2d499ab950a128bd2960c138616f0a6c36408fc72d6eefc27a14fddab9c5a6f4118e6bbad5e9d9d7f
asn1crypto==0.24.0 --hash=sha512:8d9bc344981079ac6c00e71e161c34b6f403e575bbfe1ad06e30a3bcb33e0db317bdcb7aed2d18d510cb1b3ee340a649f7f77a00d271fcf3cc388e6655b67533
cffi==1.11.5 --hash=sha512:32631c8a407f77c4580e75122a79d2f14fbc90ea958ecd9ff0a01c83280aec8b48ac202fc55c1d4aaf09975c9d1b8c21858666076ab554a71577c7a89236e87f
pycparser==2.19 --hash=sha512:7f830e1c9066ee2d297a55e2bf6db4bf6447b6d9da0145d11a88c3bb98505755fb7986eafa6e06ae0b7680838f5e5d6a6d188245ca5ad45c2a727587bac93ab5
six==1.12.0 --hash=sha512:326574c7542110d2cd8071136a36a6cffc7637ba948b55e0abb7f30f3821843073223301ecbec1d48b8361b0d7ccb338725eeb0424696efedc3f6bd2a23331d3

5
requirements-venv.txt Normal file
@@ -0,0 +1,5 @@
# Sub-dependencies are listed below dependencies
# Virtual environment (Used to create an isolated Python environment for TFC dependencies)
virtualenv==16.2.0 --hash=sha512:d08800652cf3c2a695971b54be32ded4bccd6b0223b8586c6e2348b8f60be2df7f47aed693f20e106fba11267819c81b7f7a5c3a75f89e36740c6639274a9a50
setuptools==40.6.3 --hash=sha512:bdbd2079d053409838690709389fa09cb498ee055c829e622d57c0b07069b0ec5065c64f5f76994c27fc8563ad47cd08eef843240539744223f5371b4d2daf1a

@@ -1,11 +1,25 @@
pyserial==3.4 --hash=sha512:8333ac2843fd136d5d0d63b527b37866f7d18afc3bb33c4938b63af077492aeb118eb32a89ac78547f14d59a2adb1e5d00728728275de62317da48dadf6cdff9
virtualenv==15.1.0 --hash=sha512:9988af801d9ad15c3f9831489ee9b49b54388e8349be201e7f7db3f2f1e59d033d3117f12e2f1909d65f052c5f1eacd87a894c6f7f703d770add3a0179e95863
# Sub-dependencies are listed below dependencies
# Argon2
six==1.10.0 --hash=sha512:a41b40b720c5267e4a47ffb98cdc79238831b4fbc0b20abb125504881b73ae38d5ef0215ee91f0d3582e7887244346e45da9410195d023105fccd96239f0ee95
argon2_cffi==16.3.0 --hash=sha512:0198e9d9c438a4472ee44d73737cace6e15229daca6a82425f67832db79631e9fe56e64bbce68dd06c07de7b408c864df4c1d2e99e7b5729c93391c7f3e72327
# Pyserial (Connects the Source/Destination Computer to the Networked Computer)
pyserial==3.4 --hash=sha512:8333ac2843fd136d5d0d63b527b37866f7d18afc3bb33c4938b63af077492aeb118eb32a89ac78547f14d59a2adb1e5d00728728275de62317da48dadf6cdff9
# PyNaCl
pycparser==2.18 --hash=sha512:4754e4e7556d21da328bf7dbabf72f940c9b18f1457260d48208033b05e576919f45ab399e86ea49e82120116980d7d6f53e8b959d21b7b03a3b5bbea3672f13
cffi==1.10.0 --hash=sha512:b2d3b0ff8c2c750cd405d2fd88555dff10e1d1d4a01a8a0ad636b4e1c9220bc2070e23619a70f0422d8d5b15f88f61fed129f27280520f7208c52df3fc133ec5
PyNaCl==1.1.2 --hash=sha512:05148abb695b79edc118d646aa227a17ba636d07b253ac366c2d9cf7643e1e09c08daa6ffa2d81f9a1156f3446fd9ce770919b17c9205783f843fa176f993c1c
# Argon2 (Derives keys that protect persistent user data)
argon2_cffi==19.1.0 --hash=sha512:77b17303a5d22fc35ac4771be5c710627c80ed7d6bf6705f70015197dbbc2b699ad6af0604b4517d1afd2f6d153058150a5d2933d38e4b4ca741e4ac560ddf72
cffi==1.11.5 --hash=sha512:32631c8a407f77c4580e75122a79d2f14fbc90ea958ecd9ff0a01c83280aec8b48ac202fc55c1d4aaf09975c9d1b8c21858666076ab554a71577c7a89236e87f
pycparser==2.19 --hash=sha512:7f830e1c9066ee2d297a55e2bf6db4bf6447b6d9da0145d11a88c3bb98505755fb7986eafa6e06ae0b7680838f5e5d6a6d188245ca5ad45c2a727587bac93ab5
six==1.12.0 --hash=sha512:326574c7542110d2cd8071136a36a6cffc7637ba948b55e0abb7f30f3821843073223301ecbec1d48b8361b0d7ccb338725eeb0424696efedc3f6bd2a23331d3
# PyNaCl (Handles TCB-side XChaCha20-Poly1305 symmetric encryption)
PyNaCl==1.3.0 --hash=sha512:c4017c38b026a5c531b15839b8d61d1fae9907ba1960c2f97f4cd67fe0827729346d5186a6d6927ba84f64b4cbfdece12b287aa7750a039f4160831be871cea3
# Duplicate sub-dependencies
# cffi==1.11.5 --hash=sha512:32631c8a407f77c4580e75122a79d2f14fbc90ea958ecd9ff0a01c83280aec8b48ac202fc55c1d4aaf09975c9d1b8c21858666076ab554a71577c7a89236e87f
# pycparser==2.19 --hash=sha512:7f830e1c9066ee2d297a55e2bf6db4bf6447b6d9da0145d11a88c3bb98505755fb7986eafa6e06ae0b7680838f5e5d6a6d188245ca5ad45c2a727587bac93ab5
# six==1.12.0 --hash=sha512:326574c7542110d2cd8071136a36a6cffc7637ba948b55e0abb7f30f3821843073223301ecbec1d48b8361b0d7ccb338725eeb0424696efedc3f6bd2a23331d3
# Cryptography (Handles TCB-side X448 key exchange)
cryptography==2.5 --hash=sha512:820b591f3c838f86ee59e027986511abd3eb537bf8f5f4d2d499ab950a128bd2960c138616f0a6c36408fc72d6eefc27a14fddab9c5a6f4118e6bbad5e9d9d7f
asn1crypto==0.24.0 --hash=sha512:8d9bc344981079ac6c00e71e161c34b6f403e575bbfe1ad06e30a3bcb33e0db317bdcb7aed2d18d510cb1b3ee340a649f7f77a00d271fcf3cc388e6655b67533
# Duplicate sub-dependencies
# cffi==1.11.5 --hash=sha512:32631c8a407f77c4580e75122a79d2f14fbc90ea958ecd9ff0a01c83280aec8b48ac202fc55c1d4aaf09975c9d1b8c21858666076ab554a71577c7a89236e87f
# pycparser==2.19 --hash=sha512:7f830e1c9066ee2d297a55e2bf6db4bf6447b6d9da0145d11a88c3bb98505755fb7986eafa6e06ae0b7680838f5e5d6a6d188245ca5ad45c2a727587bac93ab5
# six==1.12.0 --hash=sha512:326574c7542110d2cd8071136a36a6cffc7637ba948b55e0abb7f30f3821843073223301ecbec1d48b8361b0d7ccb338725eeb0424696efedc3f6bd2a23331d3

@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,256 +16,462 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
---
This module contains TFC's cryptographic functions. Most algorithms are
based on the ChaCha20 stream cipher by Daniel J. Bernstein (djb).
X448
ChaCha20
Linux kernel CSPRNG
XChaCha20-Poly1305 (IETF) AEAD
BLAKE2b cryptographic hash function
Argon2d key derivation function
"""
import hashlib
import multiprocessing
import os
from typing import Tuple
import argon2
import nacl.encoding
import nacl.bindings
import nacl.exceptions
import nacl.public
import nacl.secret
import nacl.utils
from cryptography.hazmat.primitives.asymmetric.x448 import X448PrivateKey, X448PublicKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from src.common.exceptions import CriticalError
from src.common.misc import ignored
from src.common.output import c_print, clear_screen, phase, print_on_previous_line
from src.common.misc import ignored, separate_header
from src.common.output import m_print, phase, print_on_previous_line
from src.common.statics import *
def sha3_256(message: bytes) -> bytes:
"""Generate SHA3-256 digest from message."""
return hashlib.sha3_256(message).digest()
def blake2b(message: bytes, # Message to hash
key: bytes = b'', # Key for keyed hashing
salt: bytes = b'', # Salt for randomized hashing
person: bytes = b'', # Personalization string
digest_size: int = BLAKE2_DIGEST_LENGTH # Length of the digest
) -> bytes: # The BLAKE2b digest
"""Generate BLAKE2b digest (i.e. cryptographic hash) of a message.
BLAKE2 is the successor of SHA3-finalist BLAKE*, designed by
Jean-Philippe Aumasson, Samuel Neves, Zooko Wilcox-O'Hearn and
Christian Winnerlein. The hash function is based on the ChaCha20
stream cipher, designed by djb.
def blake2s(message: bytes, key: bytes = b'') -> bytes:
"""Generate Blake2s digest from message."""
return hashlib.blake2s(message, key=key).digest()
* BLAKE was designed by Jean-Philippe Aumasson, Luca Henzen,
Willi Meier, and Raphael C.-W. Phan.
For more details, see
https://blake2.net/
https://leastauthority.com/blog/BLAKE2-harder-better-faster-stronger-than-MD5/
def sha256(message: bytes) -> bytes:
"""Generate SHA256 digest from message."""
return hashlib.sha256(message).digest()
The reasons for using BLAKE2b in TFC include
o BLAKE received* more in-depth cryptanalysis than Keccak (SHA3):
def hash_chain(message: bytes) -> bytes:
"""Mix several hash functions to distribute trust.
"Keccak received a significant amount of cryptanalysis,
although not quite the depth of analysis applied to BLAKE,
Grøstl, or Skein."
(https://nvlpubs.nist.gov/nistpubs/ir/2012/NIST.IR.7896.pdf # p. 13)
This construction remains secure in case a weakness is discovered
in one of the hash functions (e.g. insecure algorithm that is not
unpredictable or that has weak preimage resistance, or if the
algorithm is badly implemented).
* https://blake2.net/#cr
    In case the implementation is malicious, this construction
    forces stateless implementations -- that try to compromise the mixing
    phase -- to guess their position in the construction, which will
    eventually lead to key state mismatch and thus detection.
o BLAKE shares design elements with SHA-2 that has 16 years of
cryptanalysis behind it.
(https://en.wikipedia.org/wiki/SHA-2#Cryptanalysis_and_validation)
o 128-bit collision/preimage/second-preimage resistance against
Grover's algorithm running on a quantum Turing machine.
o The algorithm is bundled in Python3.6's hashlib.
o Compared to SHA3-256, the algorithm runs faster on CPUs which
means better hash ratchet performance.
o Compared to SHA3-256, the algorithm runs slower on ASICs which
means attacks by high-budget adversaries are slower.
    Note that while the default digest length of BLAKE2b (the variant
    optimized for 64-bit platforms) is 512 bits, the digest is
    truncated to 256 bits for use in TFC.
The correctness of the BLAKE2b implementation* is tested by TFC unit
tests. The testing is done in limited scope by using an official KAT.
* https://github.com/python/cpython/tree/3.6/Modules/_blake2
https://github.com/python/cpython/blob/3.6/Lib/hashlib.py
"""
d1 = sha3_256(blake2s(sha256(message)))
d2 = sha3_256(sha256(blake2s(message)))
d3 = blake2s(sha3_256(sha256(message)))
d4 = blake2s(sha256(sha3_256(message)))
d5 = sha256(blake2s(sha3_256(message)))
d6 = sha256(sha3_256(blake2s(message)))
d7 = sha3_256(message)
d8 = blake2s(message)
d9 = sha256(message)
# Mixing phase
x1 = xor(d1, d2)
x2 = xor(x1, d3)
x3 = xor(x2, d4)
x4 = xor(x3, d5)
x5 = xor(x4, d6)
x6 = xor(x5, d7)
x7 = xor(x6, d8)
x8 = xor(x7, d9)
return x8
return hashlib.blake2b(message, digest_size=digest_size, key=key, salt=salt, person=person).digest()
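As a quick illustration of the keyed, truncated hashing the wrapper above performs via `hashlib` (the message and key values here are arbitrary):

```python
import hashlib

# Keyed BLAKE2b digest truncated from the 64-byte default to 32 bytes.
digest = hashlib.blake2b(b'message', digest_size=32, key=b'key').digest()
assert len(digest) == 32

# A different key yields an unrelated digest: keyed hashing acts as a MAC.
other = hashlib.blake2b(b'message', digest_size=32, key=b'other key').digest()
assert digest != other
```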
def argon2_kdf(password: str,
salt: bytes,
rounds: int = ARGON2_ROUNDS,
memory: int = ARGON2_MIN_MEMORY,
parallelism: int = None,
local_test: bool = False) -> Tuple[bytes, int]:
"""Derive key from password and salt using Argon2d (PHC winner).
def argon2_kdf(password: str, # Password to derive the key from
salt: bytes, # Salt to derive the key from
rounds: int = ARGON2_ROUNDS, # Number of iterations
memory: int = ARGON2_MIN_MEMORY, # Amount of memory to use (in bytes)
parallelism: int = 1 # Number of threads to use
) -> bytes: # The derived key
"""Derive an encryption key from password and salt using Argon2d.
:param password: Password to derive key from
:param salt: Salt to derive key from
:param rounds: Number of iterations
:param memory: Memory usage
:param parallelism: Number of threads to use
:param local_test: When True, splits parallelism to half
:return: Derived key, amount of memory and number of threads used
Argon2 is a key derivation function (KDF) designed by Alex Biryukov,
Daniel Dinu, and Dmitry Khovratovich from the University of
Luxembourg. The algorithm is the winner of the 2015 Password Hashing
Competition (PHC).
For more details, see
https://password-hashing.net/
https://github.com/P-H-C/phc-winner-argon2/blob/master/argon2-specs.pdf
https://en.wikipedia.org/wiki/Argon2
The purpose of the KDF is to stretch a password into a 256-bit key.
Argon2 features a slow, memory-hard hash function that consumes
computational resources of an attacker that attempts a dictionary
or a brute force attack. The accompanying 256-bit salt prevents
rainbow-table attacks, forcing each attack to take place against an
individual (physically compromised) TFC-endpoint, or PSK
transmission media.
The used Argon2 version is Argon2d that uses data-dependent memory
access, which maximizes security against time-memory trade-off
(TMTO) attacks at the risk of side-channel attacks. The IETF
recommends using Argon2id (that is side-channel resistant and almost
as secure as Argon2d against TMTO attacks) **except** when there is
a reason to prefer Argon2d (or Argon2i). The reason TFC uses
Argon2d is that key derivation only takes place on the Source and
Destination Computers. As these computers are connected to the
Networked Computer only via a data diode, they do not leak any
information via side-channels to the adversary. The expected attacks
are against physically compromised data storage devices where the
encrypted data is at rest. In such a situation, Argon2d is the most
secure option.
The correctness of the Argon2d implementation* is tested by TFC unit
tests. The testing is done in limited scope by using an official KAT.
* https://github.com/P-H-C/phc-winner-argon2
https://github.com/hynek/argon2_cffi
"""
assert len(salt) == ARGON2_SALT_LENGTH
key = argon2.low_level.hash_secret_raw(secret=password.encode(),
salt=salt,
time_cost=rounds,
memory_cost=memory,
parallelism=parallelism,
hash_len=SYMMETRIC_KEY_LENGTH,
type=argon2.Type.D) # type: bytes
return key
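The `argon2.low_level` call above requires the third-party `argon2-cffi` package. To illustrate the same password-stretching idea with the standard library only, here is a sketch that uses `hashlib.scrypt` (a different memory-hard KDF that ships with CPython 3.6+) purely as a stand-in for Argon2d; the function name and cost parameters are illustrative, not TFC's:

```python
import hashlib
import os

def kdf_sketch(password: str, salt: bytes) -> bytes:
    """Stretch a password into a 256-bit key with a memory-hard KDF.

    NOTE: scrypt stands in for Argon2d here only because it ships
    with CPython (OpenSSL-backed); TFC itself uses Argon2d.
    """
    return hashlib.scrypt(password.encode(),
                          salt=salt,
                          n=2**14,   # CPU/memory cost (16 MiB with r=8)
                          r=8,       # block size
                          p=1,       # parallelism
                          dklen=32)  # 256-bit derived key

salt = os.urandom(32)
key  = kdf_sketch('correct horse battery staple', salt)
print(len(key))  # 32
```

As with Argon2, the same password and salt always derive the same key, while a different salt derives an unrelated key.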
class X448(object):
"""
X448 is the Diffie-Hellman function for Curve448-Goldilocks, a
state-of-the-art elliptic curve designed by Mike Hamburg in 2014:
https://eprint.iacr.org/2015/625.pdf
The reasons for using X448 in TFC include
o It meets the criterion for a safe curve.
(https://safecurves.cr.yp.to/)
o NIST has announced X448 will be included in the SP 800-186.
(https://csrc.nist.gov/News/2017/Transition-Plans-for-Key-Establishment-Schemes)
o It provides conservative 224 bits of symmetric security.
o It is immune against invalid curve attacks: Its public keys do
not require validation as long as the public key is not zero.
o Its public keys are reasonably short (84 Base58 chars) to be
manually typed from Networked Computer to Source Computer.
The correctness of the X448 implementation* is tested by TFC unit
tests. The testing is done in limited scope by using official test
vectors.
* https://github.com/openssl/openssl/tree/OpenSSL_1_1_1-stable/crypto/ec/curve448
https://github.com/pyca/cryptography/blob/master/src/cryptography/hazmat/primitives/asymmetric/x448.py
"""
@staticmethod
def generate_private_key() -> 'X448PrivateKey':
"""Generate the X448 private key.
The size of the private key is 56 bytes (448 bits).
"""
return X448PrivateKey.generate()
@staticmethod
def derive_public_key(private_key: 'X448PrivateKey') -> bytes:
"""Derive public key from X448 private key."""
public_key = private_key.public_key().public_bytes(encoding=Encoding.Raw,
format=PublicFormat.Raw) # type: bytes
return public_key
@staticmethod
def shared_key(private_key: 'X448PrivateKey', public_key: bytes) -> bytes:
"""Derive the X448 shared key.
Since the shared secret is zero if contact's public key is zero,
this function asserts the public key is a valid non-zero
bytestring.
Because the raw bits of the X448 shared secret might not be
uniformly distributed in the keyspace (i.e. bits might have bias
towards 0 or 1), the raw shared secret is passed through BLAKE2b
CSPRF to ensure a uniformly random shared key.
"""
assert len(public_key) == TFC_PUBLIC_KEY_LENGTH
assert public_key != bytes(TFC_PUBLIC_KEY_LENGTH)
shared_secret = private_key.exchange(X448PublicKey.from_public_bytes(public_key))
return blake2b(shared_secret, digest_size=SYMMETRIC_KEY_LENGTH)
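The zero-key guard and BLAKE2b compression in `shared_key` can be sketched with the standard library alone. The exchange itself is mocked: `mock_secret` below is an arbitrary 56-byte string standing in for a real X448 Diffie-Hellman output, and the constant names mirror TFC's:

```python
import hashlib

TFC_PUBLIC_KEY_LENGTH = 56  # X448 public keys are 448 bits
SYMMETRIC_KEY_LENGTH  = 32

def compress_shared_secret(shared_secret: bytes, public_key: bytes) -> bytes:
    """Reject the all-zero public key and hash the raw shared secret.

    Sketch only: `shared_secret` is any 56-byte string here, not a
    real X448 exchange result.
    """
    assert len(public_key) == TFC_PUBLIC_KEY_LENGTH
    assert public_key != bytes(TFC_PUBLIC_KEY_LENGTH)  # zero key -> zero secret
    return hashlib.blake2b(shared_secret,
                           digest_size=SYMMETRIC_KEY_LENGTH).digest()

mock_secret = bytes(range(56))
mock_pub    = bytes([1]) + bytes(55)
key = compress_shared_secret(mock_secret, mock_pub)
print(len(key))  # 32
```

Passing an all-zero public key trips the assertion, matching the docstring's rationale for validating the peer's key.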
def encrypt_and_sign(plaintext: bytes, # Plaintext to encrypt
key: bytes, # 32-byte symmetric key
ad: bytes = b'' # Associated data
) -> bytes: # Nonce + ciphertext + tag
"""Encrypt plaintext with XChaCha20-Poly1305.
ChaCha20 is a stream cipher published by Daniel J. Bernstein (djb)
in 2008. The algorithm is an improved version of Salsa20 -- another
stream cipher by djb -- selected by ECRYPT into the eSTREAM
portfolio in 2008. The improvement in question is, ChaCha20
increases the per-round diffusion compared to Salsa20 while
maintaining or increasing speed.
For more details, see
https://cr.yp.to/chacha/chacha-20080128.pdf
https://en.wikipedia.org/wiki/Salsa20#ChaCha_variant
The Poly1305 is a Wegman-Carter Message Authentication Code also
designed by djb. The MAC is provably secure if ChaCha20 is secure.
The 128-bit tag space ensures the attacker's advantage to create an
existential forgery is negligible.
For more details, see
https://cr.yp.to/mac.html
The version used in TFC is the XChaCha20-Poly1305-IETF*, a variant
of the ChaCha20-Poly1305-IETF (RFC 7539**). Quoting libsodium, the
XChaCha20 (=eXtended-nonce ChaCha20) variant allows encryption of
~2^64 bytes per message, encryption of up to 2^64 messages per key,
and safe use of random nonces due to the 192-bit nonce space***.
* https://tools.ietf.org/html/draft-arciszewski-xchacha-00
** https://tools.ietf.org/html/rfc7539
*** https://download.libsodium.org/doc/secret-key_cryptography/aead/chacha20-poly1305#variants
The reasons for using XChaCha20-Poly1305 in TFC include
o The Salsa20 algorithm has 14 years of cryptanalysis behind it.
(https://en.wikipedia.org/wiki/Salsa20#Cryptanalysis_of_Salsa20)
o The increased diffusion over the well-received Salsa20.
o The algorithm is much faster compared to AES (in cases where
the CPU and/or implementation does not support AES-NI).
o Security against cache-timing attacks on all CPUs (unlike AES
on CPUs without AES-NI).
o The good name of djb.
The correctness of the XChaCha20-Poly1305 implementation* is tested
by TFC unit tests. The testing is done in limited scope by using
libsodium and IETF test vectors.
* https://github.com/jedisct1/libsodium/tree/master/src/libsodium/crypto_aead/xchacha20poly1305/sodium
https://github.com/pyca/pynacl/blob/master/src/nacl/bindings/crypto_aead.py
"""
assert len(key) == SYMMETRIC_KEY_LENGTH
nonce = csprng(XCHACHA20_NONCE_LENGTH)
ct_tag = nacl.bindings.crypto_aead_xchacha20poly1305_ietf_encrypt(plaintext, ad, nonce, key) # type: bytes
return nonce + ct_tag
def auth_and_decrypt(nonce_ct_tag: bytes, # Nonce + ciphertext + tag
key: bytes, # 32-byte symmetric key
database: str = '', # When provided, gracefully exits TFC when the tag is invalid
ad: bytes = b'' # Associated data
) -> bytes: # Plaintext
"""Authenticate and decrypt XChaCha20-Poly1305 ciphertext.
The Poly1305 tag is checked using the constant-time `sodium_memcmp`:
https://download.libsodium.org/doc/helpers#constant-time-test-for-equality
When TFC decrypts ciphertext from an untrusted source (i.e., a
contact), no `database` parameter is provided. In such a situation, if
the tag of the untrusted ciphertext is invalid, TFC discards the
ciphertext and recovers appropriately.
When TFC decrypts ciphertext from a trusted source (i.e., a
database), the `database` parameter is provided, so the function
knows which database is in question. In case the authentication
fails due to an invalid tag, the data is assumed to be either
tampered with or corrupted. TFC will in such a case gracefully exit to avoid
processing the unsafe data and warn the user in which database the
issue was detected.
"""
assert len(key) == SYMMETRIC_KEY_LENGTH
nonce, ct_tag = separate_header(nonce_ct_tag, XCHACHA20_NONCE_LENGTH)
try:
plaintext = nacl.bindings.crypto_aead_xchacha20poly1305_ietf_decrypt(ct_tag, ad, nonce, key) # type: bytes
return plaintext
except nacl.exceptions.CryptoError:
if database:
raise CriticalError(f"Authentication of data in database '{database}' failed.")
raise
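A minimal stand-in for the `separate_header` helper (imported in TFC from `src.common.misc`) shows how the prepended 24-byte nonce is split from the ciphertext+tag before decryption; the helper below is a sketch, not TFC's implementation:

```python
from typing import Tuple

XCHACHA20_NONCE_LENGTH = 24  # 192-bit nonce

def separate_header(packet: bytes, header_length: int) -> Tuple[bytes, bytes]:
    """Split a packet into its fixed-length header and the remainder.

    encrypt_and_sign prepends the random nonce to the ciphertext,
    so decryption first peels the nonce off the front.
    """
    return packet[:header_length], packet[header_length:]

nonce_ct_tag   = bytes(24) + b'ciphertext-and-16-byte-tag'
nonce, ct_tag  = separate_header(nonce_ct_tag, XCHACHA20_NONCE_LENGTH)
print(len(nonce))  # 24
```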
def byte_padding(bytestring: bytes # Bytestring to be padded
) -> bytes: # Padded bytestring
"""Pad bytestring to next 255 bytes.
Padding of output messages hides plaintext length and contributes
to traffic flow confidentiality when traffic masking is enabled.
TFC adds padding to messages it outputs. The padding ensures each
assembly packet has a constant length. When traffic masking is
disabled, the padding ensures the packet length reveals only the
maximum length of the compressed message.
When traffic masking is enabled, the padding contributes to traffic
flow confidentiality: During traffic masking, TFC will output a
constant stream of padded packets at constant intervals that hides
metadata about message length (i.e., the adversary won't be able to
distinguish when transmission of packet or series of packets starts
and stops), as well as the type (message/file) of transferred data.
TFC uses PKCS #7 padding scheme described in RFC 2315 and RFC 5652:
https://tools.ietf.org/html/rfc2315#section-10.3
https://tools.ietf.org/html/rfc5652#section-6.3
For a better explanation, see
https://en.wikipedia.org/wiki/Padding_(cryptography)#PKCS#5_and_PKCS#7
"""
padding_len = PADDING_LENGTH - (len(bytestring) % PADDING_LENGTH)
bytestring += padding_len * bytes([padding_len])
assert len(bytestring) % PADDING_LENGTH == 0
return bytestring
def rm_padding_bytes(bytestring: bytes # Bytestring from which padding is removed
) -> bytes: # Bytestring without padding
"""Remove padding from plaintext.
The length of padding is determined by the ord-value of the last
byte that is always part of the padding.
"""
length = ord(bytestring[-1:])
return bytestring[:-length]
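The PKCS #7 round trip of the two functions above can be checked in isolation with a self-contained copy:

```python
PADDING_LENGTH = 255

def byte_padding(bytestring: bytes) -> bytes:
    """PKCS #7-pad the bytestring to the next multiple of 255 bytes."""
    padding_len = PADDING_LENGTH - (len(bytestring) % PADDING_LENGTH)
    return bytestring + padding_len * bytes([padding_len])

def rm_padding_bytes(bytestring: bytes) -> bytes:
    """Strip PKCS #7 padding; the last byte states the padding length."""
    return bytestring[:-ord(bytestring[-1:])]

padded = byte_padding(b'short message')
print(len(padded))               # 255
print(rm_padding_bytes(padded))  # b'short message'
```

Note that an input whose length is already a multiple of 255 gains a full extra block of padding (e.g. a 255-byte input pads to 510 bytes), which is what makes the scheme unambiguous to remove.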
def csprng(key_length: int = SYMMETRIC_KEY_LENGTH) -> bytes:
"""Generate a cryptographically secure random key.
The default key length is 256 bits.
The key is generated by the Linux kernel's cryptographically secure
pseudo-random number generator (CSPRNG).
Since Python 3.6.0, `os.urandom` is a wrapper for best available
CSPRNG. The 3.17 and earlier versions of Linux kernel do not support
the GETRANDOM call, and Python 3.6's `os.urandom` will in those
cases fall back to non-blocking `/dev/urandom` that is not secure on
live distros as they have low entropy at the start of the session.
TFC uses `os.getrandom(n, flags=0)` explicitly. This forces use of
recent enough Python interpreter (3.6.0 or later) and limits Linux
kernel version to 3.17 or later.* The flag 0 will block urandom if
the internal state of the CSPRNG has less than 128 bits of entropy.
See PEP 524 for more details:
https://www.python.org/dev/peps/pep-0524/
* The `/dev/urandom` was redesigned around ChaCha20 in the version
4.8 of Linux kernel (https://lwn.net/Articles/686033/), so as a
good practice TFC runs the `check_kernel_version` to ensure only
the new design of the CSPRNG is used.
Quoting PEP 524:
"The os.getrandom() is a thin wrapper on the getrandom()
syscall/C function and so inherit of its behaviour. For
example, on Linux, it can return less bytes than
requested if the syscall is interrupted by a signal."
However, quoting (https://lwn.net/Articles/606141/) on GETRANDOM:
"--reads of 256 bytes or less from /dev/urandom are guaranteed to
return the full request once that device has been initialized."
Since the largest key generated in TFC is the 56-byte X448 private
key, GETRANDOM is guaranteed to always work. As a good practice
however, TFC asserts that the length of the obtained entropy is
correct.
The output of GETRANDOM is further compressed with BLAKE2b. The
preimage resistance of the hash function protects the internal
state of the entropy pool just in case some user decides to modify
the source to accept pre-4.8 Linux Kernel that has no backtracking
protection. Another reason for the hashing is that it is recommended by djb:
https://media.ccc.de/v/32c3-7210-pqchacks#video&t=1116
Since BLAKE2b only produces 1..64 byte digests, its use limits the
size of the key to 64 bytes. This is not a problem for TFC because
again, the largest key it generates is the 56-byte X448 private key.
"""
assert key_length <= BLAKE2_DIGEST_LENGTH_MAX
entropy = os.getrandom(key_length, flags=0)
assert len(entropy) == key_length
compressed = blake2b(entropy, digest_size=key_length)
assert len(compressed) == key_length
return compressed
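The GETRANDOM-plus-BLAKE2b pattern described above can be sketched portably; the `os.urandom` fallback below is only for platforms without `os.getrandom` (TFC itself requires GETRANDOM and would refuse to run there):

```python
import hashlib
import os

def csprng_sketch(key_length: int = 32) -> bytes:
    """Draw key_length bytes from the kernel CSPRNG, then compress
    them with BLAKE2b as a preimage-resistance layer.

    Sketch only: falls back to os.urandom where os.getrandom is
    unavailable (e.g. non-Linux platforms).
    """
    assert key_length <= hashlib.blake2b.MAX_DIGEST_SIZE  # 64 bytes
    if hasattr(os, 'getrandom'):
        entropy = os.getrandom(key_length, flags=0)  # blocks until pool is seeded
    else:
        entropy = os.urandom(key_length)
    assert len(entropy) == key_length
    return hashlib.blake2b(entropy, digest_size=key_length).digest()

print(len(csprng_sketch()))    # 32
print(len(csprng_sketch(56)))  # 56, the X448 private key size
```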
def check_kernel_entropy() -> None:
"""Wait until Kernel CSPRNG is sufficiently seeded.
"""Wait until the kernel CSPRNG is sufficiently seeded.
Wait until the `entropy_avail` file states that the kernel entropy pool
has at least 512 bits of entropy. The waiting ensures the ChaCha20
CSPRNG is fully seeded (i.e., it has the maximum of 384 bits of
entropy) when it generates keys. The same entropy threshold is used
by the GETRANDOM syscall in random.c:
#define CRNG_INIT_CNT_THRESH (2*CHACHA20_KEY_SIZE)
For more information on the kernel CSPRNG threshold, see
https://security.stackexchange.com/a/175771/123524
https://crypto.stackexchange.com/a/56377
"""
clear_screen()
phase("Waiting for Kernel CSPRNG entropy pool to fill up", head=1)
message = "Waiting for kernel CSPRNG entropy pool to fill up"
phase(message, head=1)
ent_avail = 0
while ent_avail < ENTROPY_THRESHOLD:
with ignored(EOFError, KeyboardInterrupt):
with open('/proc/sys/kernel/random/entropy_avail') as f:
ent_avail = int(f.read().strip())
m_print(f"{ent_avail}/{ENTROPY_THRESHOLD}")
print_on_previous_line(delay=0.1)
print_on_previous_line()
phase("Waiting for Kernel CSPRNG entropy pool to fill up")
phase(message)
phase(DONE)
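The polling loop above can be tested without touching `/proc` by injecting the reader; this refactoring is a sketch, not TFC's code, and the UI calls (`phase`, `m_print`) are omitted:

```python
from typing import Callable

ENTROPY_THRESHOLD = 512

def wait_for_entropy(read_entropy_avail: Callable[[], int],
                     threshold: int = ENTROPY_THRESHOLD) -> int:
    """Poll an entropy source until it reports at least `threshold` bits.

    TFC's check_kernel_entropy reads the value from
    /proc/sys/kernel/random/entropy_avail instead of a callable.
    """
    ent_avail = read_entropy_avail()
    while ent_avail < threshold:
        ent_avail = read_entropy_avail()
    return ent_avail

# Simulate a pool that fills up over successive reads:
readings = iter([128, 256, 384, 600])
print(wait_for_entropy(lambda: next(readings)))  # 600
```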
def check_kernel_version() -> None:
"""Check that the Linux kernel version is at least 4.8.
This check ensures that TFC only runs on Linux kernels that use the
new ChaCha20 based CSPRNG that among many things, adds backtracking
protection:
https://lkml.org/lkml/2016/7/25/43
"""
major_v, minor_v = [int(i) for i in os.uname()[2].split('.')[:2]]
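The version parse on the line above works on full `os.uname().release` strings because only the first two dot-separated fields are taken. A self-contained sketch of the comparison (TFC exits instead of returning `False` when the kernel is too old):

```python
def kernel_is_supported(release: str) -> bool:
    """Check an `os.uname().release` string against the 4.8 minimum.

    Tuple comparison handles e.g. 4.10 > 4.8 correctly, which a
    string comparison would not.
    """
    major_v, minor_v = [int(i) for i in release.split('.')[:2]]
    return (major_v, minor_v) >= (4, 8)

print(kernel_is_supported('4.15.0-20-generic'))  # True
print(kernel_is_supported('3.17.2'))             # False
```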

# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import typing
from typing import Generator, Iterable, List, Optional, Sized
from src.common.crypto import auth_and_decrypt, encrypt_and_sign
from src.common.encoding import bool_to_bytes, pub_key_to_onion_address, str_to_bytes, pub_key_to_short_address
from src.common.encoding import bytes_to_bool, onion_address_to_pub_key, bytes_to_str
from src.common.misc import ensure_dir, get_terminal_width, separate_headers, split_byte_string
from src.common.output import clear_screen
from src.common.statics import *
if typing.TYPE_CHECKING:
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
from cryptography.hazmat.primitives.asymmetric.x448 import X448PrivateKey
class Contact(object):
"""\
Contact object contains contact data not related to key management
and hash ratchet state:
onion_pub_key: The public key of the contact's v3 Tor Onion
Service. The Relay Program on user's Networked
Computer uses this public key to anonymously
discover the Onion Service and to authenticate the
end-to-end encryption used between Relay Computers.
Since Relay Program might run on an amnesic distro
like Tails, the Transmitter and Receiver Programs
handle long-term storage of the contact's Onion
Service public key. All `onion_pub_key` variables
across the codebase refer to the public key of a
contact (never that of the user).
nick: As per Zooko's triangle and Stiegler's Petname
Systems, .onion names (i.e., TFC accounts) cannot
be global, secure and memorable at the same time*.
To deal with hard to remember accounts, in TFC
contacts (and groups) are managed mostly with
nicknames assigned by the user. The nickname must
be unique among both contacts and groups so that
single command `/msg <selection>` can select a
specific contact or group. Some nicknames are
reserved so that messages from contacts cannot be
confused with system messages of Receiver Program.
Nicknames also have a length limit of 254 chars.
* https://trac.torproject.org/projects/tor/wiki/doc/HiddenServiceNames#Whyare.onionnamescreatedthatway
TFC stores the 32-byte public key fingerprints of the ECDHE key
exchange into the contact database. These values allow the user to
verify at any time no MITM attack took place during the key
exchange. When PSKs are used, a null-byte string is used as a
placeholder value.
tx_fingerprint: The user's fingerprint. This fingerprint is derived
from the user's public key which means it's
automatically authentic. During verification over
an authenticated channel, the user reads this value
to the contact out loud.
rx_fingerprint: The purported fingerprint for the contact. This
fingerprint depends on the public key received from
the insecure network and therefore, it shouldn't be
trusted implicitly. During verification over an
authenticated channel, the contact reads their
`tx_fingerprint` to the user out loud, and the user
then compares it to this purported value.
kex_status: This byte remembers the key exchange status of the
contact.
TFC stores the contact-specific settings to the contact database:
log_messages: This setting defines whether the Receiver Program
on Destination Computer writes the assembly packets
of a successfully received message into a log file.
When logging is enabled, Transmitter Program will
also log assembly packets of sent messages to its
log file.
file_reception: This setting defines whether the Receiver Program
accepts files sent by the contact. The setting has
no effect on user's Transmitter Program.
notifications: This setting defines whether, in situations where
some other window is active, the Receiver Program
displays a notification about the contact sending a
new message to their window. The setting has no
effect on user's Transmitter Program.
tfc_private_key: This value is an ephemerally stored private key
for situations where the user interrupts the key
exchange. The purpose of the value is to prevent
the user from generating different ECDHE values
when re-selecting the contact to continue the key
exchange. Note that once a shared key is derived
from this private key (and contact's public key),
it is discarded. New private key will thus be
generated if the users decide to exchange new keys
with each other.
"""
def __init__(self,
onion_pub_key: bytes,
nick: str,
tx_fingerprint: bytes,
rx_fingerprint: bytes,
kex_status: bytes,
log_messages: bool,
file_reception: bool,
notifications: bool
) -> None:
"""Create a new Contact object.
`self.short_address` is a truncated version of the account used
to identify the TFC account in printed messages.
"""
self.onion_pub_key = onion_pub_key
self.nick = nick
self.tx_fingerprint = tx_fingerprint
self.rx_fingerprint = rx_fingerprint
self.kex_status = kex_status
self.log_messages = log_messages
self.file_reception = file_reception
self.notifications = notifications
self.onion_address = pub_key_to_onion_address(self.onion_pub_key)
self.short_address = pub_key_to_short_address(self.onion_pub_key)
self.tfc_private_key = None # type: Optional[X448PrivateKey]
def serialize_c(self) -> bytes:
"""Return contact data as constant length byte string."""
return (str_to_bytes(self.rx_account)
+ str_to_bytes(self.tx_account)
+ str_to_bytes(self.nick)
"""Return contact data as a constant length byte string.
This function serializes the contact's data into a byte string
that has the exact length of 3*32 + 4*1 + 1024 = 1124 bytes. The
length is guaranteed regardless of the content or length of the
attributes' values, including the contact's nickname. The
purpose of the constant length serialization is to hide any
metadata about the contact the ciphertext length of the contact
database would reveal.
"""
return (self.onion_pub_key
+ self.tx_fingerprint
+ self.rx_fingerprint
+ self.kex_status
+ bool_to_bytes(self.log_messages)
+ bool_to_bytes(self.file_reception)
+ bool_to_bytes(self.notifications)
+ str_to_bytes(self.nick))
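The constant 1124-byte length claimed in the docstring can be checked by summing the field widths. The constant names `KEX_STATUS_LENGTH` and `PADDED_NICK_LENGTH` below are illustrative labels for the single kex-status byte and the fixed 1024-byte output of `str_to_bytes`, not necessarily TFC's own names:

```python
ONION_SERVICE_PUBLIC_KEY_LENGTH = 32
FINGERPRINT_LENGTH              = 32
KEX_STATUS_LENGTH               = 1     # illustrative name for the kex_status byte
ENCODED_BOOLEAN_LENGTH          = 1
PADDED_NICK_LENGTH              = 1024  # illustrative name for str_to_bytes output

CONTACT_LENGTH = (ONION_SERVICE_PUBLIC_KEY_LENGTH
                  + 2 * FINGERPRINT_LENGTH      # tx and rx fingerprints
                  + KEX_STATUS_LENGTH
                  + 3 * ENCODED_BOOLEAN_LENGTH  # logging, file reception, notifications
                  + PADDED_NICK_LENGTH)

print(CONTACT_LENGTH)  # 1124
```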
def uses_psk(self) -> bool:
"""\
Return True if the user and the contact are using pre-shared
keys (PSKs), else False.
When the user sets up pre-shared keys with the contact, the key
exchange status can only have two specific values (that remember
whether the PSK of the contact has been imported). That fact can
be used to determine whether the keys with contact were
pre-shared.
"""
return self.kex_status in [KEX_STATUS_NO_RX_PSK, KEX_STATUS_HAS_RX_PSK]
class ContactList(Iterable, Sized):
"""\
ContactList object manages TFC's Contact objects and the storage of
the objects in an encrypted database.
The main purpose of this object is to manage the `self.contacts`
list that contains TFC's contacts. The database is stored on disk
in encrypted form. Prior to encryption, the database is padded with
dummy contacts. The dummy contacts hide the number of actual
contacts that would otherwise be revealed by the size of the
encrypted database. As long as the user has less than 50 contacts,
the database will effectively hide the actual number of contacts.
The maximum number of contacts (and thus the size of the database)
can be changed by editing the `max_number_of_contacts` setting. This
can, however, in theory reveal to a physical attacker that the user has
more than 50 contacts.
The ContactList object also provides handy methods with human-
readable names for making queries to the database.
"""
def __init__(self, master_key: 'MasterKey', settings: 'Settings') -> None:
self.settings = settings
self.contacts = [] # type: List[Contact]
self.dummy_contact = self.generate_dummy_contact()
self.file_name = f'{DIR_USER_DATA}{settings.software_operation}_contacts'
ensure_dir(DIR_USER_DATA)
if os.path.isfile(self.file_name):
self._load_contacts()
else:
self.store_contacts()
def __iter__(self) -> Generator:
"""Iterate over contacts in contact list."""
"""Iterate over Contact objects in `self.contacts`."""
yield from self.contacts
def __len__(self) -> int:
"""Return number of contacts in contact list."""
return len(self.contacts)
"""Return the number of contacts in `self.contacts`.
The Contact object that represents the local key is left out of
the calculation.
"""
return len(self.get_list_of_contacts())
def store_contacts(self) -> None:
"""Write contacts to encrypted database."""
contacts = self.contacts + [self.dummy_contact] * (self.settings.max_number_of_contacts - len(self.contacts))
pt_bytes = b''.join([c.serialize_c() for c in contacts])
"""Write the list of contacts to an encrypted database.
This function will first create a list of contacts and dummy
contacts. It will then serialize every Contact object on that
list and join the constant length byte strings to form the
plaintext that will be encrypted and stored in the database.
By default, TFC has a maximum number of 50 contacts. In
addition, the database stores the contact that represents the
local key (used to encrypt commands from Transmitter to Receiver
        Program). The plaintext length of 51 serialized contacts is
        51*1124 = 57324 bytes. The ciphertext includes a 24-byte nonce
        and a 16-byte tag, so the size of the final database is 57364
        bytes.
"""
pt_bytes = b''.join([c.serialize_c() for c in self.contacts + self._dummy_contacts()])
ct_bytes = encrypt_and_sign(pt_bytes, self.master_key.master_key)
ensure_dir(DIR_USER_DATA)
with open(self.file_name, 'wb+') as f:
f.write(ct_bytes)
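The database-size arithmetic in the docstring can be verified directly; the constant names mirror TFC's statics:

```python
CONTACT_LENGTH         = 1124
MAX_NUMBER_OF_CONTACTS = 50
XCHACHA20_NONCE_LENGTH = 24
POLY1305_TAG_LENGTH    = 16

# +1 for the contact that represents the local key.
plaintext_len  = (MAX_NUMBER_OF_CONTACTS + 1) * CONTACT_LENGTH
ciphertext_len = XCHACHA20_NONCE_LENGTH + plaintext_len + POLY1305_TAG_LENGTH

print(plaintext_len)   # 57324
print(ciphertext_len)  # 57364
```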
def _load_contacts(self) -> None:
"""Load contacts from the encrypted database.
This function first reads and decrypts the database content. It
then splits the plaintext into a list of 1124-byte blocks: each
block contains the serialized data of one contact. Next, the
function will remove from the list all dummy contacts (that
start with dummy contact's public key). The function will then
populate the `self.contacts` list with Contact objects, the data
of which is sliced and decoded from the dummy-free blocks.
"""
with open(self.file_name, 'rb') as f:
ct_bytes = f.read()
pt_bytes = auth_and_decrypt(ct_bytes, self.master_key.master_key, database=self.file_name)
blocks = split_byte_string(pt_bytes, item_len=CONTACT_LENGTH)
df_blocks = [b for b in blocks if not b.startswith(self.dummy_contact.onion_pub_key)]
for block in df_blocks:
assert len(block) == CONTACT_LENGTH
(onion_pub_key, tx_fingerprint, rx_fingerprint, kex_status_byte,
log_messages_byte, file_reception_byte, notifications_byte,
nick_bytes) = separate_headers(block,
[ONION_SERVICE_PUBLIC_KEY_LENGTH]
+ 2*[FINGERPRINT_LENGTH]
+ 4*[ENCODED_BOOLEAN_LENGTH])
self.contacts.append(Contact(onion_pub_key =onion_pub_key,
tx_fingerprint=tx_fingerprint,
rx_fingerprint=rx_fingerprint,
kex_status =kex_status_byte,
log_messages =bytes_to_bool(log_messages_byte),
file_reception=bytes_to_bool(file_reception_byte),
notifications =bytes_to_bool(notifications_byte),
nick =bytes_to_str(nick_bytes)))
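A minimal stand-in for the `separate_headers` helper (imported in TFC from `src.common.misc`) shows how one 1124-byte block is sliced into its fields; the implementation below is a sketch, not TFC's:

```python
from typing import List, Tuple

def separate_headers(bytestring: bytes,
                     header_lengths: List[int]) -> Tuple[bytes, ...]:
    """Split a bytestring into consecutive fixed-length fields.

    The final element holds whatever remains after the listed
    fields -- here, the 1024-byte padded nick.
    """
    fields = []
    for length in header_lengths:
        fields.append(bytestring[:length])
        bytestring = bytestring[length:]
    fields.append(bytestring)
    return tuple(fields)

block = bytes(32) + b'A' * 32 + b'B' * 32 + b'\x01' + b'\x00' * 3 + b'nick'
onion_pub_key, tx_fp, rx_fp, kex, log_b, file_b, notif_b, nick = \
    separate_headers(block, [32] + 2 * [32] + 4 * [1])
print(nick)  # b'nick'
```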
@staticmethod
def generate_dummy_contact() -> Contact:
"""Generate a dummy contact."""
return Contact(rx_account =DUMMY_CONTACT,
tx_account =DUMMY_STR,
nick =DUMMY_STR,
tx_fingerprint=bytes(FINGERPRINT_LEN),
rx_fingerprint=bytes(FINGERPRINT_LEN),
"""Generate a dummy Contact object.
The dummy contact simplifies the code around the constant length
serialization when the data is stored to, or read from the
database.
"""
return Contact(onion_pub_key =onion_address_to_pub_key(DUMMY_CONTACT),
nick =DUMMY_NICK,
tx_fingerprint=bytes(FINGERPRINT_LENGTH),
rx_fingerprint=bytes(FINGERPRINT_LENGTH),
kex_status =KEX_STATUS_NONE,
log_messages =False,
file_reception=False,
notifications =False)
def _dummy_contacts(self) -> List[Contact]:
"""\
Generate a list of dummy contacts for database padding.
The number of dummy contacts depends on the number of actual
contacts.
        The additional contact (+1) is the local contact that
        represents the presence of the local key on the Transmitter
        Program's `input_loop` process side, which does not have
        access to the KeyList database that contains the local key.
"""
number_of_contacts_to_store = self.settings.max_number_of_contacts + 1
number_of_dummies = number_of_contacts_to_store - len(self.contacts)
return [self.dummy_contact] * number_of_dummies
def add_contact(self,
onion_pub_key: bytes,
nick: str,
tx_fingerprint: bytes,
rx_fingerprint: bytes,
kex_status: bytes,
log_messages: bool,
file_reception: bool,
notifications: bool
) -> None:
"""\
Add a new contact to `self.contacts` list and write changes to
the database.
Because TFC's hardware separation prevents automated DH-ratchet,
the only way for the users to re-negotiate new keys is to start
a new session by re-adding the contact. If the contact is
re-added, TFC will need to remove the existing Contact object
before adding the new one. In such case, TFC will update the
nick, kex status, and fingerprints, but it will keep the old
logging, file reception, and notification settings of the
contact (as opposed to using the defaults determined by TFC's
Settings object).
"""
if self.has_pub_key(onion_pub_key):
current_contact = self.get_contact_by_pub_key(onion_pub_key)
log_messages = current_contact.log_messages
file_reception = current_contact.file_reception
notifications = current_contact.notifications
self.remove_contact_by_pub_key(onion_pub_key)
self.contacts.append(Contact(onion_pub_key,
nick,
tx_fingerprint,
rx_fingerprint,
kex_status,
log_messages,
file_reception,
notifications))
self.store_contacts()
def remove_contact_by_pub_key(self, onion_pub_key: bytes) -> None:
"""Remove the contact that has a matching Onion Service public key.
If the contact was found and removed, write changes to the database.
"""
for i, c in enumerate(self.contacts):
if c.onion_pub_key == onion_pub_key:
del self.contacts[i]
self.store_contacts()
break
def remove_contact_by_address_or_nick(self, selector: str) -> None:
"""Remove the contact that has a matching nick or Onion Service address.
If the contact was found and removed, write changes to the database.
"""
for i, c in enumerate(self.contacts):
if selector in [c.onion_address, c.nick]:
del self.contacts[i]
self.store_contacts()
break
def get_contact_by_pub_key(self, onion_pub_key: bytes) -> Contact:
"""\
Return the Contact object from `self.contacts` list that has the
matching Onion Service public key.
"""
return next(c for c in self.contacts if onion_pub_key == c.onion_pub_key)
def get_contact_by_address_or_nick(self, selector: str) -> Contact:
"""\
Return the Contact object from `self.contacts` list that has the
matching nick or Onion Service address.
"""
return next(c for c in self.contacts if selector in [c.onion_address, c.nick])
def get_list_of_contacts(self) -> List[Contact]:
"""Return list of Contact objects in `self.contacts` list."""
return [c for c in self.contacts if c.onion_address != LOCAL_ID]
def get_list_of_addresses(self) -> List[str]:
"""Return list of contacts' TFC accounts."""
return [c.onion_address for c in self.contacts if c.onion_address != LOCAL_ID]
def get_list_of_nicks(self) -> List[str]:
"""Return list of contacts' nicks."""
return [c.nick for c in self.contacts if c.onion_address != LOCAL_ID]
def get_list_of_pub_keys(self) -> List[bytes]:
"""Return list of contacts' public keys."""
return [c.onion_pub_key for c in self.contacts if c.onion_address != LOCAL_ID]
def get_list_of_pending_pub_keys(self) -> List[bytes]:
"""Return list of public keys for contacts that haven't completed key exchange yet."""
return [c.onion_pub_key for c in self.contacts if c.kex_status == KEX_STATUS_PENDING]
def get_list_of_existing_pub_keys(self) -> List[bytes]:
"""Return list of public keys for contacts with whom key exchange has been completed."""
return [c.onion_pub_key for c in self.get_list_of_contacts()
if c.kex_status in [KEX_STATUS_UNVERIFIED, KEX_STATUS_VERIFIED,
KEX_STATUS_HAS_RX_PSK, KEX_STATUS_NO_RX_PSK]]
def contact_selectors(self) -> List[str]:
"""Return list of string-type UIDs that can be used to select a contact."""
return self.get_list_of_addresses() + self.get_list_of_nicks()
def has_contacts(self) -> bool:
"""Return True if ContactList has any contacts, else False."""
return any(self.get_list_of_contacts())
def has_contact(self, selector: str) -> bool:
"""Return True if contact with account/nick exists, else False."""
return selector in self.contact_selectors()
def has_only_pending_contacts(self) -> bool:
"""Return True if ContactList only has pending contacts, else False."""
return all(c.kex_status == KEX_STATUS_PENDING for c in self.get_list_of_contacts())
def has_pub_key(self, onion_pub_key: bytes) -> bool:
"""Return True if contact with public key exists, else False."""
return onion_pub_key in self.get_list_of_pub_keys()
def has_local_contact(self) -> bool:
"""Return True if the local key has been exchanged, else False."""
return any(c.onion_address == LOCAL_ID for c in self.contacts)
def print_contacts(self) -> None:
"""Print the list of contacts.

The neatly printed contact list allows easy contact management:
the user can check the active logging, file reception, and
notification settings, see which key exchange was used and what
its current state is, and see which nick corresponds to each
account displayed by the Relay Program.
"""
# Initialize columns
c1 = ['Contact']
c2 = ['Account']
c3 = ['Logging']
c4 = ['Notify']
c5 = ['Files ']
c6 = ['Key Ex']
# Key exchange status dictionary
kex_dict = {KEX_STATUS_PENDING: f"{ECDHE} (Pending)",
KEX_STATUS_UNVERIFIED: f"{ECDHE} (Unverified)",
KEX_STATUS_VERIFIED: f"{ECDHE} (Verified)",
KEX_STATUS_NO_RX_PSK: f"{PSK} (No contact key)",
KEX_STATUS_HAS_RX_PSK: PSK
}
# Populate columns with contact data
for c in self.get_list_of_contacts():
c1.append(c.nick)
c2.append(c.short_address)
c3.append('Yes' if c.log_messages else 'No')
c4.append('Yes' if c.notifications else 'No')
c5.append('Accept' if c.file_reception else 'Reject')
c6.append(kex_dict[c.kex_status])
# Calculate column widths
c1w, c2w, c3w, c4w, c5w = [max(len(v) for v in column) + CONTACT_LIST_INDENT
for column in [c1, c2, c3, c4, c5]]
# Align columns by adding whitespace between fields of each line
lines = [f'{f1:{c1w}}{f2:{c2w}}{f3:{c3w}}{f4:{c4w}}{f5:{c5w}}{f6}'
for f1, f2, f3, f4, f5, f6 in zip(c1, c2, c3, c4, c5, c6)]
# Add a terminal-wide line between the column names and the data
lines.insert(1, get_terminal_width() * '─')
# Print the contact list
clear_screen()
print('\n' + '\n'.join(lines) + '\n\n')
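The alignment above uses Python's format-spec minimum field width, which left-justifies a string and pads it with spaces up to the column width:

```python
c1w = 10  # column width: longest nick plus the indent
row = f'{"Alice":{c1w}}{"Yes"}'
assert row == 'Alice     Yes'  # "Alice" padded to 10 characters
```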
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import typing
from typing import Callable, Generator, Iterable, List, Sized
from src.common.crypto import auth_and_decrypt, encrypt_and_sign
from src.common.encoding import bool_to_bytes, int_to_bytes, str_to_bytes, onion_address_to_pub_key, b58encode
from src.common.encoding import bytes_to_bool, bytes_to_int, bytes_to_str
from src.common.misc import ensure_dir, get_terminal_width, round_up, separate_header, separate_headers
from src.common.misc import split_byte_string
from src.common.statics import *
if typing.TYPE_CHECKING:
class Group(Iterable, Sized):
"""\
Group object contains a list of Contact objects (group members) and
settings related to the group:
name: In TFC, groups are identified by random group IDs
that are hard to remember. Groups are therefore
managed mostly with names assigned by the user. The
name of the group must be unique among group names
and nicknames of contacts. This way a single command
`/msg <selection>` can select the specified contact
or group. Some group names are reserved, e.g., for
database padding. Group names also have a length
limit of 254 chars.
group_id: Group ID is a random 4-byte value used to identify a
group among the user's peers. To prevent data leakage
from the Destination Computer via group IDs, the
received group management messages are displayed by
the Relay Program on the Networked Computer. Since
group IDs must be considered public information, they
are random. For more details on Destination Computer
exfiltration attacks, refer to TFC's documentation
regarding Security Design. Identifying groups via a
separate group ID allows each user to choose their
own name for the group, which is useful because users
do not need to take into account what names their
contacts have chosen for their groups.
log_messages: This setting defines whether the Receiver Program
writes the assembly packets of a successfully
received group message into a log file. When logging
is enabled, Transmitter Program will also log
assembly packets of sent group messages to its log
file.
notifications: This setting defines whether the Receiver Program
displays a notification when a group member sends a
new message to the group's window while some other
window is active. The setting has no effect on the
user's Transmitter Program.
members: Manually managed list of Contact objects that the
user accepts as members of their side of the group.
The user's Transmitter Program multicasts messages
to these contacts when the group is active. The
user's Receiver Program accepts messages from these
contacts into the group's window when a contact
sends a message that contains the group ID in its
header.
"""
def __init__(self,
name: str,
group_id: bytes,
log_messages: bool,
notifications: bool,
members: List['Contact'],
settings: 'Settings',
store_groups: Callable
) -> None:
"""Create a new Group object.
The `self.store_groups` is a reference to the method of the
parent object GroupList that stores the list of groups into an
encrypted database.
"""
self.name = name
self.group_id = group_id
self.log_messages = log_messages
self.notifications = notifications
self.members = members
self.store_groups = store_groups
def __iter__(self) -> Generator:
"""Iterate over members (Contact objects) in the Group object."""
yield from self.members
def __len__(self) -> int:
"""Return the number of members in the Group object."""
return len(self.members)
def serialize_g(self) -> bytes:
"""Return group data as a constant length bytestring.
This function serializes the group's data into a bytestring
that always has a constant length. The exact length depends on
the attribute `max_number_of_group_members` of TFC's Settings
object. With the default setting of 50 members per group, the
length of the serialized data is
1024 + 4 + 2*1 + 50*32 = 2630 bytes
The purpose of the constant length serialization is to hide any
metadata the ciphertext length of the group database could
reveal.
"""
members = self.get_list_of_member_pub_keys()
number_of_dummies = self.settings.max_number_of_group_members - len(self.members)
members += number_of_dummies * [onion_address_to_pub_key(DUMMY_MEMBER)]
member_bytes = b''.join(members)
return (str_to_bytes(self.name)
+ self.group_id
+ bool_to_bytes(self.log_messages)
+ bool_to_bytes(self.notifications)
+ member_bytes)
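The constant length quoted in the docstring above (2630 bytes with the default of 50 members) can be checked with plain arithmetic; the constant values below are the ones the docstring itself quotes:

```python
PADDED_UTF32_STR_LENGTH         = 1024  # padded group name
GROUP_ID_LENGTH                 = 4     # random group ID
ENCODED_BOOLEAN_LENGTH          = 1     # log_messages / notifications
ONION_SERVICE_PUBLIC_KEY_LENGTH = 32    # one serialized member
MAX_MEMBERS                     = 50    # default max_number_of_group_members

serialized_group_length = (PADDED_UTF32_STR_LENGTH
                           + GROUP_ID_LENGTH
                           + 2 * ENCODED_BOOLEAN_LENGTH
                           + MAX_MEMBERS * ONION_SERVICE_PUBLIC_KEY_LENGTH)
assert serialized_group_length == 2630
```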
def add_members(self, contacts: List['Contact']) -> None:
"""Add a list of Contact objects to the group."""
pre_existing = self.get_list_of_member_pub_keys()
self.members.extend((c for c in contacts if c.onion_pub_key not in pre_existing))
self.store_groups()
def remove_members(self, pub_keys: List[bytes]) -> bool:
"""Remove a list of Contact objects from the group.
Return True if the member(s) were removed, else False.
"""
to_remove = set(pub_keys) & set(self.get_list_of_member_pub_keys())
if to_remove:
self.members = [m for m in self.members if m.onion_pub_key not in to_remove]
self.store_groups()
return any(to_remove)
def get_list_of_member_pub_keys(self) -> List[bytes]:
"""Return list of members' public keys."""
return [m.onion_pub_key for m in self.members]
def get_list_of_member_nicks(self) -> List[str]:
"""Return list of members' nicks."""
return [m.nick for m in self.members]
def has_member(self, onion_pub_key: bytes) -> bool:
"""Return True if a member with Onion public key is in the group, else False."""
return any(m.onion_pub_key == onion_pub_key for m in self.members)
def empty(self) -> bool:
"""Return True if the group is empty, else False."""
return not any(self.members)
class GroupList(Iterable, Sized):
"""\
GroupList object manages TFC's Group objects and the storage of the
objects in an encrypted database.
The main purpose of this object is to manage the `self.groups`-list
that contains TFC's groups. The database is stored on disk in
encrypted form. Prior to encryption, the database is padded with
dummy groups. Because each group might have a different number of
members, each group is also padded with dummy members. The dummy
groups and members hide the actual number of groups and members that
could otherwise be revealed by the size of the encrypted database.
As long as the user sticks to the default settings that limit
TFC's group database to 50 groups with 50 members per group, the
database will effectively hide the actual number of groups and
the number of members in them. The maximum number of groups and
the maximum number of members per group can be changed by editing
the `max_number_of_groups` and `max_number_of_group_members`
settings respectively. Deviating from the default settings can,
however, in theory reveal to a physical attacker that the user
has more than 50 groups, or more than 50 members in a group.
The GroupList object also provides handy methods with human-readable
names for making queries to the database.
"""
def __init__(self,
master_key: 'MasterKey',
settings: 'Settings',
contact_list: 'ContactList'
) -> None:
"""Create a new GroupList object."""
self.master_key = master_key
self.settings = settings
ensure_dir(DIR_USER_DATA)
if os.path.isfile(self.file_name):
self._load_groups()
else:
self.store_groups()
def __iter__(self) -> Generator:
"""Iterate over Group objects in `self.groups`."""
yield from self.groups
def __len__(self) -> int:
"""Return the number of Group objects in `self.groups`."""
return len(self.groups)
def store_groups(self) -> None:
"""Write the list of groups to an encrypted database.
This function will first generate a header that stores
information about the group database content and padding at the
moment of calling. Next, the function will serialize every Group
object (including dummy groups) to form the constant length
plaintext that will be encrypted and stored in the database.
By default, TFC has a maximum number of 50 groups with 50
members. In addition, the group database stores the header that
contains four 8-byte values. The database plaintext length with
50 groups, each with 50 members is
4*8 + 50*( 1024 + 4 + 2*1 + 50*32)
= 32 + 50*2630
= 131532 bytes.
The ciphertext includes a 24-byte nonce and a 16-byte tag, so
the size of the final database is 131572 bytes.
"""
pt_bytes = self._generate_group_db_header()
pt_bytes += b''.join([g.serialize_g() for g in (self.groups + self._dummy_groups())])
ct_bytes = encrypt_and_sign(pt_bytes, self.master_key.master_key)
ensure_dir(DIR_USER_DATA)
with open(self.file_name, 'wb+') as f:
f.write(ct_bytes)
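The database size derivation in the docstring above can be verified the same way (a sketch; the 24-byte nonce and 16-byte tag are the values the docstring mentions):

```python
GROUP_DB_HEADER_LENGTH = 4 * 8    # four 8-byte header integers
SERIALIZED_GROUP_LEN   = 2630     # constant group length from serialize_g
MAX_GROUPS             = 50       # default max_number_of_groups
NONCE_LENGTH           = 24
TAG_LENGTH             = 16

plaintext_len  = GROUP_DB_HEADER_LENGTH + MAX_GROUPS * SERIALIZED_GROUP_LEN
ciphertext_len = plaintext_len + NONCE_LENGTH + TAG_LENGTH
assert plaintext_len  == 131532
assert ciphertext_len == 131572
```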
def _load_groups(self) -> None:
"""Load groups from the encrypted database.
The function first reads, authenticates and decrypts the group
database data. Next, it slices and decodes the header values
that help the function to properly de-serialize the database
content. The function then removes dummy groups based on header
data. Next, the function updates the group database settings if
necessary. It then splits group data based on header data into
blocks, which are further sliced, and processed if necessary, to
obtain data required to create Group objects. Finally, if
needed, the function will update the group database content.
"""
with open(self.file_name, 'rb') as f:
ct_bytes = f.read()
pt_bytes = auth_and_decrypt(ct_bytes, self.master_key.master_key, database=self.file_name)
# Slice and decode headers
group_db_headers, pt_bytes = separate_header(pt_bytes, GROUP_DB_HEADER_LENGTH)
padding_for_group_db, padding_for_members, number_of_groups, members_in_largest_group \
= list(map(bytes_to_int, split_byte_string(group_db_headers, ENCODED_INTEGER_LENGTH)))
# Slice dummy groups
bytes_per_group = GROUP_STATIC_LENGTH + padding_for_members * ONION_SERVICE_PUBLIC_KEY_LENGTH
dummy_data_len = (padding_for_group_db - number_of_groups) * bytes_per_group
group_data = pt_bytes[:-dummy_data_len]
update_db = self._check_db_settings(number_of_groups, members_in_largest_group)
blocks = split_byte_string(group_data, item_len=bytes_per_group)
all_pub_keys = self.contact_list.get_list_of_pub_keys()
dummy_pub_key = onion_address_to_pub_key(DUMMY_MEMBER)
# Deserialize group objects
for block in blocks:
assert len(block) == bytes_per_group
name_bytes, group_id, log_messages_byte, notification_byte, ser_pub_keys \
= separate_headers(block, [PADDED_UTF32_STR_LENGTH, GROUP_ID_LENGTH] + 2*[ENCODED_BOOLEAN_LENGTH])
pub_key_list = split_byte_string(ser_pub_keys, item_len=ONION_SERVICE_PUBLIC_KEY_LENGTH)
group_pub_keys = [k for k in pub_key_list if k != dummy_pub_key]
group_members = [self.contact_list.get_contact_by_pub_key(k) for k in group_pub_keys if k in all_pub_keys]
self.groups.append(Group(name =bytes_to_str(name_bytes),
group_id =group_id,
log_messages =bytes_to_bool(log_messages_byte),
notifications=bytes_to_bool(notification_byte),
members =group_members,
settings =self.settings,
store_groups =self.store_groups))
update_db |= set(all_pub_keys) > set(group_pub_keys)
if update_db:
self.store_groups()
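The fixed-size block splitting that `_load_groups` relies on can be sketched independently (assuming the real `split_byte_string` behaves like this simple chunker):

```python
def split_byte_string(data: bytes, item_len: int) -> list:
    """Split a bytestring into fixed-length chunks."""
    return [data[i:i + item_len] for i in range(0, len(data), item_len)]

# Three serialized 4-byte "groups" in one buffer:
blocks = split_byte_string(b'AAAABBBBCCCC', item_len=4)
assert blocks == [b'AAAA', b'BBBB', b'CCCC']
```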
def _check_db_settings(self,
number_of_actual_groups: int,
members_in_largest_group: int
) -> bool:
"""\
Adjust TFC's settings automatically if loaded group database was
stored using larger database setting values.
If settings had to be adjusted, return True so
`self._load_groups` knows to write changes to a new database.
"""
update_db = False
if number_of_actual_groups > self.settings.max_number_of_groups:
self.settings.max_number_of_groups = round_up(number_of_actual_groups)
update_db = True
if members_in_largest_group > self.settings.max_number_of_group_members:
self.settings.max_number_of_group_members = round_up(members_in_largest_group)
update_db = True
if update_db:
self.settings.store_settings()
return update_db
def _generate_group_db_header(self) -> bytes:
"""Generate group database metadata header.

This function produces a 32-byte bytestring that contains four
values that allow the Transmitter or Receiver program to
properly de-serialize the database content:

`max_number_of_groups` helps slice off dummy groups when
loading the database.

`max_number_of_group_members` helps split dummy free group data
into proper length blocks that can
be further sliced and decoded to
data used to build Group objects.

`len(self.groups)` helps slice off dummy groups when
loading the database. It also
allows TFC to automatically adjust
the max_number_of_groups setting.
The value is needed, e.g., in
cases where the group database is
swapped to a backup that has a
different number of groups than
TFC's settings expect.

`self.largest_group()` helps TFC to automatically adjust
the max_number_of_group_members
setting (e.g., in cases like the
one described above).
"""
return b''.join(list(map(int_to_bytes, [self.settings.max_number_of_groups,
self.settings.max_number_of_group_members,
len(self.groups),
self.largest_group()])))
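The four 8-byte header integers above round-trip like this (a sketch; the big-endian byte order is an assumption for illustration, matching only the 8-byte size stated in the docstrings):

```python
def int_to_bytes(i: int) -> bytes:
    # Assumed 8-byte fixed-width encoding; endianness is illustrative.
    return i.to_bytes(8, 'big')

def bytes_to_int(b: bytes) -> int:
    return int.from_bytes(b, 'big')

header = b''.join(map(int_to_bytes, [50, 50, 3, 7]))
assert len(header) == 32

decoded = [bytes_to_int(header[i:i + 8]) for i in range(0, 32, 8)]
assert decoded == [50, 50, 3, 7]
```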
def _generate_dummy_group(self) -> 'Group':
"""Generate a dummy Group object.
The dummy group simplifies the code around the constant length
serialization when the data is stored to, or read from the
database.
"""
dummy_member = self.contact_list.generate_dummy_contact()
return Group(name =DUMMY_GROUP,
group_id =bytes(GROUP_ID_LENGTH),
log_messages =False,
notifications=False,
members =self.settings.max_number_of_group_members * [dummy_member],
settings =self.settings,
store_groups =lambda: None)
def _dummy_groups(self) -> List[Group]:
"""Generate a proper size list of dummy groups for database padding."""
number_of_dummies = self.settings.max_number_of_groups - len(self.groups)
dummy_group = self._generate_dummy_group()
return [dummy_group] * number_of_dummies
def add_group(self,
name: str,
group_id: bytes,
log_messages: bool,
notifications: bool,
members: List['Contact']) -> None:
"""Add a new group to `self.groups` and write changes to the database."""
if self.has_group(name):
self.remove_group_by_name(name)
self.groups.append(Group(name,
group_id,
log_messages,
notifications,
members,
self.settings,
self.store_groups))
self.store_groups()
def remove_group_by_name(self, name: str) -> None:
"""Remove the specified group from the group list.
If a group with the matching name was found and removed, write
changes to the database.
"""
for i, g in enumerate(self.groups):
if g.name == name:
del self.groups[i]
self.store_groups()
break
def remove_group_by_id(self, group_id: bytes) -> None:
"""Remove the specified group from the group list.
If a group with the matching group ID was found and removed,
write changes to the database.
"""
for i, g in enumerate(self.groups):
if g.group_id == group_id:
del self.groups[i]
self.store_groups()
break
def get_group(self, name: str) -> Group:
"""Return Group object based on its name."""
return next(g for g in self.groups if g.name == name)
def get_group_by_id(self, group_id: bytes) -> Group:
"""Return Group object based on its group ID."""
return next(g for g in self.groups if g.group_id == group_id)
def get_list_of_group_names(self) -> List[str]:
"""Return list of group names."""
return [g.name for g in self.groups]
def get_list_of_group_ids(self) -> List[bytes]:
"""Return list of group IDs."""
return [g.group_id for g in self.groups]
def get_list_of_hr_group_ids(self) -> List[str]:
"""Return list of human readable (B58 encoded) group IDs."""
return [b58encode(g.group_id) for g in self.groups]
def get_group_members(self, group_id: bytes) -> List['Contact']:
"""Return list of group members (Contact objects)."""
return self.get_group_by_id(group_id).members
def has_group(self, name: str) -> bool:
"""Return True if group list has a group with the specified name, else False."""
return any(g.name == name for g in self.groups)
def has_groups(self) -> bool:
"""Return True if group list has groups, else False."""
return any(self.groups)
def has_group_id(self, group_id: bytes) -> bool:
"""Return True if group list has a group with the specified group ID, else False."""
return any(g.group_id == group_id for g in self.groups)
def largest_group(self) -> int:
"""Return size of the group that has the most members."""
return max([0] + [len(g) for g in self.groups])
def print_groups(self) -> None:
"""Print list of groups."""
# Columns
c1 = ['Group ']
c2 = ['Logging']
c3 = ['Notify' ]
c4 = ['Members']
"""Print list of groups.
Neatly printed group list allows easy group management and it
also allows the user to check active logging and notification
setting, as well as what group ID Relay Program shows
corresponds to what group, and which contacts are in the group.
"""
# Initialize columns
c1 = ['Group' ]
c2 = ['Group ID']
c3 = ['Logging ']
c4 = ['Notify' ]
c5 = ['Members' ]
# Populate columns with group data that has only a single line
for g in self.groups:
c1.append(g.name)
c2.append(b58encode(g.group_id))
c3.append('Yes' if g.log_messages else 'No')
c4.append('Yes' if g.notifications else 'No')
if g.has_members():
m_indent = max(len(g.name) for g in self.groups) + 28
m_string = ', '.join(sorted([m.nick for m in g.members]))
wrapper = textwrap.TextWrapper(width=max(1, (get_terminal_width() - m_indent)))
mem_lines = wrapper.fill(m_string).split('\n')
f_string = mem_lines[0] + '\n'
# Calculate the width of single-line columns
c1w, c2w, c3w, c4w = [max(len(v) for v in column) + CONTACT_LIST_INDENT for column in [c1, c2, c3, c4]]
for l in mem_lines[1:]:
f_string += m_indent * ' ' + l + '\n'
c4.append(f_string)
# Create a wrapper for Members-column
wrapped_members_line_indent = c1w + c2w + c3w + c4w
members_column_width = max(1, get_terminal_width() - wrapped_members_line_indent)
wrapper = textwrap.TextWrapper(width=members_column_width)
# Populate the Members-column
for g in self.groups:
if g.empty():
c5.append("<Empty group>\n")
else:
c4.append("<Empty group>\n")
comma_separated_nicks = ', '.join(sorted([m.nick for m in g.members]))
members_column_lines = wrapper.fill(comma_separated_nicks).split('\n')
lst = []
for name, log_setting, notify_setting, members in zip(c1, c2, c3, c4):
lst.append('{0:{1}} {2:{3}} {4:{5}} {6}'.format(
name, max(len(v) for v in c1) + CONTACT_LIST_INDENT,
log_setting, max(len(v) for v in c2) + CONTACT_LIST_INDENT,
notify_setting, max(len(v) for v in c3) + CONTACT_LIST_INDENT,
members))
final_str = members_column_lines[0] + '\n'
for line in members_column_lines[1:]:
final_str += wrapped_members_line_indent * ' ' + line + '\n'
lst.insert(1, get_terminal_width() * '─')
print('\n'.join(lst) + '\n')
c5.append(final_str)
# Align columns by adding whitespace between fields of each line
lines = [f'{f1:{c1w}}{f2:{c2w}}{f3:{c3w}}{f4:{c4w}}{f5}' for f1, f2, f3, f4, f5 in zip(c1, c2, c3, c4, c5)]
# Add a terminal-wide line between the column names and the data
lines.insert(1, get_terminal_width() * '─')
# Print the group list
print('\n'.join(lines) + '\n')
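The column alignment above relies on computing each column's maximum width and padding fields with format-spec widths. A minimal standalone sketch of the same technique (the column data and `INDENT` value are illustrative, not taken from the source):

```python
# Sketch: align rows into fixed-width columns, as print_groups() does.
INDENT = 4  # analogous role to CONTACT_LIST_INDENT

def align_columns(*columns):
    """Pad every field to its column's max width plus an indent."""
    widths = [max(len(v) for v in col) + INDENT for col in columns]
    rows = []
    for row in zip(*columns):
        rows.append(''.join(f'{field:{width}}'
                            for field, width in zip(row, widths)).rstrip())
    return rows

c1 = ['Group',    'testers' ]
c2 = ['Group ID', '2dVb7...']
c3 = ['Logging',  'Yes'     ]
rows = align_columns(c1, c2, c3)
```

Because every field in a column is padded to the same width, the fields of each subsequent column start at the same index on every row.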

View File

@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@ -15,7 +16,7 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
@ -23,11 +24,11 @@ import typing
from typing import Any, Callable, List
from src.common.crypto import auth_and_decrypt, encrypt_and_sign, hash_chain
from src.common.crypto import auth_and_decrypt, blake2b, csprng, encrypt_and_sign
from src.common.encoding import int_to_bytes, onion_address_to_pub_key
from src.common.encoding import bytes_to_int
from src.common.exceptions import CriticalError
from src.common.encoding import str_to_bytes, int_to_bytes
from src.common.encoding import bytes_to_str, bytes_to_int
from src.common.misc import ensure_dir, split_byte_string
from src.common.misc import ensure_dir, separate_headers, split_byte_string
from src.common.statics import *
if typing.TYPE_CHECKING:
@ -37,69 +38,114 @@ if typing.TYPE_CHECKING:
class KeySet(object):
"""\
KeySet object handles frequently changing
keys and hash ratchet counters of contacts.
KeySet object contains frequently changing keys and hash ratchet
counters of contacts:
onion_pub_key: The public key that corresponds to the contact's v3
Tor Onion Service address. Used to uniquely identify
the KeySet object.
tx_mk: Forward secret message key for sent messages.
rx_mk: Forward secret message key for received messages.
Used only by the Receiver Program.
tx_hk: Static header key used to encrypt and sign the hash
ratchet counter provided along the encrypted
assembly packet.
rx_hk: Static header key used to authenticate and decrypt
the hash ratchet counter of received messages. Used
only by the Receiver Program.
tx_harac: The hash ratchet counter for sent messages.
rx_harac: The hash ratchet counter for received messages. Used
only by the Receiver Program.
"""
def __init__(self,
rx_account: str,
tx_key: bytes,
rx_key: bytes,
tx_hek: bytes,
rx_hek: bytes,
tx_harac: int,
rx_harac: int,
store_keys: Callable) -> None:
onion_pub_key: bytes,
tx_mk: bytes,
rx_mk: bytes,
tx_hk: bytes,
rx_hk: bytes,
tx_harac: int,
rx_harac: int,
store_keys: Callable
) -> None:
"""Create a new KeySet object.
:param rx_account: UID for each recipient
:param tx_key: Forward secret message key for sent messages
:param rx_key: Forward secret message key for received messages (RxM only)
:param tx_hek: Static header key for hash ratchet counter of sent messages
:param rx_hek: Static header key for hash ratchet counter of received messages (RxM only)
:param tx_harac: Hash ratchet counter for sent messages
:param rx_harac: Hash ratchet counter for received messages (RxM only)
:param store_keys: Reference to KeyLists's method that writes all keys to db
`self.store_keys` is a reference to the method of the parent
object KeyList that stores the list of KeySet objects into an
encrypted database.
"""
self.rx_account = rx_account
self.tx_key = tx_key
self.rx_key = rx_key
self.tx_hek = tx_hek
self.rx_hek = rx_hek
self.tx_harac = tx_harac
self.rx_harac = rx_harac
self.store_keys = store_keys
self.onion_pub_key = onion_pub_key
self.tx_mk = tx_mk
self.rx_mk = rx_mk
self.tx_hk = tx_hk
self.rx_hk = rx_hk
self.tx_harac = tx_harac
self.rx_harac = rx_harac
self.store_keys = store_keys
def serialize_k(self) -> bytes:
"""Return keyset data as constant length byte string."""
return (str_to_bytes(self.rx_account)
+ self.tx_key
+ self.rx_key
+ self.tx_hek
+ self.rx_hek
"""Return KeySet data as a constant length byte string.
This function serializes the KeySet's data into a byte string
that has the exact length of 32 + 4*32 + 2*8 = 176 bytes. The
length is guaranteed regardless of the attributes' values. The
purpose of the constant length serialization is to hide any
metadata about the KeySet database that its ciphertext length
would otherwise reveal.
"""
return (self.onion_pub_key
+ self.tx_mk
+ self.rx_mk
+ self.tx_hk
+ self.rx_hk
+ int_to_bytes(self.tx_harac)
+ int_to_bytes(self.rx_harac))
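The 32 + 4*32 + 2*8 = 176-byte figure can be checked with a standalone sketch. The constants mirror the names used in the source; the key values and harac below are illustrative placeholders:

```python
ONION_SERVICE_PUBLIC_KEY_LENGTH = 32
SYMMETRIC_KEY_LENGTH            = 32
HARAC_LENGTH                    = 8

def int_to_bytes(i):
    """8-byte big-endian encoding (analogous to src.common.encoding)."""
    return i.to_bytes(HARAC_LENGTH, 'big')

def serialize_keyset(pub_key, tx_mk, rx_mk, tx_hk, rx_hk, tx_harac, rx_harac):
    """Concatenate fixed-length fields: length never depends on values."""
    return (pub_key + tx_mk + rx_mk + tx_hk + rx_hk
            + int_to_bytes(tx_harac) + int_to_bytes(rx_harac))

blob = serialize_keyset(bytes(32), bytes(32), bytes(32),
                        bytes(32), bytes(32), 0, 2**63)
KEYSET_LENGTH = (ONION_SERVICE_PUBLIC_KEY_LENGTH
                 + 4 * SYMMETRIC_KEY_LENGTH
                 + 2 * HARAC_LENGTH)
```

Even an extreme counter value such as `2**63` still encodes to exactly 8 bytes, so the serialized length is constant.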
def rotate_tx_key(self) -> None:
def rotate_tx_mk(self) -> None:
"""\
Update TxM side tx-key and harac (provides
forward secrecy for sent messages).
Update Transmitter Program's tx-message key and tx-harac.
Replacing the key with its hash provides per-message forward
secrecy for sent messages. The hash ratchet used is also known
as the SCIMP Ratchet[1], and it is widely used, e.g., as part of
Signal's Double Ratchet[2].
To ensure the hash ratchet does not fall into a short cycle of
keys, the harac (a non-repeating value) is used as an
additional input when deriving the next key.
[1] (pp. 17-18) https://netzpolitik.org/wp-upload/SCIMP-paper.pdf
[2] https://signal.org/blog/advanced-ratcheting/
"""
self.tx_key = hash_chain(self.tx_key)
self.tx_mk = blake2b(self.tx_mk + int_to_bytes(self.tx_harac), digest_size=SYMMETRIC_KEY_LENGTH)
self.tx_harac += 1
self.store_keys()
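The counter-salted hash ratchet step can be sketched in isolation with Python's `hashlib.blake2b` standing in for TFC's `blake2b` wrapper (an assumption about the wrapper; this is an illustration of the ratcheting idea, not a drop-in replacement):

```python
import hashlib

KEY_LEN = 32

def ratchet(key: bytes, harac: int) -> bytes:
    """Derive the next message key from the current key and the
    non-repeating ratchet counter (prevents short key cycles)."""
    salt = harac.to_bytes(8, 'big')
    return hashlib.blake2b(key + salt, digest_size=KEY_LEN).digest()

k0 = bytes(KEY_LEN)      # illustrative starting key
k1 = ratchet(k0, 0)
k2 = ratchet(k1, 1)
```

Because the hash is one-way, compromise of `k2` does not reveal `k1` or `k0`, which is what provides the per-message forward secrecy.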
def update_key(self, direction: str, key: bytes, offset: int) -> None:
"""\
Update RxM side tx/rx-key and harac (provides
forward secrecy for received messages).
def update_mk(self,
direction: str,
key: bytes,
offset: int
) -> None:
"""Update Receiver Program's tx/rx-message key and tx/rx-harac.
This method provides per-message forward secrecy for received
messages. Due to the possibility of dropped packets, the
Receiver Program might have to jump over some key values and
ratchet counter states. Therefore, unlike `rotate_tx_mk`, this
method may advance the key and counter by more than one step.
"""
if direction == TX:
self.tx_key = key
self.tx_mk = key
self.tx_harac += offset
self.store_keys()
elif direction == RX:
self.rx_key = key
self.rx_mk = key
self.rx_harac += offset
self.store_keys()
else:
@ -108,12 +154,22 @@ class KeySet(object):
class KeyList(object):
"""\
KeyList object manages list of KeySet
objects and encrypted keyset database.
KeyList object manages TFC's KeySet objects and the storage of the
objects in an encrypted database.
The keyset database is separated from contact database as traffic
The main purpose of this object is to manage the `self.keysets`-list
that contains TFC's keys. The database is stored on disk in
encrypted form. Prior to encryption, the database is padded with
dummy KeySets. The dummy KeySets hide the number of actual KeySets
and thus the number of contacts that would otherwise be revealed
by the size of the encrypted database. As long as the user has
fewer than 50 contacts, the database will effectively hide the
actual number of contacts.
The KeySet database is separated from contact database as traffic
masking needs to update keys frequently with no risk of read/write
queue blocking that occurs e.g. when new nick is being stored.
queue blocking that occurs, e.g., when a contact's updated nick
is being stored in the database.
"""
def __init__(self, master_key: 'MasterKey', settings: 'Settings') -> None:
@ -122,114 +178,190 @@ class KeyList(object):
self.settings = settings
self.keysets = [] # type: List[KeySet]
self.dummy_keyset = self.generate_dummy_keyset()
self.dummy_id = self.dummy_keyset.rx_account.encode('utf-32')
self.dummy_id = self.dummy_keyset.onion_pub_key
self.file_name = f'{DIR_USER_DATA}{settings.software_operation}_keys'
ensure_dir(DIR_USER_DATA)
if os.path.isfile(self.file_name):
self.load_keys()
self._load_keys()
else:
self.store_keys()
def store_keys(self) -> None:
"""Write keys to encrypted database."""
keysets = self.keysets + [self.dummy_keyset] * (self.settings.max_number_of_contacts - len(self.keysets))
pt_bytes = b''.join([k.serialize_k() for k in keysets])
"""Write the list of KeySet objects to an encrypted database.
This function will first create a list of KeySets and dummy
KeySets. It will then serialize every KeySet object on that list
and join the constant length byte strings to form the plaintext
that will be encrypted and stored in the database.
By default, TFC has a maximum number of 50 contacts. In
addition, the database stores the KeySet used to encrypt
commands from Transmitter to Receiver Program. The plaintext
length of 51 serialized KeySets is 51*176 = 8976 bytes. The
ciphertext includes a 24-byte nonce and a 16-byte tag, so the
size of the final database is 9016 bytes.
"""
pt_bytes = b''.join([k.serialize_k() for k in self.keysets + self._dummy_keysets()])
ct_bytes = encrypt_and_sign(pt_bytes, self.master_key.master_key)
ensure_dir(DIR_USER_DATA)
with open(self.file_name, 'wb+') as f:
f.write(ct_bytes)
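The sizes quoted in the docstring follow from simple arithmetic; the 24-byte nonce and 16-byte tag are consistent with an XSalsa20-Poly1305-style AEAD (an inference from the stated overhead, not a claim about the exact primitive):

```python
KEYSET_LENGTH = 176
ENTRIES       = 50 + 1   # 50 contacts + the local keyset
NONCE_LENGTH  = 24
TAG_LENGTH    = 16

pt_length = ENTRIES * KEYSET_LENGTH            # serialized KeySets
db_length = pt_length + NONCE_LENGTH + TAG_LENGTH  # on-disk ciphertext
```

This reproduces the 8976-byte plaintext and 9016-byte database figures.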
def load_keys(self) -> None:
"""Load keys from encrypted database."""
def _load_keys(self) -> None:
"""Load KeySets from the encrypted database.
This function first reads and decrypts the database content. It
then splits the plaintext into a list of 176-byte blocks. Each
block contains the serialized data of one KeySet. Next, the
function removes from the list all dummy KeySets (those that
start with the `dummy_id` byte string). The function then
populate the `self.keysets` list with KeySet objects, the data
of which is sliced and decoded from the dummy-free blocks.
"""
with open(self.file_name, 'rb') as f:
ct_bytes = f.read()
pt_bytes = auth_and_decrypt(ct_bytes, self.master_key.master_key)
entries = split_byte_string(pt_bytes, item_len=KEYSET_LENGTH)
keysets = [e for e in entries if not e.startswith(self.dummy_id)]
pt_bytes = auth_and_decrypt(ct_bytes, self.master_key.master_key, database=self.file_name)
blocks = split_byte_string(pt_bytes, item_len=KEYSET_LENGTH)
df_blocks = [b for b in blocks if not b.startswith(self.dummy_id)]
for k in keysets:
assert len(k) == KEYSET_LENGTH
for block in df_blocks:
assert len(block) == KEYSET_LENGTH
self.keysets.append(KeySet(rx_account=bytes_to_str(k[ 0:1024]),
tx_key = k[1024:1056],
rx_key = k[1056:1088],
tx_hek = k[1088:1120],
rx_hek = k[1120:1152],
tx_harac =bytes_to_int(k[1152:1160]),
rx_harac =bytes_to_int(k[1160:1168]),
onion_pub_key, tx_mk, rx_mk, tx_hk, rx_hk, tx_harac_bytes, rx_harac_bytes \
= separate_headers(block, [ONION_SERVICE_PUBLIC_KEY_LENGTH] + 4*[SYMMETRIC_KEY_LENGTH] + [HARAC_LENGTH])
self.keysets.append(KeySet(onion_pub_key=onion_pub_key,
tx_mk=tx_mk,
rx_mk=rx_mk,
tx_hk=tx_hk,
rx_hk=rx_hk,
tx_harac=bytes_to_int(tx_harac_bytes),
rx_harac=bytes_to_int(rx_harac_bytes),
store_keys=self.store_keys))
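The load path described in the docstring — split the decrypted plaintext into fixed 176-byte blocks, then drop blocks that start with the dummy identifier — can be sketched self-contained (the dummy ID value here is an illustrative stand-in for the dummy contact's public key):

```python
KEYSET_LENGTH = 176
DUMMY_ID      = b'\xff' * 32   # stand-in for the dummy contact's pub key

def split_byte_string(data, item_len):
    """Split data into consecutive fixed-length blocks."""
    return [data[i:i + item_len] for i in range(0, len(data), item_len)]

real  = b'\x01' * KEYSET_LENGTH
dummy = DUMMY_ID + b'\x00' * (KEYSET_LENGTH - len(DUMMY_ID))
pt    = real + dummy + dummy   # simulated decrypted database

blocks    = split_byte_string(pt, item_len=KEYSET_LENGTH)
df_blocks = [b for b in blocks if not b.startswith(DUMMY_ID)]
```

Only the one real block survives the dummy filter.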
def change_master_key(self, master_key: 'MasterKey') -> None:
"""Change master key and encrypt database with new key."""
self.master_key = master_key
self.store_keys()
@staticmethod
def generate_dummy_keyset() -> 'KeySet':
"""Generate dummy keyset."""
return KeySet(rx_account=DUMMY_CONTACT,
tx_key =bytes(KEY_LENGTH),
rx_key =bytes(KEY_LENGTH),
tx_hek =bytes(KEY_LENGTH),
rx_hek =bytes(KEY_LENGTH),
tx_harac =INITIAL_HARAC,
rx_harac =INITIAL_HARAC,
"""Generate a dummy KeySet object.
The dummy KeySet simplifies the code around the constant length
serialization when the data is stored to, or read from the
database.
In case the dummy keyset is ever loaded accidentally, it uses a
set of random keys to prevent decryption by an eavesdropper.
"""
return KeySet(onion_pub_key=onion_address_to_pub_key(DUMMY_CONTACT),
tx_mk=csprng(),
rx_mk=csprng(),
tx_hk=csprng(),
rx_hk=csprng(),
tx_harac=INITIAL_HARAC,
rx_harac=INITIAL_HARAC,
store_keys=lambda: None)
def add_keyset(self,
rx_account: str,
tx_key: bytes,
rx_key: bytes,
tx_hek: bytes,
rx_hek: bytes) -> None:
"""Add new keyset to key list and write changes to database."""
if self.has_keyset(rx_account):
self.remove_keyset(rx_account)
def _dummy_keysets(self) -> List[KeySet]:
"""\
Generate a correctly sized list of dummy KeySets for database
padding.
self.keysets.append(KeySet(rx_account,
tx_key, rx_key,
tx_hek, rx_hek,
INITIAL_HARAC, INITIAL_HARAC,
self.store_keys))
The additional contact (+1) is the local key.
"""
number_of_contacts_to_store = self.settings.max_number_of_contacts + 1
number_of_dummies = number_of_contacts_to_store - len(self.keysets)
return [self.dummy_keyset] * number_of_dummies
def add_keyset(self,
onion_pub_key: bytes,
tx_mk: bytes,
rx_mk: bytes,
tx_hk: bytes,
rx_hk: bytes) -> None:
"""\
Add a new KeySet to `self.keysets` list and write changes to the
database.
"""
if self.has_keyset(onion_pub_key):
self.remove_keyset(onion_pub_key)
self.keysets.append(KeySet(onion_pub_key=onion_pub_key,
tx_mk=tx_mk,
rx_mk=rx_mk,
tx_hk=tx_hk,
rx_hk=rx_hk,
tx_harac=INITIAL_HARAC,
rx_harac=INITIAL_HARAC,
store_keys=self.store_keys))
self.store_keys()
def remove_keyset(self, name: str) -> None:
def remove_keyset(self, onion_pub_key: bytes) -> None:
"""\
Remove keyset from keys based on account
and write changes to database.
Remove KeySet from `self.keysets` based on Onion Service public key.
If the KeySet was found and removed, write changes to the database.
"""
for i, k in enumerate(self.keysets):
if name == k.rx_account:
if k.onion_pub_key == onion_pub_key:
del self.keysets[i]
self.store_keys()
break
def get_keyset(self, account: str) -> KeySet:
"""Load keyset from list based on unique account name."""
return next(k for k in self.keysets if account == k.rx_account)
def change_master_key(self, master_key: 'MasterKey') -> None:
"""Change the master key and encrypt the database with the new key."""
self.master_key = master_key
self.store_keys()
def has_keyset(self, account: str) -> bool:
"""Return True if keyset for account exists, else False."""
return any(account == k.rx_account for k in self.keysets)
def update_database(self, settings: 'Settings') -> None:
"""Update settings and database size."""
self.settings = settings
self.store_keys()
def has_rx_key(self, account: str) -> bool:
"""Return True if keyset has rx-key, else False."""
return self.get_keyset(account).rx_key != bytes(KEY_LENGTH)
def get_keyset(self, onion_pub_key: bytes) -> KeySet:
"""\
Return KeySet object from `self.keysets`-list that matches the
Onion Service public key used as the selector.
"""
return next(k for k in self.keysets if k.onion_pub_key == onion_pub_key)
def has_local_key(self) -> bool:
"""Return True if local key exists, else False."""
return any(k.rx_account == LOCAL_ID for k in self.keysets)
def get_list_of_pub_keys(self) -> List[bytes]:
"""Return list of Onion Service public keys for KeySets."""
return [k.onion_pub_key for k in self.keysets if k.onion_pub_key != LOCAL_PUBKEY]
def has_keyset(self, onion_pub_key: bytes) -> bool:
"""Return True if KeySet with matching Onion Service public key exists, else False."""
return any(onion_pub_key == k.onion_pub_key for k in self.keysets)
def has_rx_mk(self, onion_pub_key: bytes) -> bool:
"""\
Return True if KeySet with matching Onion Service public key has
rx-message key, else False.
When the PSK key exchange option is selected, the rx-message key
of a newly created contact on the Receiver Program is a null-byte
string. This default value indicates the PSK of the contact has
not yet been imported.
"""
return self.get_keyset(onion_pub_key).rx_mk != bytes(SYMMETRIC_KEY_LENGTH)
def has_local_keyset(self) -> bool:
"""Return True if local KeySet object exists, else False."""
return any(k.onion_pub_key == LOCAL_PUBKEY for k in self.keysets)
def manage(self, command: str, *params: Any) -> None:
"""Manage keyset database based on data received from km_queue."""
"""Manage KeyList based on a command.
The command is delivered from the `input_process` to the
`sender_loop` process via the `KEY_MANAGEMENT_QUEUE`.
"""
if command == KDB_ADD_ENTRY_HEADER:
self.add_keyset(*params)
elif command == KDB_REMOVE_ENTRY_HEADER:
self.remove_keyset(*params)
elif command == KDB_CHANGE_MASTER_KEY_HEADER:
self.change_master_key(*params)
elif command == KDB_UPDATE_SIZE_HEADER:
self.update_database(*params)
else:
raise CriticalError("Invalid KeyList management command.")

View File

@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@ -15,30 +16,28 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os.path
import re
import struct
import sys
import textwrap
import time
import typing
import zlib
from collections import defaultdict
from datetime import datetime
from typing import DefaultDict, Dict, List, Tuple, Union
from datetime import datetime
from typing import Dict, IO, List, Tuple, Union
from src.common.crypto import auth_and_decrypt, encrypt_and_sign, rm_padding_bytes
from src.common.crypto import auth_and_decrypt, encrypt_and_sign
from src.common.encoding import b58encode, bytes_to_bool, bytes_to_timestamp, pub_key_to_short_address
from src.common.exceptions import FunctionReturn
from src.common.encoding import bytes_to_str, str_to_bytes
from src.common.misc import ensure_dir, get_terminal_width, ignored
from src.common.output import c_print, clear_screen
from src.common.misc import ensure_dir, get_terminal_width, ignored, separate_header, separate_headers
from src.common.output import clear_screen
from src.common.statics import *
from src.rx.windows import RxWindow
from src.receiver.packet import PacketList
from src.receiver.windows import RxWindow
if typing.TYPE_CHECKING:
from multiprocessing import Queue
@ -46,76 +45,128 @@ if typing.TYPE_CHECKING:
from src.common.db_groups import GroupList
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
from src.tx.windows import TxWindow
from src.transmitter.windows import TxWindow
MsgTuple = Tuple[datetime, str, bytes, bytes, bool, bool]
def log_writer_loop(queues: Dict[bytes, 'Queue'], unittest: bool = False) -> None:
"""Read log data from queue and write entry to log database.
def log_writer_loop(queues: Dict[bytes, 'Queue'], # Dictionary of queues
settings: 'Settings', # Settings object
unittest: bool = False # When True, exits the loop when UNITTEST_QUEUE is no longer empty.
) -> None:
"""Write assembly packets to log database.
When traffic masking is enabled, this process separates writing to
logfile from sender_loop to prevent IO delays (caused by access to
logfile) from revealing metadata about when communication takes place.
When traffic masking is enabled, the fact that this loop runs as
a separate process means the rate at which `sender_loop` outputs
packets is not altered by I/O delays (caused by access to the log
file). This hides metadata about when communication takes place,
even from an adversary performing timing attacks from within the
Networked Computer of the user.
"""
queue = queues[LOG_PACKET_QUEUE]
log_packet_queue = queues[LOG_PACKET_QUEUE]
log_setting_queue = queues[LOG_SETTING_QUEUE]
traffic_masking_queue = queues[TRAFFIC_MASKING_QUEUE]
logfile_masking_queue = queues[LOGFILE_MASKING_QUEUE]
logging_state = False
logfile_masking = settings.log_file_masking
traffic_masking = settings.traffic_masking
while True:
with ignored(EOFError, KeyboardInterrupt):
while queue.qsize() == 0:
while log_packet_queue.qsize() == 0:
time.sleep(0.01)
log_packet, log_as_ph, packet, rx_account, settings, master_key = queue.get()
if traffic_masking_queue.qsize() != 0:
traffic_masking = traffic_masking_queue.get()
if logfile_masking_queue.qsize() != 0:
logfile_masking = logfile_masking_queue.get()
if rx_account is None or not log_packet:
onion_pub_key, assembly_packet, log_messages, log_as_ph, master_key = log_packet_queue.get()
# Detect and ignore commands.
if onion_pub_key is None:
continue
header = bytes([packet[0]])
# `logging_state` retains the logging setting for noise packets
# that do not know the log setting of the window. To prevent
# logging of noise packets in a situation where logging has
# been disabled, but no new message assembly packet carrying
# the logging setting is received, the LOG_SETTING_QUEUE
# is checked for an up-to-date logging setting for every
# received noise packet.
if assembly_packet[:ASSEMBLY_PACKET_HEADER_LENGTH] == P_N_HEADER:
if log_setting_queue.qsize() != 0:
logging_state = log_setting_queue.get()
else:
logging_state = log_messages
if header == P_N_HEADER or header.isupper() or log_as_ph:
packet = PLACEHOLDER_DATA
if not (settings.session_traffic_masking and settings.logfile_masking):
# Detect if we are going to log the packet at all.
if not logging_state:
continue
# Only noise packets, whisper-messages, file key delivery
# packets and file assembly packets have `log_as_ph` enabled.
# These packets are stored as placeholder data to hide
# metadata revealed by the differences in log file size vs
# the number of sent assembly packets.
if log_as_ph:
# It's pointless to hide the number of messages in the log
# file if that information is revealed by observing the
# Networked Computer when traffic masking is disabled.
if not traffic_masking:
continue
write_log_entry(packet, rx_account, settings, master_key)
# If traffic masking is enabled, log file masking might
# still be unnecessary if the user does not care to hide
# the tiny amount of metadata (total amount of
# communication) from a physical attacker. Masking, after
# all, consumes 333 bytes of disk space per noise packet.
# So finally we check that the user has opted in for log
# file masking.
if not logfile_masking:
continue
assembly_packet = PLACEHOLDER_DATA
write_log_entry(assembly_packet, onion_pub_key, settings, master_key)
if unittest and queues[UNITTEST_QUEUE].qsize() != 0:
break
def write_log_entry(assembly_packet: bytes,
account: str,
settings: 'Settings',
master_key: 'MasterKey',
origin: bytes = ORIGIN_USER_HEADER) -> None:
"""Add assembly packet to encrypted logfile.
def write_log_entry(assembly_packet: bytes, # Assembly packet to log
onion_pub_key: bytes, # Onion Service public key of the associated contact
settings: 'Settings', # Settings object
master_key: 'MasterKey', # Master key object
origin: bytes = ORIGIN_USER_HEADER # The direction of logged packet
) -> None:
"""Add an assembly packet to the encrypted log database.
This method of logging allows reconstruction of conversation while
protecting the metadata about the length of messages other logfile
formats would reveal.
Logging assembly packets allows reconstruction of the conversation
while protecting metadata about message lengths that alternative
log file formats could reveal.
TxM can only log sent messages. This is not useful for recalling
conversations but serves an important role in audit of recipient's
RxM-side logs, where malware could have substituted logged data.
Transmitter Program can only log sent messages. This is not useful
for recalling conversations but it makes it possible to audit
the recipient's Destination Computer-side logs, where malware could
have substituted the content of the sent messages.
Files are not content produced or accessed by TFC, thus keeping a
copy of file data in log database is pointless and potentially
dangerous if user thinks they have deleted the file from their
system. However, from the perspective of metadata, having a
difference in number of logged packets when compared to number of
output packets could reveal additional metadata about file
transmission. To solve both issues, TFC only logs placeholder data.
:param assembly_packet: Assembly packet to log
:param account: Recipient's account (UID)
:param settings: Settings object
:param master_key: Master key object
:param origin: Direction of logged packet
:return: None
Files are not produced or accessed by TFC. Thus, keeping a copy of
file data in the log database is pointless and potentially dangerous,
because the user should be right to assume that deleting the file
from the `received_files` directory is enough. However, from the perspective
of metadata, a difference between the number of logged packets and
the number of output packets could reveal additional metadata about
communication. Thus, during traffic masking, if
`settings.log_file_masking` is enabled, instead of file data, TFC
writes placeholder data to the log database.
"""
encoded_account = str_to_bytes(account)
unix_timestamp = int(time.time())
timestamp_bytes = struct.pack('<L', unix_timestamp)
timestamp = struct.pack('<L', int(time.time()))
pt_bytes = encoded_account + timestamp_bytes + origin + assembly_packet
pt_bytes = onion_pub_key + timestamp + origin + assembly_packet
ct_bytes = encrypt_and_sign(pt_bytes, key=master_key.master_key)
assert len(ct_bytes) == LOG_ENTRY_LENGTH
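The 333-byte-per-entry figure mentioned in the comments above follows from the field lengths of `pt_bytes` plus the same 24+16-byte AEAD overhead quoted for the key database. The 256-byte assembly packet length is an assumption about TFC's standard packet size, used here for illustration:

```python
import struct

ONION_SERVICE_PUBLIC_KEY_LENGTH = 32
TIMESTAMP_LENGTH                = 4    # struct.pack('<L', ...) output
ORIGIN_HEADER_LENGTH            = 1
ASSEMBLY_PACKET_LENGTH          = 256  # assumed assembly packet size
NONCE_LENGTH                    = 24
TAG_LENGTH                      = 16

timestamp = struct.pack('<L', 1_548_295_260)  # 4-byte LE Unix time

pt_length = (ONION_SERVICE_PUBLIC_KEY_LENGTH + TIMESTAMP_LENGTH
             + ORIGIN_HEADER_LENGTH + ASSEMBLY_PACKET_LENGTH)
LOG_ENTRY_LENGTH = pt_length + NONCE_LENGTH + TAG_LENGTH
```

Because every field is fixed-length, every log entry ciphertext has the same size, which is what makes the fixed-size `read(LOG_ENTRY_LENGTH)` loop in `access_logs` possible.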
@ -126,133 +177,127 @@ def write_log_entry(assembly_packet: bytes,
f.write(ct_bytes)
def get_logfile(file_name: str) -> IO:
"""Load file descriptor for log database."""
ensure_dir(DIR_USER_DATA)
if not os.path.isfile(file_name):
raise FunctionReturn("No log database available.")
return open(file_name, 'rb')
def access_logs(window: Union['TxWindow', 'RxWindow'],
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
master_key: 'MasterKey',
msg_to_load: int = 0,
export: bool = False) -> None:
export: bool = False
) -> None:
"""\
Decrypt 'msg_to_load' last messages from
log database and display/export it.
"""
ensure_dir(DIR_USER_DATA)
file_name = f'{DIR_USER_DATA}{settings.software_operation}_logs'
if not os.path.isfile(file_name):
raise FunctionReturn(f"Error: Could not find log database.")
Load 'msg_to_load' last messages from log database and display or
export them.
log_file = open(file_name, 'rb')
ts_message_list = [] # type: List[Tuple['datetime', str, str, bytes, bool]]
assembly_p_buf = defaultdict(list) # type: DefaultDict[str, List[bytes]]
group_msg_id = b''
The default value of zero for `msg_to_load` means all messages for
the window will be retrieved from the log database.
"""
file_name = f'{DIR_USER_DATA}{settings.software_operation}_logs'
log_file = get_logfile(file_name)
packet_list = PacketList(settings, contact_list)
message_log = [] # type: List[MsgTuple]
group_msg_id = b''
for ct in iter(lambda: log_file.read(LOG_ENTRY_LENGTH), b''):
pt = auth_and_decrypt(ct, key=master_key.master_key)
account = bytes_to_str(pt[0:1024])
plaintext = auth_and_decrypt(ct, master_key.master_key, database=file_name)
if window.type == WIN_TYPE_CONTACT and window.uid != account:
onion_pub_key, timestamp, origin, assembly_packet = separate_headers(plaintext,
[ONION_SERVICE_PUBLIC_KEY_LENGTH,
TIMESTAMP_LENGTH,
ORIGIN_HEADER_LENGTH])
if window.type == WIN_TYPE_CONTACT and onion_pub_key != window.uid:
continue
time_stamp = datetime.fromtimestamp(struct.unpack('<L', pt[1024:1028])[0])
origin = pt[1028:1029]
assembly_header = pt[1029:1030]
assembly_pt = pt[1030:1325]
key = origin.decode() + account
packet = packet_list.get_packet(onion_pub_key, origin, MESSAGE, log_access=True)
try:
packet.add_packet(assembly_packet)
except FunctionReturn:
continue
if not packet.is_complete:
continue
if assembly_header == M_C_HEADER:
assembly_p_buf.pop(key, None)
whisper_byte, header, message = separate_headers(packet.assemble_message_packet(), [WHISPER_FIELD_LENGTH,
MESSAGE_HEADER_LENGTH])
whisper = bytes_to_bool(whisper_byte)
elif assembly_header == M_L_HEADER:
assembly_p_buf[key] = [assembly_pt]
if header == PRIVATE_MESSAGE_HEADER and window.type == WIN_TYPE_CONTACT:
message_log.append(
(bytes_to_timestamp(timestamp), message.decode(), onion_pub_key, packet.origin, whisper, False))
elif assembly_header == M_A_HEADER:
if key not in assembly_p_buf:
elif header == GROUP_MESSAGE_HEADER and window.type == WIN_TYPE_GROUP:
purp_group_id, message = separate_header(message, GROUP_ID_LENGTH)
if window.group is not None and purp_group_id != window.group.group_id:
continue
assembly_p_buf[key].append(assembly_pt)
elif assembly_header in [M_S_HEADER, M_E_HEADER]:
if assembly_header == M_S_HEADER:
depadded = rm_padding_bytes(assembly_pt)
decompressed = zlib.decompress(depadded)
else:
if key not in assembly_p_buf:
purp_msg_id, message = separate_header(message, GROUP_MSG_ID_LENGTH)
if packet.origin == ORIGIN_USER_HEADER:
if purp_msg_id == group_msg_id:
continue
assembly_p_buf[key].append(assembly_pt)
group_msg_id = purp_msg_id
pt_buffer = b''.join(assembly_p_buf.pop(key))
inner_layer = rm_padding_bytes(pt_buffer)
decrypted = auth_and_decrypt(nonce_ct_tag=inner_layer[:-KEY_LENGTH],
key =inner_layer[-KEY_LENGTH:])
decompressed = zlib.decompress(decrypted)
header = decompressed[:1]
if header == PRIVATE_MESSAGE_HEADER:
if window.type == WIN_TYPE_GROUP:
continue
message = decompressed[1:].decode()
ts_message_list.append((time_stamp, message, account, origin, False))
elif header == GROUP_MESSAGE_HEADER:
purp_msg_id = decompressed[1:1+GROUP_MSG_ID_LEN]
group_name, message = [f.decode() for f in decompressed[1+GROUP_MSG_ID_LEN:].split(US_BYTE)]
if group_name != window.name:
continue
if origin == ORIGIN_USER_HEADER:
if purp_msg_id == group_msg_id: # Skip duplicates of outgoing messages
continue
group_msg_id = purp_msg_id
ts_message_list.append((time_stamp, message, account, origin, False))
message_log.append(
(bytes_to_timestamp(timestamp), message.decode(), onion_pub_key, packet.origin, whisper, False))
log_file.close()
print_logs(ts_message_list[-msg_to_load:], export, msg_to_load, window, contact_list, group_list, settings)
print_logs(message_log[-msg_to_load:], export, msg_to_load, window, contact_list, group_list, settings)
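The read loop above uses `iter()` with a sentinel: `log_file.read(LOG_ENTRY_LENGTH)` is called repeatedly until it returns `b''` at EOF. The same pattern in isolation, with an in-memory buffer standing in for the log file and a toy entry length:

```python
import io

ENTRY_LENGTH = 4                       # toy record size for illustration
buffer = io.BytesIO(b'aaaabbbbcccc')   # three fixed-size records

# iter(callable, sentinel) keeps calling read() until it returns b''
entries = [ct for ct in iter(lambda: buffer.read(ENTRY_LENGTH), b'')]
```

This avoids an explicit `while True` / `break` loop and reads exactly one fixed-size record per iteration.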
def print_logs(ts_message_list: List[Tuple['datetime', str, str, bytes, bool]],
export: bool,
msg_to_load: int,
window: Union['TxWindow', 'RxWindow'],
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings') -> None:
"""Print list of logged messages to screen."""
def print_logs(message_list: List[MsgTuple],
export: bool,
msg_to_load: int,
window: Union['TxWindow', 'RxWindow'],
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings'
) -> None:
"""Print list of logged messages to screen or export them to file."""
terminal_width = get_terminal_width()
system, m_dir = dict(tx=("TxM", "sent to"),
rx=("RxM", "to/from"),
ut=("UtM", "to/from"))[settings.software_operation]
system, m_dir = {TX: ("Transmitter", "sent to"),
RX: ("Receiver", "to/from")}[settings.software_operation]
f_name = open(f"{system} - Plaintext log ({window.name})", 'w+') if export else sys.stdout
subset = '' if msg_to_load == 0 else f"{msg_to_load} most recent "
title = textwrap.fill(f"Logfile of {subset}message{'' if msg_to_load == 1 else 's'} {m_dir} {window.name}", terminal_width)
title = textwrap.fill(f"Log file of {subset}message(s) {m_dir} {window.type} {window.name}", terminal_width)
log_window = RxWindow(window.uid, contact_list, group_list, settings)
packet_list = PacketList(settings, contact_list)
log_window = RxWindow(window.uid, contact_list, group_list, settings, packet_list)
log_window.is_active = True
log_window.message_log = ts_message_list
log_window.message_log = message_list
if ts_message_list:
if message_list:
if not export:
clear_screen()
print(title + '\n' + terminal_width * '', file=f_name)
log_window.redraw( file=f_name)
print("<End of logfile>\n", file=f_name)
print(title, file=f_name)
print(terminal_width * '', file=f_name)
log_window.redraw( file=f_name)
print("<End of log file>\n", file=f_name)
else:
raise FunctionReturn(f"No logged messages for '{window.uid}'")
raise FunctionReturn(f"No logged messages for {window.type} '{window.name}'.", head_clear=True)
if export:
f_name.close()
def re_encrypt(previous_key: bytes, new_key: bytes, settings: 'Settings') -> None:
"""Re-encrypt log database with new master key."""
def change_log_db_key(previous_key: bytes,
new_key: bytes,
settings: 'Settings'
) -> None:
"""Re-encrypt log database with a new master key."""
ensure_dir(DIR_USER_DATA)
file_name = f'{DIR_USER_DATA}{settings.software_operation}_logs'
temp_name = f'{DIR_USER_DATA}{settings.software_operation}_logs_temp'
temp_name = f'{file_name}_temp'
if not os.path.isfile(file_name):
raise FunctionReturn(f"Error: Could not find log database.")
raise FunctionReturn("Error: Could not find log database.")
if os.path.isfile(temp_name):
os.remove(temp_name)
@@ -260,9 +305,9 @@ def re_encrypt(previous_key: bytes, new_key: bytes, settings: 'Settings') -> Non
f_old = open(file_name, 'rb')
f_new = open(temp_name, 'ab+')
for ct_old in iter(lambda: f_old.read(LOG_ENTRY_LENGTH), b''):
pt_new = auth_and_decrypt(ct_old, key=previous_key)
f_new.write(encrypt_and_sign(pt_new, key=new_key))
for ct in iter(lambda: f_old.read(LOG_ENTRY_LENGTH), b''):
pt = auth_and_decrypt(ct, key=previous_key, database=file_name)
f_new.write(encrypt_and_sign(pt, key=new_key))
f_old.close()
f_new.close()
@@ -271,115 +316,85 @@ def re_encrypt(previous_key: bytes, new_key: bytes, settings: 'Settings') -> Non
os.rename(temp_name, file_name)
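The loop above streams the log database through `auth_and_decrypt`/`encrypt_and_sign` one fixed-length record at a time, writing to a temp file that replaces the original only once complete. A minimal stdlib sketch of the same record-by-record pattern, with a placeholder transform standing in for the real re-encryption:

```python
import os

RECORD_LEN = 32  # stand-in for LOG_ENTRY_LENGTH

def rewrite_records(path: str, transform) -> None:
    """Rewrite fixed-length records through `transform`, then swap files atomically."""
    temp = path + '_temp'
    with open(path, 'rb') as f_old, open(temp, 'wb') as f_new:
        # iter() with a b'' sentinel yields RECORD_LEN-sized chunks until EOF
        for record in iter(lambda: f_old.read(RECORD_LEN), b''):
            f_new.write(transform(record))
    os.replace(temp, path)  # atomic swap, like the remove/rename pair above

# Demo: three records, transformed with a placeholder (byte-reversal) "cipher"
with open('demo_log', 'wb') as f:
    f.write(bytes(range(RECORD_LEN)) * 3)
rewrite_records('demo_log', lambda r: r[::-1])
```

Writing to a temp file first means a crash mid-rewrite never leaves the database half re-encrypted.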
def remove_logs(selector: str,
settings: 'Settings',
master_key: 'MasterKey') -> None:
"""Remove log entries for selector (group name / account).
def remove_logs(contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
master_key: 'MasterKey',
selector: bytes
) -> None:
"""\
Remove log entries for the selector (an account's public key or a group ID).
If selector is a contact, all messages sent to and received from
the contact are removed. If selector is a group, only messages
for that group are removed.
If the selector is a public key, all messages (both the private
conversation and any associated group messages) sent to and received
from the associated contact are removed. If the selector is a group
ID, only messages for the group identified by that group ID are
removed.
"""
ensure_dir(DIR_USER_DATA)
file_name = f'{DIR_USER_DATA}{settings.software_operation}_logs'
if not os.path.isfile(file_name):
raise FunctionReturn(f"Error: Could not find log database.")
log_file = open(file_name, 'rb')
ct_to_keep = [] # type: List[bytes]
maybe_keep_buf = defaultdict(list) # type: DefaultDict[str, List[bytes]]
assembly_p_buf = defaultdict(list) # type: DefaultDict[str, List[bytes]]
removed = False
window_type = WIN_TYPE_CONTACT if re.match(ACCOUNT_FORMAT, selector) else WIN_TYPE_GROUP
file_name = f'{DIR_USER_DATA}{settings.software_operation}_logs'
temp_name = f'{file_name}_temp'
log_file = get_logfile(file_name)
packet_list = PacketList(settings, contact_list)
ct_to_keep = [] # type: List[bytes]
removed = False
contact = len(selector) == ONION_SERVICE_PUBLIC_KEY_LENGTH
for ct in iter(lambda: log_file.read(LOG_ENTRY_LENGTH), b''):
pt = auth_and_decrypt(ct, key=master_key.master_key)
account = bytes_to_str(pt[0:1024])
plaintext = auth_and_decrypt(ct, master_key.master_key, database=file_name)
if window_type == WIN_TYPE_CONTACT:
if selector == account:
onion_pub_key, _, origin, assembly_packet = separate_headers(plaintext, [ONION_SERVICE_PUBLIC_KEY_LENGTH,
TIMESTAMP_LENGTH,
ORIGIN_HEADER_LENGTH])
if contact:
if onion_pub_key == selector:
removed = True
continue
else:
ct_to_keep.append(ct)
# To remove messages for specific group, messages in log database must
# be assembled to reveal their group name. Assembly packets' ciphertexts are
# buffered to 'maybe_keep_buf', from where they will be moved to 'ct_to_keep'
# if their associated group name differs from the one selected for log removal.
elif window_type == WIN_TYPE_GROUP:
origin = pt[1028:1029]
assembly_header = pt[1029:1030]
assembly_pt = pt[1030:1325]
key = origin.decode() + account
else: # Group
packet = packet_list.get_packet(onion_pub_key, origin, MESSAGE, log_access=True)
try:
packet.add_packet(assembly_packet, ct)
except FunctionReturn:
continue
if not packet.is_complete:
continue
if assembly_header == M_C_HEADER:
# Since log database is being altered anyway, also discard
# sequences of assembly packets that end in cancel packet.
assembly_p_buf.pop(key, None)
maybe_keep_buf.pop(key, None)
_, header, message = separate_headers(packet.assemble_message_packet(), [WHISPER_FIELD_LENGTH,
MESSAGE_HEADER_LENGTH])
elif assembly_header == M_L_HEADER:
maybe_keep_buf[key] = [ct]
assembly_p_buf[key] = [assembly_pt]
if header == PRIVATE_MESSAGE_HEADER:
ct_to_keep.extend(packet.log_ct_list)
packet.clear_assembly_packets()
elif assembly_header == M_A_HEADER:
if key not in assembly_p_buf:
continue
maybe_keep_buf[key].append(ct)
assembly_p_buf[key].append(assembly_pt)
elif assembly_header in [M_S_HEADER, M_E_HEADER]:
if assembly_header == M_S_HEADER:
maybe_keep_buf[key] = [ct]
depadded = rm_padding_bytes(assembly_pt)
decompressed = zlib.decompress(depadded)
elif header == GROUP_MESSAGE_HEADER:
group_id, _ = separate_header(message, GROUP_ID_LENGTH)
if group_id == selector:
removed = True
else:
if key not in assembly_p_buf:
continue
maybe_keep_buf[key].append(ct)
assembly_p_buf[key].append(assembly_pt)
buffered_pt = b''.join(assembly_p_buf.pop(key))
inner_layer = rm_padding_bytes(buffered_pt)
decrypted = auth_and_decrypt(nonce_ct_tag=inner_layer[:-KEY_LENGTH],
key =inner_layer[-KEY_LENGTH:])
decompressed = zlib.decompress(decrypted)
# The message is assembled by this point. We thus know if the
# long message was a group message, and if it's to be removed.
header = decompressed[:1]
if header == PRIVATE_MESSAGE_HEADER:
ct_to_keep.extend(maybe_keep_buf.pop(key))
elif header == GROUP_MESSAGE_HEADER:
group_name, *_ = [f.decode() for f in decompressed[1+GROUP_MSG_ID_LEN:].split(US_BYTE)] # type: Tuple[str, Union[str, List[str]]]
if group_name == selector:
removed = True
else:
ct_to_keep.extend(maybe_keep_buf[key])
maybe_keep_buf.pop(key)
elif header in [GROUP_MSG_INVITEJOIN_HEADER, GROUP_MSG_MEMBER_ADD_HEADER,
GROUP_MSG_MEMBER_REM_HEADER, GROUP_MSG_EXIT_GROUP_HEADER]:
group_name, *_ = [f.decode() for f in decompressed[1:].split(US_BYTE)]
if group_name == selector:
removed = True
else:
ct_to_keep.extend(maybe_keep_buf[key])
maybe_keep_buf.pop(key)
ct_to_keep.extend(packet.log_ct_list)
packet.clear_assembly_packets()
log_file.close()
with open(file_name, 'wb+') as f:
if os.path.isfile(temp_name):
os.remove(temp_name)
with open(temp_name, 'wb+') as f:
if ct_to_keep:
f.write(b''.join(ct_to_keep))
w_type = {WIN_TYPE_GROUP: 'group', WIN_TYPE_CONTACT: 'contact'}[window_type]
os.remove(file_name)
os.rename(temp_name, file_name)
if not removed:
raise FunctionReturn(f"Found no log entries for {w_type} '{selector}'")
try:
name = contact_list.get_contact_by_pub_key(selector).nick \
if contact else group_list.get_group_by_id(selector).name
except StopIteration:
name = pub_key_to_short_address(selector) \
if contact else b58encode(selector)
c_print(f"Removed log entries for {w_type} '{selector}'", head=1, tail=1)
action = "Removed" if removed else "Found no"
win_type = "contact" if contact else "group"
raise FunctionReturn(f"{action} log entries for {win_type} '{name}'.")
View File
@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,107 +16,145 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import multiprocessing
import os.path
import time
from src.common.crypto import argon2_kdf, csprng, hash_chain
from src.common.encoding import int_to_bytes, bytes_to_int
from src.common.exceptions import graceful_exit
from src.common.crypto import argon2_kdf, blake2b, csprng
from src.common.encoding import bytes_to_int, int_to_bytes
from src.common.exceptions import CriticalError, graceful_exit
from src.common.input import pwd_prompt
from src.common.misc import ensure_dir
from src.common.output import c_print, clear_screen, phase, print_on_previous_line
from src.common.misc import ensure_dir, separate_headers
from src.common.output import clear_screen, m_print, phase, print_on_previous_line
from src.common.statics import *
class MasterKey(object):
"""\
MasterKey object manages the 32-byte
master key and methods related to it.
MasterKey object manages the 32-byte master key and methods related
to it. The master key is the key that protects all data written to disk.
"""
def __init__(self, operation: str, local_test: bool) -> None:
"""Create a new MasterKey object."""
self.master_key = None # type: bytes
self.file_name = f'{DIR_USER_DATA}{operation}_login_data'
self.local_test = local_test
ensure_dir(DIR_USER_DATA)
try:
if os.path.isfile(self.file_name):
self.load_master_key()
self.master_key = self.load_master_key()
else:
self.new_master_key()
except KeyboardInterrupt:
self.master_key = self.new_master_key()
except (EOFError, KeyboardInterrupt):
graceful_exit()
def new_master_key(self) -> None:
"""Create a new master key from salt and password."""
def new_master_key(self) -> bytes:
"""Create a new master key from password and salt.
The generated master key depends on a 256-bit salt and the
password entered by the user. Additional computational strength
is added by the slow hash function (Argon2d). This method
automatically tweaks the Argon2 memory parameter so that key
derivation on the hardware used takes at least three seconds. The
more cores and the faster each core is, the more security a
given password provides.
The preimage resistance of BLAKE2b prevents derivation of master
key from the stored hash, and Argon2d ensures brute force and
dictionary attacks against the master password are painfully
slow even with GPUs/ASICs/FPGAs, as long as the password is
sufficiently strong.
The salt does not need additional protection as the security it
provides depends on the salt space in relation to the number of
attacked targets (i.e. if two or more physically compromised
systems happen to share the same salt, the attacker can speed up
the attack against those systems with a time-memory trade-off
attack).
A 256-bit salt ensures that even in a group of 4.8*10^29 users,
the probability that two users share the same salt is just
10^(-18).*
* https://en.wikipedia.org/wiki/Birthday_attack
"""
password = MasterKey.new_password()
salt = csprng()
rounds = ARGON2_ROUNDS
salt = csprng(ARGON2_SALT_LENGTH)
memory = ARGON2_MIN_MEMORY
parallelism = multiprocessing.cpu_count()
if self.local_test:
parallelism = max(1, parallelism // 2)
phase("Deriving master key", head=2)
while True:
time_start = time.monotonic()
master_key, parallellism = argon2_kdf(password, salt, rounds, memory=memory, local_test=self.local_test)
time_final = time.monotonic() - time_start
master_key = argon2_kdf(password, salt, ARGON2_ROUNDS, memory, parallelism)
kd_time = time.monotonic() - time_start
if time_final > 3.0:
self.master_key = master_key
ensure_dir(f'{DIR_USER_DATA}/')
if kd_time < MIN_KEY_DERIVATION_TIME:
memory *= 2
else:
ensure_dir(DIR_USER_DATA)
with open(self.file_name, 'wb+') as f:
f.write(salt
+ hash_chain(self.master_key)
+ int_to_bytes(rounds)
+ blake2b(master_key)
+ int_to_bytes(memory)
+ int_to_bytes(parallellism))
+ int_to_bytes(parallelism))
phase(DONE)
break
else:
memory *= 2
return master_key
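The loop above doubles the Argon2 memory parameter until a single derivation takes at least MIN_KEY_DERIVATION_TIME. The tuning pattern can be sketched with a stdlib KDF (PBKDF2 standing in for Argon2d, iteration count standing in for the memory parameter, and a much shorter time floor for illustration):

```python
import hashlib
import time

MIN_KD_TIME = 0.05  # seconds; TFC's actual floor is around three seconds

def tune_kdf(password: bytes, salt: bytes):
    """Double the cost parameter until one derivation meets the time floor."""
    cost = 10_000
    while True:
        start = time.monotonic()
        key = hashlib.pbkdf2_hmac('sha256', password, salt, cost)
        if time.monotonic() - start >= MIN_KD_TIME:
            return key, cost  # the cost is stored so login can re-derive the same key
        cost *= 2

key, cost = tune_kdf(b'example password', b'\x00' * 32)
```

As in the code above, the final cost parameter must be persisted alongside the salt, since re-deriving the same key at login requires the exact same parameters.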
def load_master_key(self) -> None:
"""Derive master key from password and salt."""
def load_master_key(self) -> bytes:
"""Derive the master key from password and salt.
Load the salt, hash, and key derivation settings from the login
database. Derive the purported master key from the salt and
entered password. If the BLAKE2b hash of the derived master key
matches the hash in the login database, accept the derived
master key.
"""
with open(self.file_name, 'rb') as f:
data = f.read()
salt = data[0:32]
key_hash = data[32:64]
rounds = bytes_to_int(data[64:72])
memory = bytes_to_int(data[72:80])
parallelism = bytes_to_int(data[80:88])
if len(data) != MASTERKEY_DB_SIZE:
raise CriticalError(f"Invalid {self.file_name} database size.")
salt, key_hash, memory_bytes, parallelism_bytes \
= separate_headers(data, [ARGON2_SALT_LENGTH, BLAKE2_DIGEST_LENGTH, ENCODED_INTEGER_LENGTH])
memory = bytes_to_int(memory_bytes)
parallelism = bytes_to_int(parallelism_bytes)
while True:
password = MasterKey.get_password()
phase("Deriving master key", head=2, offset=16)
purp_key, _ = argon2_kdf(password, salt, rounds, memory, parallelism)
phase("Deriving master key", head=2, offset=len("Password correct"))
purp_key = argon2_kdf(password, salt, ARGON2_ROUNDS, memory, parallelism)
if hash_chain(purp_key) == key_hash:
self.master_key = purp_key
phase("Password correct", done=True)
clear_screen(delay=0.5)
break
if blake2b(purp_key) == key_hash:
phase("Password correct", done=True, delay=1)
clear_screen()
return purp_key
else:
phase("Invalid password", done=True)
print_on_previous_line(reps=5, delay=1)
phase("Invalid password", done=True, delay=1)
print_on_previous_line(reps=5)
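The check above accepts the purported key only when its BLAKE2b digest matches the hash stored in the login database. With stdlib `hashlib.blake2b` and a constant-time comparison, the verification step looks roughly like:

```python
import hashlib
import hmac

def verify_master_key(purported_key: bytes, stored_hash: bytes) -> bool:
    """Return True if the purported key hashes to the stored 32-byte digest."""
    digest = hashlib.blake2b(purported_key, digest_size=32).digest()
    # compare_digest avoids leaking the match position through timing
    return hmac.compare_digest(digest, stored_hash)

# Simulated login database entry for a known 32-byte key
stored_hash = hashlib.blake2b(b'k' * 32, digest_size=32).digest()
```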
@classmethod
def new_password(cls, purpose: str = "master password") -> str:
"""Prompt user to enter and confirm a new password."""
"""Prompt the user to enter and confirm a new password."""
password_1 = pwd_prompt(f"Enter a new {purpose}: ")
password_2 = pwd_prompt(f"Confirm the {purpose}: ", second=True)
password_2 = pwd_prompt(f"Confirm the {purpose}: ", repeat=True)
if password_1 == password_2:
return password_1
else:
c_print("Error: Passwords did not match. Try again.", head=1, tail=1)
time.sleep(1)
print_on_previous_line(reps=7)
m_print("Error: Passwords did not match. Try again.", head=1, tail=1)
print_on_previous_line(delay=1, reps=7)
return cls.new_password(purpose)
@classmethod
def get_password(cls, purpose: str = "master password") -> str:
"""Prompt user to enter a password."""
"""Prompt the user to enter a password."""
return pwd_prompt(f"Enter {purpose}: ")
src/common/db_onion.py (new file, 102 lines)
View File
@@ -0,0 +1,102 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import typing
import nacl.signing
from src.common.crypto import auth_and_decrypt, csprng, encrypt_and_sign
from src.common.encoding import pub_key_to_onion_address, pub_key_to_short_address
from src.common.misc import ensure_dir
from src.common.output import phase
from src.common.statics import *
if typing.TYPE_CHECKING:
from src.common.db_masterkey import MasterKey
class OnionService(object):
"""\
OnionService object manages the persistent Ed25519 key used
to create a v3 Tor Onion Service on the Networked Computer.
The key is generated by the Transmitter Program on the Source
Computer because this ensures that even when the Networked Computer
runs an amnesic Linux distribution like Tails, the long-term private
signing key is not lost between sessions.
The private key of the Onion Service cannot be kept as protected as
TFC's other private message/header keys (which never leave the
Source/Destination Computer). This is, however, acceptable, as the
Onion Service private key is only as secure as the networked
endpoint anyway.
"""
def __init__(self, master_key: 'MasterKey') -> None:
"""Create a new OnionService object."""
self.master_key = master_key
self.file_name = f'{DIR_USER_DATA}{TX}_onion_db'
self.is_delivered = False
self.conf_code = csprng(CONFIRM_CODE_LENGTH)
ensure_dir(DIR_USER_DATA)
if os.path.isfile(self.file_name):
self.onion_private_key = self.load_onion_service_private_key()
else:
self.onion_private_key = self.new_onion_service_private_key()
self.store_onion_service_private_key()
assert len(self.onion_private_key) == ONION_SERVICE_PRIVATE_KEY_LENGTH
self.public_key = bytes(nacl.signing.SigningKey(seed=self.onion_private_key).verify_key)
self.user_onion_address = pub_key_to_onion_address(self.public_key)
self.user_short_address = pub_key_to_short_address(self.public_key)
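`pub_key_to_onion_address` above maps the Ed25519 public key to a v3 onion address. Per Tor's rend-spec-v3, the address is the base32 encoding of `pubkey || checksum || version`, where the checksum is the first two bytes of SHA3-256(".onion checksum" || pubkey || version). A stdlib sketch:

```python
import base64
import hashlib

def v3_onion_address(public_key: bytes) -> str:
    """Derive the 56-character v3 onion address from a 32-byte Ed25519 public key."""
    version = b'\x03'
    checksum = hashlib.sha3_256(b'.onion checksum' + public_key + version).digest()[:2]
    # 32 + 2 + 1 = 35 bytes encode to exactly 56 base32 characters, no padding
    return base64.b32encode(public_key + checksum + version).decode().lower()

addr = v3_onion_address(bytes(32))  # example with an all-zero test key
```

The checksum lets clients reject mistyped addresses before attempting a connection.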
@staticmethod
def new_onion_service_private_key() -> bytes:
"""Generate a new Onion Service private key and store it."""
phase("Generate Tor OS key")
onion_private_key = csprng(ONION_SERVICE_PRIVATE_KEY_LENGTH)
phase(DONE)
return onion_private_key
def store_onion_service_private_key(self) -> None:
"""Store Onion Service private key to an encrypted database."""
ct_bytes = encrypt_and_sign(self.onion_private_key, self.master_key.master_key)
ensure_dir(DIR_USER_DATA)
with open(self.file_name, 'wb+') as f:
f.write(ct_bytes)
def load_onion_service_private_key(self) -> bytes:
"""Load the Onion Service private key from the encrypted database."""
with open(self.file_name, 'rb') as f:
ct_bytes = f.read()
onion_private_key = auth_and_decrypt(ct_bytes, self.master_key.master_key, database=self.file_name)
return onion_private_key
def new_confirmation_code(self) -> None:
"""Generate new confirmation code for Onion Service data."""
self.conf_code = csprng(CONFIRM_CODE_LENGTH)
View File
@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,11 +16,10 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import serial
import textwrap
import typing
@@ -29,10 +29,9 @@ from src.common.crypto import auth_and_decrypt, encrypt_and_sign
from src.common.encoding import bool_to_bytes, double_to_bytes, int_to_bytes
from src.common.encoding import bytes_to_bool, bytes_to_double, bytes_to_int
from src.common.exceptions import CriticalError, FunctionReturn
from src.common.misc import calculate_race_condition_delay, calculate_serial_delays
from src.common.misc import ensure_dir, get_terminal_width, round_up
from src.common.input import yes
from src.common.output import c_print, clear_screen
from src.common.misc import ensure_dir, get_terminal_width, round_up
from src.common.output import clear_screen, m_print
from src.common.statics import *
if typing.TYPE_CHECKING:
@@ -43,101 +42,84 @@ if typing.TYPE_CHECKING:
class Settings(object):
"""\
Settings object stores all user adjustable
settings under an encrypted database.
Settings object stores user-adjustable settings (excluding those
related to the serial interface) in an encrypted database.
"""
def __init__(self,
master_key: 'MasterKey',
operation: str,
local_test: bool,
dd_sockets: bool) -> None:
master_key: 'MasterKey', # MasterKey object
operation: str, # Operation mode of the program (Tx or Rx)
local_test: bool, # Local testing setting from command-line argument
) -> None:
"""Create a new Settings object.
The settings below are altered from within the program itself.
Changes made to the default settings are stored in encrypted
settings database.
:param master_key: MasterKey object
:param operation: Operation mode of the program (tx or rx)
:param local_test: Setting value passed from command-line argument
:param dd_sockets: Setting value passed from command-line argument
The settings below are defaults, and are only to be altered from
within the program itself. Changes made to the default settings
are stored in the encrypted settings database, from which they
are loaded when the program starts.
"""
# Common settings
self.disable_gui_dialog = False
self.max_number_of_group_members = 20
self.max_number_of_groups = 20
self.max_number_of_contacts = 20
self.serial_baudrate = 19200
self.serial_error_correction = 5
self.max_number_of_group_members = 50
self.max_number_of_groups = 50
self.max_number_of_contacts = 50
self.log_messages_by_default = False
self.accept_files_by_default = False
self.show_notifications_by_default = True
self.logfile_masking = False
self.log_file_masking = False
# Transmitter settings
self.txm_usb_serial_adapter = True
self.nh_bypass_messages = True
self.confirm_sent_files = True
self.double_space_exits = False
self.traffic_masking = False
self.traffic_masking_static_delay = 2.0
self.traffic_masking_random_delay = 2.0
self.multi_packet_random_delay = False
self.max_duration_of_random_delay = 10.0
self.nc_bypass_messages = False
self.confirm_sent_files = True
self.double_space_exits = False
self.traffic_masking = False
self.tm_static_delay = 2.0
self.tm_random_delay = 2.0
# Relay Settings
self.allow_contact_requests = True
# Receiver settings
self.rxm_usb_serial_adapter = True
self.new_message_notify_preview = False
self.new_message_notify_duration = 1.0
self.new_message_notify_preview = False
self.new_message_notify_duration = 1.0
self.max_decompress_size = 100_000_000
self.master_key = master_key
self.software_operation = operation
self.local_testing_mode = local_test
self.data_diode_sockets = dd_sockets
self.file_name = f'{DIR_USER_DATA}{operation}_settings'
self.key_list = list(vars(self).keys())
self.key_list = self.key_list[:self.key_list.index('master_key')]
self.all_keys = list(vars(self).keys())
self.key_list = self.all_keys[:self.all_keys.index('master_key')]
self.defaults = {k: self.__dict__[k] for k in self.key_list}
ensure_dir(DIR_USER_DATA)
if os.path.isfile(self.file_name):
self.load_settings()
if operation == RX:
# TxM is unable to send serial interface type changing command if
# RxM looks for the type of adapter user doesn't have available.
# Therefore setup() is run every time the Receiver program starts.
self.setup()
else:
self.setup()
self.store_settings()
# Following settings change only when program is restarted
self.session_serial_error_correction = self.serial_error_correction
self.session_serial_baudrate = self.serial_baudrate
self.session_traffic_masking = self.traffic_masking
self.session_usb_serial_adapter = self.rxm_usb_serial_adapter if operation == RX else self.txm_usb_serial_adapter
self.race_condition_delay = calculate_race_condition_delay(self, txm=True)
self.rxm_receive_timeout, self.txm_inter_packet_delay = calculate_serial_delays(self.session_serial_baudrate)
self.store_settings()
def store_settings(self) -> None:
"""Store settings to encrypted database."""
"""Store settings to an encrypted database.
The plaintext in the encrypted database is a constant-length
byte string regardless of the stored setting values.
"""
attribute_list = [self.__getattribute__(k) for k in self.key_list]
pt_bytes = b''
bytes_lst = []
for a in attribute_list:
if isinstance(a, bool):
pt_bytes += bool_to_bytes(a)
bytes_lst.append(bool_to_bytes(a))
elif isinstance(a, int):
pt_bytes += int_to_bytes(a)
bytes_lst.append(int_to_bytes(a))
elif isinstance(a, float):
pt_bytes += double_to_bytes(a)
bytes_lst.append(double_to_bytes(a))
else:
raise CriticalError("Invalid attribute type in settings.")
pt_bytes = b''.join(bytes_lst)
ct_bytes = encrypt_and_sign(pt_bytes, self.master_key.master_key)
ensure_dir(DIR_USER_DATA)
@@ -145,11 +127,11 @@ class Settings(object):
f.write(ct_bytes)
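`store_settings` above keeps the plaintext length constant by encoding every setting as a fixed-length field. A `struct`-based sketch of the same idea (1-byte bools, 8-byte unsigned ints, 8-byte doubles; the field sizes are assumptions mirroring constants like ENCODED_BOOLEAN_LENGTH):

```python
import struct

def encode_settings(values) -> bytes:
    """Encode a list of bool/int/float settings as fixed-length big-endian fields."""
    fields = []
    for v in values:
        if isinstance(v, bool):       # test bool first: bool is a subclass of int
            fields.append(struct.pack('>?', v))
        elif isinstance(v, int):
            fields.append(struct.pack('>Q', v))
        elif isinstance(v, float):
            fields.append(struct.pack('>d', v))
        else:
            raise TypeError("unsupported setting type")
    return b''.join(fields)

blob = encode_settings([True, 50, 2.0])  # 1 + 8 + 8 = 17 bytes for any values
```

Because the encoded length depends only on the setting types, the ciphertext leaks nothing about the chosen values.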
def load_settings(self) -> None:
"""Load settings from encrypted database."""
"""Load settings from the encrypted database."""
with open(self.file_name, 'rb') as f:
ct_bytes = f.read()
pt_bytes = auth_and_decrypt(ct_bytes, self.master_key.master_key)
pt_bytes = auth_and_decrypt(ct_bytes, self.master_key.master_key, database=self.file_name)
# Update settings based on plaintext byte string content
for key in self.key_list:
@@ -158,15 +140,15 @@ class Settings(object):
if isinstance(attribute, bool):
value = bytes_to_bool(pt_bytes[0]) # type: Union[bool, int, float]
pt_bytes = pt_bytes[BOOLEAN_SETTING_LEN:]
pt_bytes = pt_bytes[ENCODED_BOOLEAN_LENGTH:]
elif isinstance(attribute, int):
value = bytes_to_int(pt_bytes[:INTEGER_SETTING_LEN])
pt_bytes = pt_bytes[INTEGER_SETTING_LEN:]
value = bytes_to_int(pt_bytes[:ENCODED_INTEGER_LENGTH])
pt_bytes = pt_bytes[ENCODED_INTEGER_LENGTH:]
elif isinstance(attribute, float):
value = bytes_to_double(pt_bytes[:FLOAT_SETTING_LEN])
pt_bytes = pt_bytes[FLOAT_SETTING_LEN:]
value = bytes_to_double(pt_bytes[:ENCODED_FLOAT_LENGTH])
pt_bytes = pt_bytes[ENCODED_FLOAT_LENGTH:]
else:
raise CriticalError("Invalid data type in settings default values.")
@@ -174,34 +156,33 @@ class Settings(object):
setattr(self, key, value)
def change_setting(self,
key: str,
value_str: str,
key: str, # Name of the setting
value_str: str, # Value of the setting
contact_list: 'ContactList',
group_list: 'GroupList') -> None:
group_list: 'GroupList'
) -> None:
"""Parse, update and store new setting value."""
attribute = self.__getattribute__(key)
try:
if isinstance(attribute, bool):
value_ = value_str.lower()
if value_ not in ['true', 'false']:
raise ValueError
value = (value_ == 'true') # type: Union[bool, int, float]
value = dict(true=True, false=False)[value_str.lower()] # type: Union[bool, int, float]
elif isinstance(attribute, int):
value = int(value_str)
if value < 0 or value > 2**64-1:
if value < 0 or value > MAX_INT:
raise ValueError
elif isinstance(attribute, float):
value = float(value_str)
if value < 0.0:
raise ValueError
else:
raise CriticalError("Invalid attribute type in settings.")
except ValueError:
raise FunctionReturn(f"Error: Invalid value '{value_str}'")
except (KeyError, ValueError):
raise FunctionReturn(f"Error: Invalid value '{value_str}'.", head_clear=True)
self.validate_key_value_pair(key, value, contact_list, group_list)
@@ -209,60 +190,52 @@ class Settings(object):
self.store_settings()
@staticmethod
def validate_key_value_pair(key: str,
value: Union[int, float, bool],
def validate_key_value_pair(key: str, # Name of the setting
value: Union[int, float, bool], # Value of the setting
contact_list: 'ContactList',
group_list: 'GroupList') -> None:
"""\
Perform further evaluation on settings
the values of which have restrictions.
"""
group_list: 'GroupList'
) -> None:
"""Evaluate values for settings that have further restrictions."""
if key in ['max_number_of_group_members', 'max_number_of_groups', 'max_number_of_contacts']:
if value % 10 != 0 or value == 0:
raise FunctionReturn("Error: Database padding settings must be divisible by 10.")
raise FunctionReturn("Error: Database padding settings must be divisible by 10.", head_clear=True)
if key == 'max_number_of_group_members':
min_size = round_up(group_list.largest_group())
if value < min_size:
raise FunctionReturn(f"Error: Can't set max number of members lower than {min_size}.")
raise FunctionReturn(
f"Error: Can't set the max number of members lower than {min_size}.", head_clear=True)
if key == 'max_number_of_groups':
min_size = round_up(len(group_list))
if value < min_size:
raise FunctionReturn(f"Error: Can't set max number of groups lower than {min_size}.")
raise FunctionReturn(
f"Error: Can't set the max number of groups lower than {min_size}.", head_clear=True)
if key == 'max_number_of_contacts':
min_size = round_up(len(contact_list))
if value < min_size:
raise FunctionReturn(f"Error: Can't set max number of contacts lower than {min_size}.")
if key == 'serial_baudrate':
if value not in serial.Serial().BAUDRATES:
raise FunctionReturn("Error: Specified baud rate is not supported.")
c_print("Baud rate will change on restart.", head=1, tail=1)
if key == 'serial_error_correction':
if value < 1:
raise FunctionReturn("Error: Invalid value for error correction ratio.")
c_print("Error correction ratio will change on restart.", head=1, tail=1)
raise FunctionReturn(
f"Error: Can't set the max number of contacts lower than {min_size}.", head_clear=True)
if key == 'new_message_notify_duration' and value < 0.05:
raise FunctionReturn("Error: Too small value for message notify duration.")
raise FunctionReturn("Error: Too small value for message notify duration.", head_clear=True)
if key in ['rxm_usb_serial_adapter', 'txm_usb_serial_adapter']:
c_print("Interface will change on restart.", head=1, tail=1)
if key in ['tm_static_delay', 'tm_random_delay']:
if key in ['traffic_masking', 'traffic_masking_static_delay', 'traffic_masking_random_delay']:
c_print("Traffic masking setting will change on restart.", head=1, tail=1)
for key_, name, min_setting in [('tm_static_delay', 'static', TRAFFIC_MASKING_MIN_STATIC_DELAY),
('tm_random_delay', 'random', TRAFFIC_MASKING_MIN_RANDOM_DELAY)]:
if key == key_ and value < min_setting:
raise FunctionReturn(f"Error: Can't set {name} delay lower than {min_setting}.", head_clear=True)
def setup(self) -> None:
"""Prompt user to enter initial settings."""
clear_screen()
if not self.local_testing_mode:
if self.software_operation == TX:
self.txm_usb_serial_adapter = yes("Does TxM use USB-to-serial/TTL adapter?", head=1, tail=1)
else:
self.rxm_usb_serial_adapter = yes("Does RxM use USB-to-serial/TTL adapter?", head=1, tail=1)
if contact_list.settings.software_operation == TX:
m_print(["WARNING!", "Changing traffic masking delay can make your endpoint and traffic look unique!"],
bold=True, head=1, tail=1)
if not yes("Proceed anyway?"):
raise FunctionReturn("Aborted traffic masking setting change.", head_clear=True)
m_print("Traffic masking setting will change on restart.", head=1, tail=1)
def print_settings(self) -> None:
"""\
@@ -271,32 +244,30 @@ class Settings(object):
"""
desc_d = {
# Common settings
"disable_gui_dialog": "True replaces Tkinter dialogs with CLI prompts",
"max_number_of_group_members": "Max members in group (TxM/RxM must have the same value)",
"max_number_of_groups": "Max number of groups (TxM/RxM must have the same value)",
"max_number_of_contacts": "Max number of contacts (TxM/RxM must have the same value)",
"serial_baudrate": "The speed of serial interface in bauds per second",
"serial_error_correction": "Number of byte errors serial datagrams can recover from",
"disable_gui_dialog": "True replaces GUI dialogs with CLI prompts",
"max_number_of_group_members": "Maximum number of members in a group",
"max_number_of_groups": "Maximum number of groups",
"max_number_of_contacts": "Maximum number of contacts",
"log_messages_by_default": "Default logging setting for new contacts/groups",
"accept_files_by_default": "Default file reception setting for new contacts",
"show_notifications_by_default": "Default message notification setting for new contacts/groups",
"logfile_masking": "True hides real size of logfile during traffic masking",
"log_file_masking": "True hides real size of log file during traffic masking",
# Transmitter settings
"txm_usb_serial_adapter": "False uses system's integrated serial interface",
"nh_bypass_messages": "False removes NH bypass interrupt messages",
"nc_bypass_messages": "False removes Networked Computer bypass interrupt messages",
"confirm_sent_files": "False sends files without asking for confirmation",
"double_space_exits": "True exits, False clears screen with double space command",
"traffic_masking": "True enables traffic masking to hide metadata",
"traffic_masking_static_delay": "Static delay between traffic masking packets",
"traffic_masking_random_delay": "Max random delay for traffic masking timing obfuscation",
"multi_packet_random_delay": "True adds IM server spam guard evading delay",
"max_duration_of_random_delay": "Maximum time for random spam guard evasion delay",
"tm_static_delay": "The static delay between traffic masking packets",
"tm_random_delay": "Max random delay for traffic masking timing obfuscation",
# Relay settings
"allow_contact_requests": "When False, does not show TFC contact requests",
# Receiver settings
"rxm_usb_serial_adapter": "False uses system's integrated serial interface",
"new_message_notify_preview": "When True, shows preview of received message",
"new_message_notify_duration": "Number of seconds new message notification appears"}
"new_message_notify_preview": "When True, shows a preview of the received message",
"new_message_notify_duration": "Number of seconds new message notification appears",
"max_decompress_size": "Max size Receiver accepts when decompressing file"}
# Columns
c1 = ['Setting name']
@@ -304,38 +275,40 @@ class Settings(object):
c3 = ['Default value']
c4 = ['Description']
terminal_width = get_terminal_width()
desc_line_indent = 64
terminal_width = get_terminal_width()
description_indent = 64
if terminal_width < desc_line_indent + 1:
raise FunctionReturn("Error: Screen width is too small.")
if terminal_width < description_indent + 1:
raise FunctionReturn("Error: Screen width is too small.", head_clear=True)
# Populate columns with setting data
for key in self.defaults:
c1.append(key)
c2.append(str(self.__getattribute__(key)))
c3.append(str(self.defaults[key]))
description = desc_d[key]
wrapper = textwrap.TextWrapper(width=max(1, (terminal_width - desc_line_indent)))
wrapper = textwrap.TextWrapper(width=max(1, (terminal_width - description_indent)))
desc_lines = wrapper.fill(description).split('\n')
desc_string = desc_lines[0]
for l in desc_lines[1:]:
desc_string += '\n' + desc_line_indent * ' ' + l
for line in desc_lines[1:]:
desc_string += '\n' + description_indent * ' ' + line
if len(desc_lines) > 1:
desc_string += '\n'
c4.append(desc_string)
lst = []
for name, current, default, description in zip(c1, c2, c3, c4):
lst.append('{0:{1}} {2:{3}} {4:{5}} {6}'.format(
name, max(len(v) for v in c1) + SETTINGS_INDENT,
current, max(len(v) for v in c2) + SETTINGS_INDENT,
default, max(len(v) for v in c3) + SETTINGS_INDENT,
description))
# Calculate column widths
c1w, c2w, c3w = [max(len(v) for v in column) + SETTINGS_INDENT for column in [c1, c2, c3]]
lst.insert(1, get_terminal_width() * '─')
# Align columns by adding whitespace between fields of each line
lines = [f'{f1:{c1w}} {f2:{c2w}} {f3:{c3w}} {f4}' for f1, f2, f3, f4 in zip(c1, c2, c3, c4)]
# Add a terminal-wide line between the column names and the data
lines.insert(1, get_terminal_width() * '─')
# Print the settings
clear_screen()
print('\n' + '\n'.join(lst) + '\n')
print('\n' + '\n'.join(lines))

View File

@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,36 +16,43 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import base64
import hashlib
import struct
from typing import List, Union
from datetime import datetime
from typing import List, Union
from src.common.statics import *
def sha256d(message: bytes) -> bytes:
"""Chain SHA256 twice for Bitcoin WIF format."""
return hashlib.sha256(hashlib.sha256(message).digest()).digest()
return hashlib.sha256(
hashlib.sha256(message).digest()
).digest()
def b58encode(byte_string: bytes, file_key: bool = False) -> str:
"""Encode byte string to checksummed Base58 string.
def b58encode(byte_string: bytes, public_key: bool = False) -> str:
"""Encode byte string to check-summed Base58 string.
This format is exactly the same as Bitcoin's Wallet
Import Format for mainnet/testnet addresses.
This format is exactly the same as Bitcoin's Wallet Import Format
(WIF) for mainnet and testnet addresses.
https://en.bitcoin.it/wiki/Wallet_import_format
"""
b58_alphabet = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
net_id = b'\xef' if file_key else b'\x80'
mainnet_header = b'\x80'
testnet_header = b'\xef'
net_id = testnet_header if public_key else mainnet_header
byte_string = net_id + byte_string
byte_string += sha256d(byte_string)[:B58_CHKSUM_LEN]
byte_string += sha256d(byte_string)[:B58_CHECKSUM_LENGTH]
orig_len = len(byte_string)
original_len = len(byte_string)
byte_string = byte_string.lstrip(b'\x00')
new_len = len(byte_string)
@@ -58,15 +66,19 @@ def b58encode(byte_string: bytes, file_key: bool = False) -> str:
acc, mod = divmod(acc, 58)
encoded += b58_alphabet[mod]
return (encoded + (orig_len - new_len) * '1')[::-1]
return (encoded + (original_len - new_len) * b58_alphabet[0])[::-1]
def b58decode(string: str, file_key: bool = False) -> bytes:
"""Decode a Base58-encoded string and verify checksum."""
def b58decode(string: str, public_key: bool = False) -> bytes:
"""Decode a Base58-encoded string and verify the checksum."""
b58_alphabet = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
mainnet_header = b'\x80'
testnet_header = b'\xef'
net_id = testnet_header if public_key else mainnet_header
orig_len = len(string)
string = string.lstrip('1')
string = string.lstrip(b58_alphabet[0])
new_len = len(string)
p, acc = 1, 0
@@ -81,65 +93,103 @@ def b58decode(string: str, file_key: bool = False) -> bytes:
decoded_ = (bytes(decoded) + (orig_len - new_len) * b'\x00')[::-1] # type: Union[bytes, List[int]]
if sha256d(bytes(decoded_[:-B58_CHKSUM_LEN]))[:B58_CHKSUM_LEN] != decoded_[-B58_CHKSUM_LEN:]:
if sha256d(bytes(decoded_[:-B58_CHECKSUM_LENGTH]))[:B58_CHECKSUM_LENGTH] != decoded_[-B58_CHECKSUM_LENGTH:]:
raise ValueError
net_id = b'\xef' if file_key else b'\x80'
if decoded_[:1] != net_id:
if decoded_[:len(net_id)] != net_id:
raise ValueError
return bytes(decoded_[1:-B58_CHKSUM_LEN])
return bytes(decoded_[len(net_id):-B58_CHECKSUM_LENGTH])
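The WIF-style encode/decode pair above can be condensed into a standalone sketch. Names here are illustrative rather than TFC's own; the 4-byte double-SHA256 checksum and the 0x80/0xef network ID bytes match Bitcoin's Wallet Import Format:

```python
import hashlib

B58_ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
CHECKSUM_LEN = 4  # Bitcoin WIF uses a 4-byte double-SHA256 checksum

def sha256d(data: bytes) -> bytes:
    """SHA256 chained twice, as in Bitcoin's WIF."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def b58encode(data: bytes, testnet: bool = False) -> str:
    """Prefix with a network ID byte, append a checksum, convert to Base58."""
    data = (b'\xef' if testnet else b'\x80') + data
    data += sha256d(data)[:CHECKSUM_LEN]
    pad = len(data) - len(data.lstrip(b'\x00'))  # leading zero bytes -> '1's
    num = int.from_bytes(data, 'big')
    encoded = ''
    while num:
        num, mod = divmod(num, 58)
        encoded = B58_ALPHABET[mod] + encoded
    return pad * '1' + encoded

def b58decode(string: str, testnet: bool = False) -> bytes:
    """Convert from Base58, then verify the checksum and network ID byte."""
    pad = len(string) - len(string.lstrip('1'))
    num = 0
    for char in string.lstrip('1'):
        num = num * 58 + B58_ALPHABET.index(char)
    data = pad * b'\x00' + num.to_bytes((num.bit_length() + 7) // 8, 'big')
    payload, checksum = data[:-CHECKSUM_LEN], data[-CHECKSUM_LEN:]
    if sha256d(payload)[:CHECKSUM_LEN] != checksum:
        raise ValueError("Invalid Base58 checksum")
    if payload[:1] != (b'\xef' if testnet else b'\x80'):
        raise ValueError("Invalid network ID byte")
    return payload[1:]
```

A mismatched network ID (e.g. decoding a mainnet string as testnet) fails even when the checksum is intact, which is how the real code distinguishes key types.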
def b85encode(data: bytes) -> str:
"""Encode byte string with base85.
The encoding adds 25% of overhead, but it allows variable-length
transmissions when used together with a delimiter character.
"""
return base64.b85encode(data).decode()
def b10encode(fingerprint: bytes) -> str:
"""Encode bytestring in base10.
Base10 encoding is used in fingerprint comparison to allow distinct
communication:
Base64 has 75% efficiency, but encoding is bad as the user might
confuse uppercase I with lower case l, 0 with O, etc.
Base58 has 73% efficiency and removes the problem of Base64
explained above, but works only when manually typing
strings because the user has to take time to explain which
letters were capitalized etc.
Base16 has 50% efficiency and removes the capitalization problem
with Base58 but the choice is bad as '3', 'b', 'c', 'd'
and 'e' are hard to distinguish in the English language
(fingerprints are usually read aloud over off band call).
Base10 has 41% efficiency but natural languages have evolved in a
way that makes a clear distinction between the way different numbers
are pronounced: reading them is faster and less error-prone.
Compliments to Signal/WA developers for discovering this.
https://signal.org/blog/safety-number-updates/
"""
return str(int(fingerprint.hex(), base=16))
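A short illustration of the Base10 encoding described above; the example fingerprint is hypothetical, and the 41% figure follows from each decimal digit carrying log2(10) ≈ 3.32 of a byte's 8 bits:

```python
import hashlib

def b10encode(fingerprint: bytes) -> str:
    """Encode a fingerprint as a decimal string for verbal comparison.

    Note: int() drops leading zero bytes, so the decimal form is only
    unambiguous when the fingerprint length is fixed and known.
    """
    return str(int(fingerprint.hex(), base=16))

# A 32-byte (256-bit) fingerprint yields at most 78 decimal digits,
# since each digit carries log2(10) ~= 3.32 bits (~41% of a byte).
fingerprint = hashlib.sha256(b'example public key data').digest()
decimal_fp = b10encode(fingerprint)
```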
# Database unicode string padding
def unicode_padding(string: str) -> str:
"""Pad unicode string to 255 chars.
"""Pad Unicode string to 255 chars.
Database fields are padded with unicode chars and then encoded
Database fields are padded with Unicode chars and then encoded
with UTF-32 to hide the metadata about plaintext field length.
:param string: String to be padded
:return: Padded string
"""
assert len(string) < PADDING_LEN
assert len(string) < PADDING_LENGTH
length = PADDING_LEN - (len(string) % PADDING_LEN)
length = PADDING_LENGTH - (len(string) % PADDING_LENGTH)
string += length * chr(length)
assert len(string) == PADDING_LEN
assert len(string) == PADDING_LENGTH
return string
def rm_padding_str(string: str) -> str:
"""Remove padding from plaintext.
:param string: String from which padding is removed
:return: String without padding
"""
"""Remove padding from plaintext."""
return string[:-ord(string[-1:])]
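The padding scheme above can be exercised directly; the constants match the source, and the scheme is PKCS#7-style in that every pad character encodes the pad length:

```python
PADDING_LENGTH = 255

def unicode_padding(string: str) -> str:
    """Pad to 255 chars; each pad char's ordinal equals the pad length."""
    assert len(string) < PADDING_LENGTH
    length = PADDING_LENGTH - (len(string) % PADDING_LENGTH)
    return string + length * chr(length)

def rm_padding_str(string: str) -> str:
    """The ordinal of the last char tells how many chars to strip."""
    return string[:-ord(string[-1:])]

# Every padded field encodes to the same length:
# 255 chars * 4 bytes (UTF-32) + 4 bytes (BOM) = 1024 bytes.
```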
# Database constant length encoding
def onion_address_to_pub_key(account: str) -> bytes:
"""Encode TFC account to a public key byte string.
The public key is the most compact possible representation of a TFC
account, so it is useful when storing the address into databases.
"""
return base64.b32decode(account.upper())[:-(ONION_ADDRESS_CHECKSUM_LENGTH + ONION_SERVICE_VERSION_LENGTH)]
def bool_to_bytes(boolean: bool) -> bytes:
"""Convert boolean value to 1-byte byte string."""
"""Convert boolean value to a 1-byte byte string."""
return bytes([boolean])
def int_to_bytes(integer: int) -> bytes:
"""Convert integer to 8-byte byte string."""
"""Convert integer to an 8-byte byte string."""
return struct.pack('!Q', integer)
def double_to_bytes(double_: float) -> bytes:
"""Convert double to 8-byte byte string."""
"""Convert double to an 8-byte byte string."""
return struct.pack('d', double_)
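The struct-based conversions above can be exercised directly. One caveat worth noting: `'!Q'` fixes big-endian (network) byte order, while `'d'` without a byte-order prefix uses the host's native endianness, which matters if a database file ever moves between architectures:

```python
import struct

def int_to_bytes(integer: int) -> bytes:
    """'!Q' is a big-endian (network byte order) unsigned 64-bit integer."""
    return struct.pack('!Q', integer)

def bytes_to_int(byte_string: bytes) -> int:
    return struct.unpack('!Q', byte_string)[0]

def double_to_bytes(double_: float) -> bytes:
    """'d' without a byte-order prefix uses the host's native endianness."""
    return struct.pack('d', double_)

def bytes_to_double(byte_string: bytes) -> float:
    return struct.unpack('d', byte_string)[0]
```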
def str_to_bytes(string: str) -> bytes:
"""Pad string with unicode chars and encode it with UTF-32.
"""Pad string with Unicode chars and encode it with UTF-32.
Length of padded string is 255 * 4 + 4 (BOM) = 1024 bytes.
"""
@@ -148,26 +198,53 @@ def str_to_bytes(string: str) -> bytes:
# Decoding
def pub_key_to_onion_address(public_key: bytes) -> str:
"""Decode public key byte string to TFC account.
This decoding is exactly the same process as the conversion of a
v3 Onion Service's Ed25519 public key into the service ID:
https://gitweb.torproject.org/torspec.git/tree/rend-spec-v3.txt#n2019
"""
checksum = hashlib.sha3_256(ONION_ADDRESS_CHECKSUM_ID
+ public_key
+ ONION_SERVICE_VERSION
).digest()[:ONION_ADDRESS_CHECKSUM_LENGTH]
return base64.b32encode(public_key + checksum + ONION_SERVICE_VERSION).lower().decode()
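The conversion in the docstring above can be sketched standalone. The checksum tag `.onion checksum`, the 2-byte checksum length, and the version byte 0x03 are taken from Tor's rend-spec-v3 (the constant names below differ from TFC's statics):

```python
import base64
import hashlib

# Constants per Tor's rend-spec-v3 ("Encoding onion addresses")
CHECKSUM_ID = b'.onion checksum'
VERSION = b'\x03'
CHECKSUM_LENGTH = 2

def pub_key_to_onion_address(public_key: bytes) -> str:
    """Convert a 32-byte Ed25519 public key into a 56-char v3 onion address."""
    checksum = hashlib.sha3_256(CHECKSUM_ID + public_key + VERSION
                                ).digest()[:CHECKSUM_LENGTH]
    return base64.b32encode(public_key + checksum + VERSION).lower().decode()

def onion_address_to_pub_key(account: str) -> bytes:
    """Strip the checksum and version byte to recover the public key."""
    return base64.b32decode(account.upper())[:-(CHECKSUM_LENGTH + len(VERSION))]
```

The 35-byte payload (32-byte key + 2-byte checksum + 1-byte version) Base32-encodes to exactly 56 characters, so no `=` padding is ever needed.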
def pub_key_to_short_address(public_key: bytes) -> str:
"""Decode public key to TFC account and truncate it."""
return pub_key_to_onion_address(public_key)[:TRUNC_ADDRESS_LENGTH]
def bytes_to_bool(byte_string: Union[bytes, int]) -> bool:
"""Convert 1-byte byte string to boolean value."""
"""Convert 1-byte byte string to a boolean value."""
if isinstance(byte_string, bytes):
byte_string = byte_string[0]
return bool(byte_string)
def bytes_to_int(byte_string: bytes) -> int:
"""Convert 8-byte byte string to integer."""
return struct.unpack('!Q', byte_string)[0]
"""Convert 8-byte byte string to an integer."""
int_format = struct.unpack('!Q', byte_string)[0] # type: int
return int_format
def bytes_to_double(byte_string: bytes) -> float:
"""Convert 8-byte byte string to double."""
return struct.unpack('d', byte_string)[0]
float_format = struct.unpack('d', byte_string)[0] # type: float
return float_format
def bytes_to_str(byte_string: bytes) -> str:
"""Convert 1024-byte byte string to unicode string.
"""Convert 1024-byte byte string to Unicode string.
Decode byte string with UTF-32 and remove unicode padding.
Decode byte string with UTF-32 and remove Unicode padding.
"""
return rm_padding_str(byte_string.decode('utf-32'))
def bytes_to_timestamp(byte_string: bytes) -> datetime:
"""Covert 4-byte byte string to datetime object."""
return datetime.fromtimestamp(struct.unpack('<L', byte_string)[0])

View File

@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,28 +16,30 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import inspect
import sys
import time
import typing
from datetime import datetime
from typing import Optional
from src.common.output import c_print, clear_screen
from src.common.output import clear_screen, m_print
from src.common.statics import *
if typing.TYPE_CHECKING:
from src.rx.windows import RxWindow
from src.receiver.windows import RxWindow
class CriticalError(Exception):
"""A variety of errors during which TFC should gracefully exit."""
"""A severe exception that requires TFC to gracefully exit."""
def __init__(self, error_message: str) -> None:
graceful_exit("Critical error in function '{}':\n{}"
.format(inspect.stack()[1][3], error_message), clear=False, exit_code=1)
def __init__(self, error_message: str, exit_code: int = 1) -> None:
"""A severe exception that requires TFC to gracefully exit."""
graceful_exit(f"Critical error in function '{inspect.stack()[1][3]}':\n{error_message}",
clear=False, exit_code=exit_code)
class FunctionReturn(Exception):
@@ -44,33 +47,40 @@ class FunctionReturn(Exception):
def __init__(self,
message: str,
output: bool = True,
delay: float = 0,
window: 'RxWindow' = None,
head: int = 1,
tail: int = 1,
head_clear: bool = False,
tail_clear: bool = False) -> None:
window: Optional['RxWindow'] = None, # The window to include the message in
output: bool = True, # When False, doesn't print message when adding it to window
bold: bool = False, # When True, prints the message in bold
head_clear: bool = False, # When True, clears the screen before printing message
tail_clear: bool = False, # When True, clears the screen after message (needs delay)
delay: float = 0, # The delay before continuing
head: int = 1, # The number of new-lines to print before the message
tail: int = 1, # The number of new-lines to print after message
) -> None:
"""Print return message and return to exception handler function."""
self.message = message
if window is None:
if output:
if head_clear:
clear_screen()
c_print(self.message, head=head, tail=tail)
time.sleep(delay)
if tail_clear:
clear_screen()
m_print(self.message,
bold=bold,
head_clear=head_clear,
tail_clear=tail_clear,
delay=delay,
head=head,
tail=tail)
else:
window.add_new(datetime.now(), self.message, output=output)
def graceful_exit(message: str ='', clear: bool = True, exit_code: int = 0) -> None:
def graceful_exit(message: str = '', # Exit message to print
clear: bool = True, # When False, does not clear screen before printing message
exit_code: int = 0 # Value returned to parent process
) -> None:
"""Display a message and exit TFC."""
if clear:
clear_screen()
if message:
print('\n' + message)
print("\nExiting TFC.\n")
print(f"\nExiting {TFC}.\n")
sys.exit(exit_code)

View File

@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,166 +16,567 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import hashlib
import json
import multiprocessing.connection
import os
import os.path
import serial
import socket
import textwrap
import time
import typing
from serial.serialutil import SerialException
from typing import Any, Dict, Union
from datetime import datetime
from typing import Dict, Optional, Tuple, Union
from src.common.exceptions import CriticalError, graceful_exit
from src.common.misc import ignored
from src.common.output import phase, print_on_previous_line
from src.common.statics import *
from serial.serialutil import SerialException
from src.common.exceptions import CriticalError, FunctionReturn, graceful_exit
from src.common.input import yes
from src.common.misc import calculate_race_condition_delay, ensure_dir, ignored, get_terminal_width
from src.common.misc import separate_trailer
from src.common.output import m_print, phase, print_on_previous_line
from src.common.reed_solomon import ReedSolomonError, RSCodec
from src.common.statics import *
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_settings import Settings
from multiprocessing import Queue
def gateway_loop(queues: Dict[bytes, 'Queue'],
gateway: 'Gateway',
unittest: bool = False) -> None:
"""Loop that loads data from NH side gateway to RxM."""
unittest: bool = False
) -> None:
"""Load data from serial interface or socket into a queue.
Also place the current timestamp to queue to be delivered to the
Receiver Program. The timestamp is used both to notify when the sent
message was received by Relay Program, and as part of a commitment
scheme: For more information, see the section on "Covert channel
based on user interaction" under TFC's Security Design wiki article.
"""
queue = queues[GATEWAY_QUEUE]
while True:
with ignored(EOFError, KeyboardInterrupt):
queue.put(gateway.read())
queue.put((datetime.now(), gateway.read()))
if unittest:
break
class Gateway(object):
"""Gateway object is a wrapper for interfaces that connect TxM/RxM with NH."""
"""\
Gateway object is a wrapper for interfaces that connect
Source/Destination Computer with the Networked Computer.
"""
def __init__(self, settings: 'Settings') -> None:
def __init__(self,
operation: str,
local_test: bool,
dd_sockets: bool
) -> None:
"""Create a new Gateway object."""
self.settings = settings
self.interface = None # type: Union[Any]
self.settings = GatewaySettings(operation, local_test, dd_sockets)
self.tx_serial = None # type: serial.Serial
self.rx_serial = None # type: serial.Serial
self.rx_socket = None # type: Optional[multiprocessing.connection.Connection]
self.tx_socket = None # type: Optional[multiprocessing.connection.Connection]
# Set True when serial adapter is initially found so that further
# serial interface searches know to announce disconnection.
# Initialize Reed-Solomon erasure code handler
self.rs = RSCodec(2 * self.settings.session_serial_error_correction)
# Set True when the serial interface is initially found so that
# further interface searches know to announce disconnection.
self.init_found = False
if self.settings.local_testing_mode:
if self.settings.software_operation == TX:
if self.settings.software_operation in [TX, NC]:
self.client_establish_socket()
else:
if self.settings.software_operation in [NC, RX]:
self.server_establish_socket()
else:
self.establish_serial()
def write(self, packet: bytes) -> None:
"""Output data via socket/serial interface."""
if self.settings.local_testing_mode:
self.interface.send(packet)
def establish_serial(self) -> None:
"""Create a new Serial object.
By setting the Serial object's timeout to 0, the method
`Serial().read_all()` will return 0..N bytes where N is the serial
interface buffer size (496 bytes for FTDI FT232R for example).
This is not enough for large packets. However, in this case,
`read_all` will return
a) immediately when the buffer is full
b) if no bytes are received during the time it would take
to transmit the next byte of the datagram.
This type of behaviour allows us to read 0..N bytes from the
serial interface at a time, and add them to a bytearray buffer.
In our implementation below, if the receiver side stops
receiving data when it calls `read_all`, it starts a timer that
is evaluated with every subsequent call of `read_all` that
returns an empty string. If the timer exceeds the
`settings.rx_receive_timeout` value (twice the time it takes to
send the next byte with given baud rate), the gateway object
will return the received packet.
The timeout timer is triggered intentionally by the
transmitter-side Gateway object, which after each transmission
sleeps for `settings.tx_inter_packet_delay` seconds. This value
is set to twice the length of `settings.rx_receive_timeout`, or
four times the time it takes to send one byte at the given baud
rate.
"""
try:
serial_interface = self.search_serial_interface()
baudrate = self.settings.session_serial_baudrate
self.tx_serial = self.rx_serial = serial.Serial(serial_interface, baudrate, timeout=0)
except SerialException:
raise CriticalError("SerialException. Ensure $USER is in the dialout group by restarting this computer.")
def write(self, orig_packet: bytes) -> None:
"""Add error correction data and output data via socket/serial interface.
After outputting the packet via serial, sleep long enough to
trigger the Rx-side timeout timer, or if local testing is
enabled, add slight delay to simulate that introduced by the
serial interface.
"""
packet = self.add_error_correction(orig_packet)
if self.settings.local_testing_mode and self.tx_socket is not None:
try:
self.tx_socket.send(packet)
time.sleep(LOCAL_TESTING_PACKET_DELAY)
except BrokenPipeError:
raise CriticalError("Relay IPC server disconnected.", exit_code=0)
else:
try:
self.interface.write(packet)
self.interface.flush()
time.sleep(self.settings.txm_inter_packet_delay)
self.tx_serial.write(packet)
self.tx_serial.flush()
time.sleep(self.settings.tx_inter_packet_delay)
except SerialException:
self.establish_serial()
self.write(packet)
self.write(orig_packet)
def read(self) -> bytes:
"""Read data via socket/serial interface."""
if self.settings.local_testing_mode:
"""Read data via socket/serial interface.
Read 0..N bytes from the serial interface, where N is the buffer
size of the serial interface. Once `read_buffer` has data and the
interface has not returned new data for long enough that the
timer exceeds the timeout value, return the received data.
"""
if self.settings.local_testing_mode and self.rx_socket is not None:
while True:
try:
return self.interface.recv()
packet = self.rx_socket.recv() # type: bytes
return packet
except KeyboardInterrupt:
pass
except EOFError:
raise CriticalError("IPC client disconnected.")
raise CriticalError("Relay IPC client disconnected.", exit_code=0)
else:
while True:
try:
start_time = 0.0
read_buffer = bytearray()
while True:
read = self.interface.read(1000)
read = self.rx_serial.read_all()
if read:
start_time = time.monotonic()
read_buffer.extend(read)
else:
if read_buffer:
delta = time.monotonic() - start_time
if delta > self.settings.rxm_receive_timeout:
if delta > self.settings.rx_receive_timeout:
return bytes(read_buffer)
else:
time.sleep(0.001)
time.sleep(0.0001)
except KeyboardInterrupt:
except (EOFError, KeyboardInterrupt):
pass
except SerialException:
except (OSError, SerialException):
self.establish_serial()
self.read()
def server_establish_socket(self) -> None:
"""Establish IPC server."""
listener = multiprocessing.connection.Listener(('localhost', RXM_LISTEN_SOCKET))
self.interface = listener.accept()
def add_error_correction(self, packet: bytes) -> bytes:
"""Add error correction to packet that will be output.
def client_establish_socket(self) -> None:
"""Establish IPC client."""
try:
phase("Waiting for connection to NH", offset=11)
while True:
try:
socket_number = TXM_DD_LISTEN_SOCKET if self.settings.data_diode_sockets else NH_LISTEN_SOCKET
self.interface = multiprocessing.connection.Client(('localhost', socket_number))
phase("Established", done=True)
break
except socket.error:
time.sleep(0.1)
If the error correction setting is set to 1 or higher, TFC adds
Reed-Solomon erasure codes to detect and correct errors during
transmission over the serial interface. For more information on
Reed-Solomon, see
https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction
https://www.cs.cmu.edu/~guyb/realworld/reedsolomon/reed_solomon_codes.html
except KeyboardInterrupt:
graceful_exit()
If error correction is set to 0, errors are only detected. This
is done by using a BLAKE2b-based, 128-bit checksum.
"""
if self.settings.session_serial_error_correction:
packet = self.rs.encode(packet)
else:
packet = packet + hashlib.blake2b(packet, digest_size=PACKET_CHECKSUM_LENGTH).digest()
return packet
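A minimal sketch of the checksum-only path described above (error correction set to 0); the 16-byte digest size mirrors the 128-bit BLAKE2b checksum, and the function names here are illustrative:

```python
import hashlib

PACKET_CHECKSUM_LENGTH = 16  # 128-bit BLAKE2b digest

def add_checksum(packet: bytes) -> bytes:
    """Append a BLAKE2b trailer: detects, but cannot correct, errors."""
    return packet + hashlib.blake2b(packet,
                                    digest_size=PACKET_CHECKSUM_LENGTH).digest()

def verify_checksum(packet: bytes) -> bytes:
    """Separate the trailer and verify it; raise on a mismatch."""
    payload, checksum = (packet[:-PACKET_CHECKSUM_LENGTH],
                         packet[-PACKET_CHECKSUM_LENGTH:])
    if hashlib.blake2b(payload,
                       digest_size=PACKET_CHECKSUM_LENGTH).digest() != checksum:
        raise ValueError("Invalid packet checksum")
    return payload
```

With Reed-Solomon enabled, the trailer would instead be parity symbols that allow correcting up to `session_serial_error_correction` byte errors, at the cost of a larger packet.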
def establish_serial(self) -> None:
"""Create a new Serial object."""
try:
serial_nh = self.search_serial_interface()
self.interface = serial.Serial(serial_nh, self.settings.session_serial_baudrate, timeout=0)
except SerialException:
raise CriticalError("SerialException. Ensure $USER is in the dialout group.")
def detect_errors(self, packet: bytes) -> bytes:
"""Handle received packet error detection and/or correction."""
if self.settings.session_serial_error_correction:
try:
packet, _ = self.rs.decode(packet)
return bytes(packet)
except ReedSolomonError:
raise FunctionReturn("Error: Reed-Solomon failed to correct errors in the received packet.", bold=True)
else:
packet, checksum = separate_trailer(packet, PACKET_CHECKSUM_LENGTH)
if hashlib.blake2b(packet, digest_size=PACKET_CHECKSUM_LENGTH).digest() != checksum:
raise FunctionReturn("Warning! Received packet had an invalid checksum.", bold=True)
return packet
def search_serial_interface(self) -> str:
"""Search for serial interface."""
"""Search for a serial interface."""
if self.settings.session_usb_serial_adapter:
search_announced = False
if not self.init_found:
print_on_previous_line()
phase("Searching for USB-to-serial interface")
phase("Searching for USB-to-serial interface", offset=len('Found'))
while True:
time.sleep(0.1)
for f in sorted(os.listdir('/dev')):
for f in sorted(os.listdir('/dev/')):
if f.startswith('ttyUSB'):
if self.init_found:
time.sleep(1.5)
time.sleep(1)
phase('Found', done=True)
if self.init_found:
print_on_previous_line(reps=2)
self.init_found = True
return f'/dev/{f}'
else:
if not search_announced:
if self.init_found:
phase("Serial adapter disconnected. Waiting for interface", head=1)
time.sleep(0.1)
if self.init_found and not search_announced:
phase("Serial adapter disconnected. Waiting for interface", head=1, offset=len('Found'))
search_announced = True
else:
f = 'ttyS0'
if f in sorted(os.listdir('/dev/')):
return f'/dev/{f}'
raise CriticalError(f"Error: /dev/{f} was not found.")
if self.settings.built_in_serial_interface in sorted(os.listdir('/dev/')):
return f'/dev/{self.settings.built_in_serial_interface}'
raise CriticalError(f"Error: /dev/{self.settings.built_in_serial_interface} was not found.")
# Local testing
def server_establish_socket(self) -> None:
"""Initialize the receiver (IPC server).
The multiprocessing connection during local testing does not
utilize authentication keys* because a MITM attack against the
connection requires endpoint compromise, and in such a situation,
a MITM attack is not nearly as effective as key/screen logging or
a RAM dump.
* https://docs.python.org/3/library/multiprocessing.html#authentication-keys
Similar to the case of standard mode of operation, all sensitive
data that passes through the socket/serial interface and Relay
Program is encrypted. A MITM attack between the sockets could of
course be used to e.g. inject public keys, but like with all key
exchanges, that would only work if the user neglects fingerprint
verification.
Another reason why the authentication key is useless is that the
key needs to be pre-shared, which leaves two ways to share it:
1) Hard-code the key into the source file, from where malware
could read it.
2) Force the user to manually copy the PSK from one program
to another. This would change the workflow that the local
test configuration tries to simulate.
To conclude, the local test configuration should never be used
under a threat model where endpoint security is of importance.
"""
try:
socket_number = RP_LISTEN_SOCKET if self.settings.software_operation == NC else DST_LISTEN_SOCKET
listener = multiprocessing.connection.Listener((LOCALHOST, socket_number))
self.rx_socket = listener.accept()
except KeyboardInterrupt:
graceful_exit()
def client_establish_socket(self) -> None:
"""Initialize the transmitter (IPC client)."""
try:
target = RXP if self.settings.software_operation == NC else RP
phase(f"Connecting to {target}")
while True:
try:
if self.settings.software_operation == TX:
socket_number = SRC_DD_LISTEN_SOCKET if self.settings.data_diode_sockets else RP_LISTEN_SOCKET
else:
socket_number = DST_DD_LISTEN_SOCKET if self.settings.data_diode_sockets else DST_LISTEN_SOCKET
try:
self.tx_socket = multiprocessing.connection.Client((LOCALHOST, socket_number))
except ConnectionRefusedError:
time.sleep(0.1)
continue
phase(DONE)
break
except socket.error:
time.sleep(0.1)
except KeyboardInterrupt:
graceful_exit()
class GatewaySettings(object):
"""\
Gateway settings store settings for serial interface in an
unencrypted JSON database.
The reason these settings are in plaintext is that it protects the
system from an inconsistent serial-settings state: if the user were
to reconfigure their serial settings and the setting-altering packet
to the Receiver Program were dropped, the Relay Program could in
some situations no longer communicate with the Receiver Program.
Serial interface settings are not sensitive enough to justify the
inconvenience of encrypting the setting values.
"""
def __init__(self,
operation: str,
local_test: bool,
dd_sockets: bool
) -> None:
"""Create a new Settings object.
The settings below are altered from within the program itself.
Changes made to the default settings are stored in the JSON
file under $HOME/tfc/user_data from where, if needed, they can
be manually altered by the user.
"""
self.serial_baudrate = 19200
self.serial_error_correction = 5
self.use_serial_usb_adapter = True
self.built_in_serial_interface = 'ttyS0'
self.software_operation = operation
self.local_testing_mode = local_test
self.data_diode_sockets = dd_sockets
self.all_keys = list(vars(self).keys())
self.key_list = self.all_keys[:self.all_keys.index('software_operation')]
self.defaults = {k: self.__dict__[k] for k in self.key_list}
self.file_name = f'{DIR_USER_DATA}{self.software_operation}_serial_settings.json'
ensure_dir(DIR_USER_DATA)
if os.path.isfile(self.file_name):
self.load_settings()
else:
self.setup()
self.store_settings()
self.session_serial_baudrate = self.serial_baudrate
self.session_serial_error_correction = self.serial_error_correction
self.session_usb_serial_adapter = self.use_serial_usb_adapter
self.tx_inter_packet_delay, self.rx_receive_timeout = self.calculate_serial_delays(self.session_serial_baudrate)
self.race_condition_delay = calculate_race_condition_delay(self.session_serial_error_correction,
self.serial_baudrate)
@classmethod
def calculate_serial_delays(cls, baud_rate: int) -> Tuple[float, float]:
"""Calculate the inter-packet delay and receive timeout.
Although this calculation mainly depends on the baud rate, a
minimal value will be set for rx_receive_timeout. This is to
ensure high baud rates do not cause issues by having shorter
delays than what the `time.sleep()` resolution allows.
"""
bytes_per_sec = baud_rate / BAUDS_PER_BYTE
byte_travel_t = 1 / bytes_per_sec
rx_receive_timeout = max(2 * byte_travel_t, SERIAL_RX_MIN_TIMEOUT)
tx_inter_packet_delay = 2 * rx_receive_timeout
return tx_inter_packet_delay, rx_receive_timeout
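The arithmetic above can be sketched as a standalone function. The constant values below are illustrative assumptions, not TFC's actual constants: 8N1 serial framing sends 10 bauds per byte, and the minimum timeout stands in for `SERIAL_RX_MIN_TIMEOUT`.

```python
# Illustrative stand-ins for TFC's constants (assumed values):
BAUDS_PER_BYTE        = 10    # 8N1 framing: start bit + 8 data bits + stop bit
SERIAL_RX_MIN_TIMEOUT = 0.05  # Guards against time.sleep() resolution limits

def calculate_serial_delays(baud_rate: int):
    """Return (tx_inter_packet_delay, rx_receive_timeout) for a baud rate."""
    bytes_per_sec = baud_rate / BAUDS_PER_BYTE
    byte_travel_t = 1 / bytes_per_sec
    rx_receive_timeout    = max(2 * byte_travel_t, SERIAL_RX_MIN_TIMEOUT)
    tx_inter_packet_delay = 2 * rx_receive_timeout
    return tx_inter_packet_delay, rx_receive_timeout

# At 19200 baud a byte takes ~0.5 ms, so the minimum timeout dominates.
```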
def setup(self) -> None:
"""Prompt the user to enter initial serial interface setting.
Ensure that the serial interface is available before proceeding.
"""
if not self.local_testing_mode:
name = {TX: TXP, NC: RP, RX: RXP}[self.software_operation]
self.use_serial_usb_adapter = yes(f"Use USB-to-serial/TTL adapter for {name} Computer?", head=1, tail=1)
if self.use_serial_usb_adapter:
for f in sorted(os.listdir('/dev/')):
if f.startswith('ttyUSB'):
return None
else:
m_print("Error: USB-to-serial/TTL adapter not found.")
self.setup()
else:
if self.built_in_serial_interface in sorted(os.listdir('/dev/')):
return None
else:
m_print(f"Error: Serial interface /dev/{self.built_in_serial_interface} not found.")
self.setup()
def store_settings(self) -> None:
"""Store serial settings in JSON format."""
serialized = json.dumps(self, default=(lambda o: {k: self.__dict__[k] for k in self.key_list}), indent=4)
with open(self.file_name, 'w+') as f:
f.write(serialized)
def invalid_setting(self, key: str, json_dict: Dict[str, Union[bool, int, str]]) -> None:
"""Notify about setting an invalid value to default value."""
m_print([f"Error: Invalid value '{json_dict[key]}' for setting '{key}' in '{self.file_name}'.",
f"The value has been set to default ({self.defaults[key]})."], head=1, tail=1)
setattr(self, key, self.defaults[key])
def load_settings(self) -> None:
"""Load and validate JSON settings for serial interface."""
with open(self.file_name) as f:
try:
json_dict = json.load(f)
except json.decoder.JSONDecodeError:
os.remove(self.file_name)
self.store_settings()
print(f"\nError: Invalid JSON format in '{self.file_name}'."
"\nSerial interface settings have been set to default values.\n")
return None
# Check for missing setting
for key in self.key_list:
if key not in json_dict:
m_print([f"Error: Missing setting '{key}' in '{self.file_name}'.",
f"The value has been set to default ({self.defaults[key]})."], head=1, tail=1)
setattr(self, key, self.defaults[key])
continue
# Closer inspection of each setting value
if key == 'serial_baudrate' and json_dict[key] not in serial.Serial().BAUDRATES:
self.invalid_setting(key, json_dict)
continue
elif key == 'serial_error_correction' and (not isinstance(json_dict[key], int) or json_dict[key] < 0):
self.invalid_setting(key, json_dict)
continue
elif key == 'use_serial_usb_adapter':
if not isinstance(json_dict[key], bool):
self.invalid_setting(key, json_dict)
continue
elif key == 'built_in_serial_interface':
if not isinstance(json_dict[key], str):
self.invalid_setting(key, json_dict)
continue
if not any(json_dict[key] == f for f in os.listdir('/sys/class/tty')):
self.invalid_setting(key, json_dict)
continue
setattr(self, key, json_dict[key])
# Store after loading to add missing, to replace invalid settings,
# and to remove settings that do not belong in the JSON file.
self.store_settings()
def change_setting(self, key: str, value_str: str) -> None:
"""Parse, update and store new setting value."""
attribute = self.__getattribute__(key)
try:
if isinstance(attribute, bool):
value = dict(true=True, false=False)[value_str.lower()] # type: Union[bool, int]
elif isinstance(attribute, int):
value = int(value_str)
if value < 0 or value > MAX_INT:
raise ValueError
else:
raise CriticalError("Invalid attribute type in settings.")
except (KeyError, ValueError):
raise FunctionReturn(f"Error: Invalid value '{value_str}'.", delay=1, tail_clear=True)
self.validate_key_value_pair(key, value)
setattr(self, key, value)
self.store_settings()
@staticmethod
def validate_key_value_pair(key: str, value: Union[int, bool]) -> None:
"""\
Perform further validation on settings whose values have
restrictions.
"""
if key == 'serial_baudrate':
if value not in serial.Serial().BAUDRATES:
raise FunctionReturn("Error: The specified baud rate is not supported.")
m_print("Baud rate will change on restart.", head=1, tail=1)
if key == 'serial_error_correction':
if value < 0:
raise FunctionReturn("Error: Invalid value for error correction ratio.")
m_print("Error correction ratio will change on restart.", head=1, tail=1)
def print_settings(self) -> None:
"""\
Print list of settings, their current and
default values, and setting descriptions.
"""
desc_d = {"serial_baudrate": "The speed of serial interface in bauds per second",
"serial_error_correction": "Number of byte errors serial datagrams can recover from"}
# Columns
c1 = ['Serial interface setting']
c2 = ['Current value']
c3 = ['Default value']
c4 = ['Description']
terminal_width = get_terminal_width()
description_indent = 64
if terminal_width < description_indent + 1:
raise FunctionReturn("Error: Screen width is too small.")
# Populate columns with setting data
for key in desc_d:
c1.append(key)
c2.append(str(self.__getattribute__(key)))
c3.append(str(self.defaults[key]))
description = desc_d[key]
wrapper = textwrap.TextWrapper(width=max(1, (terminal_width - description_indent)))
desc_lines = wrapper.fill(description).split('\n')
desc_string = desc_lines[0]
for line in desc_lines[1:]:
desc_string += '\n' + description_indent * ' ' + line
if len(desc_lines) > 1:
desc_string += '\n'
c4.append(desc_string)
# Calculate column widths
c1w, c2w, c3w = [max(len(v) for v in column) + SETTINGS_INDENT for column in [c1, c2, c3]]
# Align columns by adding whitespace between fields of each line
lines = [f'{f1:{c1w}} {f2:{c2w}} {f3:{c3w}} {f4}' for f1, f2, f3, f4 in zip(c1, c2, c3, c4)]
# Add a terminal-wide line between the column names and the data
lines.insert(1, get_terminal_width() * '─')
# Print the settings
print('\n' + '\n'.join(lines) + '\n')


@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@ -15,195 +16,181 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import getpass
import typing
from typing import Any, Callable
from typing import Any, Callable, Optional
from src.common.encoding import b58decode
from src.common.exceptions import CriticalError
from src.common.misc import get_terminal_width
from src.common.output import box_print, c_print, clear_screen, message_printer, print_on_previous_line
from src.common.misc import get_terminal_width, terminal_width_check
from src.common.output import clear_screen, m_print, print_on_previous_line, print_spacing
from src.common.statics import *
if typing.TYPE_CHECKING:
from src.common.db_settings import Settings
def ask_confirmation_code() -> str:
def ask_confirmation_code(source: str # The system the confirmation code is displayed by
) -> str: # The confirmation code entered by the user
"""\
Ask user to input confirmation code from RxM
to verify that local key has been installed.
Input box accommodates room for the 'resend' command.
Ask the user to input the confirmation code from the Source Computer
to verify the local key has been installed.
"""
title = "Enter confirmation code (from RxM): "
space = len(' resend ')
title = f"Enter confirmation code (from {source}): "
input_space = len(' ff ')
upper_line = ('┌' + (len(title) + space) * '─' + '┐')
title_line = ('│' + title + space * ' ' + '│')
lower_line = ('└' + (len(title) + space) * '─' + '┘')
upper_line = ('┌' + (len(title) + input_space) * '─' + '┐')
title_line = ('│' + title + input_space * ' ' + '│')
lower_line = ('└' + (len(title) + input_space) * '─' + '┘')
terminal_w = get_terminal_width()
upper_line = upper_line.center(terminal_w)
title_line = title_line.center(terminal_w)
lower_line = lower_line.center(terminal_w)
terminal_width_check(len(upper_line))
print(upper_line)
print(title_line)
print(lower_line)
print(3 * CURSOR_UP_ONE_LINE)
indent = title_line.find('│')
return input(indent * ' ' + '│{}'.format(title))
return input(indent * ' ' + f'│{title}')
def box_input(message: str,
default: str = '',
head: int = 0,
tail: int = 1,
expected_len: int = 0,
validator: Callable = None,
validator_args: Any = None,
key_input: bool = False) -> str:
"""Display boxed input prompt with title.
def box_input(message: str, # Input prompt message
default: str = '', # Default return value
head: int = 0, # Number of new lines to print before the input
tail: int = 1, # Number of new lines to print after input
expected_len: int = 0, # Expected length of the input
key_type: str = '', # When specified, sets input width
guide: bool = False, # When True, prints the guide for key
validator: Optional[Callable] = None, # Input validator function
validator_args: Optional[Any] = None # Arguments required by the validator
) -> str: # Input from user
"""Display boxed input prompt with the title."""
print_spacing(head)
:param message: Input prompt message
:param default: Default return value
:param head: Number of new lines to print before input
:param tail: Number of new lines to print after input
:param expected_len Expected length of input
:param validator: Input validator function
:param validator_args: Arguments required by the validator
:param key_input: When True, prints key input position guide
:return: Input from user
"""
for _ in range(head):
terminal_width = get_terminal_width()
if key_type:
key_guide = {B58_LOCAL_KEY: B58_LOCAL_KEY_GUIDE,
B58_PUBLIC_KEY: B58_PUBLIC_KEY_GUIDE}.get(key_type, '')
if guide:
inner_spc = len(key_guide) + 2
else:
inner_spc = (86 if key_type == B58_PUBLIC_KEY else 53)
else:
key_guide = ''
inner_spc = terminal_width - 2 if expected_len == 0 else expected_len + 2
upper_line = '┌' + inner_spc * '─' + '┐'
guide_line = '│' + key_guide + '│'
input_line = '│' + inner_spc * ' ' + '│'
lower_line = '└' + inner_spc * '─' + '┘'
box_indent = (terminal_width - len(upper_line)) // 2 * ' '
terminal_width_check(len(upper_line))
print(box_indent + upper_line)
if guide:
print(box_indent + guide_line)
print(box_indent + input_line)
print(box_indent + lower_line)
print((5 if guide else 4) * CURSOR_UP_ONE_LINE)
print(box_indent + '┌─┤' + message + '├')
if guide:
print('')
terminal_w = get_terminal_width()
input_len = terminal_w - 2 if expected_len == 0 else expected_len + 2
if key_input:
input_len += 2
input_top_line = '┌' + input_len * '─' + '┐'
key_pos_guide = '│' + ' '.join('ABCDEFGHIJKLMNOPQ') + '│'
input_line = '│' + input_len * ' ' + '│'
input_bot_line = '└' + input_len * '─' + '┘'
input_line_indent = (terminal_w - len(input_line)) // 2
input_box_indent = input_line_indent * ' '
print(input_box_indent + input_top_line)
if key_input:
print(input_box_indent + key_pos_guide)
print(input_box_indent + input_line)
print(input_box_indent + input_bot_line)
print((5 if key_input else 4) * CURSOR_UP_ONE_LINE)
print(input_box_indent + '┌─┤' + message + '├')
if key_input:
print('')
user_input = input(input_box_indent + '│ ')
user_input = input(box_indent + '│ ')
if user_input == '':
print(2 * CURSOR_UP_ONE_LINE)
print(input_box_indent + '│ {}'.format(default))
print(box_indent + '│ ' + default)
user_input = default
if validator is not None:
error_msg = validator(user_input, validator_args)
if error_msg:
c_print("Error: {}".format(error_msg), head=1)
print_on_previous_line(reps=4, delay=1.5)
return box_input(message, default, head, tail, expected_len, validator, validator_args)
m_print(error_msg, head=1)
print_on_previous_line(reps=4, delay=1)
return box_input(message, default, head, tail, expected_len, key_type, guide, validator, validator_args)
for _ in range(tail):
print('')
print_spacing(tail)
return user_input
def get_b58_key(key_type: str, settings: 'Settings') -> bytes:
"""Ask user to input Base58 encoded public key from RxM.
For file keys, use testnet address format instead to
prevent file injected via import from accidentally
being decrypted with public key from adversary.
"""
if key_type == B58_PUB_KEY:
def get_b58_key(key_type: str, # The type of Base58 key to be entered
settings: 'Settings', # Settings object
short_address: str = '' # The contact's short Onion address
) -> bytes: # The B58 decoded key
"""Ask the user to input a Base58 encoded key."""
if key_type == B58_PUBLIC_KEY:
clear_screen()
c_print("Import public key from RxM", head=1, tail=1)
c_print("WARNING")
message_printer("Outside specific requests TxM (this computer) "
"makes, you must never copy any data from "
"NH/RxM to TxM. Doing so could infect TxM, that "
"could then later covertly transmit private "
"keys/messages to attacker.", head=1, tail=1)
message_printer("You can resend your public key by typing 'resend'", tail=1)
box_msg = "Enter contact's public key from RxM"
m_print(f"{ECDHE} key exchange", head=1, tail=1, bold=True)
m_print("If needed, resend your public key to the contact by pressing <Enter>", tail=1)
box_msg = f"Enter public key of {short_address} (from Relay)"
elif key_type == B58_LOCAL_KEY:
box_msg = "Enter local key decryption key from TxM"
elif key_type == B58_FILE_KEY:
box_msg = "Enter file decryption key"
box_msg = "Enter local key decryption key (from Transmitter)"
else:
raise CriticalError("Invalid key type")
while True:
if settings.local_testing_mode or key_type == B58_FILE_KEY:
pub_key = box_input(box_msg, expected_len=51)
small = True
else:
pub_key = box_input(box_msg, expected_len=65, key_input=True)
small = False
pub_key = ''.join(pub_key.split())
rx_pk = box_input(box_msg, key_type=key_type, guide=not settings.local_testing_mode)
rx_pk = ''.join(rx_pk.split())
if key_type == B58_PUB_KEY and pub_key == RESEND:
return pub_key.encode()
if key_type == B58_PUBLIC_KEY and rx_pk == '':
return rx_pk.encode()
try:
return b58decode(pub_key, file_key=(key_type==B58_FILE_KEY))
return b58decode(rx_pk, public_key=(key_type == B58_PUBLIC_KEY))
except ValueError:
c_print("Checksum error - Check that entered key is correct.", head=1)
print_on_previous_line(reps=5 if small else 6, delay=1.5)
m_print("Checksum error - Check that the entered key is correct.")
print_on_previous_line(reps=(4 if settings.local_testing_mode else 5), delay=1)
def nh_bypass_msg(key: str, settings: 'Settings') -> None:
"""Print messages about bypassing NH.
def nc_bypass_msg(key: str, settings: 'Settings') -> None:
"""Print messages about bypassing Networked Computer.
During ciphertext delivery of local key exchange, NH bypass messages
tell user when to bypass and remove bypass of NH. This makes initial
key bootstrap more secure in case key decryption key input is not safe.
During the ciphertext delivery of the local key exchange, these
bypass messages tell the user when to bypass and when to remove the
bypass of the Networked Computer. Bypassing the Networked Computer
makes the initial bootstrap more secure by denying a remote attacker
access to the encrypted local key. Without the ciphertext, a local
key decryption key collected e.g. visually is useless.
"""
m = {NH_BYPASS_START: "Bypass NH if needed. Press <Enter> to send local key.",
NH_BYPASS_STOP: "Remove bypass of NH. Press <Enter> to continue."}
m = {NC_BYPASS_START: "Bypass Networked Computer if needed. Press <Enter> to send local key.",
NC_BYPASS_STOP: "Remove bypass of Networked Computer. Press <Enter> to continue."}
if settings.nh_bypass_messages:
box_print(m[key], manual_proceed=True, head=(1 if key == NH_BYPASS_STOP else 0))
if settings.nc_bypass_messages:
m_print(m[key], manual_proceed=True, box=True, head=(1 if key == NC_BYPASS_STOP else 0))
def pwd_prompt(message: str, second: bool = False) -> str:
"""Prompt user to enter a password.
def pwd_prompt(message: str, # Prompt message
repeat: bool = False # When True, prints corner chars for the second box
) -> str: # Password from user
"""Prompt the user to enter a password.
:param message: Prompt message
:param second: When True, prints corner chars for second box
:return: Password from user
The getpass library ensures the password is not echoed on screen
when it is typed.
"""
l, r = {False: ('┌', '┐'),
True: ('├', '┤')}[second]
l, r = ('├', '┤') if repeat else ('┌', '┐')
upper_line = ( l + (len(message) + 3) * '─' + r )
title_line = ('│' + message + 3 * ' ' + '│')
lower_line = ('└' + (len(message) + 3) * '─' + '┘')
terminal_w = get_terminal_width()
input_space = len(' c ') # `c` is where the caret sits
terminal_w = get_terminal_width()
upper_line = upper_line.center(terminal_w)
title_line = title_line.center(terminal_w)
lower_line = lower_line.center(terminal_w)
upper_line = ( l + (len(message) + input_space) * '─' + r ).center(terminal_w)
title_line = ('│' + message + input_space * ' ' + '│').center(terminal_w)
lower_line = ('└' + (len(message) + input_space) * '─' + '┘').center(terminal_w)
terminal_width_check(len(upper_line))
print(upper_line)
print(title_line)
@ -211,27 +198,25 @@ def pwd_prompt(message: str, second: bool = False) -> str:
print(3 * CURSOR_UP_ONE_LINE)
indent = title_line.find('│')
user_input = getpass.getpass(indent * ' ' + '│{}'.format(message))
user_input = getpass.getpass(indent * ' ' + f'│{message}')
return user_input
def yes(prompt: str, head: int = 0, tail: int = 0) -> bool:
"""Prompt user a question that is answered with yes / no.
def yes(prompt: str, # Question to be asked
abort: Optional[bool] = None, # Determines the return value of ^C and ^D
head: int = 0, # Number of new lines to print before prompt
tail: int = 0 # Number of new lines to print after prompt
) -> bool: # True/False depending on input
"""Prompt the user a question that is answered with yes/no."""
print_spacing(head)
:param prompt: Question to be asked
:param head: Number of new lines to print before prompt
:param tail: Number of new lines to print after prompt
:return: True if user types 'y' or 'yes'
False if user types 'n' or 'no'
"""
for _ in range(head):
print('')
prompt = f"{prompt} (y/n): "
input_space = len(' yes ')
prompt = "{} (y/n): ".format(prompt)
upper_line = ('┌' + (len(prompt) + 5) * '─' + '┐')
title_line = ('│' + prompt + 5 * ' ' + '│')
lower_line = ('└' + (len(prompt) + 5) * '─' + '┘')
upper_line = ('┌' + (len(prompt) + input_space) * '─' + '┐')
title_line = ('│' + prompt + input_space * ' ' + '│')
lower_line = ('└' + (len(prompt) + input_space) * '─' + '┘')
terminal_w = get_terminal_width()
upper_line = upper_line.center(terminal_w)
@ -240,25 +225,33 @@ def yes(prompt: str, head: int = 0, tail: int = 0) -> bool:
indent = title_line.find('│')
terminal_width_check(len(upper_line))
print(upper_line)
while True:
print(title_line)
print(lower_line)
print(3 * CURSOR_UP_ONE_LINE)
user_input = input(indent * ' ' + '│{}'.format(prompt))
try:
user_input = input(indent * ' ' + f'│{prompt}')
except (EOFError, KeyboardInterrupt):
if abort is None:
raise
print('')
user_input = 'y' if abort else 'n'
print_on_previous_line()
if user_input == '':
continue
if user_input.lower() in ['y', 'yes']:
print(indent * ' ' + '│{}Yes │\n'.format(prompt))
for _ in range(tail):
print('')
print(indent * ' ' + f'│{prompt}Yes │\n')
print_spacing(tail)
return True
elif user_input.lower() in ['n', 'no']:
print(indent * ' ' + '│{}No  │\n'.format(prompt))
for _ in range(tail):
print('')
print(indent * ' ' + f'│{prompt}No  │\n')
print_spacing(tail)
return False


@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@ -15,18 +16,25 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import argparse
import base64
import binascii
import hashlib
import math
import os
import re
import shutil
import subprocess
import sys
import time
import typing
import zlib
from contextlib import contextmanager
from typing import Any, Callable, Generator, List, Tuple, Union
from contextlib import contextmanager
from typing import Any, Callable, Dict, Generator, List, Tuple, Union
from multiprocessing import Process, Queue
from src.common.reed_solomon import RSCodec
from src.common.statics import *
@ -35,139 +43,214 @@ if typing.TYPE_CHECKING:
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_settings import Settings
from src.nh.settings import Settings as NHSettings
from src.common.gateway import Gateway
def calculate_race_condition_delay(settings: Union['Settings', 'NHSettings'], txm: bool = False) -> float:
"""Calculate NH race condition delay.
def calculate_race_condition_delay(serial_error_correction: int,
serial_baudrate: int
) -> float:
"""\
Calculate the delay required to prevent Relay Program race condition.
This value is the max time it takes for NH to deliver
command received from TxM all the way to RxM.
When the Transmitter Program outputs a command to exit or wipe data,
the Relay Program will also receive a copy of the command. If the
Relay Program acts on the command too early, the Receiver Program
will not receive the exit/wipe command at all.
:param settings: Settings object
:param txm: When True, allocate time for command delivery from TxM to NH
:return: Time to wait to prevent race condition
This function calculates the delay the Transmitter Program should
wait before outputting the command for the Relay Program, to ensure
the Receiver Program has received the encrypted command.
"""
rs = RSCodec(2 * settings.session_serial_error_correction)
max_account_length = 254
max_message_length = PACKET_LENGTH + 2 * max_account_length
command_length = 365*2 if txm else 365
max_bytes = (len(rs.encode(os.urandom(max_message_length)))
+ len(rs.encode(os.urandom(command_length))))
rs = RSCodec(2 * serial_error_correction)
message_length = PACKET_LENGTH + ONION_ADDRESS_LENGTH
enc_msg_length = len(rs.encode(os.urandom(message_length)))
enc_cmd_length = len(rs.encode(os.urandom(COMMAND_LENGTH)))
max_bytes = enc_msg_length + (2 * enc_cmd_length)
return (max_bytes * BAUDS_PER_BYTE) / settings.serial_baudrate
return (max_bytes * BAUDS_PER_BYTE) / serial_baudrate
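As a back-of-the-envelope sketch of the calculation above: the length constants below are illustrative assumptions (only `ONION_ADDRESS_LENGTH = 56` is fixed by the v3 Onion address format), and Reed-Solomon encoding with `e`-error correction is approximated as adding `2*e` parity bytes, ignoring the chunking of data into 255-byte RS blocks that slightly lengthens real output.

```python
# Assumed constants for illustration (not TFC's actual values,
# except the 56-char v3 Onion address length):
PACKET_LENGTH        = 356   # Assumed ciphertext packet length
ONION_ADDRESS_LENGTH = 56    # v3 Onion address length
COMMAND_LENGTH       = 365   # Assumed command packet length
BAUDS_PER_BYTE       = 10    # 8N1 serial framing

def race_condition_delay(serial_error_correction: int, serial_baudrate: int) -> float:
    """Approximate the worst-case delivery time of one message plus two commands."""
    parity         = 2 * serial_error_correction        # RS parity bytes (chunking ignored)
    enc_msg_length = PACKET_LENGTH + ONION_ADDRESS_LENGTH + parity
    enc_cmd_length = COMMAND_LENGTH + parity
    max_bytes      = enc_msg_length + 2 * enc_cmd_length
    return (max_bytes * BAUDS_PER_BYTE) / serial_baudrate
```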
def calculate_serial_delays(session_serial_baudrate: int) -> Tuple[float, float]:
"""Calculate transmission delay and receive timeout."""
bytes_per_sec = session_serial_baudrate / BAUDS_PER_BYTE
byte_travel_t = 1 / bytes_per_sec
def decompress(data: bytes, # Data to be decompressed
max_size: int # The maximum size of decompressed data.
) -> bytes: # Decompressed data
"""Decompress received data.
rxm_receive_timeout = max(2 * byte_travel_t, 0.02)
txm_inter_packet_delay = 2 * rxm_receive_timeout
The decompressed data has a maximum size, designed to prevent zip
bombs from filling the drive of an unsuspecting user.
"""
from src.common.exceptions import FunctionReturn # Avoid circular import
return rxm_receive_timeout, txm_inter_packet_delay
dec = zlib.decompressobj()
data = dec.decompress(data, max_size)
if dec.unconsumed_tail:
raise FunctionReturn("Error: Decompression aborted due to possible zip bomb.")
del dec
return data
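The zip-bomb guard above can be demonstrated end to end. This standalone sketch swaps `FunctionReturn` for `ValueError` to avoid the circular import; the mechanism is the same: `decompressobj.decompress(data, max_length)` stops producing output at the cap, leaving the unprocessed input in `unconsumed_tail`.

```python
import zlib

def decompress(data: bytes, max_size: int) -> bytes:
    """Decompress data, aborting if the output would exceed max_size."""
    dec = zlib.decompressobj()
    out = dec.decompress(data, max_size)
    if dec.unconsumed_tail:  # Input left over: the output hit the cap
        raise ValueError("Error: Decompression aborted due to possible zip bomb.")
    return out

# A highly compressible 10 kB payload passes a 100 kB cap
# but trips a 100 B cap:
blob = zlib.compress(b'A' * 10_000)
```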
def ensure_dir(directory: str) -> None:
"""Ensure directory exists."""
"""Ensure directory exists.
This function is run before checking a database exists in the
specified directory, or before storing data into a directory.
It prevents errors in case user has for some reason removed
the directory.
"""
name = os.path.dirname(directory)
if not os.path.exists(name):
os.makedirs(name)
with ignored(FileExistsError):
os.makedirs(name)
def get_tab_complete_list(contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings') -> List[str]:
settings: 'Settings',
gateway: 'Gateway'
) -> List[str]:
"""Return a list of tab-complete words."""
tc_list = ['about', 'add ', 'all', 'clear', 'cmd', 'create ', 'exit', 'export ',
'false', 'file', 'fingerprints', 'group ', 'help', 'history ', 'join ', 'localkey',
'logging ', 'msg ', 'names', 'nick ', 'notify ', 'passwd ', 'psk', 'reset',
'rm', 'rmlogs ', 'set ', 'settings', 'store ', 'true', 'unread', 'whisper ']
commands = ['about', 'add ', 'clear', 'cmd', 'connect', 'exit', 'export ', 'file', 'group ', 'help', 'history ',
'localkey', 'logging ', 'msg ', 'names', 'nick ', 'notify ', 'passwd ', 'psk', 'reset', 'rmlogs ',
'set ', 'settings', 'store ', 'unread', 'verify', 'whisper ', 'whois ']
tc_list += [(c + ' ') for c in contact_list.get_list_of_accounts()]
tc_list = ['all', 'create ', 'false', 'False', 'join ', 'true', 'True']
tc_list += commands
tc_list += [(a + ' ') for a in contact_list.get_list_of_addresses()]
tc_list += [(n + ' ') for n in contact_list.get_list_of_nicks()]
tc_list += [(u + ' ') for u in contact_list.get_list_of_users_accounts()]
tc_list += [(g + ' ') for g in group_list.get_list_of_group_names()]
tc_list += [(i + ' ') for i in group_list.get_list_of_hr_group_ids()]
tc_list += [(s + ' ') for s in settings.key_list]
tc_list += [(s + ' ') for s in gateway.settings.key_list]
return tc_list
def get_tab_completer(contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings') -> Callable:
"""Return tab completer object."""
settings: 'Settings',
gateway: 'Gateway'
) -> Callable:
"""Return the tab completer object."""
def tab_complete(text, state) -> List[str]:
"""Return tab_complete options."""
tab_complete_list = get_tab_complete_list(contact_list, group_list, settings)
options = [t for t in tab_complete_list if t.startswith(text)]
def tab_complete(text: str, state: Any) -> List[str]:
"""Return tab-complete options."""
tab_complete_list = get_tab_complete_list(contact_list, group_list, settings, gateway)
options = [t for t in tab_complete_list if t.startswith(text)] # type: List[str]
with ignored(IndexError):
return options[state]
tc = options[state] # type: List[str]
return tc
return tab_complete
def get_terminal_height() -> int:
"""Return height of terminal."""
return int(shutil.get_terminal_size()[1])
"""Return the height of the terminal."""
return shutil.get_terminal_size()[1]
def get_terminal_width() -> int:
"""Return width of terminal."""
"""Return the width of the terminal."""
return shutil.get_terminal_size()[0]
@contextmanager
def ignored(*exceptions: Any) -> Generator:
"""Ignore exception."""
"""Ignore an exception."""
try:
yield
except exceptions:
pass
def process_arguments() -> Tuple[str, bool, bool]:
"""Load TxM/RxM startup settings from command line arguments."""
parser = argparse.ArgumentParser('python3.6 tfc.py',
usage='%(prog)s [OPTION]',
description='')
def monitor_processes(process_list: List[Process],
software_operation: str,
queues: Dict[bytes, Queue],
error_exit_code: int = 1
) -> None:
"""Monitor the status of `process_list` and EXIT_QUEUE.
parser.add_argument('-rx',
This function monitors a list of processes. If one of them dies, it
terminates the rest and closes TFC with exit code 1.
If an EXIT or WIPE signal is received via EXIT_QUEUE, the function
terminates the running processes and either closes the program with
exit code 0, or overwrites existing user data and powers the system off.
"""
while True:
with ignored(EOFError, KeyboardInterrupt):
time.sleep(0.1)
if not all([p.is_alive() for p in process_list]):
for p in process_list:
p.terminate()
sys.exit(error_exit_code)
if queues[EXIT_QUEUE].qsize() > 0:
command = queues[EXIT_QUEUE].get()
for p in process_list:
p.terminate()
if command == EXIT:
sys.exit(0)
if command == WIPE:
if TAILS not in subprocess.check_output('lsb_release -a', shell=True):
if software_operation == RX:
subprocess.Popen("find {} -type f -exec shred -n 3 -z -u {{}} \;"
.format(DIR_RECV_FILES), shell=True).wait()
subprocess.Popen("find {} -name '{}*' -type f -exec shred -n 3 -z -u {{}} \;"
.format(DIR_USER_DATA, software_operation), shell=True).wait()
for d in [DIR_USER_DATA, DIR_RECV_FILES]:
with ignored(FileNotFoundError):
shutil.rmtree(d)
os.system(POWEROFF)
def process_arguments() -> Tuple[str, bool, bool]:
"""Load program-specific settings from command line arguments.
The arguments are determined by the desktop entries, and by the
Terminator configuration file during local testing. The descriptions
here are provided for the sake of completeness.
"""
parser = argparse.ArgumentParser(f'python3.6 {sys.argv[0]}',
usage='%(prog)s [OPTION]',
epilog='Full documentation at: <https://github.com/maqp/tfc/wiki>')
parser.add_argument('-r',
action='store_true',
default=False,
dest='operation',
help="Run RxM side program")
help="run Receiver instead of Transmitter Program")
parser.add_argument('-l',
action='store_true',
default=False,
dest='local_test',
help="Enable local testing mode")
help="enable local testing mode")
parser.add_argument('-d',
action='store_true',
default=False,
dest='dd_sockets',
help="Data diode simulator socket configuration for local testing")
dest='data_diode_sockets',
help="use data diode simulator sockets during local testing mode")
args = parser.parse_args()
operation = RX if args.operation else TX
local_test = args.local_test
dd_sockets = args.dd_sockets
args = parser.parse_args()
operation = RX if args.operation else TX
return operation, local_test, dd_sockets
return operation, args.local_test, args.data_diode_sockets
def readable_size(size: int) -> str:
"""Convert file size from bytes to human readable form."""
"""Convert file size from bytes to a human-readable form."""
f_size = float(size)
for unit in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z']:
if abs(f_size) < 1024.0:
return '{:3.1f}{}B'.format(f_size, unit)
return f'{f_size:3.1f}{unit}B'
f_size /= 1024.0
return '{:3.1f}YB'.format(f_size)
return f'{f_size:3.1f}YB'
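To make the unit-stepping behavior above concrete, here is a standalone copy of the new function with sample conversions:

```python
def readable_size(size: int) -> str:
    """Convert file size from bytes to a human-readable form."""
    f_size = float(size)
    for unit in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z']:
        if abs(f_size) < 1024.0:
            return f'{f_size:3.1f}{unit}B'
        f_size /= 1024.0  # Step up to the next binary unit
    return f'{f_size:3.1f}YB'

# readable_size(1023) stays in bytes; readable_size(1536) becomes kibibytes.
```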
def round_up(value: Union[int, float]) -> int:
@ -175,110 +258,178 @@ def round_up(value: Union[int, float]) -> int:
return int(math.ceil(value / 10.0)) * 10
def split_byte_string(string: bytes, item_len: int) -> List[bytes]:
"""Split byte string into list of specific length substrings.
def split_byte_string(bytestring: bytes, # Bytestring to split
item_len: int # Length of each substring
) -> List[bytes]: # List of substrings
"""Split a bytestring into a list of specific length substrings."""
return [bytestring[i:i + item_len] for i in range(0, len(bytestring), item_len)]
:param string: String to split
:param item_len: Length of list items
:return: String split to list
"""
def split_string(string: str, # String to split
item_len: int # Length of each substring
) -> List[str]: # List of substrings
"""Split a string into a list of specific length substrings."""
return [string[i:i + item_len] for i in range(0, len(string), item_len)]
def split_string(string: str, item_len: int) -> List[str]:
"""Split string into list of specific length substrings.
def separate_header(bytestring: bytes, # Bytestring to slice
header_length: int # Number of header bytes to separate
) -> Tuple[bytes, bytes]: # Header and payload
"""Separate `header_length` first bytes from a bytestring."""
return bytestring[:header_length], bytestring[header_length:]
:param string: String to split
:param item_len: Length of list items
:return: String split to list
def separate_headers(bytestring: bytes, # Bytestring to slice
header_length_list: List[int], # List of header lengths
) -> List[bytes]: # Header and payload
"""Separate a list of headers from bytestring.
Length of each header is determined in the `header_length_list`.
"""
return [string[i:i + item_len] for i in range(0, len(string), item_len)]
fields = []
for header_length in header_length_list:
field, bytestring = separate_header(bytestring, header_length)
fields.append(field)
fields.append(bytestring)
return fields
def separate_trailer(bytestring:     bytes,        # Bytestring to slice
                     trailer_length: int           # Number of trailer bytes to separate
                     ) -> Tuple[bytes, bytes]:     # Payload and trailer
    """Separate `trailer_length` last bytes from a bytestring.

    This saves space and makes trailer separation more readable.
    """
    return bytestring[:-trailer_length], bytestring[-trailer_length:]
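Together these helpers do all the field parsing for TFC's serialized structures. A self-contained sketch of their behavior (the functions are reproduced here so the snippet runs standalone):

```python
from typing import List, Tuple

def split_string(string: str, item_len: int) -> List[str]:
    """Split a string into fixed-size chunks."""
    return [string[i:i + item_len] for i in range(0, len(string), item_len)]

def separate_header(bytestring: bytes, header_length: int) -> Tuple[bytes, bytes]:
    """Split off the first `header_length` bytes."""
    return bytestring[:header_length], bytestring[header_length:]

def separate_headers(bytestring: bytes, header_length_list: List[int]) -> List[bytes]:
    """Split off consecutive headers; the remainder is the last element."""
    fields = []
    for header_length in header_length_list:
        field, bytestring = separate_header(bytestring, header_length)
        fields.append(field)
    fields.append(bytestring)
    return fields

def separate_trailer(bytestring: bytes, trailer_length: int) -> Tuple[bytes, bytes]:
    """Split off the last `trailer_length` bytes."""
    return bytestring[:-trailer_length], bytestring[-trailer_length:]

assert split_string('abcdef', 2)                == ['ab', 'cd', 'ef']
assert separate_header(b'HDRpayload', 3)        == (b'HDR', b'payload')
assert separate_headers(b'AABBBrest', [2, 3])   == [b'AA', b'BBB', b'rest']
assert separate_trailer(b'payloadTRL', 3)       == (b'payload', b'TRL')
```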
def terminal_width_check(minimum_width: int) -> None:
    """Wait until the user re-sizes their terminal to the specified width."""
    if get_terminal_width() < minimum_width:
        print("Please make the terminal wider.")
        while get_terminal_width() < minimum_width:
            time.sleep(0.1)
        time.sleep(0.1)
        print(2 * CURSOR_UP_ONE_LINE)
def validate_onion_addr(onion_address_contact: str,      # Onion address of the contact
                        onion_address_user:    str = ''  # Onion address of the user
                        ) -> str:                        # Error message if validation failed, else ''
    """Validate a v3 Onion Service address."""
    error_msg = ''

    try:
        decoded = base64.b32decode(onion_address_contact.upper())

        public_key, checksum, version \
            = separate_headers(decoded, [ONION_SERVICE_PUBLIC_KEY_LENGTH, ONION_ADDRESS_CHECKSUM_LENGTH])

        if checksum != hashlib.sha3_256(ONION_ADDRESS_CHECKSUM_ID
                                        + public_key
                                        + version
                                        ).digest()[:ONION_ADDRESS_CHECKSUM_LENGTH]:
            error_msg = "Checksum error - Check that the entered account is correct."

    except (binascii.Error, ValueError):
        return "Error: Invalid account format."

    if onion_address_contact in (LOCAL_ID, DUMMY_CONTACT, DUMMY_MEMBER) or public_key == LOCAL_PUBKEY:
        error_msg = "Error: Can not add reserved account."

    if onion_address_user and onion_address_contact == onion_address_user:
        error_msg = "Error: Can not add own account."

    return error_msg
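The checksum logic above follows the Tor v3 Onion Service address format. A standalone sketch that derives an address from a public key and validates it round-trip (the checksum ID string and version byte are assumptions mirroring the `ONION_ADDRESS_CHECKSUM_ID` constant used above, per the Tor rend-spec-v3):

```python
import base64
import hashlib

CHECKSUM_ID = b'.onion checksum'  # assumed value, per the Tor v3 Onion Service spec
VERSION     = b'\x03'             # v3 address version byte

def encode_onion_address(public_key: bytes) -> str:
    """Derive the 56-char v3 Onion address from a 32-byte public key."""
    checksum = hashlib.sha3_256(CHECKSUM_ID + public_key + VERSION).digest()[:2]
    return base64.b32encode(public_key + checksum + VERSION).decode().lower()

def checksum_is_valid(onion_address: str) -> bool:
    """Re-compute and compare the 2-byte truncated SHA3-256 checksum."""
    decoded = base64.b32decode(onion_address.upper())
    public_key, checksum, version = decoded[:32], decoded[32:34], decoded[34:]
    return checksum == hashlib.sha3_256(CHECKSUM_ID + public_key + version).digest()[:2]

address = encode_onion_address(bytes(32))
assert len(address) == 56
assert checksum_is_valid(address)

# Flipping one checksum bit must be detected:
decoded  = base64.b32decode(address.upper())
tampered = decoded[:32] + bytes([decoded[32] ^ 1]) + decoded[33:]
assert not checksum_is_valid(base64.b32encode(tampered).decode().lower())
```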
def validate_group_name(group_name:   str,            # Name of the group
                        contact_list: 'ContactList',  # ContactList object
                        group_list:   'GroupList'     # GroupList object
                        ) -> str:                     # Error message if validation failed, else ''
    """Validate the specified group name."""
    error_msg = ''
    # Avoids collision with delimiters
    if not group_name.isprintable():
        error_msg = "Error: Group name must be printable."

    # Length is limited by database's Unicode padding
    if len(group_name) >= PADDING_LENGTH:
        error_msg = f"Error: Group name must be less than {PADDING_LENGTH} chars long."

    if group_name == DUMMY_GROUP:
        error_msg = "Error: Group name cannot use the name reserved for database padding."

    if not validate_onion_addr(group_name):
        error_msg = "Error: Group name cannot have the format of an account."

    if group_name in contact_list.get_list_of_nicks():
        error_msg = "Error: Group name cannot be a nick of contact."

    if group_name in group_list.get_list_of_group_names():
        error_msg = f"Error: Group with name '{group_name}' already exists."

    return error_msg
def validate_key_exchange(key_ex: str,  # Key exchange selection to validate
                          *_:     Any   # Unused arguments
                          ) -> str:     # Error message if validation failed, else ''
    """Validate the specified key exchange."""
    error_msg = ''

    if key_ex.upper() not in [ECDHE, ECDHE[:1], PSK, PSK[:1]]:
        error_msg = "Invalid key exchange selection."

    return error_msg
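The selection check accepts the full names and their one-letter abbreviations, case-insensitively. A minimal standalone sketch (the two constants mirror `ECDHE = 'X448'` and `PSK = 'PSK'` from src/common/statics.py later in this commit):

```python
ECDHE = 'X448'
PSK   = 'PSK'

def validate_key_exchange(key_ex: str) -> str:
    """Return an error message unless key_ex selects X448 or PSK."""
    if key_ex.upper() not in [ECDHE, ECDHE[:1], PSK, PSK[:1]]:
        return "Invalid key exchange selection."
    return ''

assert validate_key_exchange('x')    == ''
assert validate_key_exchange('X448') == ''
assert validate_key_exchange('psk')  == ''
assert validate_key_exchange('rsa')  == "Invalid key exchange selection."
```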
def validate_nick(nick: str,                                      # Nick to validate
                  args: Tuple['ContactList', 'GroupList', bytes]  # Contact list and group list databases
                  ) -> str:                                       # Error message if validation failed, else ''
    """Validate the specified nickname."""
    contact_list, group_list, onion_pub_key = args

    error_msg = ''

    # Length is limited by database's Unicode padding
    if len(nick) >= PADDING_LENGTH:
        error_msg = f"Error: Nick must be shorter than {PADDING_LENGTH} chars."

    # Avoid delimiter char collision in output packets
    if not nick.isprintable():
        error_msg = "Error: Nick must be printable."

    if nick == '':
        error_msg = "Error: Nick cannot be empty."

    # Receiver displays sent messages under 'Me'
    if nick.lower() == ME.lower():
        error_msg = f"Error: '{ME}' is a reserved nick."

    # Receiver displays system notifications under reserved notification symbol
    if nick == EVENT:
        error_msg = f"Error: '{EVENT}' is a reserved nick."

    # Ensure that nicks, accounts and group names are UIDs in recipient selection
    if validate_onion_addr(nick) == '':  # If no error message was received, nick had the format of an account
        error_msg = "Error: Nick cannot have the format of an account."

    if nick in (LOCAL_ID, DUMMY_CONTACT, DUMMY_MEMBER):
        error_msg = "Error: Nick cannot have the format of an account."

    if nick in contact_list.get_list_of_nicks():
        error_msg = "Error: Nick already in use."

        # Allow existing nick if it matches the account being replaced.
        if contact_list.has_pub_key(onion_pub_key):
            if nick == contact_list.get_contact_by_pub_key(onion_pub_key).nick:
                error_msg = ''

    if nick in group_list.get_list_of_group_names():
        error_msg = "Error: Nick cannot be a group name."

    return error_msg


@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019  Markus Ottela
This file is part of TFC.
@@ -15,98 +16,40 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import textwrap
import time
import typing
import sys

from datetime import datetime
from typing   import List, Optional, Union

from src.common.encoding import b10encode, b58encode, pub_key_to_onion_address
from src.common.misc     import get_terminal_width, split_string
from src.common.statics  import *

if typing.TYPE_CHECKING:
    from src.common.db_contacts import ContactList
    from src.common.db_settings import Settings
    from src.common.gateway import GatewaySettings as GWSettings
def clear_screen(delay: float = 0.0) -> None:
"""Clear terminal window."""
"""Clear the terminal window."""
time.sleep(delay)
sys.stdout.write(CLEAR_ENTIRE_SCREEN + CURSOR_LEFT_UP_CORNER)
sys.stdout.flush()
def group_management_print(key:          str,            # Group management message identifier
                           members:      List[bytes],    # List of members' Onion public keys
                           contact_list: 'ContactList',  # ContactList object
                           group_name:   str = ''        # Name of the group
                           ) -> None:
    """Print group management command results."""
    m = {NEW_GROUP:      "Created new group '{}' with following members:".format(group_name),
         ADDED_MEMBERS:  "Added following accounts to group '{}':".format(group_name),
         ALREADY_MEMBER: "Following accounts were already in group '{}':".format(group_name),
@@ -115,155 +58,194 @@ def group_management_print(key: str,
         UNKNOWN_ACCOUNTS: "Following unknown accounts were ignored:"}[key]

    if members:
        m_list = ([contact_list.get_contact_by_pub_key(m).nick for m in members if contact_list.has_pub_key(m)]
                  + [pub_key_to_onion_address(m) for m in members if not contact_list.has_pub_key(m)])

        just_len  = max(len(m) for m in m_list)
        justified = [m] + [f"  * {m.ljust(just_len)}" for m in m_list]
        m_print(justified, box=True)
def m_print(msg_list:       Union[str, list],  # List of lines to print
            manual_proceed: bool = False,      # Wait for user input before continuing
            bold:           bool = False,      # When True, prints the message in bold style
            center:         bool = True,       # When False, does not center message
            box:            bool = False,      # When True, prints a box around the message
            head_clear:     bool = False,      # When True, clears screen before printing message
            tail_clear:     bool = False,      # When True, clears screen after printing message (requires delay)
            delay:          float = 0,         # Delay before continuing
            max_width:      int = 0,           # Maximum width of message
            head:           int = 0,           # Number of new lines to print before the message
            tail:           int = 0,           # Number of new lines to print after the message
            ) -> None:
    """Print message to screen.

    The message automatically wraps if the terminal is too narrow to
    display it.
    """
    if isinstance(msg_list, str):
        msg_list = [msg_list]

    terminal_width = get_terminal_width()
    len_widest_msg = max(len(m) for m in msg_list)
    spc_around_msg = 4 if box else 2
    max_msg_width  = terminal_width - spc_around_msg

    if max_width:
        max_msg_width = min(max_width, max_msg_width)

    # Split any message too wide on separate lines
    if len_widest_msg > max_msg_width:
        new_msg_list = []
        for msg in msg_list:
            if len(msg) > max_msg_width:
                new_msg_list.extend(textwrap.fill(msg, max_msg_width).split('\n'))
            else:
                new_msg_list.append(msg)

        msg_list       = new_msg_list
        len_widest_msg = max(len(m) for m in msg_list)

    if box or center:
        # Insert whitespace around every line to make them equally long
        msg_list = [f'{m:^{len_widest_msg}}' for m in msg_list]

    if box:
        # Add box chars around the message
        msg_list = [f'│ {m} │' for m in msg_list]
        msg_list.insert(0, '┌' + (len_widest_msg + 2) * '─' + '┐')
        msg_list.append(   '└' + (len_widest_msg + 2) * '─' + '┘')

    # Print the message
    if head_clear:
        clear_screen()
    print_spacing(head)

    for message in msg_list:
        if center:
            message = message.center(terminal_width)
        if bold:
            message = BOLD_ON + message + NORMAL_TEXT
        print(message)

    print_spacing(tail)
    time.sleep(delay)
    if tail_clear:
        clear_screen()

    # Check if message needs to be manually dismissed
    if manual_proceed:
        input('')
        print_on_previous_line()
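The box-drawing branch of m_print can be sketched in isolation. This is a simplified reimplementation (not the exact function above) showing how lines are padded to equal width and framed:

```python
def boxed(lines):
    """Pad lines to equal width and wrap them in box-drawing characters."""
    widest = max(len(line) for line in lines)
    body   = [f'│ {line:^{widest}} │' for line in lines]
    return (['┌' + (widest + 2) * '─' + '┐']
            + body
            + ['└' + (widest + 2) * '─' + '┘'])

box = boxed(['Hello', 'TFC'])
assert box[0] == '┌───────┐'
assert box[1] == '│ Hello │'
assert box[2] == '│  TFC  │'
assert box[3] == '└───────┘'
```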
def phase(string: str,          # Description of the phase
          done:   bool = False, # When True, uses string as the phase completion message
          head:   int = 0,      # Number of inserted new lines before print
          offset: int = 4,      # Offset of phase string from center to left
          delay:  float = 0.5   # Duration of phase completion message
          ) -> None:
    """Print the name of the next phase.

    The notification of completion of the phase is printed on the same
    line as the phase message.
    """
    print_spacing(head)

    if string == DONE or done:
        print(string)
        time.sleep(delay)
    else:
        string += '... '
        indent  = ((get_terminal_width() - (len(string) + offset)) // 2) * ' '

        print(indent + string, end='', flush=True)
def print_fingerprint(fp:  bytes,    # Contact's fingerprint
                      msg: str = ''  # Title message
                      ) -> None:
    """Print a formatted message and fingerprint inside the box.

    Truncate the fingerprint for a clean layout with three rows that
    each have five groups of five numbers. The resulting fingerprint
    has 249.15 bits of entropy, which is more than the symmetric
    security of X448.
    """
    p_lst  = [msg, ''] if msg else []
    b10fp  = b10encode(fp)[:(3*5*5)]
    parts  = split_string(b10fp, item_len=(5*5))
    p_lst += [' '.join(split_string(p, item_len=5)) for p in parts]

    m_print(p_lst, box=True)
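The truncated base-10 layout can be verified standalone. Here b10encode is reimplemented from its definition (decimal encoding of the fingerprint's integer value), and a maximal 32-byte value is used as a stand-in fingerprint:

```python
import binascii

def b10encode(fingerprint: bytes) -> str:
    """Encode bytes as a base-10 string (digits are easy to read aloud)."""
    return str(int(binascii.hexlify(fingerprint), base=16))

def split_string(s: str, item_len: int):
    return [s[i:i + item_len] for i in range(0, len(s), item_len)]

fingerprint = bytes([255] * 32)            # stand-in for a real 32-byte fingerprint
b10fp = b10encode(fingerprint)[:(3*5*5)]   # truncate to 75 digits
rows  = [' '.join(split_string(p, 5)) for p in split_string(b10fp, 25)]

assert len(b10fp) == 75
assert len(rows) == 3
assert all(len(row) == 29 for row in rows)  # 25 digits + 4 separating spaces
```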
def print_key(message:    str,                          # Instructive message
              key_bytes:  bytes,                        # 32-byte key to be displayed
              settings:   Union['Settings', 'GWSettings'],  # Settings object
              public_key: bool = False                  # When True, uses Testnet address WIF format
              ) -> None:
    """Print a symmetric key in WIF format.

    If local testing is not enabled, this function adds spacing in the
    middle of the key, as well as guide letters to help the user keep
    track of typing progress:

    https://en.wikipedia.org/wiki/Working_memory#Working_memory_as_part_of_long-term_memory

    Local key encryption keys:

          A   B   C   D   E   F   G   H   I   J   K   L   M   N   O   P   Q
         5Ka 52G yNz vjF nM4 2jw Duu rWo 7di zgi Y8g iiy yGd 78L cCx mwQ mWV

    X448 public keys:

            A       B       C       D       E       F       G       H       I       J       K       L
         4EcuqaD ddsdsuc gBX2PY2 qR8hReA aeSN2oh JB9w5Cv q6BQjDa PPgzSvW 932aHio sT42SKJ Gu2PpS1 Za3Xrao
    """
    b58key = b58encode(key_bytes, public_key)
    if settings.local_testing_mode:
        m_print([message, b58key], box=True)
    else:
        guide, chunk_len = (B58_PUBLIC_KEY_GUIDE, 7) if public_key else (B58_LOCAL_KEY_GUIDE, 3)
        key = ' '.join(split_string(b58key, item_len=chunk_len))
        m_print([message, guide, key], box=True)
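The 51-char local key decryption key splits evenly into 17 three-char groups, one per guide letter A..Q. Using the example key from the docstring above:

```python
def split_string(s: str, item_len: int):
    return [s[i:i + item_len] for i in range(0, len(s), item_len)]

# The example KDK from the docstring above, with spacing removed
kdk = '5Ka52GyNzvjFnM42jwDuurWo7dizgiY8giiyyGd78LcCxmwQmWV'
assert len(kdk) == 51

groups = split_string(kdk, 3)
assert len(groups) == len('ABCDEFGHIJKLMNOPQ')  # one group per guide letter
assert ' '.join(groups).startswith('5Ka 52G yNz vjF')
```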
def print_title(operation: str) -> None:
    """Print the TFC title."""
    operation_name = {TX: TXP, RX: RXP, NC: RP}[operation]
    m_print(f"{TFC} - {operation_name} {VERSION}", bold=True, head_clear=True, head=1, tail=1)


def print_on_previous_line(reps:  int = 1,      # Number of times to repeat the action
                           delay: float = 0.0,  # Time to sleep before clearing lines above
                           flush: bool = False  # Flush stdout when true
                           ) -> None:
    """Next message is printed on upper line."""
    time.sleep(delay)

    for _ in range(reps):
        sys.stdout.write(CURSOR_UP_ONE_LINE + CLEAR_ENTIRE_LINE)
    if flush:
        sys.stdout.flush()
def print_spacing(count: int = 0) -> None:
    """Print `count` many new-lines."""
    for _ in range(count):
        print()


def rp_print(message: str,                          # Message to print
             ts:      Optional['datetime'] = None,  # Timestamp for displayed event
             bold:    bool = False                  # When True, prints the message in bold style
             ) -> None:
    """Print an event in the Relay Program."""
    if ts is None:
        ts = datetime.now()
    ts_fmt = ts.strftime('%b %d - %H:%M:%S.%f')[:-4]

    if bold:
        print(f"{BOLD_ON}{ts_fmt} - {message}{NORMAL_TEXT}")
    else:
        print(f"{ts_fmt} - {message}")


@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019  Markus Ottela
This file is part of TFC.
@@ -15,52 +16,47 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import readline
import typing
import _tkinter

from typing import Any, List, Optional

import tkinter
from tkinter import filedialog

from src.common.exceptions import FunctionReturn
from src.common.output     import m_print, print_on_previous_line

if typing.TYPE_CHECKING:
    from src.common.db_settings import Settings
def ask_path_gui(prompt_msg: str,         # Directory selection prompt
                 settings:   'Settings',  # Settings object
                 get_file:   bool = False # When True, prompts for a path to file instead of a directory
                 ) -> str:                # Selected directory or file
    """Prompt (file) path with Tkinter / CLI prompt."""
    try:
        if settings.disable_gui_dialog:
            raise _tkinter.TclError

        root = tkinter.Tk()
        root.withdraw()

        if get_file:
            file_path = filedialog.askopenfilename(title=prompt_msg)  # type: str
        else:
            file_path = filedialog.askdirectory(title=prompt_msg)
        root.destroy()

        if not file_path:
            raise FunctionReturn(("File" if get_file else "Path") + " selection aborted.", head_clear=True)

        return file_path
@@ -71,12 +67,12 @@ def ask_path_gui(prompt_msg: str,
class Completer(object):
    """readline tab-completer for paths and files."""

    def __init__(self, get_file: bool) -> None:
        """Create new completer object."""
        self.get_file = get_file
    def listdir(self, root: str) -> Any:
        """List directory 'root' appending the path separator to sub-dirs."""
        res = []
        for name in os.listdir(root):
            path = os.path.join(root, name)
@@ -88,18 +84,18 @@ class Completer(object):
            res.append(name)
        return res
    def complete_path(self, path: Optional[str] = None) -> Any:
        """Perform completion of the filesystem path."""
        if not path:
            return self.listdir('.')

        dir_name, rest = os.path.split(path)
        tmp            = dir_name if dir_name else '.'
        matches        = [os.path.join(dir_name, p) for p in self.listdir(tmp) if p.startswith(rest)]

        # More than one match, or single match which does not exist (typo)
        if len(matches) > 1 or not os.path.exists(path):
            return matches

        # Resolved to a single directory: return list of files below it
        if os.path.isdir(path):
@@ -108,75 +104,81 @@ class Completer(object):
        # Exact file match terminates this completion
        return [path + ' ']
    def path_complete(self, args: Optional[List[str]] = None) -> Any:
        """Return the list of directories from the current directory."""
        if not args:
            return self.complete_path('.')

        # Treat the last arg as a path and complete it
        return self.complete_path(args[-1])

    def complete(self, _: str, state: int) -> Any:
        """Generic readline completion entry point."""
        line = readline.get_line_buffer().split()
        return self.path_complete(line)[state]
def ask_path_cli(prompt_msg: str,         # File selection prompt
                 get_file:   bool = False # When True, prompts for a file instead of a directory
                 ) -> str:                # Selected directory or file
    """\
    Prompt file location or store directory for a file with tab-complete
    supported CLI.
    """
    readline.set_completer_delims(' \t\n;')
    readline.parse_and_bind('tab: complete')
    readline.set_completer(Completer(get_file).complete)
    print('')

    if get_file:
        return cli_get_file(prompt_msg)
    else:
        return cli_get_path(prompt_msg)


def cli_get_file(prompt_msg: str) -> str:
    """Ask the user to specify file to load."""
    while True:
        try:
            path_to_file = input(prompt_msg + ": ")

            if not path_to_file:
                print_on_previous_line()
                raise KeyboardInterrupt

            if os.path.isfile(path_to_file):
                if path_to_file.startswith('./'):
                    path_to_file = path_to_file[len('./'):]
                print('')
                return path_to_file

            m_print("File selection error.", head=1, tail=1)
            print_on_previous_line(reps=4, delay=1)

        except (EOFError, KeyboardInterrupt):
            print_on_previous_line()
            raise FunctionReturn("File selection aborted.", head_clear=True)


def cli_get_path(prompt_msg: str) -> str:
    """Ask the user to specify path for file."""
    while True:
        try:
            directory = input(prompt_msg + ": ")

            if directory.startswith('./'):
                directory = directory[len('./'):]

            if not directory.endswith(os.sep):
                directory += os.sep

            if not os.path.isdir(directory):
                m_print("Error: Invalid directory.", head=1, tail=1)
                print_on_previous_line(reps=4, delay=1)
                continue

            return directory

        except (EOFError, KeyboardInterrupt):
            print_on_previous_line()
            raise FunctionReturn("File path selection aborted.", head_clear=True)

src/common/reed_solomon.py: 1542 changes (Executable file → Normal file); diff suppressed because the file is too large.

@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019  Markus Ottela
This file is part of TFC.
@@ -15,32 +16,44 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
"""Program details"""
TFC = 'TFC'
VERSION = '1.17.08'
TXP = 'Transmitter'
RXP = 'Receiver'
RP = 'Relay'
VERSION = '1.19.01'
"""Identifiers"""
LOCAL_ID = 'local_id'
DUMMY_CONTACT = 'dummy_contact'
DUMMY_USER = 'dummy_user'
DUMMY_STR = 'dummy_str'
DUMMY_MEMBER = 'dummy_member'
"""Identifiers
Placeholder accounts for databases need to be valid v3 Onion addresses.
"""
LOCAL_ID = 'localidlocalidlocalidlocalidlocalidlocalidlocalidloj7uyd'
LOCAL_PUBKEY = b'[\x84\x05\xa0kp\x80\xb4\rn\x10\x16\x81\xad\xc2\x02\xd05\xb8@Z\x06\xb7\x08\x0b@\xd6\xe1\x01h\x1a\xdc'
LOCAL_NICK = 'local Source Computer'
DUMMY_CONTACT = 'dummycontactdummycontactdummycontactdummycontactdumhsiid'
DUMMY_MEMBER = 'dummymemberdummymemberdummymemberdummymemberdummymedakad'
DUMMY_NICK = 'dummy_nick'
DUMMY_GROUP = 'dummy_group'
TX = 'tx'
RX = 'rx'
NH = 'nh'
NC = 'nc'
TAILS = b'Tails'
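The placeholder accounts really are well-formed v3 Onion addresses: decoding LOCAL_ID recovers LOCAL_PUBKEY plus the checksum and version bytes (constants copied from the definitions above):

```python
import base64

LOCAL_ID     = 'localidlocalidlocalidlocalidlocalidlocalidlocalidloj7uyd'
LOCAL_PUBKEY = b'[\x84\x05\xa0kp\x80\xb4\rn\x10\x16\x81\xad\xc2\x02\xd05\xb8@Z\x06\xb7\x08\x0b@\xd6\xe1\x01h\x1a\xdc'

decoded = base64.b32decode(LOCAL_ID.upper())

assert len(decoded) == 35            # 32-byte public key + 2-byte checksum + 1-byte version
assert decoded[:32] == LOCAL_PUBKEY  # the placeholder address embeds LOCAL_PUBKEY
assert decoded[34:] == b'\x03'       # v3 Onion Service version byte
```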
"""Window identifiers (string)"""
WIN_TYPE_COMMAND = 'win_type_command'
WIN_TYPE_FILE = 'win_type_file'
WIN_TYPE_CONTACT = 'win_type_contact'
WIN_TYPE_GROUP = 'win_type_group'
"""Window identifiers"""
WIN_TYPE_COMMAND = 'system messages'
WIN_TYPE_FILE = 'incoming files'
WIN_TYPE_CONTACT = 'contact'
WIN_TYPE_GROUP = 'group'
"""Window UIDs"""
WIN_UID_LOCAL = b'win_uid_local'
WIN_UID_FILE = b'win_uid_file'
"""Packet types"""
@@ -59,14 +72,18 @@ UNKNOWN_ACCOUNTS = 'unknown_accounts'
"""Base58 key types"""
B58_PUB_KEY = 'b58_pub_key'
B58_LOCAL_KEY = 'b58_local_key'
B58_FILE_KEY = 'b58_file_key'
B58_PUBLIC_KEY = 'b58_public_key'
B58_LOCAL_KEY = 'b58_local_key'
"""Key input guides"""
B58_PUBLIC_KEY_GUIDE = ' A B C D E F H H I J K L '
B58_LOCAL_KEY_GUIDE = ' A B C D E F G H I J K L M N O P Q '
"""Key exchange types"""
X25519 = 'x25519'
PSK = 'psk'
ECDHE = 'X448'
PSK = 'PSK'
"""Contact setting types"""
@@ -76,30 +93,32 @@ NOTIFY = 'notify'
"""Command identifiers"""
CLEAR = 'clear'
RESET = 'reset'
CLEAR = 'clear'
RESET = 'reset'
POWEROFF = 'poweroff'
"""Contact setting management"""
ENABLE = b'es'
DISABLE = b'ds'
ALL = 'all'
CONTACT_SETTING_HEADER_LENGTH = 2
ENABLE = b'es'
DISABLE = b'ds'
ALL = 'all'
"""NH bypass states"""
NH_BYPASS_START = 'nh_bypass_start'
NH_BYPASS_STOP = 'nh_bypass_stop'
RESEND = 'resend'
"""Networked Computer bypass states"""
NC_BYPASS_START = 'nc_bypass_start'
NC_BYPASS_STOP = 'nc_bypass_stop'
"""Phase messages"""
DONE = 'DONE'
"""Status messages"""
DONE = 'DONE'
EVENT = '-!-'
ME = 'Me'
"""VT100 codes
VT100 codes are used to control printing to terminals. These
make building functions like text box drawers possible.
VT100 codes are used to control printing to the terminal. These make
building functions like textbox drawers possible.
"""
CURSOR_UP_ONE_LINE = '\x1b[1A'
CURSOR_RIGHT_ONE_COLUMN = '\x1b[1C'
@@ -112,182 +131,247 @@ NORMAL_TEXT = '\033[0m'
"""Separators
Separator byte/char is a non-printable byte used
to separate fields in serialized data structures.
Separator byte is a non-printable byte used to separate fields in
serialized data structures.
"""
US_BYTE = b'\x1f'
US_STR = '\x1f'
"""Datagram headers
These headers are prepended to datagrams that are transmitted over
Serial or over the network. They tell receiving device what type of
packet is in question.
serial or over the network. They tell the receiving device what type of
datagram is in question.
Local key packets are only accepted by NH from local TxM. Even if NH is
compromised, the worst case scenario is a denial of service attack
where RxM receives new local keys. As user does not know the correct
decryption key, they would have to manually cancel packets.
Datagrams with local key header contain the encrypted local key, used to
encrypt commands and data transferred between local Source and
Destination computers. Packets with the header are only accepted by the
Relay Program when they originate from the user's Source Computer. Even
if the Networked Computer is compromised and the local key datagram is
injected to the Destination Computer, the injected key could not be
accepted by the user as they don't know the decryption key for it. The
worst case scenario is a DoS attack where the Receiver Program receives
new local keys continuously. Such an attack would, however, reveal to the
user that they are under a sophisticated attack, and that their Networked
Computer has been compromised.
Datagrams with Public key header contain TCB-level public keys that
originate from the sender's Source Computer, and are displayed by the
recipient's Networked Computer, from where they are manually typed into
the recipient's Destination Computer.
Message and command type datagrams tell the Receiver Program whether to
parse the trailing fields that determine which XChaCha20-Poly1305
decryption keys it should load. Contacts can of course try to alter
their datagrams to contain a COMMAND_DATAGRAM_HEADER header, but Relay
Program will by design drop them. Even if a compromised Networked
Computer injects such a datagram to Destination Computer, the Receiver
Program will drop the datagram when the MAC verification of the
encrypted hash ratchet counter value fails.
File type datagram contains an encrypted file that the Receiver Program
caches until its decryption key arrives from the sender inside a
special, automated key delivery message.
Unencrypted type datagrams contain commands intended for the Relay
Program. These commands are in some cases preceded by an encrypted
version of the command that the Relay Program forwards to the Receiver
Program on the Destination Computer. The unencrypted Relay commands are
disabled during traffic masking to hide the quantity and schedule of
communication even from the Networked Computer (in case it's compromised
and monitoring the user). The fact that these commands are unencrypted
does not cause security issues because if an adversary can compromise the
Networked Computer to the point it can issue commands to the Relay
Program, they could DoS the Relay Program, and thus TFC, anyway.
"""
DATAGRAM_TIMESTAMP_LENGTH = 8
DATAGRAM_HEADER_LENGTH = 1
LOCAL_KEY_DATAGRAM_HEADER = b'L'
PUBLIC_KEY_DATAGRAM_HEADER = b'P'
MESSAGE_DATAGRAM_HEADER = b'M'
COMMAND_DATAGRAM_HEADER = b'K'
FILE_DATAGRAM_HEADER = b'F'
UNENCRYPTED_DATAGRAM_HEADER = b'U'
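As a rough illustration of the dispatch these header bytes enable (a sketch only; the routing function below is hypothetical, not TFC's actual code), the receiving side can branch on the first byte of each datagram:

```python
# Illustrative one-byte datagram-type dispatch. The constants mirror the
# ones above; the classify function itself is a hypothetical sketch.
MESSAGE_DATAGRAM_HEADER    = b'M'
FILE_DATAGRAM_HEADER       = b'F'
PUBLIC_KEY_DATAGRAM_HEADER = b'P'

def classify_datagram(datagram: bytes) -> tuple:
    """Split a datagram into its type label and payload."""
    header, payload = datagram[:1], datagram[1:]
    labels = {MESSAGE_DATAGRAM_HEADER:    'message',
              FILE_DATAGRAM_HEADER:       'file',
              PUBLIC_KEY_DATAGRAM_HEADER: 'public_key'}
    if header not in labels:
        raise ValueError("Invalid datagram header.")
    return labels[header], payload
```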
"""Group management headers
Group management datagrams are automatic messages that the
Transmitter Program recommends the user to send when they make changes
to the member list of a group, or when they add or remove groups. These
messages are displayed by the Relay Program.
"""
GROUP_ID_LENGTH = 4
GROUP_ID_ENC_LENGTH = 13
GROUP_MSG_ID_LENGTH = 16
GROUP_MGMT_HEADER_LENGTH = 1
GROUP_MSG_INVITE_HEADER = b'I'
GROUP_MSG_JOIN_HEADER = b'J'
GROUP_MSG_MEMBER_ADD_HEADER = b'N'
GROUP_MSG_MEMBER_REM_HEADER = b'R'
GROUP_MSG_EXIT_GROUP_HEADER = b'X'
"""Assembly packet headers
These one-byte assembly packet headers are not part of the padded
message parsed from assembly packets. They are, however, the very first
plaintext byte, prepended to every padded assembly packet that is
delivered to the recipient/local Destination Computer. The header
delivers the information about if and when to assemble the packet,
as well as when to drop any previously collected assembly packets.
"""
FILE_PACKET_CTR_LENGTH = 8
ASSEMBLY_PACKET_HEADER_LENGTH = 1
M_S_HEADER = b'a' # Short message packet
M_L_HEADER = b'b' # First packet of multi-packet message
M_A_HEADER = b'c' # Appended packet of multi-packet message
M_E_HEADER = b'd' # Last packet of multi-packet message
M_C_HEADER = b'e' # Cancelled multi-packet message
P_N_HEADER = b'f' # Noise message packet
F_S_HEADER = b'A' # Short file packet
F_L_HEADER = b'B' # First packet of multi-packet file
F_A_HEADER = b'C' # Appended packet of multi-packet file
F_E_HEADER = b'D' # Last packet of multi-packet file
F_C_HEADER = b'E' # Cancelled multi-packet file
C_S_HEADER = b'0' # Short command packet
C_L_HEADER = b'1' # First packet of multi-packet command
C_A_HEADER = b'2' # Appended packet of multi-packet command
C_E_HEADER = b'3' # Last packet of multi-packet command
C_C_HEADER = b'4' # Cancelled multi-packet command (reserved but not in use)
C_N_HEADER = b'5' # Noise command packet
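A rough sketch of how these headers could mark each chunk's position in a multi-packet message. The chunk size and the bare splitting shown here are simplifying assumptions; TFC's real assembly logic also compresses, pads, and encrypts each packet:

```python
# Simplified illustration of assembly packet headers marking chunk
# positions. Chunking directly on PADDING_LENGTH is an assumption made
# for brevity, not TFC's exact logic.
M_S_HEADER     = b'a'  # Short message packet
M_L_HEADER     = b'b'  # First packet of multi-packet message
M_A_HEADER     = b'c'  # Appended packet of multi-packet message
M_E_HEADER     = b'd'  # Last packet of multi-packet message
PADDING_LENGTH = 255

def split_to_assembly_packets(payload: bytes) -> list:
    """Prefix each chunk with a header describing its position."""
    chunks = [payload[i:i + PADDING_LENGTH]
              for i in range(0, len(payload), PADDING_LENGTH)] or [b'']
    if len(chunks) == 1:
        return [M_S_HEADER + chunks[0]]
    return ([M_L_HEADER + chunks[0]]
            + [M_A_HEADER + c for c in chunks[1:-1]]
            + [M_E_HEADER + chunks[-1]])
```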
"""Unencrypted command headers
These two-byte headers are only used to control the Relay Program on
Networked Computer. These commands will not be used during traffic
masking, as they would reveal when TFC is being used. These commands do
not require encryption, because if an attacker can compromise the
Networked Computer to the point it could inject commands to Relay
Program, it could most likely also access any decryption keys used by
the Relay Program.
"""
UNENCRYPTED_COMMAND_HEADER_LENGTH = 2
UNENCRYPTED_SCREEN_CLEAR = b'UC'
UNENCRYPTED_SCREEN_RESET = b'UR'
UNENCRYPTED_EXIT_COMMAND = b'UX'
UNENCRYPTED_EC_RATIO = b'UE'
UNENCRYPTED_BAUDRATE = b'UB'
UNENCRYPTED_WIPE_COMMAND = b'UW'
UNENCRYPTED_ADD_NEW_CONTACT = b'UN'
UNENCRYPTED_ADD_EXISTING_CONTACT = b'UA'
UNENCRYPTED_REM_CONTACT = b'UD'
UNENCRYPTED_ONION_SERVICE_DATA = b'UO'
UNENCRYPTED_MANAGE_CONTACT_REQ = b'UM'
"""Encrypted command headers
These two-byte headers determine the type of command for Receiver
Program on local Destination Computer. The header is evaluated after the
Receiver Program has received all assembly packets and assembled the
command. These headers tell the Receiver Program to which function the
provided parameters (if any) must be redirected.
"""
ENCRYPTED_COMMAND_HEADER_LENGTH = 2
LOCAL_KEY_RDY = b'LI'
WIN_ACTIVITY = b'SA'
WIN_SELECT = b'WS'
CLEAR_SCREEN = b'SC'
RESET_SCREEN = b'SR'
EXIT_PROGRAM = b'EX'
LOG_DISPLAY = b'LD'
LOG_EXPORT = b'LE'
LOG_REMOVE = b'LR'
CH_MASTER_KEY = b'MK'
CH_NICKNAME = b'NC'
CH_SETTING = b'CS'
CH_LOGGING = b'CL'
CH_FILE_RECV = b'CF'
CH_NOTIFY = b'CN'
GROUP_CREATE = b'GC'
GROUP_ADD = b'GA'
GROUP_REMOVE = b'GR'
GROUP_DELETE = b'GD'
GROUP_RENAME = b'GN'
KEY_EX_ECDHE = b'KE'
KEY_EX_PSK_TX = b'KT'
KEY_EX_PSK_RX = b'KR'
CONTACT_REM = b'CR'
WIPE_USR_DATA = b'WD'
"""Origin headers
This one-byte header tells the Relay and Receiver Programs whether the
account included in the packet is the source or the destination of the
transmission. The user origin header is used when the Relay Program
forwards the message packets from the user's Source Computer to the user's
Destination Computer. The contact origin header is used when the program
forwards packets that are loaded from servers of contacts to the user's
Destination Computer.
On Destination Computer, the Receiver Program uses the origin header to
determine which unidirectional keys it should load to decrypt the
datagram payload.
"""
ORIGIN_HEADER_LENGTH = 1
ORIGIN_USER_HEADER = b'o'
ORIGIN_CONTACT_HEADER = b'i'
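As a sketch of the key selection described above (the plain-dict key store is an assumption for illustration; TFC keeps keys in an encrypted database), the origin header picks between the two unidirectional keys:

```python
# Illustrative selection of a unidirectional key based on the one-byte
# origin header. The dict-based keyset is a hypothetical stand-in.
ORIGIN_USER_HEADER    = b'o'
ORIGIN_CONTACT_HEADER = b'i'

def select_decryption_key(origin: bytes, keyset: dict) -> bytes:
    """Pick the unidirectional key matching the packet's origin."""
    if origin == ORIGIN_USER_HEADER:
        return keyset['tx']  # key for packets the user themselves sent
    if origin == ORIGIN_CONTACT_HEADER:
        return keyset['rx']  # key for packets sent by the contact
    raise ValueError("Invalid origin header.")
```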
"""Message headers
This one-byte header will be prepended to each plaintext message before
padding and splitting the message. It will be evaluated once the
Receiver Program has received all assembly packets and assembled the message.
The private and group message headers allow the Receiver Program to
determine whether the message should be displayed in a private or in a
group window. This does not allow re-direction of messages to
unauthorized group windows, because TFC's manually managed group
configuration is also a whitelist for accounts that are authorized to
display messages under the group's window.
Messages with the whisper message header have "sender-based control".
Unless the contact maliciously alters their Receiver Program's behavior,
whispered messages are not logged regardless of in-program controlled
settings.
Messages with the file key header contain the hash of a file ciphertext
that was sent to the user earlier, along with the symmetric decryption
key for that file.
"""
MESSAGE_HEADER_LENGTH = 1
WHISPER_FIELD_LENGTH = 1
PRIVATE_MESSAGE_HEADER = b'p'
GROUP_MESSAGE_HEADER = b'g'
WHISPER_MESSAGE_HEADER = b'w'
"""Group management headers
Group messages are automatically parsed messages that TxM recommends
user to send when they make changes to group members or add/remove
groups. These messages are displayed temporarily on whatever active
window and later in command window.
"""
GROUP_MSG_INVITEJOIN_HEADER = b'T'
GROUP_MSG_MEMBER_ADD_HEADER = b'N'
GROUP_MSG_MEMBER_REM_HEADER = b'R'
GROUP_MSG_EXIT_GROUP_HEADER = b'X'
FILE_KEY_HEADER = b'k'
"""Delays
Traffic masking packet queue check delay ensures that the lookup time
for the packet queue is obfuscated.
The local testing packet delay is an arbitrary delay that simulates the
slight delay caused by data transmission over a serial interface.
The Relay client delays are values that determine the delays between
checking the online status of the contact (and the state of their
ephemeral URL token public key).
"""
TRAFFIC_MASKING_QUEUE_CHECK_DELAY = 0.1
TRAFFIC_MASKING_MIN_STATIC_DELAY = 0.1
TRAFFIC_MASKING_MIN_RANDOM_DELAY = 0.1
LOCAL_TESTING_PACKET_DELAY = 0.1
RELAY_CLIENT_MAX_DELAY = 16
RELAY_CLIENT_MIN_DELAY = 0.125
CLIENT_OFFLINE_THRESHOLD = 4.0
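The queue check delay can be pictured as fixed-interval polling: the moment a packet is noticed is quantized to the polling period, which obfuscates how long the lookup itself took. A minimal sketch, using a stdlib queue in place of TFC's multiprocessing queues:

```python
import queue
import time

TRAFFIC_MASKING_QUEUE_CHECK_DELAY = 0.1

def wait_for_packet(packet_queue: 'queue.Queue') -> bytes:
    """Poll the queue at a fixed interval instead of blocking on it."""
    while packet_queue.qsize() == 0:
        time.sleep(TRAFFIC_MASKING_QUEUE_CHECK_DELAY)
    return packet_queue.get()
```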
"""Constant time delay types"""
@ -296,144 +380,202 @@ TRAFFIC_MASKING = 'traffic_masking'
"""Default folders"""
DIR_USER_DATA = 'user_data/'
DIR_RECV_FILES = 'received_files/'
DIR_TFC = 'tfc/'
"""Regular expressions
These are used to specify the exact format of certain inputs.
"""
ACCOUNT_FORMAT = '(^.[^/:,]*@.[^/:,]*\.[^/:,]*.$)' # <something>@<something>.<something>
"""Key exchange status states"""
KEX_STATUS_NONE = b'\xa0'
KEX_STATUS_PENDING = b'\xa1'
KEX_STATUS_UNVERIFIED = b'\xa2'
KEX_STATUS_VERIFIED = b'\xa3'
KEX_STATUS_NO_RX_PSK = b'\xa4'
KEX_STATUS_HAS_RX_PSK = b'\xa5'
KEX_STATUS_LOCAL_KEY = b'\xa6'
"""Queue dictionary keys"""
# Common
EXIT_QUEUE = b'exit'
GATEWAY_QUEUE = b'gateway'
UNITTEST_QUEUE = b'unittest'
# Transmitter
MESSAGE_PACKET_QUEUE = b'message_packet'
COMMAND_PACKET_QUEUE = b'command_packet'
TM_MESSAGE_PACKET_QUEUE = b'tm_message_packet'
TM_FILE_PACKET_QUEUE = b'tm_file_packet'
TM_COMMAND_PACKET_QUEUE = b'tm_command_packet'
TM_NOISE_PACKET_QUEUE = b'tm_noise_packet'
TM_NOISE_COMMAND_QUEUE = b'tm_noise_command'
RELAY_PACKET_QUEUE = b'relay_packet'
LOG_PACKET_QUEUE = b'log_packet'
LOG_SETTING_QUEUE = b'log_setting'
TRAFFIC_MASKING_QUEUE = b'traffic_masking'
LOGFILE_MASKING_QUEUE = b'logfile_masking'
KEY_MANAGEMENT_QUEUE = b'key_management'
SENDER_MODE_QUEUE = b'sender_mode'
WINDOW_SELECT_QUEUE = b'window_select'
# Relay
DST_COMMAND_QUEUE = b'dst_command'
DST_MESSAGE_QUEUE = b'dst_message'
M_TO_FLASK_QUEUE = b'm_to_flask'
F_TO_FLASK_QUEUE = b'f_to_flask'
SRC_TO_RELAY_QUEUE = b'src_to_relay'
URL_TOKEN_QUEUE = b'url_token'
GROUP_MGMT_QUEUE = b'group_mgmt'
GROUP_MSG_QUEUE = b'group_msg'
CONTACT_REQ_QUEUE = b'contact_req'
F_REQ_MGMT_QUEUE = b'f_req_mgmt'
CONTACT_KEY_QUEUE = b'contact_key'
C_REQ_MGR_QUEUE = b'c_req_mgr'
ONION_KEY_QUEUE = b'onion_key'
ONION_CLOSE_QUEUE = b'close_onion'
TOR_DATA_QUEUE = b'tor_data'
"""Queue signals"""
KDB_ADD_ENTRY_HEADER = 'ADD'
KDB_REMOVE_ENTRY_HEADER = 'REM'
KDB_CHANGE_MASTER_KEY_HEADER = 'KEY'
KDB_UPDATE_SIZE_HEADER = 'STO'
RP_ADD_CONTACT_HEADER = 'RAC'
RP_REMOVE_CONTACT_HEADER = 'RRC'
EXIT = 'EXIT'
WIPE = 'WIPE'
"""Static values
"""Static values"""
These values are not settings but descriptive integer values.
"""
# Serial interface
BAUDS_PER_BYTE = 10
SERIAL_RX_MIN_TIMEOUT = 0.05
# CLI indents
CONTACT_LIST_INDENT = 4
FILE_TRANSFER_INDENT = 4
SETTINGS_INDENT = 2
# Compression
COMPRESSION_LEVEL = 9
MAX_MESSAGE_SIZE = 100_000 # bytes
# Traffic masking
NOISE_PACKET_BUFFER = 100
# Local testing
LOCALHOST = 'localhost'
SRC_DD_LISTEN_SOCKET = 5005
RP_LISTEN_SOCKET = 5006
DST_DD_LISTEN_SOCKET = 5007
DST_LISTEN_SOCKET = 5008
# Field lengths
ENCODED_BOOLEAN_LENGTH = 1
ENCODED_BYTE_LENGTH = 1
TIMESTAMP_LENGTH = 4
ENCODED_INTEGER_LENGTH = 8
ENCODED_FLOAT_LENGTH = 8
FILE_ETA_FIELD_LENGTH = 8
FILE_SIZE_FIELD_LENGTH = 8
GROUP_DB_HEADER_LENGTH = 32
PADDED_UTF32_STR_LENGTH = 1024
CONFIRM_CODE_LENGTH = 1
PACKET_CHECKSUM_LENGTH = 16
# Onion address format
ONION_ADDRESS_CHECKSUM_ID = b".onion checksum"
ONION_SERVICE_VERSION = b'\x03'
ONION_SERVICE_VERSION_LENGTH = 1
ONION_ADDRESS_CHECKSUM_LENGTH = 2
ONION_ADDRESS_LENGTH = 56
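These constants follow Tor's v3 Onion Service address format, where the 56-character address is the base32 encoding of the public key, a truncated SHA3-256 checksum, and the version byte. A sketch of that encoding (the all-zero public key is a dummy value):

```python
import base64
import hashlib

# v3 onion address construction per Tor's rend-spec-v3:
#   CHECKSUM = SHA3-256(".onion checksum" || PUBKEY || VERSION)[:2]
#   address  = base32(PUBKEY || CHECKSUM || VERSION), lowercase
ONION_ADDRESS_CHECKSUM_ID = b".onion checksum"
ONION_SERVICE_VERSION     = b'\x03'

def encode_onion_address(public_key: bytes) -> str:
    """Encode a 32-byte public key as the body of a v3 onion address."""
    checksum = hashlib.sha3_256(ONION_ADDRESS_CHECKSUM_ID
                                + public_key
                                + ONION_SERVICE_VERSION).digest()[:2]
    return base64.b32encode(public_key
                            + checksum
                            + ONION_SERVICE_VERSION).decode().lower()
```

Since the payload is 35 bytes (280 bits), the base32 output is exactly 56 characters, matching ONION_ADDRESS_LENGTH.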
# Misc
MAX_INT = 2 ** 64 - 1
B58_CHECKSUM_LENGTH = 4
TRUNC_ADDRESS_LENGTH = 5
# Key derivation
ARGON2_SALT_LENGTH = 32
ARGON2_ROUNDS = 25
ARGON2_MIN_MEMORY = 64000 # bytes
MIN_KEY_DERIVATION_TIME = 3.0 # seconds
# Cryptographic field sizes
TFC_PRIVATE_KEY_LENGTH = 56
TFC_PUBLIC_KEY_LENGTH = 56
FINGERPRINT_LENGTH = 32
ONION_SERVICE_PRIVATE_KEY_LENGTH = 32
ONION_SERVICE_PUBLIC_KEY_LENGTH = 32
XCHACHA20_NONCE_LENGTH = 24
SYMMETRIC_KEY_LENGTH = 32
POLY1305_TAG_LENGTH = 16
BLAKE2_DIGEST_LENGTH = 32
BLAKE2_DIGEST_LENGTH_MAX = 64
ENTROPY_THRESHOLD = 512
HARAC_LENGTH = 8
PADDING_LENGTH = 255
# Forward secrecy
INITIAL_HARAC = 0
HARAC_WARN_THRESHOLD = 1000
# Special messages
PLACEHOLDER_DATA = P_N_HEADER + bytes(PADDING_LENGTH)
# Field lengths
ASSEMBLY_PACKET_LENGTH = ASSEMBLY_PACKET_HEADER_LENGTH + PADDING_LENGTH
HARAC_CT_LENGTH = (XCHACHA20_NONCE_LENGTH
+ HARAC_LENGTH
+ POLY1305_TAG_LENGTH)
ASSEMBLY_PACKET_CT_LENGTH = (XCHACHA20_NONCE_LENGTH
+ ASSEMBLY_PACKET_LENGTH
+ POLY1305_TAG_LENGTH)
MESSAGE_LENGTH = HARAC_CT_LENGTH + ASSEMBLY_PACKET_CT_LENGTH
COMMAND_LENGTH = (DATAGRAM_HEADER_LENGTH
+ MESSAGE_LENGTH)
PACKET_LENGTH = (DATAGRAM_HEADER_LENGTH
+ MESSAGE_LENGTH
+ ORIGIN_HEADER_LENGTH)
GROUP_STATIC_LENGTH = (PADDED_UTF32_STR_LENGTH
+ GROUP_ID_LENGTH
+ 2 * ENCODED_BOOLEAN_LENGTH)
CONTACT_LENGTH = (ONION_SERVICE_PUBLIC_KEY_LENGTH
+ 2 * FINGERPRINT_LENGTH
+ 4 * ENCODED_BOOLEAN_LENGTH
+ PADDED_UTF32_STR_LENGTH)
KEYSET_LENGTH = (ONION_SERVICE_PUBLIC_KEY_LENGTH
+ 4 * SYMMETRIC_KEY_LENGTH
+ 2 * HARAC_LENGTH)
PSK_FILE_SIZE = (XCHACHA20_NONCE_LENGTH
+ ARGON2_SALT_LENGTH
+ 2 * SYMMETRIC_KEY_LENGTH
+ POLY1305_TAG_LENGTH)
LOG_ENTRY_LENGTH = (XCHACHA20_NONCE_LENGTH
+ ONION_SERVICE_PUBLIC_KEY_LENGTH
+ TIMESTAMP_LENGTH
+ ORIGIN_HEADER_LENGTH
+ ASSEMBLY_PACKET_LENGTH
+ POLY1305_TAG_LENGTH)
MASTERKEY_DB_SIZE = (ARGON2_SALT_LENGTH
+ BLAKE2_DIGEST_LENGTH
+ 2 * ENCODED_INTEGER_LENGTH)
SETTING_LENGTH = (XCHACHA20_NONCE_LENGTH
+ 4 * ENCODED_INTEGER_LENGTH
+ 3 * ENCODED_FLOAT_LENGTH
+ 11 * ENCODED_BOOLEAN_LENGTH
+ POLY1305_TAG_LENGTH)
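The nested length constants above can be sanity-checked with plain arithmetic; for instance, the total on-wire message length works out to 344 bytes:

```python
# Arithmetic check of the nested length constants defined above.
XCHACHA20_NONCE_LENGTH        = 24
POLY1305_TAG_LENGTH           = 16
HARAC_LENGTH                  = 8
PADDING_LENGTH                = 255
ASSEMBLY_PACKET_HEADER_LENGTH = 1

ASSEMBLY_PACKET_LENGTH    = ASSEMBLY_PACKET_HEADER_LENGTH + PADDING_LENGTH   # 256
HARAC_CT_LENGTH           = (XCHACHA20_NONCE_LENGTH       # nonce
                             + HARAC_LENGTH               # hash ratchet counter
                             + POLY1305_TAG_LENGTH)       # MAC tag -> 48
ASSEMBLY_PACKET_CT_LENGTH = (XCHACHA20_NONCE_LENGTH
                             + ASSEMBLY_PACKET_LENGTH
                             + POLY1305_TAG_LENGTH)       # -> 296
MESSAGE_LENGTH            = HARAC_CT_LENGTH + ASSEMBLY_PACKET_CT_LENGTH
```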

View File

@ -1,173 +0,0 @@
#!/usr/bin/env python3.5
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import os
import serial
import sys
import time
import typing
from typing import Any, Dict
from src.common.exceptions import FunctionReturn
from src.common.misc import ignored
from src.common.output import c_print, clear_screen
from src.common.path import ask_path_gui
from src.common.statics import *
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.nh.settings import Settings
def nh_command(queues: Dict[bytes, 'Queue'],
settings: 'Settings',
stdin_fd: int,
unittest: bool = False) -> None:
"""Loop that processes NH side commands."""
sys.stdin = os.fdopen(stdin_fd)
queue_from_txm = queues[TXM_TO_NH_QUEUE]
while True:
with ignored(EOFError, FunctionReturn, KeyboardInterrupt):
while queue_from_txm.qsize() == 0:
time.sleep(0.01)
command = queue_from_txm.get()
process_command(settings, command, queues)
if unittest:
break
def process_command(settings: 'Settings', command: bytes, queues: Dict[bytes, 'Queue']) -> None:
"""Process received command."""
# Keyword Function to run ( Parameters )
# -----------------------------------------------------------------------------------------------
function_d = {UNENCRYPTED_SCREEN_CLEAR: (clear_windows, settings, command, queues[NH_TO_IM_QUEUE] ),
UNENCRYPTED_SCREEN_RESET: (reset_windows, settings, command, queues[NH_TO_IM_QUEUE] ),
UNENCRYPTED_EXIT_COMMAND: (exit_tfc, settings, queues[EXIT_QUEUE] ),
UNENCRYPTED_WIPE_COMMAND: (wipe, settings, queues[EXIT_QUEUE] ),
UNENCRYPTED_IMPORT_COMMAND: (rxm_import, settings, queues[RXM_OUTGOING_QUEUE] ),
UNENCRYPTED_EC_RATIO: (change_ec_ratio, settings, command ),
UNENCRYPTED_BAUDRATE: (change_baudrate, settings, command ),
UNENCRYPTED_GUI_DIALOG: (change_gui_dialog, settings, command )} # type: Dict[bytes, Any]
header = command[:2]
if header not in function_d:
raise FunctionReturn("Error: Received an invalid command.")
from_dict = function_d[header]
func = from_dict[0]
parameters = from_dict[1:]
func(*parameters)
def race_condition_delay(settings: 'Settings') -> None:
"""Handle race condition with RxM command notification."""
if settings.local_testing_mode:
time.sleep(0.1)
if settings.data_diode_sockets:
time.sleep(1)
def clear_windows(settings: 'Settings', command: bytes, queue_to_im: 'Queue') -> None:
"""Clear NH screen and IM client window."""
race_condition_delay(settings)
queue_to_im.put(command)
clear_screen()
def reset_windows(settings: 'Settings', command: bytes, queue_to_im: 'Queue') -> None:
"""Reset NH screen and clear IM client window."""
race_condition_delay(settings)
queue_to_im.put(command)
os.system('reset')
def exit_tfc(settings: 'Settings', queue_exit: 'Queue') -> None:
"""Exit TFC."""
race_condition_delay(settings)
queue_exit.put(EXIT)
def rxm_import(settings: 'Settings', queue_to_rxm: 'Queue') -> None:
"""Import encrypted file to RxM."""
f_path = ask_path_gui("Select file to import...", settings, get_file=True)
with open(f_path, 'rb') as f:
f_data = f.read()
queue_to_rxm.put(IMPORTED_FILE_HEADER + f_data)
def change_ec_ratio(settings: 'Settings', command: bytes) -> None:
"""Change Reed-Solomon erasure code correction ratio setting on NH."""
try:
value = int(command[2:])
if value < 1 or value > 2 ** 64 - 1:
raise ValueError
except ValueError:
raise FunctionReturn("Error: Received invalid EC ratio value from TxM.")
settings.serial_error_correction = value
settings.store_settings()
c_print("Error correction ratio will change on restart.", head=1, tail=1)
def change_baudrate(settings: 'Settings', command: bytes) -> None:
"""Change serial interface baud rate setting on NH."""
try:
value = int(command[2:])
if value not in serial.Serial.BAUDRATES:
raise ValueError
except ValueError:
raise FunctionReturn("Error: Received invalid baud rate value from TxM.")
settings.serial_baudrate = value
settings.store_settings()
c_print("Baud rate will change on restart.", head=1, tail=1)
def change_gui_dialog(settings: 'Settings', command: bytes) -> None:
"""Change file selection (GUI/CLI prompt) setting on NH."""
try:
value_bytes = command[2:].lower()
if value_bytes not in [b'true', b'false']:
raise ValueError
value = (value_bytes == b'true')
except ValueError:
raise FunctionReturn("Error: Received invalid GUI dialog setting value from TxM.")
settings.disable_gui_dialog = value
settings.store_settings()
c_print("Changed setting disable_gui_dialog to {}.".format(value), head=1, tail=1)
def wipe(settings: 'Settings', queue_exit: 'Queue') -> None:
"""Reset terminal, wipe all user data from NH and power off system.
No effective RAM overwriting tool currently exists, so as long as TxM/RxM
use FDE and DDR3 memory, recovery of user data becomes impossible very fast:
https://www1.cs.fau.de/filepool/projects/coldboot/fares_coldboot.pdf
"""
os.system('reset')
race_condition_delay(settings)
queue_exit.put(WIPE)

View File

@ -1,169 +0,0 @@
#!/usr/bin/env python3.5
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import multiprocessing.connection
import os.path
import serial
import time
import typing
from serial.serialutil import SerialException
from typing import Any, Dict
from src.common.exceptions import CriticalError, graceful_exit
from src.common.misc import ignored
from src.common.output import phase, print_on_previous_line
from src.common.statics import *
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.nh.settings import Settings
def gateway_loop(queues: Dict[bytes, 'Queue'],
gateway: 'Gateway',
unittest: bool = False) -> None:
"""Loop that loads data from TxM side gateway to NH."""
queue = queues[TXM_INCOMING_QUEUE]
while True:
with ignored(EOFError, KeyboardInterrupt):
queue.put(gateway.read())
if unittest:
break
class Gateway(object):
"""Gateway object is a wrapper for interfaces that connect NH with TxM/RxM."""
def __init__(self, settings: 'Settings') -> None:
"""Create a new Gateway object."""
self.settings = settings
self.txm_interface = None # type: Any
self.rxm_interface = None # type: Any
# Set True when serial adapter is initially found so that further
# serial interface searches know to announce disconnection.
self.init_found = False
if settings.local_testing_mode:
self.establish_socket()
else:
self.txm_interface = self.rxm_interface = self.establish_serial()
def write(self, packet: bytes) -> None:
"""Output data via socket/serial interface."""
if self.settings.local_testing_mode:
self.rxm_interface.send(packet)
else:
try:
self.rxm_interface.write(packet)
self.rxm_interface.flush()
time.sleep(self.settings.transmit_delay)
except SerialException:
self.rxm_interface = self.establish_serial()
self.write(packet)
def read(self) -> bytes:
"""Read data via socket/serial interface."""
if self.settings.local_testing_mode:
while True:
try:
return self.txm_interface.recv()
except KeyboardInterrupt:
pass
except EOFError:
graceful_exit("IPC client disconnected.")
else:
while True:
try:
start_time = 0.0
read_buffer = bytearray()
while True:
read = self.txm_interface.read(1000)
if read:
start_time = time.monotonic()
read_buffer.extend(read)
else:
if read_buffer:
delta = time.monotonic() - start_time
if delta > self.settings.receive_timeout:
return bytes(read_buffer)
else:
time.sleep(0.001)
except KeyboardInterrupt:
pass
except SerialException:
self.txm_interface = self.establish_serial()
return self.read()
def establish_socket(self) -> None:
"""Establish local testing socket connections."""
listener = multiprocessing.connection.Listener(('localhost', NH_LISTEN_SOCKET))
self.txm_interface = listener.accept()
while True:
try:
rxm_socket = RXM_DD_LISTEN_SOCKET if self.settings.data_diode_sockets else RXM_LISTEN_SOCKET
self.rxm_interface = multiprocessing.connection.Client(('localhost', rxm_socket))
break
except ConnectionRefusedError:
time.sleep(0.1)
def establish_serial(self) -> Any:
"""Create a new Serial object."""
try:
serial_nh = self.search_serial_interface()
return serial.Serial(serial_nh, self.settings.session_serial_baudrate, timeout=0)
except SerialException:
graceful_exit("SerialException. Ensure $USER is in the dialout group.")
def search_serial_interface(self) -> str:
"""Search for serial interface."""
if self.settings.serial_usb_adapter:
search_announced = False
if not self.init_found:
print_on_previous_line()
phase("Searching for USB-to-serial interface")
while True:
time.sleep(0.1)
for f in sorted(os.listdir('/dev')):
if f.startswith('ttyUSB'):
if self.init_found:
time.sleep(1.5)
phase('Found', done=True)
if self.init_found:
print_on_previous_line(reps=2)
self.init_found = True
return '/dev/{}'.format(f)
else:
if not search_announced:
if self.init_found:
phase("Serial adapter disconnected. Waiting for interface", head=1)
search_announced = True
else:
f = 'ttyS0'
if f in sorted(os.listdir('/dev/')):
return '/dev/{}'.format(f)
raise CriticalError("Error: /dev/{} was not found.".format(f))

View File

@ -1,46 +0,0 @@
#!/usr/bin/env python3.5
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import argparse
from typing import Tuple
def process_arguments() -> Tuple[bool, bool]:
"""Define nh.py settings from arguments passed from command line."""
parser = argparse.ArgumentParser("python3.6 nh.py",
usage="%(prog)s [OPTION]",
description="More options inside nh.py")
parser.add_argument('-l',
action='store_true',
default=False,
dest='local_test',
help="Enable local testing mode")
parser.add_argument('-d',
action='store_true',
default=False,
dest='dd_sockets',
help="Enable data diode simulator sockets")
args = parser.parse_args()
return args.local_test, args.dd_sockets

View File

@ -1,174 +0,0 @@
#!/usr/bin/env python3.5
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import base64
import dbus
import dbus.exceptions
import time
import typing
from datetime import datetime
from typing import Any, Dict, Tuple
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GObject
from src.common.misc import ignored
from src.common.output import box_print, c_print, clear_screen, phase
from src.common.statics import *
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.nh.settings import Settings
def ensure_im_connection() -> None:
"""\
Check that nh.py has a connection to Pidgin
before launching the other processes.
"""
phase("Waiting for enabled account in Pidgin", offset=1)
while True:
try:
bus = dbus.SessionBus(private=True)
obj = bus.get_object("im.pidgin.purple.PurpleService", "/im/pidgin/purple/PurpleObject")
purple = dbus.Interface(obj, "im.pidgin.purple.PurpleInterface")
while not purple.PurpleAccountsGetAllActive():
time.sleep(0.01)
phase('OK', done=True)
accounts = []
for a in purple.PurpleAccountsGetAllActive():
accounts.append(purple.PurpleAccountGetUsername(a)[:-1])
just_len = len(max(accounts, key=len))
justified = ["Active accounts in Pidgin:"] + ["* {}".format(a.ljust(just_len)) for a in accounts]
box_print(justified, head=1, tail=1)
return None
except (IndexError, dbus.exceptions.DBusException):
continue
except (EOFError, KeyboardInterrupt):
clear_screen()
exit()
def im_command(queues: Dict[bytes, 'Queue']) -> None:
"""Loop that executes commands on the IM client."""
bus = dbus.SessionBus(private=True)
obj = bus.get_object("im.pidgin.purple.PurpleService", "/im/pidgin/purple/PurpleObject")
purple = dbus.Interface(obj, "im.pidgin.purple.PurpleInterface")
account = purple.PurpleAccountsGetAllActive()[0]
queue = queues[NH_TO_IM_QUEUE]
while True:
with ignored(dbus.exceptions.DBusException, EOFError, KeyboardInterrupt):
while queue.qsize() == 0:
time.sleep(0.01)
command = queue.get()
if command[:2] in [UNENCRYPTED_SCREEN_CLEAR, UNENCRYPTED_SCREEN_RESET]:
contact = command[2:]
new_conv = purple.PurpleConversationNew(1, account, contact)
purple.PurpleConversationClearMessageHistory(new_conv)
def im_incoming(queues: Dict[bytes, 'Queue']) -> None:
"""Loop that maintains the signal receiver process."""
def pidgin_to_rxm(account: str, sender: str, message: str, *_: Any) -> None:
"""Signal receiver process that receives packets from Pidgin."""
sender = sender.split('/')[0]
ts = datetime.now().strftime("%m-%d / %H:%M:%S")
d_bus = dbus.SessionBus(private=True)
obj = d_bus.get_object("im.pidgin.purple.PurpleService", "/im/pidgin/purple/PurpleObject")
purple = dbus.Interface(obj, "im.pidgin.purple.PurpleInterface")
user = ''
for a in purple.PurpleAccountsGetAllActive():
if a == account:
user = purple.PurpleAccountGetUsername(a)[:-1]
if not message.startswith(TFC):
return None
try:
__, header, payload = message.split('|') # type: Tuple[str, str, str]
except ValueError:
return None
if header.encode() == PUBLIC_KEY_PACKET_HEADER:
print("{} - pub key {} > {} > RxM".format(ts, sender, user))
elif header.encode() == MESSAGE_PACKET_HEADER:
print("{} - message {} > {} > RxM".format(ts, sender, user))
else:
print("Received invalid packet from {}".format(sender))
return None
decoded = base64.b64decode(payload)
packet = header.encode() + decoded + ORIGIN_CONTACT_HEADER + sender.encode()
queues[RXM_OUTGOING_QUEUE].put(packet)
while True:
with ignored(dbus.exceptions.DBusException, EOFError, KeyboardInterrupt):
bus = dbus.SessionBus(private=True, mainloop=DBusGMainLoop())
bus.add_signal_receiver(pidgin_to_rxm, dbus_interface="im.pidgin.purple.PurpleInterface", signal_name="ReceivedImMsg")
GObject.MainLoop().run()
def im_outgoing(queues: Dict[bytes, 'Queue'], settings: 'Settings') -> None:
"""\
Loop that outputs messages and public keys from the
queue and sends them to contacts over Pidgin.
"""
bus = dbus.SessionBus(private=True)
obj = bus.get_object("im.pidgin.purple.PurpleService", "/im/pidgin/purple/PurpleObject")
purple = dbus.Interface(obj, "im.pidgin.purple.PurpleInterface")
queue = queues[TXM_TO_IM_QUEUE]
while True:
with ignored(dbus.exceptions.DBusException, EOFError, KeyboardInterrupt):
while queue.qsize() == 0:
time.sleep(0.01)
header, payload, user, contact = queue.get()
b64_str = base64.b64encode(payload).decode()
payload = '|'.join([TFC, header.decode(), b64_str])
user = user.decode()
contact = contact.decode()
user_found = False
for u in purple.PurpleAccountsGetAllActive():
if user == purple.PurpleAccountGetUsername(u)[:-1]:
user_found = True
if settings.relay_to_im_client:
new_conv = purple.PurpleConversationNew(1, u, contact)
sel_conv = purple.PurpleConvIm(new_conv)
purple.PurpleConvImSend(sel_conv, payload)
continue
if not user_found:
c_print("Error: No user {} found.".format(user), head=1, tail=1)
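The `pidgin_to_rxm` and `im_outgoing` loops above pass packets over Pidgin as pipe-delimited `TFC|header|base64(payload)` strings. A minimal, self-contained sketch of parsing that wire format (the helper name `parse_relay_packet` is illustrative, not part of TFC):

```python
import base64

TFC = 'TFC'

def parse_relay_packet(message: str):
    """Parse the pipe-delimited 'TFC|header|base64(payload)' format
    relayed over Pidgin. Returns None for non-TFC or malformed traffic,
    mirroring the early returns in pidgin_to_rxm() above."""
    if not message.startswith(TFC):
        return None
    try:
        _, header, payload = message.split('|')
    except ValueError:
        return None
    return header.encode(), base64.b64decode(payload)

# Round-trip: encode a payload the way im_outgoing() does, then parse it.
wire = '|'.join([TFC, 'm', base64.b64encode(b'hello').decode()])
assert parse_relay_packet(wire) == (b'm', b'hello')
assert parse_relay_packet('ordinary IM message') is None
```

Because the Base64 alphabet never contains `|`, splitting on the pipe character is unambiguous.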


@@ -1,95 +0,0 @@
#!/usr/bin/env python3.5
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import os.path
from src.common.encoding import bool_to_bytes, int_to_bytes
from src.common.encoding import bytes_to_bool, bytes_to_int
from src.common.input import yes
from src.common.misc import calculate_race_condition_delay, calculate_serial_delays, ensure_dir
from src.common.statics import *
class Settings(object):
"""Settings object stores NH side persistent settings.
NH-side settings are not encrypted because NH is assumed to be
under the control of the adversary. Encryption would require a
password, and because some users might reuse the same password for
NH and TxM/RxM, a sensitive password could leak to a remote
attacker who might later physically compromise the endpoint.
"""
def __init__(self, local_testing: bool, dd_sockets: bool, operation=NH) -> None:
# Fixed settings
self.relay_to_im_client = True # False stops forwarding messages to IM client
# Controllable settings
self.serial_usb_adapter = True # False uses system's integrated serial interface
self.disable_gui_dialog = False # True replaces Tkinter dialogs with CLI prompts
self.serial_baudrate = 19200 # The speed of serial interface in bauds per second
self.serial_error_correction = 5 # Number of byte errors serial datagrams can recover from
self.software_operation = operation
self.file_name = '{}{}_settings'.format(DIR_USER_DATA, operation)
# Settings from launcher / CLI arguments
self.local_testing_mode = local_testing
self.data_diode_sockets = dd_sockets
ensure_dir(DIR_USER_DATA)
if os.path.isfile(self.file_name):
self.load_settings()
else:
self.setup()
self.store_settings()
# Following settings change only when program is restarted
self.session_serial_error_correction = self.serial_error_correction
self.session_serial_baudrate = self.serial_baudrate
self.race_condition_delay = calculate_race_condition_delay(self)
self.receive_timeout, self.transmit_delay = calculate_serial_delays(self.session_serial_baudrate)
def store_settings(self) -> None:
"""Store persistent settings to file."""
setting_data = int_to_bytes(self.serial_baudrate)
setting_data += int_to_bytes(self.serial_error_correction)
setting_data += bool_to_bytes(self.serial_usb_adapter)
setting_data += bool_to_bytes(self.disable_gui_dialog)
ensure_dir(DIR_USER_DATA)
with open(self.file_name, 'wb+') as f:
f.write(setting_data)
def load_settings(self) -> None:
"""Load persistent settings from file."""
with open(self.file_name, 'rb') as f:
settings = f.read()
self.serial_baudrate = bytes_to_int(settings[0:8])
self.serial_error_correction = bytes_to_int(settings[8:16])
self.serial_usb_adapter = bytes_to_bool(settings[16:17])
self.disable_gui_dialog = bytes_to_bool(settings[17:18])
def setup(self) -> None:
"""Prompt user to enter initial settings."""
if not self.local_testing_mode:
self.serial_usb_adapter = yes("Does NH use USB-to-serial/TTL adapter?", tail=1)
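The `store_settings` and `load_settings` methods above serialize the settings database as fixed-width fields: two 8-byte integers followed by two 1-byte booleans. A minimal sketch of that scheme, using stand-in helpers that mirror (but are not) `src.common.encoding`'s `int_to_bytes`, `bytes_to_int`, `bool_to_bytes` and `bytes_to_bool`:

```python
def int_to_bytes(i: int) -> bytes:
    """Serialize an integer as an 8-byte big-endian field."""
    return i.to_bytes(8, byteorder='big')

def bytes_to_int(b: bytes) -> int:
    return int.from_bytes(b, byteorder='big')

def bool_to_bytes(b: bool) -> bytes:
    return b'\x01' if b else b'\x00'

def bytes_to_bool(b: bytes) -> bool:
    return b != b'\x00'

# Round-trip the four fields: 8 + 8 + 1 + 1 = 18 bytes total.
data  = int_to_bytes(19200)          # serial_baudrate
data += int_to_bytes(5)              # serial_error_correction
data += bool_to_bytes(True)          # serial_usb_adapter
data += bool_to_bytes(False)         # disable_gui_dialog

assert len(data) == 18
assert bytes_to_int(data[0:8])    == 19200
assert bytes_to_int(data[8:16])   == 5
assert bytes_to_bool(data[16:17]) is True
assert bytes_to_bool(data[17:18]) is False
```

Fixed-width fields make the on-disk format self-describing by offset, which is why `load_settings` can slice the file at hard-coded positions.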


@@ -1,130 +0,0 @@
#!/usr/bin/env python3.5
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import os
import time
import typing
from datetime import datetime
from typing import Dict
from src.common.misc import ignored
from src.common.output import box_print
from src.common.reed_solomon import ReedSolomonError, RSCodec
from src.common.statics import *
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.nh.gateway import Gateway
from src.nh.settings import Settings
def txm_incoming(queues: Dict[bytes, 'Queue'],
settings: 'Settings',
unittest: bool = False) -> None:
"""Loop that places messages received from TxM into the appropriate queues."""
rs = RSCodec(2 * settings.session_serial_error_correction)
q_to_tip = queues[TXM_INCOMING_QUEUE]
m_to_rxm = queues[RXM_OUTGOING_QUEUE]
c_to_rxm = queues[TXM_TO_RXM_QUEUE]
q_to_im = queues[TXM_TO_IM_QUEUE]
q_to_nh = queues[TXM_TO_NH_QUEUE]
while True:
with ignored(EOFError, KeyboardInterrupt):
while q_to_tip.qsize() == 0:
time.sleep(0.01)
packet = q_to_tip.get()
try:
packet = bytes(rs.decode(packet))
except ReedSolomonError:
box_print("Warning! Failed to correct errors in received packet.", head=1, tail=1)
continue
ts = datetime.now().strftime("%m-%d / %H:%M:%S")
header = packet[:1]
if header == UNENCRYPTED_PACKET_HEADER:
q_to_nh.put(packet[1:])
elif header in [LOCAL_KEY_PACKET_HEADER, COMMAND_PACKET_HEADER]:
p_type = 'local key' if header == LOCAL_KEY_PACKET_HEADER else 'command'
print("{} - {} TxM > RxM".format(ts, p_type))
c_to_rxm.put(packet)
elif header in [MESSAGE_PACKET_HEADER, PUBLIC_KEY_PACKET_HEADER]:
payload_len, p_type = {PUBLIC_KEY_PACKET_HEADER: (KEY_LENGTH, 'pub key'),
MESSAGE_PACKET_HEADER: (MESSAGE_LENGTH, 'message')}[header]
payload = packet[1:1 + payload_len]
trailer = packet[1 + payload_len:]
user, contact = trailer.split(US_BYTE)
print("{} - {} TxM > {} > {}".format(ts, p_type, user.decode(), contact.decode()))
q_to_im.put((header, payload, user, contact))
m_to_rxm.put(header + payload + ORIGIN_USER_HEADER + contact)
elif header == EXPORTED_FILE_HEADER:
payload = packet[1:]
file_name = os.urandom(8).hex()
while os.path.isfile(file_name):
file_name = os.urandom(8).hex()
with open(file_name, 'wb+') as f:
f.write(payload)
print("{} - Exported file from TxM as {}".format(ts, file_name))
if unittest:
break
def rxm_outgoing(queues: Dict[bytes, 'Queue'],
settings: 'Settings',
gateway: 'Gateway',
unittest: bool = False) -> None:
"""Loop that outputs packets from queues to RxM.
Commands (and local keys) from TxM to RxM have higher priority
than messages and public keys from contacts. This prevents a
contact from DoSing RxM by filling the queue with packets.
"""
rs = RSCodec(2 * settings.session_serial_error_correction)
c_queue = queues[TXM_TO_RXM_QUEUE]
m_queue = queues[RXM_OUTGOING_QUEUE]
while True:
try:
time.sleep(0.01)
while c_queue.qsize() != 0:
packet = rs.encode(bytearray(c_queue.get()))
gateway.write(packet)
if m_queue.qsize() != 0:
packet = rs.encode(bytearray(m_queue.get()))
gateway.write(packet)
if unittest:
break
except (EOFError, KeyboardInterrupt):
pass

0
src/nh/__init__.py → src/receiver/__init__.py Normal file → Executable file

389
src/receiver/commands.py Normal file

@@ -0,0 +1,389 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import typing
from typing import Any, Dict, Union
from src.common.db_logs import access_logs, change_log_db_key, remove_logs
from src.common.encoding import bytes_to_int, pub_key_to_short_address
from src.common.exceptions import FunctionReturn
from src.common.misc import ensure_dir, separate_header
from src.common.output import clear_screen, m_print, phase, print_on_previous_line
from src.common.statics import *
from src.receiver.commands_g import group_add, group_create, group_delete, group_remove, group_rename
from src.receiver.key_exchanges import key_ex_ecdhe, key_ex_psk_rx, key_ex_psk_tx, local_key_rdy
from src.receiver.packet import decrypt_assembly_packet
if typing.TYPE_CHECKING:
from datetime import datetime
from multiprocessing import Queue
from src.common.db_contacts import Contact, ContactList
from src.common.db_groups import Group, GroupList
from src.common.db_keys import KeyList
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
from src.common.gateway import Gateway
from src.receiver.packet import PacketList
from src.receiver.windows import WindowList
def process_command(ts: 'datetime',
assembly_ct: bytes,
window_list: 'WindowList',
packet_list: 'PacketList',
contact_list: 'ContactList',
key_list: 'KeyList',
group_list: 'GroupList',
settings: 'Settings',
master_key: 'MasterKey',
gateway: 'Gateway',
exit_queue: 'Queue'
) -> None:
"""Decrypt command assembly packet and process command."""
assembly_packet = decrypt_assembly_packet(assembly_ct, LOCAL_PUBKEY, ORIGIN_USER_HEADER,
window_list, contact_list, key_list)
cmd_packet = packet_list.get_packet(LOCAL_PUBKEY, ORIGIN_USER_HEADER, COMMAND)
cmd_packet.add_packet(assembly_packet)
if not cmd_packet.is_complete:
raise FunctionReturn("Incomplete command.", output=False)
header, cmd = separate_header(cmd_packet.assemble_command_packet(), ENCRYPTED_COMMAND_HEADER_LENGTH)
no = None
# Keyword Function to run ( Parameters )
# --------------------------------------------------------------------------------------------------------------
d = {LOCAL_KEY_RDY: (local_key_rdy, ts, window_list, contact_list ),
WIN_ACTIVITY: (win_activity, window_list ),
WIN_SELECT: (win_select, cmd, window_list ),
CLEAR_SCREEN: (clear_screen, ),
RESET_SCREEN: (reset_screen, cmd, window_list ),
EXIT_PROGRAM: (exit_tfc, exit_queue),
LOG_DISPLAY: (log_command, cmd, no, window_list, contact_list, group_list, settings, master_key),
LOG_EXPORT: (log_command, cmd, ts, window_list, contact_list, group_list, settings, master_key),
LOG_REMOVE: (remove_log, cmd, contact_list, group_list, settings, master_key),
CH_MASTER_KEY: (ch_master_key, ts, window_list, contact_list, group_list, key_list, settings, master_key),
CH_NICKNAME: (ch_nick, cmd, ts, window_list, contact_list, ),
CH_SETTING: (ch_setting, cmd, ts, window_list, contact_list, group_list, key_list, settings, gateway ),
CH_LOGGING: (ch_contact_s, cmd, ts, window_list, contact_list, group_list, header ),
CH_FILE_RECV: (ch_contact_s, cmd, ts, window_list, contact_list, group_list, header ),
CH_NOTIFY: (ch_contact_s, cmd, ts, window_list, contact_list, group_list, header ),
GROUP_CREATE: (group_create, cmd, ts, window_list, contact_list, group_list, settings ),
GROUP_ADD: (group_add, cmd, ts, window_list, contact_list, group_list, settings ),
GROUP_REMOVE: (group_remove, cmd, ts, window_list, contact_list, group_list ),
GROUP_DELETE: (group_delete, cmd, ts, window_list, group_list ),
GROUP_RENAME: (group_rename, cmd, ts, window_list, contact_list, group_list ),
KEY_EX_ECDHE: (key_ex_ecdhe, cmd, ts, window_list, contact_list, key_list, settings ),
KEY_EX_PSK_TX: (key_ex_psk_tx, cmd, ts, window_list, contact_list, key_list, settings ),
KEY_EX_PSK_RX: (key_ex_psk_rx, cmd, ts, window_list, contact_list, key_list, settings ),
CONTACT_REM: (contact_rem, cmd, ts, window_list, contact_list, group_list, key_list, settings, master_key),
WIPE_USR_DATA: (wipe, exit_queue)
} # type: Dict[bytes, Any]
try:
from_dict = d[header]
except KeyError:
raise FunctionReturn("Error: Received an invalid command.")
func = from_dict[0]
parameters = from_dict[1:]
func(*parameters)
def win_activity(window_list: 'WindowList') -> None:
"""Show number of unread messages in each window."""
unread_wins = [w for w in window_list if (w.uid != WIN_UID_LOCAL and w.unread_messages > 0)]
print_list = ["Window activity"] if unread_wins else ["No window activity"]
print_list += [f"{w.name}: {w.unread_messages}" for w in unread_wins]
m_print(print_list, box=True)
print_on_previous_line(reps=(len(print_list) + 2), delay=1)
def win_select(window_uid: bytes, window_list: 'WindowList') -> None:
"""Select window specified by the Transmitter Program."""
if window_uid == WIN_UID_FILE:
clear_screen()
window_list.set_active_rx_window(window_uid)
def reset_screen(win_uid: bytes, window_list: 'WindowList') -> None:
"""Reset window specified by the Transmitter Program."""
window = window_list.get_window(win_uid)
window.reset_window()
os.system(RESET)
def exit_tfc(exit_queue: 'Queue') -> None:
"""Exit TFC."""
exit_queue.put(EXIT)
def log_command(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
master_key: 'MasterKey'
) -> None:
"""Display or export log file for the active window."""
export = ts is not None
ser_no_msg, uid = separate_header(cmd_data, ENCODED_INTEGER_LENGTH)
no_messages = bytes_to_int(ser_no_msg)
window = window_list.get_window(uid)
access_logs(window, contact_list, group_list, settings, master_key, msg_to_load=no_messages, export=export)
if export:
local_win = window_list.get_local_window()
local_win.add_new(ts, f"Exported log file of {window.type} '{window.name}'.", output=True)
def remove_log(cmd_data: bytes,
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
master_key: 'MasterKey'
) -> None:
"""Remove log entries for contact or group."""
remove_logs(contact_list, group_list, settings, master_key, selector=cmd_data)
def ch_master_key(ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
key_list: 'KeyList',
settings: 'Settings',
master_key: 'MasterKey'
) -> None:
"""Prompt the user for a new master password and derive a new master key from that."""
try:
old_master_key = master_key.master_key[:]
master_key.master_key = master_key.new_master_key()
phase("Re-encrypting databases")
ensure_dir(DIR_USER_DATA)
file_name = f'{DIR_USER_DATA}{settings.software_operation}_logs'
if os.path.isfile(file_name):
change_log_db_key(old_master_key, master_key.master_key, settings)
key_list.store_keys()
settings.store_settings()
contact_list.store_contacts()
group_list.store_groups()
phase(DONE)
m_print("Master password successfully changed.", bold=True, tail_clear=True, delay=1, head=1)
local_win = window_list.get_local_window()
local_win.add_new(ts, "Changed Receiver master password.")
except (EOFError, KeyboardInterrupt):
raise FunctionReturn("Password change aborted.", tail_clear=True, delay=1, head=2)
def ch_nick(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList'
) -> None:
"""Change nickname of contact."""
onion_pub_key, nick_bytes = separate_header(cmd_data, header_length=ONION_SERVICE_PUBLIC_KEY_LENGTH)
nick = nick_bytes.decode()
short_addr = pub_key_to_short_address(onion_pub_key)
try:
contact = contact_list.get_contact_by_pub_key(onion_pub_key)
except StopIteration:
raise FunctionReturn(f"Error: Receiver has no contact '{short_addr}' to rename.")
contact.nick = nick
contact_list.store_contacts()
window = window_list.get_window(onion_pub_key)
window.name = nick
window.handle_dict[onion_pub_key] = nick
if window.type == WIN_TYPE_CONTACT:
window.redraw()
cmd_win = window_list.get_local_window()
cmd_win.add_new(ts, f"Changed {short_addr} nick to '{nick}'.", output=True)
def ch_setting(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
key_list: 'KeyList',
settings: 'Settings',
gateway: 'Gateway'
) -> None:
"""Change TFC setting."""
try:
setting, value = [f.decode() for f in cmd_data.split(US_BYTE)]
except ValueError:
raise FunctionReturn("Error: Received invalid setting data.")
if setting in settings.key_list:
settings.change_setting(setting, value, contact_list, group_list)
elif setting in gateway.settings.key_list:
gateway.settings.change_setting(setting, value)
else:
raise FunctionReturn(f"Error: Invalid setting '{setting}'.")
local_win = window_list.get_local_window()
local_win.add_new(ts, f"Changed setting '{setting}' to '{value}'.", output=True)
if setting == 'max_number_of_contacts':
contact_list.store_contacts()
key_list.store_keys()
if setting in ['max_number_of_group_members', 'max_number_of_groups']:
group_list.store_groups()
def ch_contact_s(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
header: bytes
) -> None:
"""Change contact/group related setting."""
setting, win_uid = separate_header(cmd_data, CONTACT_SETTING_HEADER_LENGTH)
attr, desc, file_cmd = {CH_LOGGING: ('log_messages', "Logging of messages", False),
CH_FILE_RECV: ('file_reception', "Reception of files", True),
CH_NOTIFY: ('notifications', "Message notifications", False)}[header]
action, b_value = {ENABLE: ('enabled', True),
DISABLE: ('disabled', False)}[setting.lower()]
if setting.isupper():
# Change settings for all contacts (and groups)
enabled = [getattr(c, attr) for c in contact_list.get_list_of_contacts()]
enabled += [getattr(g, attr) for g in group_list] if not file_cmd else []
status = "was already" if ((all(enabled) and b_value) or (not any(enabled) and not b_value)) else "has been"
specifier = "every "
w_type = "contact"
w_name = "." if file_cmd else " and group."
# Set values
for c in contact_list.get_list_of_contacts():
setattr(c, attr, b_value)
contact_list.store_contacts()
if not file_cmd:
for g in group_list:
setattr(g, attr, b_value)
group_list.store_groups()
else:
# Change setting for contacts in specified window
if not window_list.has_window(win_uid):
raise FunctionReturn(f"Error: Found no window for '{pub_key_to_short_address(win_uid)}'.")
window = window_list.get_window(win_uid)
group_window = window.type == WIN_TYPE_GROUP
contact_window = window.type == WIN_TYPE_CONTACT
if contact_window:
target = contact_list.get_contact_by_pub_key(win_uid) # type: Union[Contact, Group]
else:
target = group_list.get_group_by_id(win_uid)
if file_cmd:
enabled = [getattr(m, attr) for m in window.window_contacts]
changed = not all(enabled) if b_value else any(enabled)
else:
changed = getattr(target, attr) != b_value
status = "has been" if changed else "was already"
specifier = "members in " if (file_cmd and group_window) else ''
w_type = window.type
w_name = f" {window.name}."
# Set values
if contact_window or (group_window and file_cmd):
for c in window.window_contacts:
setattr(c, attr, b_value)
contact_list.store_contacts()
elif group_window:
setattr(group_list.get_group_by_id(win_uid), attr, b_value)
group_list.store_groups()
message = f"{desc} {status} {action} for {specifier}{w_type}{w_name}"
local_win = window_list.get_local_window()
local_win.add_new(ts, message, output=True)
def contact_rem(onion_pub_key: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
key_list: 'KeyList',
settings: 'Settings',
master_key: 'MasterKey'
) -> None:
"""Remove contact from Receiver Program."""
key_list.remove_keyset(onion_pub_key)
window_list.remove_window(onion_pub_key)
short_addr = pub_key_to_short_address(onion_pub_key)
try:
contact = contact_list.get_contact_by_pub_key(onion_pub_key)
except StopIteration:
raise FunctionReturn(f"Receiver has no account '{short_addr}' to remove.")
nick = contact.nick
in_group = any([g.remove_members([onion_pub_key]) for g in group_list])
contact_list.remove_contact_by_pub_key(onion_pub_key)
message = f"Removed {nick} ({short_addr}) from contacts{' and groups' if in_group else ''}."
m_print(message, bold=True, head=1, tail=1)
local_win = window_list.get_local_window()
local_win.add_new(ts, message)
remove_logs(contact_list, group_list, settings, master_key, onion_pub_key)
def wipe(exit_queue: 'Queue') -> None:
"""\
Reset terminals, wipe all TFC user data on Destination Computer and
power off the system.
No effective RAM-overwriting tool currently exists, but as long as
the Source and Destination Computers use FDE and DDR3 memory,
recovery of user data becomes impossible very quickly:
https://www1.cs.fau.de/filepool/projects/coldboot/fares_coldboot.pdf
"""
os.system(RESET)
exit_queue.put(WIPE)
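The dispatch table in `process_command()` above keys on the packet header and stores each handler together with its arguments, calling `entry[0]` with `entry[1:]`. A minimal standalone sketch of that pattern (the headers, handlers, and `dispatch` helper here are illustrative, not TFC's actual values):

```python
from typing import Any, Dict, Tuple

def clear() -> str:
    return 'cleared'

def select(uid: bytes) -> str:
    return f'selected {uid.decode()}'

# Keyword         Function to run (handler first, then its parameters)
d: Dict[bytes, Tuple[Any, ...]] = {b'\x01': (clear,),
                                   b'\x02': (select, b'alice')}

def dispatch(header: bytes) -> str:
    """Look up the handler for a header and invoke it, mirroring the
    try/except KeyError flow in process_command() above."""
    try:
        entry = d[header]
    except KeyError:
        raise ValueError("Error: Received an invalid command.")
    func, parameters = entry[0], entry[1:]
    return func(*parameters)

assert dispatch(b'\x01') == 'cleared'
assert dispatch(b'\x02') == 'selected alice'
```

Bundling arguments into the table keeps the call site uniform: every command, regardless of arity, is invoked the same way.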

218
src/receiver/commands_g.py Normal file

@@ -0,0 +1,218 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import typing
from src.common.encoding import b58encode
from src.common.exceptions import FunctionReturn
from src.common.misc import separate_header, split_byte_string, validate_group_name
from src.common.output import group_management_print, m_print
from src.common.statics import *
if typing.TYPE_CHECKING:
from datetime import datetime
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_settings import Settings
from src.receiver.windows import WindowList
def group_create(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings'
) -> None:
"""Create a new group."""
group_id, variable_len_data = separate_header(cmd_data, GROUP_ID_LENGTH)
group_name_bytes, ser_members = variable_len_data.split(US_BYTE, 1)
group_name = group_name_bytes.decode()
purp_pub_keys = set(split_byte_string(ser_members, ONION_SERVICE_PUBLIC_KEY_LENGTH))
pub_keys = set(contact_list.get_list_of_pub_keys())
accepted = list(purp_pub_keys & pub_keys)
rejected = list(purp_pub_keys - pub_keys)
if len(accepted) > settings.max_number_of_group_members:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_group_members} "
f"members per group.")
if len(group_list) == settings.max_number_of_groups:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_groups} groups.")
accepted_contacts = [contact_list.get_contact_by_pub_key(k) for k in accepted]
group_list.add_group(group_name,
group_id,
settings.log_messages_by_default,
settings.show_notifications_by_default,
accepted_contacts)
group = group_list.get_group(group_name)
window = window_list.get_window(group.group_id)
window.window_contacts = accepted_contacts
window.message_log = []
window.unread_messages = 0
window.create_handle_dict()
group_management_print(NEW_GROUP, accepted, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
local_win = window_list.get_window(WIN_UID_LOCAL)
local_win.add_new(ts, f"Created new group {group_name}.")
def group_add(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings'
) -> None:
"""Add member(s) to group."""
group_id, ser_members = separate_header(cmd_data, GROUP_ID_LENGTH)
purp_pub_keys = set(split_byte_string(ser_members, ONION_SERVICE_PUBLIC_KEY_LENGTH))
try:
group_name = group_list.get_group_by_id(group_id).name
except StopIteration:
raise FunctionReturn(f"Error: No group with ID '{b58encode(group_id)}' found.")
pub_keys = set(contact_list.get_list_of_pub_keys())
before_adding = set(group_list.get_group(group_name).get_list_of_member_pub_keys())
ok_accounts = set(pub_keys & purp_pub_keys)
new_in_group_set = set(ok_accounts - before_adding)
end_assembly = list(before_adding | new_in_group_set)
already_in_g = list(purp_pub_keys & before_adding)
rejected = list(purp_pub_keys - pub_keys)
new_in_group = list(new_in_group_set)
if len(end_assembly) > settings.max_number_of_group_members:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_group_members} "
f"members per group.")
group = group_list.get_group(group_name)
group.add_members([contact_list.get_contact_by_pub_key(k) for k in new_in_group])
window = window_list.get_window(group.group_id)
window.add_contacts(new_in_group)
window.create_handle_dict()
group_management_print(ADDED_MEMBERS, new_in_group, contact_list, group_name)
group_management_print(ALREADY_MEMBER, already_in_g, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
local_win = window_list.get_window(WIN_UID_LOCAL)
local_win.add_new(ts, f"Added members to group {group_name}.")
def group_remove(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList'
) -> None:
"""Remove member(s) from the group."""
group_id, ser_members = separate_header(cmd_data, GROUP_ID_LENGTH)
purp_pub_keys = set(split_byte_string(ser_members, ONION_SERVICE_PUBLIC_KEY_LENGTH))
try:
group_name = group_list.get_group_by_id(group_id).name
except StopIteration:
raise FunctionReturn(f"Error: No group with ID '{b58encode(group_id)}' found.")
pub_keys = set(contact_list.get_list_of_pub_keys())
before_removal = set(group_list.get_group(group_name).get_list_of_member_pub_keys())
ok_accounts_set = set(purp_pub_keys & pub_keys)
removable_set = set(before_removal & ok_accounts_set)
not_in_group = list(ok_accounts_set - before_removal)
rejected = list(purp_pub_keys - pub_keys)
removable = list(removable_set)
group = group_list.get_group(group_name)
group.remove_members(removable)
window = window_list.get_window(group.group_id)
window.remove_contacts(removable)
group_management_print(REMOVED_MEMBERS, removable, contact_list, group_name)
group_management_print(NOT_IN_GROUP, not_in_group, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
local_win = window_list.get_window(WIN_UID_LOCAL)
local_win.add_new(ts, f"Removed members from group {group_name}.")
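The membership bookkeeping in `group_remove` above is plain set arithmetic; a minimal sketch, with short hypothetical byte strings standing in for the 32-byte Onion Service public keys:

```python
# Hypothetical accounts standing in for 32-byte Onion Service public keys.
known_accounts = {b'alice', b'bob', b'carol'}   # contact_list
group_members  = {b'alice', b'bob'}             # members before removal
purported      = {b'bob', b'carol', b'mallory'} # accounts named in the command

ok_accounts  = purported & known_accounts       # accounts that actually exist
removable    = group_members & ok_accounts      # members that can be removed
not_in_group = ok_accounts - group_members      # known accounts, but not members
rejected     = purported - known_accounts       # unknown accounts
```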
def group_delete(group_id: bytes,
ts: 'datetime',
window_list: 'WindowList',
group_list: 'GroupList'
) -> None:
"""Remove the group."""
if not group_list.has_group_id(group_id):
raise FunctionReturn(f"Error: No group with ID '{b58encode(group_id)}' found.")
name = group_list.get_group_by_id(group_id).name
window_list.remove_window(group_id)
group_list.remove_group_by_id(group_id)
message = f"Removed group '{name}'."
m_print(message, bold=True, head=1, tail=1)
local_win = window_list.get_window(WIN_UID_LOCAL)
local_win.add_new(ts, message)
def group_rename(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList'
) -> None:
"""Rename the group."""
group_id, new_name_bytes = separate_header(cmd_data, GROUP_ID_LENGTH)
try:
group = group_list.get_group_by_id(group_id)
except StopIteration:
raise FunctionReturn(f"Error: No group with ID '{b58encode(group_id)}' found.")
try:
new_name = new_name_bytes.decode()
except UnicodeError:
raise FunctionReturn(f"Error: New name for group '{group.name}' was invalid.")
error_msg = validate_group_name(new_name, contact_list, group_list)
if error_msg:
raise FunctionReturn(error_msg)
old_name = group.name
group.name = new_name
group_list.store_groups()
window = window_list.get_window(group.group_id)
window.name = new_name
message = f"Renamed group '{old_name}' to '{new_name}'."
local_win = window_list.get_window(WIN_UID_LOCAL)
local_win.add_new(ts, message, output=True)

src/receiver/files.py Normal file

@@ -0,0 +1,185 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os.path
import typing
import zlib
from typing import Dict, Tuple
import nacl.exceptions
from src.common.crypto import auth_and_decrypt, blake2b
from src.common.encoding import bytes_to_str
from src.common.exceptions import FunctionReturn
from src.common.misc import decompress, ensure_dir, separate_headers, separate_trailer
from src.common.output import phase, print_on_previous_line
from src.common.statics import *
if typing.TYPE_CHECKING:
from datetime import datetime
from src.common.db_contacts import ContactList
from src.common.db_settings import Settings
from src.receiver.windows import WindowList
def store_unique(file_data: bytes, # File data to store
file_dir: str, # Directory to store file
file_name: str # Preferred name for the file.
) -> str:
"""Store file under a unique filename.
If the file already exists, append a trailing counter '.#' with a value as
large as needed to ensure that no existing file is overwritten.
"""
ensure_dir(file_dir)
if os.path.isfile(file_dir + file_name):
ctr = 1
while os.path.isfile(file_dir + file_name + f'.{ctr}'):
ctr += 1
file_name += f'.{ctr}'
with open(file_dir + file_name, 'wb+') as f:
f.write(file_data)
return file_name
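The counter logic of `store_unique` can be exercised in isolation; below is a simplified, self-contained re-implementation (not the function above, which also calls `ensure_dir`), run against a temporary directory:

```python
import os
import tempfile

def store_unique_sketch(data: bytes, file_dir: str, file_name: str) -> str:
    """Store data under file_name, appending .1, .2, ... if the name is taken."""
    if os.path.isfile(os.path.join(file_dir, file_name)):
        ctr = 1
        while os.path.isfile(os.path.join(file_dir, f"{file_name}.{ctr}")):
            ctr += 1
        file_name = f"{file_name}.{ctr}"
    with open(os.path.join(file_dir, file_name), 'wb') as f:
        f.write(data)
    return file_name

tmp   = tempfile.mkdtemp()
names = [store_unique_sketch(b'data', tmp, 'doc.txt') for _ in range(3)]
```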
def process_assembled_file(ts: 'datetime', # Timestamp of the last received packet
payload: bytes, # File name and content
onion_pub_key: bytes, # Onion Service pubkey of sender
nick: str, # Nickname of sender
settings: 'Settings', # Settings object
window_list: 'WindowList', # WindowList object
) -> None:
"""Process received file assembly packets."""
try:
file_name_b, file_data = payload.split(US_BYTE, 1)
except ValueError:
raise FunctionReturn("Error: Received file had an invalid structure.")
try:
file_name = file_name_b.decode()
except UnicodeError:
raise FunctionReturn("Error: Received file name had invalid encoding.")
if not file_name.isprintable() or not file_name or '/' in file_name:
raise FunctionReturn("Error: Received file had an invalid name.")
file_ct, file_key = separate_trailer(file_data, SYMMETRIC_KEY_LENGTH)
if len(file_key) != SYMMETRIC_KEY_LENGTH:
raise FunctionReturn("Error: Received file had an invalid key.")
try:
file_pt = auth_and_decrypt(file_ct, file_key)
except nacl.exceptions.CryptoError:
raise FunctionReturn("Error: Decryption of file data failed.")
try:
file_dc = decompress(file_pt, settings.max_decompress_size)
except zlib.error:
raise FunctionReturn("Error: Decompression of file data failed.")
file_dir = f'{DIR_RECV_FILES}{nick}/'
final_name = store_unique(file_dc, file_dir, file_name)
message = f"Stored file from {nick} as '{final_name}'."
if settings.traffic_masking and window_list.active_win is not None:
window = window_list.active_win
else:
window = window_list.get_window(onion_pub_key)
window.add_new(ts, message, onion_pub_key, output=True, event_msg=True)
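Because the received blob ends with the 32-byte symmetric key, `separate_trailer` amounts to a fixed-size split from the end; a stdlib-only sketch of that split (the helper name mirrors the real one, the body is an assumption about its behavior):

```python
SYMMETRIC_KEY_LENGTH = 32  # assumed to match the constant used above

def separate_trailer_sketch(data: bytes, trailer_length: int):
    """Split data into (payload, fixed-length trailer)."""
    return data[:-trailer_length], data[-trailer_length:]

blob = b'ciphertext-bytes' + bytes(range(32))
file_ct, file_key = separate_trailer_sketch(blob, SYMMETRIC_KEY_LENGTH)
```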
def new_file(ts: 'datetime',
packet: bytes, # Sender of file and file ciphertext
file_keys: Dict[bytes, bytes], # Dictionary for file decryption keys
file_buf: Dict[bytes, Tuple['datetime', bytes]], # Dictionary for cached file ciphertexts
contact_list: 'ContactList', # ContactList object
window_list: 'WindowList', # WindowList object
settings: 'Settings' # Settings object
) -> None:
"""Validate received file and process or cache it."""
onion_pub_key, _, file_ct = separate_headers(packet, [ONION_SERVICE_PUBLIC_KEY_LENGTH, ORIGIN_HEADER_LENGTH])
if not contact_list.has_pub_key(onion_pub_key):
raise FunctionReturn("File from an unknown account.", output=False)
nick = contact_list.get_contact_by_pub_key(onion_pub_key).nick
if not contact_list.get_contact_by_pub_key(onion_pub_key).file_reception:
raise FunctionReturn(f"Alert! Discarded file from {nick} as file reception for them is disabled.", bold=True)
k = onion_pub_key + blake2b(file_ct) # Dictionary key
if k in file_keys:
decryption_key = file_keys[k]
process_file(ts, onion_pub_key, file_ct, decryption_key, contact_list, window_list, settings)
file_keys.pop(k)
else:
file_buf[k] = (ts, file_ct)
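`new_file` and the file-key handler meet through dictionaries keyed by `pub_key + BLAKE2b(ciphertext)`; a self-contained sketch of that rendezvous, using `hashlib.blake2b` as a stand-in for the project's `blake2b` wrapper:

```python
import hashlib

def dict_key(pub_key: bytes, file_ct: bytes) -> bytes:
    """Key that lets a later file key find its cached ciphertext."""
    return pub_key + hashlib.blake2b(file_ct).digest()

file_keys = {}  # dict key -> decryption key
file_buf  = {}  # dict key -> cached ciphertext

pub, ct, key = b'A' * 32, b'encrypted file', b'k' * 32

# Ciphertext arrives first: no key yet, so cache it.
k = dict_key(pub, ct)
if k in file_keys:
    processed = True
else:
    file_buf[k] = ct
    processed = False

# Key arrives later and finds the cached ciphertext.
file_keys[dict_key(pub, ct)] = key
cached = dict_key(pub, ct) in file_buf
```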
def process_file(ts: 'datetime', # Timestamp of the received packet
onion_pub_key: bytes, # Onion Service pubkey of sender
file_ct: bytes, # File ciphertext
file_key: bytes, # File decryption key
contact_list: 'ContactList', # ContactList object
window_list: 'WindowList', # WindowList object
settings: 'Settings' # Settings object
) -> None:
"""Store file received from a contact."""
nick = contact_list.get_contact_by_pub_key(onion_pub_key).nick
phase("Processing received file", head=1)
try:
file_pt = auth_and_decrypt(file_ct, file_key)
except nacl.exceptions.CryptoError:
raise FunctionReturn(f"Error: Decryption key for file from {nick} was invalid.")
try:
file_dc = decompress(file_pt, settings.max_decompress_size)
except zlib.error:
raise FunctionReturn(f"Error: Failed to decompress file from {nick}.")
phase(DONE)
print_on_previous_line(reps=2)
try:
file_name = bytes_to_str(file_dc[:PADDED_UTF32_STR_LENGTH])
except UnicodeError:
raise FunctionReturn(f"Error: Name of file from {nick} had invalid encoding.")
if not file_name.isprintable() or not file_name or '/' in file_name:
raise FunctionReturn(f"Error: Name of file from {nick} was invalid.")
f_data = file_dc[PADDED_UTF32_STR_LENGTH:]
file_dir = f'{DIR_RECV_FILES}{nick}/'
final_name = store_unique(f_data, file_dir, file_name)
message = f"Stored file from {nick} as '{final_name}'."
if settings.traffic_masking and window_list.active_win is not None:
window = window_list.active_win
else:
window = window_list.get_window(onion_pub_key)
window.add_new(ts, message, onion_pub_key, output=True, event_msg=True)

src/receiver/key_exchanges.py Normal file

@@ -0,0 +1,334 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os.path
import pipes
import readline
import struct
import subprocess
import tkinter
import typing
from typing import List, Tuple
import nacl.exceptions
from src.common.crypto import argon2_kdf, auth_and_decrypt, blake2b, csprng
from src.common.db_masterkey import MasterKey
from src.common.encoding import b58encode, bytes_to_str, pub_key_to_short_address
from src.common.exceptions import FunctionReturn
from src.common.input import get_b58_key
from src.common.misc import separate_header, separate_headers
from src.common.output import m_print, phase, print_on_previous_line
from src.common.path import ask_path_gui
from src.common.statics import *
if typing.TYPE_CHECKING:
from datetime import datetime
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_keys import KeyList
from src.common.db_settings import Settings
from src.receiver.windows import WindowList
# Local key
def process_local_key(ts: 'datetime',
packet: bytes,
window_list: 'WindowList',
contact_list: 'ContactList',
key_list: 'KeyList',
settings: 'Settings',
kdk_hashes: List[bytes],
packet_hashes: List[bytes],
l_queue: 'Queue'
) -> None:
"""Decrypt local key packet and add local contact/keyset."""
bootstrap = not key_list.has_local_keyset()
try:
packet_hash = blake2b(packet)
# Check if the packet is an old one
if packet_hash in packet_hashes:
raise FunctionReturn("Error: Received old local key packet.", output=False)
while True:
m_print("Local key setup", bold=True, head_clear=True, head=1, tail=1)
kdk = get_b58_key(B58_LOCAL_KEY, settings)
kdk_hash = blake2b(kdk)
try:
plaintext = auth_and_decrypt(packet, kdk)
break
except nacl.exceptions.CryptoError:
# Check if key was an old one
if kdk_hash in kdk_hashes:
m_print("Error: Entered an old local key decryption key.", delay=1)
continue
# Check if the kdk was for a packet further ahead in the queue
buffer = [] # type: List[Tuple[datetime, bytes]]
while l_queue.qsize() > 0:
tup = l_queue.get() # type: Tuple[datetime, bytes]
if tup not in buffer:
buffer.append(tup)
for i, tup in enumerate(buffer):
try:
plaintext = auth_and_decrypt(tup[1], kdk)
# If we reach this point, decryption was successful.
for unexamined in buffer[i+1:]:
l_queue.put(unexamined)
buffer = []
ts = tup[0]
break
except nacl.exceptions.CryptoError:
continue
else:
# Finished the buffer without finding local key CT
# for the kdk. Maybe the kdk is from another session.
raise FunctionReturn("Error: Incorrect key decryption key.", delay=1)
break
# Add local contact to contact list database
contact_list.add_contact(LOCAL_PUBKEY,
LOCAL_NICK,
KEX_STATUS_LOCAL_KEY,
bytes(FINGERPRINT_LENGTH),
bytes(FINGERPRINT_LENGTH),
False, False, True)
tx_mk, tx_hk, c_code = separate_headers(plaintext, 2 * [SYMMETRIC_KEY_LENGTH])
# Add local keyset to keyset database
key_list.add_keyset(onion_pub_key=LOCAL_PUBKEY,
tx_mk=tx_mk,
rx_mk=csprng(),
tx_hk=tx_hk,
rx_hk=csprng())
# Cache hashes needed to recognize reissued local key packets and key decryption keys.
packet_hashes.append(packet_hash)
kdk_hashes.append(kdk_hash)
# Prevent leak of KDK via terminal history / clipboard
readline.clear_history()
os.system(RESET)
root = tkinter.Tk()
root.withdraw()
try:
if root.clipboard_get() == b58encode(kdk):
root.clipboard_clear()
except tkinter.TclError:
pass
root.destroy()
m_print(["Local key successfully installed.",
f"Confirmation code (to Transmitter): {c_code.hex()}"], box=True, head=1)
local_win = window_list.get_local_window()
local_win.add_new(ts, "Added new local key.")
if bootstrap:
window_list.active_win = local_win
except (EOFError, KeyboardInterrupt):
m_print("Local key setup aborted.", bold=True, tail_clear=True, delay=1, head=2)
if window_list.active_win is not None and not bootstrap:
window_list.active_win.redraw()
raise FunctionReturn("Local key setup aborted.", output=False)
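The buffer scan above tries the entered KDK against every queued local key packet until one authenticates. A runnable sketch of that scan, using an HMAC-based stub in place of the real `auth_and_decrypt` (XChaCha20-Poly1305), purely for illustration:

```python
import hashlib
import hmac

def auth_and_decrypt_stub(packet: bytes, key: bytes) -> bytes:
    """Stand-in for authenticated decryption: payload || 32-byte HMAC tag."""
    payload, tag = packet[:-32], packet[-32:]
    if not hmac.compare_digest(hmac.new(key, payload, hashlib.sha256).digest(), tag):
        raise ValueError("decryption failed")
    return payload

def seal(payload: bytes, key: bytes) -> bytes:
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

kdk    = b'k' * 32
buffer = [seal(b'other packet', b'x' * 32), seal(b'local key', kdk)]

# Scan buffered packets for the one the entered KDK decrypts.
plaintext = None
for i, pkt in enumerate(buffer):
    try:
        plaintext = auth_and_decrypt_stub(pkt, kdk)
        leftover  = buffer[i + 1:]   # unexamined packets go back to the queue
        break
    except ValueError:
        continue
```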
def local_key_rdy(ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList') -> None:
"""Clear local key bootstrap process from the screen."""
message = "Successfully completed the local key setup."
local_win = window_list.get_local_window()
local_win.add_new(ts, message)
m_print(message, bold=True, tail_clear=True, delay=1)
if contact_list.has_contacts():
if window_list.active_win is not None and window_list.active_win.type in [WIN_TYPE_CONTACT, WIN_TYPE_GROUP]:
window_list.active_win.redraw()
else:
m_print("Waiting for new contacts", bold=True, head=1, tail=1)
# ECDHE
def key_ex_ecdhe(packet: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
key_list: 'KeyList',
settings: 'Settings'
) -> None:
"""Add contact and symmetric keys derived from X448 shared key."""
onion_pub_key, tx_mk, rx_mk, tx_hk, rx_hk, nick_bytes \
= separate_headers(packet, [ONION_SERVICE_PUBLIC_KEY_LENGTH] + 4*[SYMMETRIC_KEY_LENGTH])
try:
nick = bytes_to_str(nick_bytes)
except (struct.error, UnicodeError):
raise FunctionReturn("Error: Received invalid contact data")
contact_list.add_contact(onion_pub_key, nick,
bytes(FINGERPRINT_LENGTH),
bytes(FINGERPRINT_LENGTH),
KEX_STATUS_NONE,
settings.log_messages_by_default,
settings.accept_files_by_default,
settings.show_notifications_by_default)
key_list.add_keyset(onion_pub_key, tx_mk, rx_mk, tx_hk, rx_hk)
message = f"Successfully added {nick}."
local_win = window_list.get_local_window()
local_win.add_new(ts, message)
c_code = blake2b(onion_pub_key, digest_size=CONFIRM_CODE_LENGTH)
m_print([message, f"Confirmation code (to Transmitter): {c_code.hex()}"], box=True)
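The confirmation code shown above is a truncated BLAKE2b digest of the contact's public key. A sketch with `hashlib.blake2b`, assuming a 1-byte `CONFIRM_CODE_LENGTH` (the actual constant lives in `src.common.statics`):

```python
import hashlib

CONFIRM_CODE_LENGTH = 1  # assumed value for illustration

onion_pub_key = bytes(32)  # dummy all-zero public key
c_code   = hashlib.blake2b(onion_pub_key, digest_size=CONFIRM_CODE_LENGTH).digest()
code_str = c_code.hex()    # what the user compares against Transmitter
```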
# PSK
def key_ex_psk_tx(packet: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
key_list: 'KeyList',
settings: 'Settings'
) -> None:
"""Add contact and Tx-PSKs."""
onion_pub_key, tx_mk, _, tx_hk, _, nick_bytes \
= separate_headers(packet, [ONION_SERVICE_PUBLIC_KEY_LENGTH] + 4*[SYMMETRIC_KEY_LENGTH])
try:
nick = bytes_to_str(nick_bytes)
except (struct.error, UnicodeError):
raise FunctionReturn("Error: Received invalid contact data")
contact_list.add_contact(onion_pub_key, nick,
bytes(FINGERPRINT_LENGTH),
bytes(FINGERPRINT_LENGTH),
KEX_STATUS_NO_RX_PSK,
settings.log_messages_by_default,
settings.accept_files_by_default,
settings.show_notifications_by_default)
# The Rx-side keys are set to null-byte strings to indicate they have not
# been added yet. The zero-keys do not allow existential forgeries, as
# `decrypt_assembly_packet` does not allow the use of zero-keys for decryption.
key_list.add_keyset(onion_pub_key=onion_pub_key,
tx_mk=tx_mk,
rx_mk=bytes(SYMMETRIC_KEY_LENGTH),
tx_hk=tx_hk,
rx_hk=bytes(SYMMETRIC_KEY_LENGTH))
message = f"Added Tx-side PSK for {nick} ({pub_key_to_short_address(onion_pub_key)})."
local_win = window_list.get_local_window()
local_win.add_new(ts, message)
m_print(message, bold=True, tail_clear=True, delay=1)
def key_ex_psk_rx(packet: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
key_list: 'KeyList',
settings: 'Settings'
) -> None:
"""Import Rx-PSK of contact."""
c_code, onion_pub_key = separate_header(packet, CONFIRM_CODE_LENGTH)
short_addr = pub_key_to_short_address(onion_pub_key)
if not contact_list.has_pub_key(onion_pub_key):
raise FunctionReturn(f"Error: Unknown account '{short_addr}'.", head_clear=True)
contact = contact_list.get_contact_by_pub_key(onion_pub_key)
psk_file = ask_path_gui(f"Select PSK for {contact.nick} ({short_addr})", settings, get_file=True)
try:
with open(psk_file, 'rb') as f:
psk_data = f.read()
except PermissionError:
raise FunctionReturn("Error: No read permission for the PSK file.")
if len(psk_data) != PSK_FILE_SIZE:
raise FunctionReturn("Error: The PSK data in the file was invalid.", head_clear=True)
salt, ct_tag = separate_header(psk_data, ARGON2_SALT_LENGTH)
while True:
try:
password = MasterKey.get_password("PSK password")
phase("Deriving the key decryption key", head=2)
kdk = argon2_kdf(password, salt, rounds=ARGON2_ROUNDS, memory=ARGON2_MIN_MEMORY)
psk = auth_and_decrypt(ct_tag, kdk)
phase(DONE)
break
except nacl.exceptions.CryptoError:
print_on_previous_line()
m_print("Invalid password. Try again.", head=1)
print_on_previous_line(reps=5, delay=1)
except (EOFError, KeyboardInterrupt):
raise FunctionReturn("PSK import aborted.", head=2, delay=1, tail_clear=True)
rx_mk, rx_hk = separate_header(psk, SYMMETRIC_KEY_LENGTH)
if any(k == bytes(SYMMETRIC_KEY_LENGTH) for k in [rx_mk, rx_hk]):
raise FunctionReturn("Error: Received invalid keys from contact.", head_clear=True)
contact.kex_status = KEX_STATUS_HAS_RX_PSK
contact_list.store_contacts()
keyset = key_list.get_keyset(onion_pub_key)
keyset.rx_mk = rx_mk
keyset.rx_hk = rx_hk
key_list.store_keys()
# pipes.quote() protects against shell injection. The source of the command's
# parameter is the program itself, and therefore trusted, but quoting is
# still good practice.
subprocess.Popen(f"shred -n 3 -z -u {pipes.quote(psk_file)}", shell=True).wait()
if os.path.isfile(psk_file):
m_print(f"Warning! Overwriting of PSK ({psk_file}) failed. Press <Enter> to continue.",
manual_proceed=True, box=True)
message = f"Added Rx-side PSK for {contact.nick} ({short_addr})."
local_win = window_list.get_local_window()
local_win.add_new(ts, message)
m_print([message, '', "Warning!",
"Physically destroy the keyfile transmission media ",
"to ensure it does not steal data from this computer!", '',
f"Confirmation code (to Transmitter): {c_code.hex()}"], box=True, head=1, tail=1)
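The PSK file parsed above is a fixed layout: a 32-byte Argon2 salt followed by ciphertext+tag, and the decrypted plaintext is `rx_mk || rx_hk` with zero-keys rejected. A sketch of the splits and the sanity check (constant values assumed from the code above):

```python
ARGON2_SALT_LENGTH   = 32  # assumed to match src.common.statics
SYMMETRIC_KEY_LENGTH = 32

psk_data = bytes(range(32)) + b'ciphertext+tag...'
salt, ct_tag = psk_data[:ARGON2_SALT_LENGTH], psk_data[ARGON2_SALT_LENGTH:]

# After decryption the plaintext is rx_mk || rx_hk; zero-keys are rejected.
psk = b'm' * 32 + b'h' * 32
rx_mk, rx_hk = psk[:SYMMETRIC_KEY_LENGTH], psk[SYMMETRIC_KEY_LENGTH:]
keys_valid = not any(k == bytes(SYMMETRIC_KEY_LENGTH) for k in (rx_mk, rx_hk))
```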

src/receiver/messages.py Normal file

@@ -0,0 +1,203 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import base64
import typing
from typing import Dict
from src.common.db_logs import write_log_entry
from src.common.encoding import bytes_to_bool
from src.common.exceptions import FunctionReturn
from src.common.misc import separate_header, separate_headers
from src.common.statics import *
from src.receiver.packet import decrypt_assembly_packet
if typing.TYPE_CHECKING:
from datetime import datetime
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_keys import KeyList
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
from src.receiver.packet import PacketList
from src.receiver.windows import WindowList
def process_message(ts: 'datetime',
assembly_packet_ct: bytes,
window_list: 'WindowList',
packet_list: 'PacketList',
contact_list: 'ContactList',
key_list: 'KeyList',
group_list: 'GroupList',
settings: 'Settings',
master_key: 'MasterKey',
file_keys: Dict[bytes, bytes]
) -> None:
"""Process received private / group message."""
local_window = window_list.get_local_window()
onion_pub_key, origin, assembly_packet_ct \
= separate_headers(assembly_packet_ct, [ONION_SERVICE_PUBLIC_KEY_LENGTH, ORIGIN_HEADER_LENGTH])
if onion_pub_key == LOCAL_PUBKEY:
raise FunctionReturn("Warning! Received packet masqueraded as a command.", window=local_window)
if origin not in [ORIGIN_USER_HEADER, ORIGIN_CONTACT_HEADER]:
raise FunctionReturn("Error: Received packet had an invalid origin-header.", window=local_window)
assembly_packet = decrypt_assembly_packet(assembly_packet_ct, onion_pub_key, origin,
window_list, contact_list, key_list)
p_type = FILE if assembly_packet[:ASSEMBLY_PACKET_HEADER_LENGTH].isupper() else MESSAGE
packet = packet_list.get_packet(onion_pub_key, origin, p_type)
logging = contact_list.get_contact_by_pub_key(onion_pub_key).log_messages
def log_masking_packets(completed: bool = False) -> None:
"""Add masking packets to log file.
If logging and log file masking are enabled, in the case of erroneous
transmissions this function stores the correct number of placeholder
data packets to the log file, to hide the quantity of communication
that log file observation would otherwise reveal.
"""
if logging and settings.log_file_masking and (packet.log_masking_ctr or completed):
no_masking_packets = len(packet.assembly_pt_list) if completed else packet.log_masking_ctr
for _ in range(no_masking_packets):
write_log_entry(PLACEHOLDER_DATA, onion_pub_key, settings, master_key, origin)
packet.log_masking_ctr = 0
try:
packet.add_packet(assembly_packet)
except FunctionReturn:
log_masking_packets()
raise
log_masking_packets()
if not packet.is_complete:
return None
try:
if p_type == FILE:
packet.assemble_and_store_file(ts, onion_pub_key, window_list)
raise FunctionReturn("File storage complete.", output=False) # Raising allows calling log_masking_packets
elif p_type == MESSAGE:
whisper_byte, header, assembled = separate_headers(packet.assemble_message_packet(),
[WHISPER_FIELD_LENGTH, MESSAGE_HEADER_LENGTH])
if len(whisper_byte) != WHISPER_FIELD_LENGTH:
raise FunctionReturn("Error: Message from contact had an invalid whisper header.")
whisper = bytes_to_bool(whisper_byte)
if header == GROUP_MESSAGE_HEADER:
logging = process_group_message(assembled, ts, onion_pub_key, origin, whisper, group_list, window_list)
elif header == PRIVATE_MESSAGE_HEADER:
window = window_list.get_window(onion_pub_key)
window.add_new(ts, assembled.decode(), onion_pub_key, origin, output=True, whisper=whisper)
elif header == FILE_KEY_HEADER:
nick = process_file_key_message(assembled, onion_pub_key, origin, contact_list, file_keys)
raise FunctionReturn(f"Received file decryption key from {nick}", window=local_window)
else:
raise FunctionReturn("Error: Message from contact had an invalid header.")
if whisper:
raise FunctionReturn("Whisper message complete.", output=False)
if logging:
for p in packet.assembly_pt_list:
write_log_entry(p, onion_pub_key, settings, master_key, origin)
except (FunctionReturn, UnicodeError):
log_masking_packets(completed=True)
raise
finally:
packet.clear_assembly_packets()
def process_group_message(assembled: bytes,
ts: 'datetime',
onion_pub_key: bytes,
origin: bytes,
whisper: bool,
group_list: 'GroupList',
window_list: 'WindowList'
) -> bool:
"""Process a group message."""
group_id, assembled = separate_header(assembled, GROUP_ID_LENGTH)
if not group_list.has_group_id(group_id):
raise FunctionReturn("Error: Received message to an unknown group.", output=False)
group = group_list.get_group_by_id(group_id)
if not group.has_member(onion_pub_key):
raise FunctionReturn("Error: Account is not a member of the group.", output=False)
group_msg_id, group_message = separate_header(assembled, GROUP_MSG_ID_LENGTH)
try:
group_message_str = group_message.decode()
except UnicodeError:
raise FunctionReturn("Error: Received an invalid group message.")
window = window_list.get_window(group.group_id)
# All copies of group messages the user sends to members contain
# the same message ID. This allows the Receiver Program to ignore
# duplicates of outgoing messages sent by the user to each member.
if origin == ORIGIN_USER_HEADER:
if window.group_msg_id != group_msg_id:
window.group_msg_id = group_msg_id
window.add_new(ts, group_message_str, onion_pub_key, origin, output=True, whisper=whisper)
elif origin == ORIGIN_CONTACT_HEADER:
window.add_new(ts, group_message_str, onion_pub_key, origin, output=True, whisper=whisper)
return group.log_messages
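The duplicate suppression described above hinges on the window remembering the last seen group message ID; a minimal sketch with a stub window object:

```python
import os

class WindowStub:
    group_msg_id = b''

def show_user_copy(window: WindowStub, group_msg_id: bytes) -> bool:
    """Display a user-sent group message only once per message ID."""
    if window.group_msg_id != group_msg_id:
        window.group_msg_id = group_msg_id
        return True
    return False

w      = WindowStub()
msg_id = os.urandom(16)
first  = show_user_copy(w, msg_id)   # copy sent to member 1
second = show_user_copy(w, msg_id)   # identical copy sent to member 2
```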
def process_file_key_message(assembled: bytes,
onion_pub_key: bytes,
origin: bytes,
contact_list: 'ContactList',
file_keys: Dict[bytes, bytes]
) -> str:
"""Process received file key delivery message."""
if origin == ORIGIN_USER_HEADER:
raise FunctionReturn("File key message from the user.", output=False)
try:
decoded = base64.b85decode(assembled)
except ValueError:
raise FunctionReturn("Error: Received an invalid file key message.")
ct_hash, file_key = separate_header(decoded, BLAKE2_DIGEST_LENGTH)
if len(ct_hash) != BLAKE2_DIGEST_LENGTH or len(file_key) != SYMMETRIC_KEY_LENGTH:
raise FunctionReturn("Error: Received an invalid file key message.")
file_keys[onion_pub_key + ct_hash] = file_key
nick = contact_list.get_contact_by_pub_key(onion_pub_key).nick
return nick
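The file key message decoded above is Base85 over `ct_hash || file_key`; a round-trip sketch with stdlib `base64`, assuming a 32-byte `BLAKE2_DIGEST_LENGTH`:

```python
import base64
import hashlib

BLAKE2_DIGEST_LENGTH = 32  # assumed digest size
SYMMETRIC_KEY_LENGTH = 32

file_ct  = b'file ciphertext'
file_key = b'k' * SYMMETRIC_KEY_LENGTH
ct_hash  = hashlib.blake2b(file_ct, digest_size=BLAKE2_DIGEST_LENGTH).digest()

encoded = base64.b85encode(ct_hash + file_key)  # what the contact sends
decoded = base64.b85decode(encoded)
recv_hash, recv_key = decoded[:BLAKE2_DIGEST_LENGTH], decoded[BLAKE2_DIGEST_LENGTH:]
```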

src/receiver/output_loop.py Executable file

@@ -0,0 +1,153 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import sys
import time
import typing
from typing import Dict, List, Tuple
from src.common.exceptions import FunctionReturn
from src.common.output import clear_screen
from src.common.statics import *
from src.receiver.commands import process_command
from src.receiver.files import new_file, process_file
from src.receiver.key_exchanges import process_local_key
from src.receiver.messages import process_message
from src.receiver.packet import PacketList
from src.receiver.windows import WindowList
if typing.TYPE_CHECKING:
from datetime import datetime
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_keys import KeyList
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
from src.common.gateway import Gateway
def output_loop(queues: Dict[bytes, 'Queue'],
gateway: 'Gateway',
settings: 'Settings',
contact_list: 'ContactList',
key_list: 'KeyList',
group_list: 'GroupList',
master_key: 'MasterKey',
stdin_fd: int,
unittest: bool = False
) -> None:
"""Process packets in message queues according to their priority."""
l_queue = queues[LOCAL_KEY_DATAGRAM_HEADER]
m_queue = queues[MESSAGE_DATAGRAM_HEADER]
f_queue = queues[FILE_DATAGRAM_HEADER]
c_queue = queues[COMMAND_DATAGRAM_HEADER]
e_queue = queues[EXIT_QUEUE]
sys.stdin = os.fdopen(stdin_fd)
packet_buf = dict() # type: Dict[bytes, List[Tuple[datetime, bytes]]]
file_buf = dict() # type: Dict[bytes, Tuple[datetime, bytes]]
file_keys = dict() # type: Dict[bytes, bytes]
kdk_hashes = [] # type: List[bytes]
packet_hashes = [] # type: List[bytes]
packet_list = PacketList(settings, contact_list)
window_list = WindowList(settings, contact_list, group_list, packet_list)
clear_screen()
while True:
try:
if l_queue.qsize() != 0:
ts, packet = l_queue.get()
process_local_key(ts, packet, window_list, contact_list, key_list,
settings, kdk_hashes, packet_hashes, l_queue)
continue
if not contact_list.has_local_contact():
time.sleep(0.1)
continue
# Commands
if c_queue.qsize() != 0:
ts, packet = c_queue.get()
process_command(ts, packet, window_list, packet_list, contact_list, key_list,
group_list, settings, master_key, gateway, e_queue)
continue
# File window refresh
if window_list.active_win is not None and window_list.active_win.uid == WIN_UID_FILE:
window_list.active_win.redraw_file_win()
# Cached message packets
for onion_pub_key in packet_buf:
if (contact_list.has_pub_key(onion_pub_key)
and key_list.has_rx_mk(onion_pub_key)
and packet_buf[onion_pub_key]):
ts, packet = packet_buf[onion_pub_key].pop(0)
process_message(ts, packet, window_list, packet_list, contact_list, key_list,
group_list, settings, master_key, file_keys)
continue
# New messages
if m_queue.qsize() != 0:
ts, packet = m_queue.get()
onion_pub_key = packet[:ONION_SERVICE_PUBLIC_KEY_LENGTH]
if contact_list.has_pub_key(onion_pub_key) and key_list.has_rx_mk(onion_pub_key):
process_message(ts, packet, window_list, packet_list, contact_list, key_list,
group_list, settings, master_key, file_keys)
else:
packet_buf.setdefault(onion_pub_key, []).append((ts, packet))
continue
# Cached files
if file_buf:
for k in file_buf:
key_to_remove = b''
try:
if k in file_keys:
key_to_remove = k
ts_, file_ct = file_buf[k]
dec_key = file_keys[k]
onion_pub_key = k[:ONION_SERVICE_PUBLIC_KEY_LENGTH]
process_file(ts_, onion_pub_key, file_ct, dec_key, contact_list, window_list, settings)
finally:
if key_to_remove:
file_buf.pop(k)
file_keys.pop(k)
break
# New files
if f_queue.qsize() != 0:
ts, packet = f_queue.get()
new_file(ts, packet, file_keys, file_buf, contact_list, window_list, settings)
time.sleep(0.01)
if unittest and queues[UNITTEST_QUEUE].qsize() != 0:
break
except (FunctionReturn, KeyError, KeyboardInterrupt):
pass
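The `packet_buf` handling above buffers messages from contacts whose keys have not yet arrived, then flushes them once the key exchange completes; a stripped-down sketch of that pattern:

```python
packet_buf = {}   # pub_key -> list of buffered packets
have_keys  = set()

def receive(pub_key: bytes, packet: bytes, delivered: list) -> None:
    """Deliver immediately if keys exist, otherwise buffer per contact."""
    if pub_key in have_keys:
        delivered.append(packet)
    else:
        packet_buf.setdefault(pub_key, []).append(packet)

delivered = []
receive(b'A', b'msg1', delivered)   # no key yet: buffered
have_keys.add(b'A')                 # key exchange completes
while packet_buf.get(b'A'):         # flush cached packets in order
    delivered.append(packet_buf[b'A'].pop(0))
receive(b'A', b'msg2', delivered)   # delivered directly
```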

src/receiver/packet.py Normal file

@@ -0,0 +1,429 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import struct
import typing
import zlib
from datetime import datetime, timedelta
from typing import Any, Callable, Dict, Generator, Iterable, List, Optional, Sized
import nacl.exceptions
from src.common.crypto import auth_and_decrypt, blake2b, rm_padding_bytes
from src.common.encoding import bytes_to_int, int_to_bytes
from src.common.exceptions import FunctionReturn
from src.common.input import yes
from src.common.misc import decompress, readable_size, separate_header, separate_headers, separate_trailer
from src.common.output import m_print
from src.common.statics import *
from src.receiver.files import process_assembled_file
if typing.TYPE_CHECKING:
from src.common.db_contacts import Contact, ContactList
from src.common.db_keys import KeyList
from src.common.db_settings import Settings
from src.receiver.windows import RxWindow, WindowList
def process_offset(offset: int, # Number of dropped packets
origin: bytes, # "to/from" preposition
direction: str, # Direction of packet
nick: str, # Nickname of associated contact
window: 'RxWindow' # RxWindow object
) -> None:
"""Display warnings about increased offsets.
If the offset has increased over the threshold, ask the user to
confirm hash ratchet catch up.
"""
if offset > HARAC_WARN_THRESHOLD and origin == ORIGIN_CONTACT_HEADER:
m_print([f"Warning! {offset} packets from {nick} were not received.",
f"This might indicate that the {offset} most recent packets were ",
"lost during transmission, or that the contact is attempting ",
"a DoS attack. You can wait for TFC to attempt to decrypt the ",
"packet, but it might take a very long time or even forever."])
if not yes("Proceed with the decryption?", abort=False, tail=1):
raise FunctionReturn(f"Dropped packet from {nick}.", window=window)
elif offset:
m_print(f"Warning! {offset} packet{'s' if offset > 1 else ''} {direction} {nick} were not received.")
def decrypt_assembly_packet(packet:        bytes,          # Assembly packet ciphertext
                            onion_pub_key: bytes,          # Onion Service pubkey of associated contact
                            origin:        bytes,          # Origin of the packet (user / contact)
                            window_list:   'WindowList',   # WindowList object
                            contact_list:  'ContactList',  # ContactList object
                            key_list:      'KeyList'       # KeyList object
                            ) -> bytes:                    # Decrypted assembly packet
"""Decrypt assembly packet from contact/local Transmitter."""
    ct_harac, ct_assembly_packet = separate_header(packet, header_length=HARAC_CT_LENGTH)

    local_window = window_list.get_local_window()
    command      = onion_pub_key == LOCAL_PUBKEY
    p_type       = "command" if command else "packet"
    direction    = "from" if command or (origin == ORIGIN_CONTACT_HEADER) else "sent to"
    nick         = contact_list.get_contact_by_pub_key(onion_pub_key).nick

    # Load keys
    keyset      = key_list.get_keyset(onion_pub_key)
    key_dir     = TX if origin == ORIGIN_USER_HEADER else RX
    header_key  = getattr(keyset, f'{key_dir}_hk')  # type: bytes
    message_key = getattr(keyset, f'{key_dir}_mk')  # type: bytes

    if any(k == bytes(SYMMETRIC_KEY_LENGTH) for k in [header_key, message_key]):
        raise FunctionReturn("Warning! Loaded zero-key for packet decryption.")

    # Decrypt hash ratchet counter
    try:
        harac_bytes = auth_and_decrypt(ct_harac, header_key)
    except nacl.exceptions.CryptoError:
        raise FunctionReturn(
            f"Warning! Received {p_type} {direction} {nick} had an invalid hash ratchet MAC.", window=local_window)

    # Catch up with hash ratchet offset
    purp_harac   = bytes_to_int(harac_bytes)
    stored_harac = getattr(keyset, f'{key_dir}_harac')
    offset       = purp_harac - stored_harac
    if offset < 0:
        raise FunctionReturn(
            f"Warning! Received {p_type} {direction} {nick} had an expired hash ratchet counter.", window=local_window)

    process_offset(offset, origin, direction, nick, local_window)
    for harac in range(stored_harac, stored_harac + offset):
        message_key = blake2b(message_key + int_to_bytes(harac), digest_size=SYMMETRIC_KEY_LENGTH)

    # Decrypt packet
    try:
        assembly_packet = auth_and_decrypt(ct_assembly_packet, message_key)
except nacl.exceptions.CryptoError:
raise FunctionReturn(f"Warning! Received {p_type} {direction} {nick} had an invalid MAC.", window=local_window)
# Update message key and harac
keyset.update_mk(key_dir,
blake2b(message_key + int_to_bytes(stored_harac + offset), digest_size=SYMMETRIC_KEY_LENGTH),
offset + 1)
return assembly_packet
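The hash ratchet catch-up above derives each successive message key by hashing the previous key together with the encoded counter. A minimal stand-alone sketch of that derivation, using `hashlib.blake2b` directly; the project's `blake2b` wrapper and `int_to_bytes` encoding are assumed to behave like the stand-ins below:

```python
import hashlib

SYMMETRIC_KEY_LENGTH = 32  # assumed key/digest size in bytes


def int_to_bytes(i: int) -> bytes:
    """Stand-in for src.common.encoding.int_to_bytes (8-byte big-endian assumed)."""
    return i.to_bytes(8, 'big')


def catch_up_message_key(message_key: bytes, stored_harac: int, offset: int) -> bytes:
    """Advance the hash ratchet `offset` steps starting from `stored_harac`."""
    for harac in range(stored_harac, stored_harac + offset):
        message_key = hashlib.blake2b(message_key + int_to_bytes(harac),
                                      digest_size=SYMMETRIC_KEY_LENGTH).digest()
    return message_key


# Catching up three steps at once equals one step followed by two more:
key = bytes(SYMMETRIC_KEY_LENGTH)
assert catch_up_message_key(key, 0, 3) == catch_up_message_key(
    catch_up_message_key(key, 0, 1), 1, 2)
```

Because each step is a one-way hash, a compromised current key does not reveal earlier message keys, which is the point of ratcheting forward on every packet.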
class Packet(object):
"""Packet objects collect and keep track of received assembly packets."""
def __init__(self,
onion_pub_key: bytes, # Public key of the contact associated with the packet <─┐
origin: bytes, # Origin of packet (user, contact) <─┼─ Form packet UID
p_type: str, # Packet type (message, file, command) <─┘
contact: 'Contact', # Contact object of contact associated with the packet
settings: 'Settings' # Settings object
) -> None:
"""Create a new Packet object."""
self.onion_pub_key = onion_pub_key
self.contact = contact
self.origin = origin
self.type = p_type
self.settings = settings
# File transmission metadata
self.packets = None # type: Optional[int]
self.time = None # type: Optional[str]
self.size = None # type: Optional[str]
self.name = None # type: Optional[str]
self.sh = {MESSAGE: M_S_HEADER, FILE: F_S_HEADER, COMMAND: C_S_HEADER}[self.type]
self.lh = {MESSAGE: M_L_HEADER, FILE: F_L_HEADER, COMMAND: C_L_HEADER}[self.type]
self.ah = {MESSAGE: M_A_HEADER, FILE: F_A_HEADER, COMMAND: C_A_HEADER}[self.type]
self.eh = {MESSAGE: M_E_HEADER, FILE: F_E_HEADER, COMMAND: C_E_HEADER}[self.type]
self.ch = {MESSAGE: M_C_HEADER, FILE: F_C_HEADER, COMMAND: C_C_HEADER}[self.type]
self.nh = {MESSAGE: P_N_HEADER, FILE: P_N_HEADER, COMMAND: C_N_HEADER}[self.type]
self.log_masking_ctr = 0 # type: int
self.assembly_pt_list = [] # type: List[bytes]
self.log_ct_list = [] # type: List[bytes]
self.long_active = False
self.is_complete = False
def add_masking_packet_to_log_file(self, increase: int = 1) -> None:
"""Increase `log_masking_ctr` for message and file packets."""
if self.type in [MESSAGE, FILE]:
self.log_masking_ctr += increase
def clear_file_metadata(self) -> None:
"""Clear file metadata."""
self.packets = None
self.time = None
self.size = None
self.name = None
def clear_assembly_packets(self) -> None:
"""Clear packet state."""
self.assembly_pt_list = []
self.log_ct_list = []
self.long_active = False
self.is_complete = False
def new_file_packet(self) -> None:
"""New file transmission handling logic."""
name = self.name
was_active = self.long_active
self.clear_file_metadata()
self.clear_assembly_packets()
if self.origin == ORIGIN_USER_HEADER:
self.add_masking_packet_to_log_file()
raise FunctionReturn("Ignored file from the user.", output=False)
if not self.contact.file_reception:
self.add_masking_packet_to_log_file()
raise FunctionReturn(f"Alert! File transmission from {self.contact.nick} but reception is disabled.")
if was_active:
m_print(f"Alert! File '{name}' from {self.contact.nick} never completed.", head=1, tail=1)
def check_long_packet(self) -> None:
"""Check if the long packet has permission to be extended."""
if not self.long_active:
self.add_masking_packet_to_log_file()
raise FunctionReturn("Missing start packet.", output=False)
if self.type == FILE and not self.contact.file_reception:
self.add_masking_packet_to_log_file(increase=len(self.assembly_pt_list) + 1)
self.clear_assembly_packets()
raise FunctionReturn("Alert! File reception disabled mid-transfer.")
def process_short_header(self,
packet: bytes,
packet_ct: Optional[bytes] = None
) -> None:
"""Process short packet."""
if self.long_active:
self.add_masking_packet_to_log_file(increase=len(self.assembly_pt_list))
if self.type == FILE:
self.new_file_packet()
sh, _, packet = separate_headers(packet, [ASSEMBLY_PACKET_HEADER_LENGTH] + [2*ENCODED_INTEGER_LENGTH])
packet = sh + packet
self.assembly_pt_list = [packet]
self.long_active = False
self.is_complete = True
if packet_ct is not None:
self.log_ct_list = [packet_ct]
def process_long_header(self,
packet: bytes,
packet_ct: Optional[bytes] = None
) -> None:
"""Process first packet of long transmission."""
if self.long_active:
self.add_masking_packet_to_log_file(increase=len(self.assembly_pt_list))
if self.type == FILE:
self.new_file_packet()
try:
lh, no_p_bytes, time_bytes, size_bytes, packet \
= separate_headers(packet, [ASSEMBLY_PACKET_HEADER_LENGTH] + 3*[ENCODED_INTEGER_LENGTH])
self.packets = bytes_to_int(no_p_bytes) # added by transmitter.packet.split_to_assembly_packets
self.time = str(timedelta(seconds=bytes_to_int(time_bytes)))
self.size = readable_size(bytes_to_int(size_bytes))
self.name = packet.split(US_BYTE)[0].decode()
packet = lh + packet
m_print([f'Receiving file from {self.contact.nick}:',
f'{self.name} ({self.size})',
f'ETA {self.time} ({self.packets} packets)'], bold=True)
except (struct.error, UnicodeError, ValueError):
self.add_masking_packet_to_log_file()
raise FunctionReturn("Error: Received file packet had an invalid header.")
self.assembly_pt_list = [packet]
self.long_active = True
self.is_complete = False
if packet_ct is not None:
self.log_ct_list = [packet_ct]
def process_append_header(self,
packet: bytes,
packet_ct: Optional[bytes] = None
) -> None:
"""Process consecutive packet(s) of long transmission."""
self.check_long_packet()
self.assembly_pt_list.append(packet)
if packet_ct is not None:
self.log_ct_list.append(packet_ct)
def process_end_header(self,
packet: bytes,
packet_ct: Optional[bytes] = None
) -> None:
"""Process last packet of long transmission."""
self.check_long_packet()
self.assembly_pt_list.append(packet)
self.long_active = False
self.is_complete = True
if packet_ct is not None:
self.log_ct_list.append(packet_ct)
def abort_packet(self, cancel: bool = False) -> None:
"""Process cancel/noise packet."""
if self.type == FILE and self.origin == ORIGIN_CONTACT_HEADER and self.long_active:
if cancel:
message = f"{self.contact.nick} cancelled file."
else:
message = f"Alert! File '{self.name}' from {self.contact.nick} never completed."
m_print(message, head=1, tail=1)
self.clear_file_metadata()
self.add_masking_packet_to_log_file(increase=len(self.assembly_pt_list) + 1)
self.clear_assembly_packets()
def process_cancel_header(self, *_: Any) -> None:
"""Process cancel packet for long transmission."""
self.abort_packet(cancel=True)
def process_noise_header(self, *_: Any) -> None:
"""Process traffic masking noise packet."""
self.abort_packet()
def add_packet(self,
packet: bytes,
packet_ct: Optional[bytes] = None
) -> None:
"""Add a new assembly packet to the object."""
try:
func_d = {self.sh: self.process_short_header,
self.lh: self.process_long_header,
self.ah: self.process_append_header,
self.eh: self.process_end_header,
self.ch: self.process_cancel_header,
self.nh: self.process_noise_header
} # type: Dict[bytes, Callable]
func = func_d[packet[:ASSEMBLY_PACKET_HEADER_LENGTH]]
except KeyError:
# Erroneous headers are ignored but stored as placeholder data.
self.add_masking_packet_to_log_file()
raise FunctionReturn("Error: Received packet had an invalid assembly packet header.")
func(packet, packet_ct)
def assemble_message_packet(self) -> bytes:
"""Assemble message packet."""
padded = b''.join([p[ASSEMBLY_PACKET_HEADER_LENGTH:] for p in self.assembly_pt_list])
payload = rm_padding_bytes(padded)
if len(self.assembly_pt_list) > 1:
msg_ct, msg_key = separate_trailer(payload, SYMMETRIC_KEY_LENGTH)
try:
payload = auth_and_decrypt(msg_ct, msg_key)
except nacl.exceptions.CryptoError:
raise FunctionReturn("Error: Decryption of message failed.")
try:
return decompress(payload, MAX_MESSAGE_SIZE)
except zlib.error:
raise FunctionReturn("Error: Decompression of message failed.")
def assemble_and_store_file(self,
ts: 'datetime',
onion_pub_key: bytes,
window_list: 'WindowList'
) -> None:
"""Assemble file packet and store it."""
padded = b''.join([p[ASSEMBLY_PACKET_HEADER_LENGTH:] for p in self.assembly_pt_list])
payload = rm_padding_bytes(padded)
process_assembled_file(ts, payload, onion_pub_key, self.contact.nick, self.settings, window_list)
def assemble_command_packet(self) -> bytes:
"""Assemble command packet."""
padded = b''.join([p[ASSEMBLY_PACKET_HEADER_LENGTH:] for p in self.assembly_pt_list])
payload = rm_padding_bytes(padded)
if len(self.assembly_pt_list) > 1:
payload, cmd_hash = separate_trailer(payload, BLAKE2_DIGEST_LENGTH)
if blake2b(payload) != cmd_hash:
raise FunctionReturn("Error: Received an invalid command.")
try:
return decompress(payload, self.settings.max_decompress_size)
except zlib.error:
raise FunctionReturn("Error: Decompression of command failed.")
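As `assemble_command_packet` shows, long commands carry a BLAKE2b digest as a trailer that is verified before decompression. The check can be sketched in isolation with `hashlib` (the 64-byte `BLAKE2_DIGEST_LENGTH` is an assumed value matching `hashlib.blake2b`'s default digest size):

```python
import hashlib

BLAKE2_DIGEST_LENGTH = 64  # assumed: hashlib.blake2b's default digest size


def verify_command(payload_with_hash: bytes) -> bytes:
    """Split off the BLAKE2b trailer and verify it against the payload."""
    payload  = payload_with_hash[:-BLAKE2_DIGEST_LENGTH]
    cmd_hash = payload_with_hash[-BLAKE2_DIGEST_LENGTH:]
    if hashlib.blake2b(payload).digest() != cmd_hash:
        raise ValueError("Error: Received an invalid command.")
    return payload


cmd = b'example command'
assert verify_command(cmd + hashlib.blake2b(cmd).digest()) == cmd
```

The digest here protects against reassembly errors rather than active attackers; authenticity is already provided by the per-packet MACs.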
class PacketList(Iterable, Sized):
"""PacketList manages all file, message, and command packets."""
def __init__(self,
settings: 'Settings',
contact_list: 'ContactList'
) -> None:
"""Create a new PacketList object."""
self.settings = settings
self.contact_list = contact_list
self.packets = [] # type: List[Packet]
def __iter__(self) -> Generator:
"""Iterate over packet list."""
yield from self.packets
def __len__(self) -> int:
"""Return number of packets in the packet list."""
return len(self.packets)
def has_packet(self,
onion_pub_key: bytes,
origin: bytes,
p_type: str
) -> bool:
"""Return True if a packet with matching selectors exists, else False."""
return any(p for p in self.packets if (p.onion_pub_key == onion_pub_key
and p.origin == origin
and p.type == p_type))
def get_packet(self,
onion_pub_key: bytes,
origin: bytes,
p_type: str,
log_access: bool = False
) -> Packet:
"""Get packet based on Onion Service public key, origin, and type.
If the packet does not exist, create it.
"""
if not self.has_packet(onion_pub_key, origin, p_type):
if log_access:
contact = self.contact_list.generate_dummy_contact()
else:
contact = self.contact_list.get_contact_by_pub_key(onion_pub_key)
self.packets.append(Packet(onion_pub_key, origin, p_type, contact, self.settings))
return next(p for p in self.packets if (p.onion_pub_key == onion_pub_key
and p.origin == origin
and p.type == p_type))
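The header-splitting helpers used throughout this file (`separate_header`, `separate_headers`, `separate_trailer`) are imported from `src.common.misc`. Based on their call sites, they are assumed to slice fixed-length fields off a byte string, roughly as follows:

```python
from typing import List, Tuple


def separate_headers(data: bytes, header_lengths: List[int]) -> Tuple[bytes, ...]:
    """Split fixed-length headers off the front of `data`; the remainder is last."""
    fields = []
    for length in header_lengths:
        fields.append(data[:length])
        data = data[length:]
    return (*fields, data)


def separate_trailer(data: bytes, trailer_length: int) -> Tuple[bytes, bytes]:
    """Split a fixed-length trailer off the end of `data`."""
    return data[:-trailer_length], data[-trailer_length:]


# A 1-byte header, an 8-byte counter, then the payload:
header, ctr, payload = separate_headers(b'H' + bytes(8) + b'payload', [1, 8])
assert (header, payload) == (b'H', b'payload')
```

This is a sketch of the assumed behavior, not the project's actual implementation.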

src/receiver/receiver_loop.py (new executable file, 72 lines)
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import struct
import time
import typing
from datetime import datetime
from typing import Dict
from src.common.encoding import bytes_to_int
from src.common.exceptions import FunctionReturn
from src.common.misc import ignored, separate_headers
from src.common.output import m_print
from src.common.statics import *
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.gateway import Gateway
def receiver_loop(queues: Dict[bytes, 'Queue'],
gateway: 'Gateway',
unittest: bool = False
) -> None:
"""Decode received packets and forward them to packet queues."""
gateway_queue = queues[GATEWAY_QUEUE]
while True:
with ignored(EOFError, KeyboardInterrupt):
if gateway_queue.qsize() == 0:
time.sleep(0.01)
_, packet = gateway_queue.get()
try:
packet = gateway.detect_errors(packet)
except FunctionReturn:
continue
header, ts_bytes, payload = separate_headers(packet, [DATAGRAM_HEADER_LENGTH, DATAGRAM_TIMESTAMP_LENGTH])
try:
ts = datetime.strptime(str(bytes_to_int(ts_bytes)), "%Y%m%d%H%M%S%f")
except (ValueError, struct.error):
m_print("Error: Failed to decode timestamp in the received packet.", head=1, tail=1)
continue
if header in [MESSAGE_DATAGRAM_HEADER, FILE_DATAGRAM_HEADER,
COMMAND_DATAGRAM_HEADER, LOCAL_KEY_DATAGRAM_HEADER]:
queues[header].put((ts, payload))
if unittest:
break
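The datagram timestamp parsed above is an integer encoding of `%Y%m%d%H%M%S%f`, truncated to centisecond precision on the sending side (see the `[:-4]` slices in `client.py`). A round-trip sketch:

```python
from datetime import datetime


def encode_timestamp(ts: datetime) -> int:
    # Drop the last four digits of %f: centisecond precision.
    return int(ts.strftime('%Y%m%d%H%M%S%f')[:-4])


def decode_timestamp(encoded: int) -> datetime:
    # strptime's %f accepts one to six digits and zero-pads on the right.
    return datetime.strptime(str(encoded), '%Y%m%d%H%M%S%f')


ts = datetime(2019, 1, 24, 4, 1, 0, 123456)
assert decode_timestamp(encode_timestamp(ts)) == ts.replace(microsecond=120000)
```

The truncation is why the receiver's `strptime` call works on a 16-digit integer even though `%f` nominally encodes six microsecond digits.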

src/receiver/windows.py (new file, 373 lines)
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import sys
import textwrap
import typing
from datetime import datetime
from typing import Any, Dict, Generator, Iterable, List, Optional, Tuple
from src.common.encoding import pub_key_to_short_address
from src.common.exceptions import FunctionReturn
from src.common.misc import get_terminal_width
from src.common.output import clear_screen, m_print, print_on_previous_line
from src.common.statics import *
if typing.TYPE_CHECKING:
from src.common.db_contacts import Contact, ContactList
from src.common.db_groups import GroupList
from src.common.db_settings import Settings
from src.receiver.packet import PacketList
MsgTuple = Tuple[datetime, str, bytes, bytes, bool, bool]
class RxWindow(Iterable):
    """RxWindow is an ephemeral message log for a contact or group.

    In addition, command history and file transfers have their own
    windows, accessible with separate commands.
    """
def __init__(self,
uid: bytes,
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
packet_list: 'PacketList'
) -> None:
"""Create a new RxWindow object."""
self.uid = uid
self.contact_list = contact_list
self.group_list = group_list
self.settings = settings
self.packet_list = packet_list
self.is_active = False
self.contact = None
self.group = None
self.group_msg_id = os.urandom(GROUP_MSG_ID_LENGTH)
self.window_contacts = [] # type: List[Contact]
self.message_log = [] # type: List[MsgTuple]
self.handle_dict = dict() # type: Dict[bytes, str]
self.previous_msg_ts = datetime.now()
self.unread_messages = 0
if self.uid == WIN_UID_LOCAL:
self.type = WIN_TYPE_COMMAND
self.name = self.type
self.window_contacts = []
elif self.uid == WIN_UID_FILE:
self.type = WIN_TYPE_FILE
self.packet_list = packet_list
elif self.uid in self.contact_list.get_list_of_pub_keys():
self.type = WIN_TYPE_CONTACT
self.contact = self.contact_list.get_contact_by_pub_key(uid)
self.name = self.contact.nick
self.window_contacts = [self.contact]
elif self.uid in self.group_list.get_list_of_group_ids():
self.type = WIN_TYPE_GROUP
self.group = self.group_list.get_group_by_id(self.uid)
self.name = self.group.name
self.window_contacts = self.group.members
else:
raise FunctionReturn(f"Invalid window '{uid}'.")
def __iter__(self) -> Generator:
"""Iterate over window's message log."""
yield from self.message_log
def __len__(self) -> int:
"""Return number of message tuples in the message log."""
return len(self.message_log)
def add_contacts(self, pub_keys: List[bytes]) -> None:
"""Add contact objects to the window."""
self.window_contacts += [self.contact_list.get_contact_by_pub_key(k) for k in pub_keys
if not self.has_contact(k) and self.contact_list.has_pub_key(k)]
def remove_contacts(self, pub_keys: List[bytes]) -> None:
"""Remove contact objects from the window."""
to_remove = set(pub_keys) & set([m.onion_pub_key for m in self.window_contacts])
if to_remove:
self.window_contacts = [c for c in self.window_contacts if c.onion_pub_key not in to_remove]
def reset_window(self) -> None:
"""Reset the ephemeral message log of the window."""
self.message_log = []
def has_contact(self, onion_pub_key: bytes) -> bool:
"""\
Return True if contact with the specified public key is in the
window, else False.
"""
return any(onion_pub_key == c.onion_pub_key for c in self.window_contacts)
def update_handle_dict(self, pub_key: bytes) -> None:
"""Update handle for public key in `handle_dict`."""
if self.contact_list.has_pub_key(pub_key):
self.handle_dict[pub_key] = self.contact_list.get_contact_by_pub_key(pub_key).nick
else:
self.handle_dict[pub_key] = pub_key_to_short_address(pub_key)
def create_handle_dict(self, message_log: Optional[List[MsgTuple]] = None) -> None:
        """Pre-generate the {account: handle} dictionary.

        Pre-generation allows `self.print()` to indent accounts and
        nicks without having to loop over the entire message list for
        every message to determine the required indentation.
        """
pub_keys = set(c.onion_pub_key for c in self.window_contacts)
if message_log is not None:
pub_keys |= set(tup[2] for tup in message_log)
for k in pub_keys:
self.update_handle_dict(k)
def get_handle(self,
time_stamp: 'datetime', # Timestamp of message to be printed
onion_pub_key: bytes, # Onion Service public key of contact (used as lookup for handles)
origin: bytes, # Determines whether to use "Me" or nick of contact as handle
whisper: bool = False, # When True, displays (whisper) specifier next to handle
event_msg: bool = False # When True, sets handle to "-!-"
) -> str: # Handle to use
"""Returns indented handle complete with headers and trailers."""
time_stamp_str = time_stamp.strftime('%H:%M:%S.%f')[:-4]
if onion_pub_key == WIN_UID_LOCAL or event_msg:
handle = EVENT
ending = ' '
else:
handle = self.handle_dict[onion_pub_key] if origin == ORIGIN_CONTACT_HEADER else ME
handles = list(self.handle_dict.values()) + [ME]
indent = max(len(v) for v in handles) - len(handle) if self.is_active else 0
handle = indent * ' ' + handle
# Handle specifiers for messages to inactive window
if not self.is_active:
handle += {WIN_TYPE_GROUP: f" (group {self.name})",
WIN_TYPE_CONTACT: f" (private message)"}.get(self.type, '')
if whisper:
handle += " (whisper)"
ending = ': '
handle = f"{time_stamp_str} {handle}{ending}"
return handle
def print(self, msg_tuple: MsgTuple, file: Any = None) -> None:
"""Print a new message to the window."""
# Unpack tuple
ts, message, onion_pub_key, origin, whisper, event_msg = msg_tuple
# Determine handle
handle = self.get_handle(ts, onion_pub_key, origin, whisper, event_msg)
# Check if message content needs to be changed to privacy-preserving notification
if not self.is_active and not self.settings.new_message_notify_preview and self.uid != WIN_UID_LOCAL:
trailer = 's' if self.unread_messages > 0 else ''
message = BOLD_ON + f"{self.unread_messages + 1} unread message{trailer}" + NORMAL_TEXT
# Wrap message
wrapper = textwrap.TextWrapper(width=get_terminal_width(),
initial_indent=handle,
subsequent_indent=len(handle)*' ')
wrapped = wrapper.fill(message)
if wrapped == '':
wrapped = handle
# Add bolding unless export file is provided
bold_on, bold_off, f_name = (BOLD_ON, NORMAL_TEXT, sys.stdout) if file is None else ('', '', file)
wrapped = bold_on + wrapped[:len(handle)] + bold_off + wrapped[len(handle):]
if self.is_active:
if self.previous_msg_ts.date() != ts.date():
print(bold_on + f"00:00 -!- Day changed to {str(ts.date())}" + bold_off, file=f_name)
print(wrapped, file=f_name)
else:
if onion_pub_key != WIN_UID_LOCAL:
self.unread_messages += 1
if (self.type == WIN_TYPE_CONTACT and self.contact is not None and self.contact.notifications) \
or (self.type == WIN_TYPE_GROUP and self.group is not None and self.group.notifications) \
or (self.type == WIN_TYPE_COMMAND):
lines = wrapped.split('\n')
if len(lines) > 1:
                    print(lines[0][:-1] + '…')  # Preview only the first line of a long message
else:
print(wrapped)
print_on_previous_line(delay=self.settings.new_message_notify_duration, flush=True)
self.previous_msg_ts = ts
def add_new(self,
timestamp: 'datetime', # The timestamp of the received message
message: str, # The content of the message
onion_pub_key: bytes = WIN_UID_LOCAL, # The Onion Service public key of associated contact
origin: bytes = ORIGIN_USER_HEADER, # The direction of the message
output: bool = False, # When True, displays message while adding it to message_log
whisper: bool = False, # When True, displays message as whisper message
event_msg: bool = False # When True, uses "-!-" as message handle
) -> None:
"""Add message tuple to message log and optionally print it."""
self.update_handle_dict(onion_pub_key)
msg_tuple = (timestamp, message, onion_pub_key, origin, whisper, event_msg)
self.message_log.append(msg_tuple)
if output:
self.print(msg_tuple)
def redraw(self, file: Any = None) -> None:
"""Print all messages received to the window."""
old_messages = len(self.message_log) - self.unread_messages
self.unread_messages = 0
if file is None:
clear_screen()
if self.message_log:
self.previous_msg_ts = self.message_log[-1][0]
self.create_handle_dict(self.message_log)
for i, msg_tuple in enumerate(self.message_log):
if i == old_messages:
print('\n' + ' Unread Messages '.center(get_terminal_width(), '-') + '\n')
self.print(msg_tuple, file)
else:
m_print(f"This window for {self.name} is currently empty.", bold=True, head=1, tail=1)
def redraw_file_win(self) -> None:
"""Draw file transmission window progress bars."""
# Initialize columns
c1 = ['File name']
c2 = ['Size']
c3 = ['Sender']
c4 = ['Complete']
# Populate columns with file transmission status data
for i, p in enumerate(self.packet_list):
if p.type == FILE and len(p.assembly_pt_list) > 0:
c1.append(p.name)
c2.append(p.size)
c3.append(p.contact.nick)
c4.append(f"{len(p.assembly_pt_list) / p.packets * 100:.2f}%")
if not len(c1) > 1:
m_print("No file transmissions currently in progress.", bold=True, head=1, tail=1)
print_on_previous_line(reps=3, delay=0.1)
return None
# Calculate column widths
c1w, c2w, c3w, c4w = [max(len(v) for v in column) + FILE_TRANSFER_INDENT for column in [c1, c2, c3, c4]]
# Align columns by adding whitespace between fields of each line
lines = [f'{f1:{c1w}}{f2:{c2w}}{f3:{c3w}}{f4:{c4w}}' for f1, f2, f3, f4 in zip(c1, c2, c3, c4)]
# Add a terminal-wide line between the column names and the data
        lines.insert(1, get_terminal_width() * '─')
# Print the file transfer list
print('\n' + '\n'.join(lines) + '\n')
print_on_previous_line(reps=len(lines)+2, delay=0.1)
class WindowList(Iterable):
"""WindowList manages a list of Window objects."""
def __init__(self,
settings: 'Settings',
contact_list: 'ContactList',
group_list: 'GroupList',
packet_list: 'PacketList'
) -> None:
"""Create a new WindowList object."""
self.settings = settings
self.contact_list = contact_list
self.group_list = group_list
self.packet_list = packet_list
self.active_win = None # type: Optional[RxWindow]
self.windows = [RxWindow(uid, self.contact_list, self.group_list, self.settings, self.packet_list)
for uid in ([WIN_UID_LOCAL, WIN_UID_FILE]
+ self.contact_list.get_list_of_pub_keys()
+ self.group_list.get_list_of_group_ids())]
if self.contact_list.has_local_contact():
self.set_active_rx_window(WIN_UID_LOCAL)
def __iter__(self) -> Generator:
"""Iterate over window list."""
yield from self.windows
def __len__(self) -> int:
"""Return number of windows in the window list."""
return len(self.windows)
def has_window(self, uid: bytes) -> bool:
"""Return True if a window with matching UID exists, else False."""
return any(w.uid == uid for w in self.windows)
def remove_window(self, uid: bytes) -> None:
"""Remove window based on its UID."""
for i, w in enumerate(self.windows):
if uid == w.uid:
del self.windows[i]
break
def get_group_windows(self) -> List[RxWindow]:
"""Return list of group windows."""
return [w for w in self.windows if w.type == WIN_TYPE_GROUP]
def get_window(self, uid: bytes) -> 'RxWindow':
"""Return window that matches the specified UID.
Create window if it does not exist.
"""
if not self.has_window(uid):
self.windows.append(RxWindow(uid, self.contact_list, self.group_list, self.settings, self.packet_list))
return next(w for w in self.windows if w.uid == uid)
def get_local_window(self) -> 'RxWindow':
"""Return command window."""
return self.get_window(WIN_UID_LOCAL)
def set_active_rx_window(self, uid: bytes) -> None:
"""Select new active window."""
if self.active_win is not None:
self.active_win.is_active = False
self.active_win = self.get_window(uid)
self.active_win.is_active = True
if self.active_win.uid == WIN_UID_FILE:
self.active_win.redraw_file_win()
else:
self.active_win.redraw()
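`RxWindow.print()` relies on `textwrap.TextWrapper` with `initial_indent`/`subsequent_indent` to produce the hanging-indent message layout, where continuation lines align under the message body rather than the handle. In isolation (the handle string below is illustrative):

```python
import textwrap

handle = "04:01:00.12          Alice: "  # hypothetical timestamp + indented nick
wrapper = textwrap.TextWrapper(width=40,
                               initial_indent=handle,
                               subsequent_indent=len(handle) * ' ')
wrapped = wrapper.fill("a message long enough to wrap onto a second line")
lines   = wrapped.split('\n')

assert lines[0].startswith(handle)                                  # handle on first line
assert all(line.startswith(len(handle) * ' ') for line in lines[1:])  # aligned continuations
```

Note that `TextWrapper` counts both indents toward `width`, so the usable message width shrinks as handles get longer; that is why `get_handle()` pads all handles to the same length in the active window.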

src/rx/__init__.py → src/relay/__init__.py (renamed; mode changed from executable to normal)

src/relay/client.py (new file, 355 lines)
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import base64
import hashlib
import time
import typing
from datetime import datetime
from multiprocessing import Process, Queue
from typing import Dict, List
import requests
from cryptography.hazmat.primitives.asymmetric.x448 import X448PublicKey, X448PrivateKey
from src.common.encoding import b58encode, int_to_bytes, onion_address_to_pub_key, pub_key_to_onion_address
from src.common.encoding import pub_key_to_short_address
from src.common.misc import ignored, separate_header, split_byte_string, validate_onion_addr
from src.common.output import m_print, print_key, rp_print
from src.common.statics import *
if typing.TYPE_CHECKING:
from src.common.gateway import Gateway
from requests.sessions import Session
QueueDict = Dict[bytes, Queue]
def client_manager(queues: 'QueueDict',
gateway: 'Gateway',
url_token_private_key: X448PrivateKey,
unittest: bool = False
) -> None:
"""Manage `client` processes."""
proc_dict = dict() # type: Dict[bytes, Process]
# Wait for Tor port from `onion_service` process.
while True:
with ignored(EOFError, KeyboardInterrupt):
while queues[TOR_DATA_QUEUE].qsize() == 0:
time.sleep(0.1)
tor_port, onion_addr_user = queues[TOR_DATA_QUEUE].get()
break
while True:
with ignored(EOFError, KeyboardInterrupt):
while queues[CONTACT_KEY_QUEUE].qsize() == 0:
time.sleep(0.1)
command, ser_public_keys, is_existing_contact = queues[CONTACT_KEY_QUEUE].get()
onion_pub_keys = split_byte_string(ser_public_keys, ONION_SERVICE_PUBLIC_KEY_LENGTH)
if command == RP_ADD_CONTACT_HEADER:
for onion_pub_key in onion_pub_keys:
if onion_pub_key not in proc_dict:
onion_addr_user = '' if is_existing_contact else onion_addr_user
proc_dict[onion_pub_key] = Process(target=client, args=(onion_pub_key, queues,
url_token_private_key, tor_port,
gateway, onion_addr_user))
proc_dict[onion_pub_key].start()
elif command == RP_REMOVE_CONTACT_HEADER:
for onion_pub_key in onion_pub_keys:
if onion_pub_key in proc_dict:
process = proc_dict[onion_pub_key] # type: Process
process.terminate()
proc_dict.pop(onion_pub_key)
rp_print(f"Removed {pub_key_to_short_address(onion_pub_key)}", bold=True)
if unittest and queues[UNITTEST_QUEUE].qsize() != 0:
break
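`split_byte_string` (imported from `src.common.misc`) is assumed, based on its use above, to chunk the serialized key field into fixed-size public keys, along the lines of:

```python
from typing import List

ONION_SERVICE_PUBLIC_KEY_LENGTH = 32  # assumed v3 Onion Service (Ed25519) key size


def split_byte_string(string: bytes, item_len: int) -> List[bytes]:
    """Split a byte string into a list of fixed-length chunks."""
    return [string[i:i + item_len] for i in range(0, len(string), item_len)]


keys = split_byte_string(bytes(2 * ONION_SERVICE_PUBLIC_KEY_LENGTH),
                         ONION_SERVICE_PUBLIC_KEY_LENGTH)
assert len(keys) == 2 and all(len(k) == ONION_SERVICE_PUBLIC_KEY_LENGTH for k in keys)
```

This is a sketch of the assumed behavior; the project's actual helper may validate lengths more strictly.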
def client(onion_pub_key: bytes,
queues: 'QueueDict',
url_token_private_key: X448PrivateKey,
tor_port: str,
gateway: 'Gateway',
onion_addr_user: str,
unittest: bool = False
) -> None:
"""Load packets from contact's Onion Service."""
url_token = ''
cached_pk = ''
short_addr = pub_key_to_short_address(onion_pub_key)
onion_addr = pub_key_to_onion_address(onion_pub_key)
check_delay = RELAY_CLIENT_MIN_DELAY
is_online = False
session = requests.session()
session.proxies = {'http': f'socks5h://127.0.0.1:{tor_port}',
'https': f'socks5h://127.0.0.1:{tor_port}'}
rp_print(f"Connecting to {short_addr}...", bold=True)
    # When the Transmitter Program sends a contact under UNENCRYPTED_ADD_EXISTING_CONTACT, this
    # function receives the user's own Onion address: that way it knows to ask the contact to
    # add the user back:
if onion_addr_user:
while True:
try:
reply = session.get(f'http://{onion_addr}.onion/contact_request/{onion_addr_user}', timeout=45).text
if reply == "OK":
break
except requests.exceptions.RequestException:
time.sleep(RELAY_CLIENT_MIN_DELAY)
while True:
with ignored(EOFError, KeyboardInterrupt):
time.sleep(check_delay)
# Obtain URL token
# ----------------
# Load URL token public key from contact's Onion Service root domain
try:
url_token_public_key_hex = session.get(f'http://{onion_addr}.onion/', timeout=45).text
except requests.exceptions.RequestException:
url_token_public_key_hex = ''
# Manage online status of contact based on availability of URL token's public key
if url_token_public_key_hex == '':
if check_delay < RELAY_CLIENT_MAX_DELAY:
check_delay *= 2
if check_delay > CLIENT_OFFLINE_THRESHOLD and is_online:
is_online = False
rp_print(f"{short_addr} is now offline", bold=True)
continue
else:
check_delay = RELAY_CLIENT_MIN_DELAY
if not is_online:
is_online = True
rp_print(f"{short_addr} is now online", bold=True)
# When contact's URL token public key changes, update URL token
if url_token_public_key_hex != cached_pk:
try:
public_key = bytes.fromhex(url_token_public_key_hex)
assert len(public_key) == TFC_PUBLIC_KEY_LENGTH
assert public_key != bytes(TFC_PUBLIC_KEY_LENGTH)
shared_secret = url_token_private_key.exchange(X448PublicKey.from_public_bytes(public_key))
url_token = hashlib.blake2b(shared_secret, digest_size=SYMMETRIC_KEY_LENGTH).hexdigest()
except (AssertionError, TypeError, ValueError):
continue
cached_pk = url_token_public_key_hex # Update client's URL token public key
queues[URL_TOKEN_QUEUE].put((onion_pub_key, url_token)) # Update Flask server's URL token for contact
# Load TFC data with URL token
# ----------------------------
get_data_loop(onion_addr, url_token, short_addr, onion_pub_key, queues, session, gateway)
if unittest:
break
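The URL token derivation above hashes an X448 shared secret with BLAKE2b. A minimal stdlib-only sketch of that step; the 56-byte blob below is a stand-in assumption for the shared secret the X448 exchange would produce, and `SYMMETRIC_KEY_LENGTH = 32` is assumed:

```python
import hashlib

SYMMETRIC_KEY_LENGTH = 32  # assumed digest size used above

# Stand-in for the X448 shared secret; the real code derives it with
# url_token_private_key.exchange(X448PublicKey.from_public_bytes(...)).
shared_secret = bytes(range(56))

# Both sides can derive the same hex URL token independently.
url_token = hashlib.blake2b(shared_secret, digest_size=SYMMETRIC_KEY_LENGTH).hexdigest()
```

Since the token is a hex digest of a 32-byte hash, it is always 64 characters, which is what the Flask server matches against.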
def get_data_loop(onion_addr: str,
url_token: str,
short_addr: str,
onion_pub_key: bytes,
queues: 'QueueDict',
session: 'Session',
gateway: 'Gateway') -> None:
"""Load TFC data from contact's Onion Service using valid URL token."""
while True:
try:
# See if a file is available
try:
file_data = session.get(f'http://{onion_addr}.onion/{url_token}/files', stream=True).content
if file_data:
ts = datetime.now()
ts_bytes = int_to_bytes(int(ts.strftime('%Y%m%d%H%M%S%f')[:-4]))
packet = FILE_DATAGRAM_HEADER + ts_bytes + onion_pub_key + ORIGIN_CONTACT_HEADER + file_data
queues[DST_MESSAGE_QUEUE].put(packet)
rp_print(f"File from contact {short_addr}", ts)
except requests.exceptions.RequestException:
pass
# See if messages are available
try:
r = session.get(f'http://{onion_addr}.onion/{url_token}/messages', stream=True)
except requests.exceptions.RequestException:
return None
for line in r.iter_lines(): # Iterates over newline-separated datagrams
if not line:
continue
try:
header, payload = separate_header(line, DATAGRAM_HEADER_LENGTH) # type: bytes, bytes
payload_bytes = base64.b85decode(payload)
except (UnicodeError, ValueError):
continue
ts = datetime.now()
ts_bytes = int_to_bytes(int(ts.strftime('%Y%m%d%H%M%S%f')[:-4]))
if header == PUBLIC_KEY_DATAGRAM_HEADER:
if len(payload_bytes) == TFC_PUBLIC_KEY_LENGTH:
msg = f"Received public key from {short_addr} at {ts.strftime('%b %d - %H:%M:%S.%f')[:-4]}:"
print_key(msg, payload_bytes, gateway.settings, public_key=True)
elif header == MESSAGE_DATAGRAM_HEADER:
queues[DST_MESSAGE_QUEUE].put(header + ts_bytes + onion_pub_key
+ ORIGIN_CONTACT_HEADER + payload_bytes)
rp_print(f"Message from contact {short_addr}", ts)
elif header in [GROUP_MSG_INVITE_HEADER, GROUP_MSG_JOIN_HEADER,
GROUP_MSG_MEMBER_ADD_HEADER, GROUP_MSG_MEMBER_REM_HEADER,
GROUP_MSG_EXIT_GROUP_HEADER]:
queues[GROUP_MSG_QUEUE].put((header, payload_bytes, short_addr))
else:
rp_print(f"Received invalid packet from {short_addr}", ts, bold=True)
except requests.exceptions.RequestException:
break
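The `/messages` endpoint parsed above returns newline-separated datagrams, each a fixed-length header followed by a Base85 payload. A round-trip of that framing; the one-byte header length and the header value `b'm'` are illustrative assumptions:

```python
import base64

DATAGRAM_HEADER_LENGTH = 1      # assumption: header width used by separate_header above
MESSAGE_DATAGRAM_HEADER = b'm'  # illustrative stand-in for the real header constant

payload = b'example ciphertext'
line = MESSAGE_DATAGRAM_HEADER + base64.b85encode(payload)

# Receiving side: split the header off, then decode the Base85 body.
header, encoded = line[:DATAGRAM_HEADER_LENGTH], line[DATAGRAM_HEADER_LENGTH:]
payload_bytes = base64.b85decode(encoded)
```

Base85 output never contains a newline, which is why `iter_lines()` can safely split the stream into datagrams.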
def g_msg_manager(queues: 'QueueDict', unittest: bool = False) -> None:
"""Show group management messages according to contact list state.
    This process keeps track of existing contacts for whom there's a
    client process. When a group management message from a contact
is received, existing contacts are displayed under "known contacts",
and non-existing contacts are displayed under "unknown contacts".
"""
existing_contacts = [] # type: List[bytes]
while True:
with ignored(EOFError, KeyboardInterrupt):
while queues[GROUP_MSG_QUEUE].qsize() == 0:
time.sleep(0.01)
header, payload, trunc_addr = queues[GROUP_MSG_QUEUE].get()
group_id, data = separate_header(payload, GROUP_ID_LENGTH)
if len(group_id) != GROUP_ID_LENGTH:
continue
group_id_hr = b58encode(group_id)
# Update list of existing contacts
while queues[GROUP_MGMT_QUEUE].qsize() > 0:
command, ser_onion_pub_keys = queues[GROUP_MGMT_QUEUE].get()
onion_pub_key_list = split_byte_string(ser_onion_pub_keys, ONION_SERVICE_PUBLIC_KEY_LENGTH)
if command == RP_ADD_CONTACT_HEADER:
existing_contacts = list(set(existing_contacts) | set(onion_pub_key_list))
elif command == RP_REMOVE_CONTACT_HEADER:
existing_contacts = list(set(existing_contacts) - set(onion_pub_key_list))
# Handle group management messages
if header in [GROUP_MSG_INVITE_HEADER, GROUP_MSG_JOIN_HEADER,
GROUP_MSG_MEMBER_ADD_HEADER, GROUP_MSG_MEMBER_REM_HEADER]:
pub_keys = split_byte_string(data, ONION_SERVICE_PUBLIC_KEY_LENGTH)
pub_key_length = ONION_SERVICE_PUBLIC_KEY_LENGTH
members = [k for k in pub_keys if len(k) == pub_key_length ]
known = [f" * {pub_key_to_onion_address(m)}" for m in members if m in existing_contacts]
unknown = [f" * {pub_key_to_onion_address(m)}" for m in members if m not in existing_contacts]
line_list = []
if known:
line_list.extend(["Known contacts"] + known)
if unknown:
line_list.extend(["Unknown contacts"] + unknown)
if header in [GROUP_MSG_INVITE_HEADER, GROUP_MSG_JOIN_HEADER]:
action = 'invited you to' if header == GROUP_MSG_INVITE_HEADER else 'joined'
postfix = ' with' if members else ''
m_print([f"{trunc_addr} has {action} group {group_id_hr}{postfix}"] + line_list, box=True)
elif header in [GROUP_MSG_MEMBER_ADD_HEADER, GROUP_MSG_MEMBER_REM_HEADER]:
if members:
action, p = ("added", "to") if header == GROUP_MSG_MEMBER_ADD_HEADER else ("removed", "from")
m_print([f"{trunc_addr} has {action} following members {p} group {group_id_hr}"]
+ line_list, box=True)
elif header == GROUP_MSG_EXIT_GROUP_HEADER:
m_print([f"{trunc_addr} has left group {group_id_hr}",
'', "Warning",
"Unless you remove the contact from the group, they",
"can still read messages you send to the group."], box=True)
if unittest and queues[UNITTEST_QUEUE].qsize() != 0:
break
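Both managers above rely on `split_byte_string` to slice serialized key lists into fixed-size chunks. A sketch of what such a helper does (the real implementation may differ):

```python
from typing import List

ONION_SERVICE_PUBLIC_KEY_LENGTH = 32  # assumed Ed25519 public key length

def split_byte_string(string: bytes, item_len: int) -> List[bytes]:
    """Split a byte string into consecutive chunks of item_len bytes."""
    return [string[i:i + item_len] for i in range(0, len(string), item_len)]

keys = split_byte_string(b'A' * 32 + b'B' * 32, ONION_SERVICE_PUBLIC_KEY_LENGTH)
```

Note that a trailing partial chunk is returned as-is, which is why the callers above filter members by `len(k) == pub_key_length`.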
def c_req_manager(queues: 'QueueDict', unittest: bool = False) -> None:
"""Manage incoming contact requests."""
existing_contacts = [] # type: List[bytes]
contact_requests = [] # type: List[bytes]
packet_queue = queues[CONTACT_REQ_QUEUE]
contact_queue = queues[F_REQ_MGMT_QUEUE]
setting_queue = queues[C_REQ_MGR_QUEUE]
show_requests = True
while True:
with ignored(EOFError, KeyboardInterrupt):
while packet_queue.qsize() == 0:
time.sleep(0.1)
purp_onion_address = packet_queue.get()
while setting_queue.qsize() != 0:
show_requests = setting_queue.get()
# Update list of existing contacts
while contact_queue.qsize() > 0:
command, ser_onion_pub_keys = contact_queue.get()
onion_pub_key_list = split_byte_string(ser_onion_pub_keys, ONION_SERVICE_PUBLIC_KEY_LENGTH)
if command == RP_ADD_CONTACT_HEADER:
existing_contacts = list(set(existing_contacts) | set(onion_pub_key_list))
elif command == RP_REMOVE_CONTACT_HEADER:
existing_contacts = list(set(existing_contacts) - set(onion_pub_key_list))
if validate_onion_addr(purp_onion_address) == '':
onion_pub_key = onion_address_to_pub_key(purp_onion_address)
if onion_pub_key in existing_contacts:
continue
if onion_pub_key in contact_requests:
continue
if show_requests:
m_print(["New contact request from an unknown TFC account:", purp_onion_address], box=True)
contact_requests.append(onion_pub_key)
if unittest and queues[UNITTEST_QUEUE].qsize() != 0:
break
src/relay/commands.py (new file, 231 lines)
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import serial
import sys
import time
import typing
from typing import Any, Dict
from src.common.encoding import bytes_to_bool, bytes_to_int
from src.common.exceptions import FunctionReturn
from src.common.misc import ignored, separate_header, separate_headers, split_byte_string
from src.common.output import clear_screen, m_print
from src.common.statics import *
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.gateway import Gateway
QueueDict = Dict[bytes, Queue]
def relay_command(queues: 'QueueDict',
gateway: 'Gateway',
stdin_fd: int,
unittest: bool = False
) -> None:
"""Process Relay Program commands."""
sys.stdin = os.fdopen(stdin_fd)
queue_from_src = queues[SRC_TO_RELAY_QUEUE]
while True:
with ignored(EOFError, FunctionReturn, KeyboardInterrupt):
while queue_from_src.qsize() == 0:
time.sleep(0.01)
command = queue_from_src.get()
process_command(command, gateway, queues)
if unittest:
break
def process_command(command: bytes,
gateway: 'Gateway',
queues: 'QueueDict'
) -> None:
"""Select function for received Relay Program command."""
header, command = separate_header(command, UNENCRYPTED_COMMAND_HEADER_LENGTH)
# Keyword Function to run ( Parameters )
# ---------------------------------------------------------------------------------
function_d = {UNENCRYPTED_SCREEN_CLEAR: (clear_windows, gateway, ),
UNENCRYPTED_SCREEN_RESET: (reset_windows, gateway, ),
UNENCRYPTED_EXIT_COMMAND: (exit_tfc, gateway, queues),
UNENCRYPTED_WIPE_COMMAND: (wipe, gateway, queues),
UNENCRYPTED_EC_RATIO: (change_ec_ratio, command, gateway, ),
UNENCRYPTED_BAUDRATE: (change_baudrate, command, gateway, ),
UNENCRYPTED_MANAGE_CONTACT_REQ: (manage_contact_req, command, queues),
UNENCRYPTED_ADD_NEW_CONTACT: (add_contact, command, False, queues),
UNENCRYPTED_ADD_EXISTING_CONTACT: (add_contact, command, True, queues),
UNENCRYPTED_REM_CONTACT: (remove_contact, command, queues),
UNENCRYPTED_ONION_SERVICE_DATA: (add_onion_data, command, queues)
} # type: Dict[bytes, Any]
if header not in function_d:
raise FunctionReturn("Error: Received an invalid command.")
from_dict = function_d[header]
func = from_dict[0]
parameters = from_dict[1:]
func(*parameters)
def race_condition_delay(gateway: 'Gateway') -> None:
"""Prevent race condition with Receiver command."""
if gateway.settings.local_testing_mode:
time.sleep(LOCAL_TESTING_PACKET_DELAY)
time.sleep(gateway.settings.data_diode_sockets * 1.0)
def clear_windows(gateway: 'Gateway') -> None:
"""Clear Relay Program screen."""
race_condition_delay(gateway)
clear_screen()
def reset_windows(gateway: 'Gateway') -> None:
"""Reset Relay Program screen."""
race_condition_delay(gateway)
os.system(RESET)
def exit_tfc(gateway: 'Gateway', queues: 'QueueDict') -> None:
"""Exit TFC.
The queue is read by
relay.onion.onion_service()
"""
race_condition_delay(gateway)
queues[ONION_CLOSE_QUEUE].put(EXIT)
def wipe(gateway: 'Gateway', queues: 'QueueDict') -> None:
"""Reset terminal, wipe all user data and power off the system.
    No effective RAM-overwriting tool currently exists; instead, as long as the
    Source and Destination Computers use FDE and DDR3 memory, recovery of user
    data becomes impossible very fast:
https://www1.cs.fau.de/filepool/projects/coldboot/fares_coldboot.pdf
The queue is read by
relay.onion.onion_service()
"""
os.system(RESET)
race_condition_delay(gateway)
queues[ONION_CLOSE_QUEUE].put(WIPE)
def change_ec_ratio(command: bytes, gateway: 'Gateway') -> None:
"""Change Relay Program's Reed-Solomon error correction ratio."""
try:
value = int(command)
if value < 0 or value > MAX_INT:
raise ValueError
except ValueError:
raise FunctionReturn("Error: Received invalid EC ratio value from Transmitter Program.")
m_print("Error correction ratio will change on restart.", head=1, tail=1)
gateway.settings.serial_error_correction = value
gateway.settings.store_settings()
def change_baudrate(command: bytes, gateway: 'Gateway') -> None:
"""Change Relay Program's serial interface baud rate setting."""
try:
value = int(command)
if value not in serial.Serial.BAUDRATES:
raise ValueError
except ValueError:
raise FunctionReturn("Error: Received invalid baud rate value from Transmitter Program.")
m_print("Baud rate will change on restart.", head=1, tail=1)
gateway.settings.serial_baudrate = value
gateway.settings.store_settings()
def manage_contact_req(command: bytes,
queues: 'QueueDict',
notify: bool = True) -> None:
"""Control whether contact requests are accepted."""
enabled = bytes_to_bool(command)
if notify:
        m_print(f"Contact requests have been {('enabled' if enabled else 'disabled')}.", head=1, tail=1)
queues[C_REQ_MGR_QUEUE].put(enabled)
def add_contact(command: bytes,
existing: bool,
queues: 'QueueDict'
) -> None:
"""Add clients to Relay Program.
The queues are read by
relay.client.client_manager()
        relay.client.g_msg_manager() and
        relay.client.c_req_manager()
"""
queues[CONTACT_KEY_QUEUE].put((RP_ADD_CONTACT_HEADER, command, existing))
queues[GROUP_MGMT_QUEUE].put((RP_ADD_CONTACT_HEADER, command))
queues[F_REQ_MGMT_QUEUE].put((RP_ADD_CONTACT_HEADER, command))
def remove_contact(command: bytes, queues: 'QueueDict') -> None:
"""Remove clients from Relay Program.
The queues are read by
relay.client.client_manager()
        relay.client.g_msg_manager() and
        relay.client.c_req_manager()
"""
queues[CONTACT_KEY_QUEUE].put((RP_REMOVE_CONTACT_HEADER, command, False))
queues[GROUP_MGMT_QUEUE].put((RP_REMOVE_CONTACT_HEADER, command))
queues[F_REQ_MGMT_QUEUE].put((RP_REMOVE_CONTACT_HEADER, command))
def add_onion_data(command: bytes, queues: 'QueueDict') -> None:
"""Add Onion Service data.
Separate onion service private key and public keys for
pending/existing contacts and add them as contacts.
The ONION_KEY_QUEUE is read by
relay.onion.onion_service()
"""
os_private_key, confirmation_code, allow_req_byte, no_pending_bytes, ser_pub_keys \
= separate_headers(command, [ONION_SERVICE_PRIVATE_KEY_LENGTH, CONFIRM_CODE_LENGTH,
ENCODED_BOOLEAN_LENGTH, ENCODED_INTEGER_LENGTH])
no_pending = bytes_to_int(no_pending_bytes)
public_key_list = split_byte_string(ser_pub_keys, ONION_SERVICE_PUBLIC_KEY_LENGTH)
pending_public_keys = public_key_list[:no_pending]
existing_public_keys = public_key_list[no_pending:]
for onion_pub_key in pending_public_keys:
add_contact(onion_pub_key, False, queues)
for onion_pub_key in existing_public_keys:
add_contact(onion_pub_key, True, queues)
manage_contact_req(allow_req_byte, queues, notify=False)
queues[ONION_KEY_QUEUE].put((os_private_key, confirmation_code))
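`add_onion_data` unpacks its command with `separate_headers`, which peels fixed-length fields off the front of a byte string and returns the remainder last. A sketch under that assumption (the real helper may differ in detail):

```python
from typing import List, Tuple

def separate_headers(data: bytes, lengths: List[int]) -> Tuple[bytes, ...]:
    """Split fixed-length fields off the front of data; the remainder is the last element."""
    fields = []
    for length in lengths:
        fields.append(data[:length])
        data = data[length:]
    return (*fields, data)

# Hypothetical field widths, chosen only for illustration:
key, code, flag, rest = separate_headers(b'K' * 4 + b'C' * 2 + b'\x01' + b'tail', [4, 2, 1])
```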
src/relay/onion.py (new file, 230 lines)
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import base64
import hashlib
import os
import random
import shlex
import socket
import subprocess
import tempfile
import time
from multiprocessing import Queue
from typing import Any, Dict
import nacl.signing
import stem.control
import stem.process
from src.common.encoding import pub_key_to_onion_address
from src.common.exceptions import CriticalError
from src.common.output import m_print, rp_print
from src.common.statics import *
def get_available_port(min_port: int, max_port: int) -> str:
"""Find a random available port within the given range."""
with socket.socket() as temp_sock:
while True:
try:
temp_sock.bind(('127.0.0.1', random.randint(min_port, max_port)))
break
except OSError:
pass
_, port = temp_sock.getsockname() # type: Any, str
return port
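`get_available_port` retries random binds until one succeeds, which preserves control over the port range. An alternative worth noting: binding to port 0 lets the OS hand out a free port directly, at the cost of that control:

```python
import socket

def get_os_assigned_port() -> int:
    """Let the OS pick a free loopback port (alternative to the random-retry loop above)."""
    with socket.socket() as temp_sock:
        temp_sock.bind(('127.0.0.1', 0))
        return temp_sock.getsockname()[1]
```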
class Tor(object):
"""Tor class manages the starting and stopping of Tor client."""
def __init__(self) -> None:
self.tor_process = None # type: Any
self.controller = None # type: Any
def connect(self, port: str) -> None:
"""Launch Tor as a subprocess."""
tor_data_directory = tempfile.TemporaryDirectory()
tor_control_socket = os.path.join(tor_data_directory.name, 'control_socket')
if not os.path.isfile('/usr/bin/tor'):
raise CriticalError("Check that Tor is installed.")
while True:
try:
self.tor_process = stem.process.launch_tor_with_config(
config={'DataDirectory': tor_data_directory.name,
'SocksPort': str(port),
'ControlSocket': tor_control_socket,
'AvoidDiskWrites': '1',
'Log': 'notice stdout',
'GeoIPFile': '/usr/share/tor/geoip',
                            'GeoIPv6File': '/usr/share/tor/geoip6'},
tor_cmd='/usr/bin/tor')
break
except OSError:
pass # Tor timed out. Try again.
start_ts = time.monotonic()
self.controller = stem.control.Controller.from_socket_file(path=tor_control_socket)
self.controller.authenticate()
while True:
time.sleep(0.1)
try:
response = self.controller.get_info("status/bootstrap-phase")
except stem.SocketClosed:
raise CriticalError("Tor socket closed.")
res_parts = shlex.split(response)
summary = res_parts[4].split('=')[1]
if summary == 'Done':
tor_version = self.controller.get_version().version_str.split(' (')[0]
rp_print(f"Setup 70% - Tor {tor_version} is now running", bold=True)
break
if time.monotonic() - start_ts > 15:
start_ts = time.monotonic()
self.controller = stem.control.Controller.from_socket_file(path=tor_control_socket)
self.controller.authenticate()
def stop(self) -> None:
"""Stop the Tor subprocess."""
if self.tor_process:
self.tor_process.terminate()
time.sleep(0.1)
if not self.tor_process.poll():
self.tor_process.kill()
def stem_compatible_ed25519_key_from_private_key(private_key: bytes) -> str:
"""Tor's custom encoding format for v3 Onion Service private keys.
This code is based on Tor's testing code at
https://github.com/torproject/tor/blob/8e84968ffbf6d284e8a877ddcde6ded40b3f5681/src/test/ed25519_exts_ref.py#L48
"""
b = 256
def bit(h: bytes, i: int) -> int:
"""\
        Return bit (i % 8) of byte (i // 8) of the digest h.
"""
return (h[i // 8] >> (i % 8)) & 1
def encode_int(y: int) -> bytes:
"""Encode integer to 32-byte bytestring (little-endian format)."""
bits = [(y >> i) & 1 for i in range(b)]
return b''.join([bytes([(sum([bits[i * 8 + j] << j for j in range(8)]))]) for i in range(b // 8)])
def expand_private_key(sk: bytes) -> bytes:
"""Expand private key to base64 blob."""
h = hashlib.sha512(sk).digest()
a = 2 ** (b - 2) + sum(2 ** i * bit(h, i) for i in range(3, b - 2))
k = b''.join([bytes([h[i]]) for i in range(b // 8, b // 4)])
assert len(k) == ONION_SERVICE_PRIVATE_KEY_LENGTH
return encode_int(a) + k
expanded_private_key = expand_private_key(private_key)
return base64.b64encode(expanded_private_key).decode()
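The bit-list arithmetic in `encode_int` above is simply a little-endian serialization of a 256-bit integer, so it should agree with `int.to_bytes`:

```python
b = 256

def encode_int(y: int) -> bytes:
    """Encode an integer as a 32-byte little-endian bytestring (as in the code above)."""
    bits = [(y >> i) & 1 for i in range(b)]
    return b''.join(bytes([sum(bits[i * 8 + j] << j for j in range(8))]) for i in range(b // 8))

# The bit-level construction matches the standard-library one-liner:
assert encode_int(2 ** 200 + 7) == (2 ** 200 + 7).to_bytes(32, 'little')
```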
def kill_background_tor() -> None:
"""Kill any open TFC-related Tor instances left open.
Copies of Tor might stay open in cases where the user has closed the
application from Terminator's close window ((x) button).
"""
try:
pids = subprocess.check_output("ps aux |grep '[t]fc/tor' | awk '{print $2}' 2>/dev/null", shell=True)
for pid in pids.split(b'\n'):
subprocess.Popen("kill {}".format(int(pid)), shell=True).wait()
except ValueError:
pass
def onion_service(queues: Dict[bytes, 'Queue']) -> None:
"""Manage the Tor Onion Service and control Tor via stem."""
kill_background_tor()
rp_print("Setup 0% - Waiting for Onion Service configuration...", bold=True)
while queues[ONION_KEY_QUEUE].qsize() == 0:
time.sleep(0.1)
private_key, c_code = queues[ONION_KEY_QUEUE].get() # type: bytes, bytes
public_key_user = bytes(nacl.signing.SigningKey(seed=private_key).verify_key)
onion_addr_user = pub_key_to_onion_address(public_key_user)
try:
rp_print("Setup 10% - Launching Tor...", bold=True)
tor_port = get_available_port(1000, 65535)
tor = Tor()
tor.connect(tor_port)
except (EOFError, KeyboardInterrupt):
return
try:
rp_print("Setup 75% - Launching Onion Service...", bold=True)
key_data = stem_compatible_ed25519_key_from_private_key(private_key)
response = tor.controller.create_ephemeral_hidden_service(ports={80: 5000},
key_type='ED25519-V3',
key_content=key_data,
await_publication=True)
rp_print("Setup 100% - Onion Service is now published.", bold=True)
m_print(["Your TFC account is:",
onion_addr_user, '',
f"Onion Service confirmation code (to Transmitter): {c_code.hex()}"], box=True)
# Allow the client to start looking for contacts at this point.
queues[TOR_DATA_QUEUE].put((tor_port, onion_addr_user))
except (KeyboardInterrupt, stem.SocketClosed):
tor.stop()
return
while True:
try:
time.sleep(0.1)
if queues[ONION_KEY_QUEUE].qsize() > 0:
queues[ONION_KEY_QUEUE].get() # Discard re-sent private keys
if queues[ONION_CLOSE_QUEUE].qsize() > 0:
command = queues[ONION_CLOSE_QUEUE].get()
queues[EXIT_QUEUE].put(command)
tor.controller.remove_hidden_service(response.service_id)
tor.stop()
break
except (EOFError, KeyboardInterrupt):
pass
except stem.SocketClosed:
tor.controller.remove_hidden_service(response.service_id)
tor.stop()
break
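`pub_key_to_onion_address`, used above to turn the Ed25519 public key into the user's TFC account, follows Tor's rend-spec-v3 encoding: base32 over PUBKEY | CHECKSUM | VERSION, where CHECKSUM is the first two bytes of SHA3-256(".onion checksum" | PUBKEY | VERSION) and VERSION is 0x03. A sketch of that derivation:

```python
import base64
import hashlib

def pub_key_to_onion_address(pub_key: bytes) -> str:
    """Derive a v3 onion address (without the '.onion' suffix) per rend-spec-v3."""
    version = b'\x03'
    checksum = hashlib.sha3_256(b'.onion checksum' + pub_key + version).digest()[:2]
    return base64.b32encode(pub_key + checksum + version).decode().lower()

addr = pub_key_to_onion_address(bytes(32))
```

Since 35 input bytes encode to exactly 56 base32 characters with no padding, every v3 address is 56 characters long.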
src/relay/server.py (new file, 176 lines)
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import hmac
import logging
import threading
import time
import typing
from io import BytesIO
from multiprocessing import Queue
from typing import Any, Dict, List, Optional
from flask import Flask, send_file
from src.common.statics import *
if typing.TYPE_CHECKING:
QueueDict = Dict[bytes, Queue]
def flask_server(queues: 'QueueDict',
url_token_public_key: str,
unittest: bool = False
) -> Optional[Flask]:
"""Run Flask web server for outgoing messages.
    This process runs a Flask web server from which the clients of
    contacts can load messages sent to them. Making such requests
    requires that the client knows the secret path, i.e. the URL token:
    the X448 shared secret derived from the Relay Program's private key
    and the public key obtained from the contact's Onion Service.
    Note that this private key does not handle E2EE of messages; it only
    manages E2EE sessions between the Relay Programs of the conversing
    parties. It prevents anyone without the Relay Program's ephemeral
    private key from requesting ciphertexts from the user.
    The connection between the requests client and the Flask server is
    end-to-end encrypted: no Tor relay between them can see the content
    of the traffic, and with Onion Services there is no exit node. The
    connection is strongly authenticated by the Onion Service domain
    name, that is, by the TFC account pinned by the user.
"""
app = Flask(__name__)
pub_key_dict = dict() # type: Dict[str, bytes]
message_dict = dict() # type: Dict[bytes, List[str]]
file_dict = dict() # type: Dict[bytes, List[bytes]]
class HideRunTime(object):
"""Context manager that hides function runtime.
By joining a thread that sleeps for a longer time than it takes
for the function to run, this context manager hides the actual
running time of the function.
"""
def __init__(self, length: float = 0.0) -> None:
self.length = length
def __enter__(self) -> None:
self.timer = threading.Thread(target=time.sleep, args=(self.length,))
self.timer.start()
def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
self.timer.join()
def validate_url_token(purp_url_token: str) -> bool:
"""Validate URL token using constant time comparison."""
# This context manager hides the duration of URL_TOKEN_QUEUE check as
# well as the number of accounts in pub_key_dict when iterating over keys.
with HideRunTime(0.01):
# Check if the client has derived new URL token for contact(s).
# If yes, add the url tokens to pub_key_dict to have up-to-date
# information about whether the purported URL tokens are valid.
while queues[URL_TOKEN_QUEUE].qsize() > 0:
onion_pub_key, url_token = queues[URL_TOKEN_QUEUE].get()
# Delete old URL token for contact when their URL token pub key changes.
for ut in list(pub_key_dict.keys()):
if pub_key_dict[ut] == onion_pub_key:
del pub_key_dict[ut]
pub_key_dict[url_token] = onion_pub_key
        # OR the result of each constant-time comparison into an accumulator
        # initialized to False. The loop never short-circuits, so the check
        # takes the same time whether or not a matching URL token is found.
valid_url_token = False
for url_token in pub_key_dict:
valid_url_token |= hmac.compare_digest(purp_url_token, url_token)
return valid_url_token
@app.route('/')
def index() -> str:
"""Return the URL token public key to contacts that know the .onion address."""
return url_token_public_key
@app.route('/contact_request/<string:purp_onion_address>')
def contact_request(purp_onion_address: str) -> str:
"""Pass contact request to `c_req_manager`."""
queues[CONTACT_REQ_QUEUE].put(purp_onion_address)
return 'OK'
@app.route('/<purp_url_token>/files/')
def file_get(purp_url_token: str) -> Any:
"""Validate the URL token and return a queued file."""
if not validate_url_token(purp_url_token):
return ''
identified_onion_pub_key = pub_key_dict[purp_url_token]
while queues[F_TO_FLASK_QUEUE].qsize() != 0:
packet, onion_pub_key = queues[F_TO_FLASK_QUEUE].get()
file_dict.setdefault(onion_pub_key, []).append(packet)
if identified_onion_pub_key in file_dict and file_dict[identified_onion_pub_key]:
mem = BytesIO()
mem.write(file_dict[identified_onion_pub_key].pop(0))
mem.seek(0)
return send_file(mem, mimetype='application/octet-stream')
else:
return ''
@app.route('/<purp_url_token>/messages/')
def contacts_url(purp_url_token: str) -> str:
"""Validate the URL token and return queued messages."""
if not validate_url_token(purp_url_token):
return ''
identified_onion_pub_key = pub_key_dict[purp_url_token]
# Load outgoing messages for all contacts,
# return the oldest message for contact
while queues[M_TO_FLASK_QUEUE].qsize() != 0:
packet, onion_pub_key = queues[M_TO_FLASK_QUEUE].get()
message_dict.setdefault(onion_pub_key, []).append(packet)
if identified_onion_pub_key in message_dict and message_dict[identified_onion_pub_key]:
packets = '\n'.join(message_dict[identified_onion_pub_key]) # All messages for contact
message_dict[identified_onion_pub_key] = []
return packets
else:
return ''
# --------------------------------------------------------------------------
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)
if unittest:
return app
else: # not unittest
app.run()
return None
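`validate_url_token` above deliberately avoids an early exit so the response time does not leak which token matched, or whether any did. The accumulation pattern in isolation:

```python
import hmac

def constant_time_membership(candidate: str, valid_tokens: list) -> bool:
    """Check membership without short-circuiting on the first match."""
    result = False
    for token in valid_tokens:
        # hmac.compare_digest compares in constant time (both arguments
        # must be ASCII str or bytes); |= accumulates without early exit.
        result |= hmac.compare_digest(candidate, token)
    return result
```

Every candidate therefore costs one comparison per stored token, regardless of the outcome.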
src/relay/tcb.py (new file, 197 lines)
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import time
import typing
from typing import Dict, Union
from src.common.encoding import bytes_to_int, pub_key_to_short_address
from src.common.encoding import int_to_bytes, b85encode
from src.common.exceptions import FunctionReturn
from src.common.misc import ignored, separate_header, split_byte_string
from src.common.output import rp_print
from src.common.statics import *
if typing.TYPE_CHECKING:
from datetime import datetime
from multiprocessing import Queue
from src.common.gateway import Gateway
QueueDict = Dict[bytes, Queue]
def queue_to_flask(packet: Union[bytes, str],
onion_pub_key: bytes,
flask_queue: 'Queue',
ts: 'datetime',
header: bytes
) -> None:
"""Put packet to flask queue and print message."""
p_type = {MESSAGE_DATAGRAM_HEADER: 'Message ',
PUBLIC_KEY_DATAGRAM_HEADER: 'Pub key ',
FILE_DATAGRAM_HEADER: 'File ',
GROUP_MSG_INVITE_HEADER: 'G invite ',
GROUP_MSG_JOIN_HEADER: 'G join ',
GROUP_MSG_MEMBER_ADD_HEADER: 'G add ',
GROUP_MSG_MEMBER_REM_HEADER: 'G remove ',
GROUP_MSG_EXIT_GROUP_HEADER: 'G exit '}[header]
flask_queue.put((packet, onion_pub_key))
rp_print(f"{p_type} to contact {pub_key_to_short_address(onion_pub_key)}", ts)
def src_incoming(queues: 'QueueDict',
gateway: 'Gateway',
unittest: bool = False
) -> None:
"""\
Redirect messages received from Source Computer to appropriate queues.
"""
packets_from_sc = queues[GATEWAY_QUEUE]
packets_to_dc = queues[DST_MESSAGE_QUEUE]
commands_to_dc = queues[DST_COMMAND_QUEUE]
messages_to_flask = queues[M_TO_FLASK_QUEUE]
files_to_flask = queues[F_TO_FLASK_QUEUE]
commands_to_relay = queues[SRC_TO_RELAY_QUEUE]
while True:
with ignored(EOFError, KeyboardInterrupt):
while packets_from_sc.qsize() == 0:
time.sleep(0.01)
ts, packet = packets_from_sc.get() # type: datetime, bytes
ts_bytes = int_to_bytes(int(ts.strftime('%Y%m%d%H%M%S%f')[:-4]))
try:
packet = gateway.detect_errors(packet)
except FunctionReturn:
continue
header, packet = separate_header(packet, DATAGRAM_HEADER_LENGTH)
if header == UNENCRYPTED_DATAGRAM_HEADER:
commands_to_relay.put(packet)
elif header in [COMMAND_DATAGRAM_HEADER, LOCAL_KEY_DATAGRAM_HEADER]:
commands_to_dc.put(header + ts_bytes + packet)
p_type = 'Command ' if header == COMMAND_DATAGRAM_HEADER else 'Local key'
rp_print(f"{p_type} to local Receiver", ts)
elif header in [MESSAGE_DATAGRAM_HEADER, PUBLIC_KEY_DATAGRAM_HEADER]:
onion_pub_key, payload = separate_header(packet, ONION_SERVICE_PUBLIC_KEY_LENGTH)
packet_str = header.decode() + b85encode(payload)
queue_to_flask(packet_str, onion_pub_key, messages_to_flask, ts, header)
if header == MESSAGE_DATAGRAM_HEADER:
packets_to_dc.put(header + ts_bytes + onion_pub_key + ORIGIN_USER_HEADER + payload)
elif header == FILE_DATAGRAM_HEADER:
no_contacts_b, payload = separate_header(packet, ENCODED_INTEGER_LENGTH)
no_contacts = bytes_to_int(no_contacts_b)
ser_accounts, file_ct = separate_header(payload, no_contacts * ONION_SERVICE_PUBLIC_KEY_LENGTH)
pub_keys = split_byte_string(ser_accounts, item_len=ONION_SERVICE_PUBLIC_KEY_LENGTH)
for onion_pub_key in pub_keys:
queue_to_flask(file_ct, onion_pub_key, files_to_flask, ts, header)
elif header in [GROUP_MSG_INVITE_HEADER, GROUP_MSG_JOIN_HEADER,
GROUP_MSG_MEMBER_ADD_HEADER, GROUP_MSG_MEMBER_REM_HEADER,
GROUP_MSG_EXIT_GROUP_HEADER]:
process_group_management_message(ts, packet, header, messages_to_flask)
if unittest:
break
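`src_incoming` timestamps each datagram by packing the wall-clock time, truncated to centisecond precision, into an integer and serializing it. Assuming `int_to_bytes` is an 8-byte big-endian encoding, the scheme looks like this:

```python
from datetime import datetime

ts = datetime(2019, 1, 24, 4, 1, 0, 123456)

# '%f' gives six microsecond digits; dropping the last four leaves centiseconds.
ts_int = int(ts.strftime('%Y%m%d%H%M%S%f')[:-4])
ts_bytes = ts_int.to_bytes(8, 'big')  # assumption: int_to_bytes packs 8 bytes big-endian
```

The receiving side can recover the timestamp by reversing the integer decoding and reparsing the digit string.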
def process_group_management_message(ts: 'datetime',
packet: bytes,
header: bytes,
messages_to_flask: 'Queue') -> None:
"""Parse and display group management message."""
header_str = header.decode()
group_id, packet = separate_header(packet, GROUP_ID_LENGTH)
if header in [GROUP_MSG_INVITE_HEADER, GROUP_MSG_JOIN_HEADER]:
pub_keys = split_byte_string(packet, ONION_SERVICE_PUBLIC_KEY_LENGTH)
for onion_pub_key in pub_keys:
others = [k for k in pub_keys if k != onion_pub_key]
packet_str = header_str + b85encode(group_id + b''.join(others))
queue_to_flask(packet_str, onion_pub_key, messages_to_flask, ts, header)
elif header in [GROUP_MSG_MEMBER_ADD_HEADER, GROUP_MSG_MEMBER_REM_HEADER]:
first_list_len_b, packet = separate_header(packet, ENCODED_INTEGER_LENGTH)
first_list_length = bytes_to_int(first_list_len_b)
pub_keys = split_byte_string(packet, ONION_SERVICE_PUBLIC_KEY_LENGTH)
before_adding = remaining = pub_keys[:first_list_length]
new_in_group = removable = pub_keys[first_list_length:]
if header == GROUP_MSG_MEMBER_ADD_HEADER:
packet_str = GROUP_MSG_MEMBER_ADD_HEADER.decode() + b85encode(group_id + b''.join(new_in_group))
for onion_pub_key in before_adding:
queue_to_flask(packet_str, onion_pub_key, messages_to_flask, ts, header)
for onion_pub_key in new_in_group:
other_new = [k for k in new_in_group if k != onion_pub_key]
packet_str = (GROUP_MSG_INVITE_HEADER.decode()
+ b85encode(group_id + b''.join(other_new + before_adding)))
queue_to_flask(packet_str, onion_pub_key, messages_to_flask, ts, header)
elif header == GROUP_MSG_MEMBER_REM_HEADER:
packet_str = header_str + b85encode(group_id + b''.join(removable))
for onion_pub_key in remaining:
queue_to_flask(packet_str, onion_pub_key, messages_to_flask, ts, header)
elif header == GROUP_MSG_EXIT_GROUP_HEADER:
pub_keys = split_byte_string(packet, ONION_SERVICE_PUBLIC_KEY_LENGTH)
packet_str = header_str + b85encode(group_id)
for onion_pub_key in pub_keys:
queue_to_flask(packet_str, onion_pub_key, messages_to_flask, ts, header)
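The invite/join fan-out above sends each member the list of every *other* member. A minimal sketch of that per-recipient bookkeeping (the function name is illustrative, and plain strings stand in for Onion Service public keys):

```python
def invite_fanout(members: list) -> dict:
    """For each member, collect the other members, mirroring the
    per-recipient packet assembly in the invite/join branch above."""
    return {m: [k for k in members if k != m] for m in members}
```

For example, `invite_fanout(['a', 'b', 'c'])` maps each member to the remaining two, so no recipient ever receives their own key back.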
def dst_outgoing(queues: 'QueueDict',
gateway: 'Gateway',
unittest: bool = False
) -> None:
"""Output packets from queues to Destination Computer.
Commands (and local keys) to the local Destination Computer have
higher priority than messages and public keys from contacts. This
prioritization prevents a contact from DoSing the Receiver Program
by flooding the queue with packets.
"""
c_queue = queues[DST_COMMAND_QUEUE]
m_queue = queues[DST_MESSAGE_QUEUE]
while True:
try:
if c_queue.qsize() == 0 and m_queue.qsize() == 0:
time.sleep(0.01)
while c_queue.qsize() != 0:
gateway.write(c_queue.get())
if m_queue.qsize() != 0:
gateway.write(m_queue.get())
if unittest and queues[UNITTEST_QUEUE].qsize() > 0:
break
except (EOFError, KeyboardInterrupt):
pass
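The priority scheme described in the docstring can be sketched with plain `queue.Queue` objects (the function name and the round-based draining are illustrative, not TFC's API):

```python
import queue

def drain_one_round(c_queue: queue.Queue, m_queue: queue.Queue) -> list:
    """Drain one scheduling round: flush every pending command first,
    then emit at most one message, so commands always take priority."""
    out = []
    while not c_queue.empty():      # all queued commands go out first
        out.append(c_queue.get())
    if not m_queue.empty():         # then at most one contact message
        out.append(m_queue.get())
    return out
```

Because the command queue is emptied completely before a single message is taken, a contact flooding the message queue can delay messages but never starve commands.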

View File

@ -1,357 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import os
import typing
from typing import Any, Dict, Union
from src.common.db_logs import access_logs, re_encrypt, remove_logs
from src.common.encoding import bytes_to_int
from src.common.exceptions import FunctionReturn
from src.common.misc import ensure_dir
from src.common.output import box_print, clear_screen, phase, print_on_previous_line
from src.common.statics import *
from src.rx.commands_g import group_add_member, group_create, group_rm_member, remove_group
from src.rx.key_exchanges import add_psk_tx_keys, add_x25519_keys, import_psk_rx_keys, local_key_installed
from src.rx.packet import decrypt_assembly_packet
if typing.TYPE_CHECKING:
from datetime import datetime
from multiprocessing import Queue
from src.common.db_contacts import Contact, ContactList
from src.common.db_groups import Group, GroupList
from src.common.db_keys import KeyList
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
from src.rx.packet import PacketList
from src.rx.windows import WindowList
def process_command(ts: 'datetime',
assembly_ct: bytes,
window_list: 'WindowList',
packet_list: 'PacketList',
contact_list: 'ContactList',
key_list: 'KeyList',
group_list: 'GroupList',
settings: 'Settings',
master_key: 'MasterKey',
pubkey_buf: Dict[str, bytes],
exit_queue: 'Queue') -> None:
"""Decrypt command assembly packet and process command."""
assembly_packet, account, origin = decrypt_assembly_packet(assembly_ct, window_list, contact_list, key_list)
cmd_packet = packet_list.get_packet(account, origin, COMMAND)
cmd_packet.add_packet(assembly_packet)
if not cmd_packet.is_complete:
raise FunctionReturn("Incomplete command.", output=False)
command = cmd_packet.assemble_command_packet()
header = command[:2]
cmd_data = command[2:]
# Keyword Function to run ( Parameters )
# -----------------------------------------------------------------------------------------------------------------------------------------
d = {LOCAL_KEY_INSTALLED_HEADER: (local_key_installed, ts, window_list, contact_list ),
SHOW_WINDOW_ACTIVITY_HEADER: (show_win_activity, window_list ),
WINDOW_SELECT_HEADER: (select_win_cmd, cmd_data, window_list ),
CLEAR_SCREEN_HEADER: (clear_active_window, ),
RESET_SCREEN_HEADER: (reset_active_window, cmd_data, window_list ),
EXIT_PROGRAM_HEADER: (exit_tfc, exit_queue),
LOG_DISPLAY_HEADER: (log_command, cmd_data, None, window_list, contact_list, group_list, settings, master_key),
LOG_EXPORT_HEADER: (log_command, cmd_data, ts, window_list, contact_list, group_list, settings, master_key),
LOG_REMOVE_HEADER: (remove_log, cmd_data, settings, master_key),
CHANGE_MASTER_K_HEADER: (change_master_key, ts, window_list, contact_list, group_list, key_list, settings, master_key),
CHANGE_NICK_HEADER: (change_nick, cmd_data, ts, window_list, contact_list, ),
CHANGE_SETTING_HEADER: (change_setting, cmd_data, ts, window_list, contact_list, group_list, settings, ),
CHANGE_LOGGING_HEADER: (contact_setting, cmd_data, ts, window_list, contact_list, group_list, header ),
CHANGE_FILE_R_HEADER: (contact_setting, cmd_data, ts, window_list, contact_list, group_list, header ),
CHANGE_NOTIFY_HEADER: (contact_setting, cmd_data, ts, window_list, contact_list, group_list, header ),
GROUP_CREATE_HEADER: (group_create, cmd_data, ts, window_list, contact_list, group_list, settings ),
GROUP_ADD_HEADER: (group_add_member, cmd_data, ts, window_list, contact_list, group_list, settings ),
GROUP_REMOVE_M_HEADER: (group_rm_member, cmd_data, ts, window_list, contact_list, group_list, ),
GROUP_DELETE_HEADER: (remove_group, cmd_data, ts, window_list, group_list, ),
KEY_EX_X25519_HEADER: (add_x25519_keys, cmd_data, ts, window_list, contact_list, key_list, settings, pubkey_buf),
KEY_EX_PSK_TX_HEADER: (add_psk_tx_keys, cmd_data, ts, window_list, contact_list, key_list, settings, pubkey_buf),
KEY_EX_PSK_RX_HEADER: (import_psk_rx_keys, cmd_data, ts, window_list, contact_list, key_list, settings ),
CONTACT_REMOVE_HEADER: (remove_contact, cmd_data, ts, window_list, contact_list, group_list, key_list, ),
WIPE_USER_DATA_HEADER: (wipe, exit_queue)} # type: Dict[bytes, Any]
try:
from_dict = d[header]
except KeyError:
raise FunctionReturn("Error: Received an invalid command.")
func = from_dict[0]
parameters = from_dict[1:]
func(*parameters)
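The keyword-to-handler table above is a dispatch-dict: each header maps to a tuple whose first element is the function to run and whose tail is its argument list. A stripped-down sketch of the same lookup-and-call pattern (hypothetical table entries, not TFC's real headers):

```python
from typing import Any, Dict, Tuple

def dispatch(header: bytes, table: Dict[bytes, Tuple[Any, ...]]) -> Any:
    """Look up the handler tuple for a header and call its first
    element with the remaining elements as arguments."""
    try:
        entry = table[header]
    except KeyError:
        raise ValueError("Error: Received an invalid command.")
    func, *params = entry
    return func(*params)
```

This keeps the routing logic in one flat table, so adding a command is a one-line change rather than another `elif` branch.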
def show_win_activity(window_list: 'WindowList') -> None:
"""Show number of unread messages in each window."""
unread_wins = [w for w in window_list if (w.uid != LOCAL_ID and w.unread_messages > 0)]
print_list = ["Window activity"] if unread_wins else ["No window activity"]
print_list += [f"{w.name}: {w.unread_messages}" for w in unread_wins]
box_print(print_list)
print_on_previous_line(reps=(len(print_list) + 2), delay=1.5)
def select_win_cmd(cmd_data: bytes, window_list: 'WindowList') -> None:
"""Select window specified by TxM."""
window_uid = cmd_data.decode()
if window_uid == WIN_TYPE_FILE:
clear_screen()
window_list.select_rx_window(window_uid)
def clear_active_window() -> None:
"""Clear active screen."""
clear_screen()
def reset_active_window(cmd_data: bytes, window_list: 'WindowList') -> None:
"""Reset window specified by TxM."""
uid = cmd_data.decode()
window = window_list.get_window(uid)
window.reset_window()
os.system('reset')
def exit_tfc(exit_queue: 'Queue') -> None:
"""Exit TFC."""
exit_queue.put(EXIT)
def log_command(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
master_key: 'MasterKey') -> None:
"""Display or export logfile for active window."""
export = ts is not None
win_uid, no_msg_bytes = cmd_data.split(US_BYTE)
no_messages = bytes_to_int(no_msg_bytes)
window = window_list.get_window(win_uid.decode())
access_logs(window, contact_list, group_list, settings, master_key, msg_to_load=no_messages, export=export)
if export:
local_win = window_list.get_window(LOCAL_ID)
local_win.add_new(ts, f"Exported logfile of {window.type_print} {window.name}.", output=True)
def remove_log(cmd_data: bytes,
settings: 'Settings',
master_key: 'MasterKey') -> None:
"""Remove log entries for contact."""
window_name = cmd_data.decode()
remove_logs(window_name, settings, master_key)
def change_master_key(ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
key_list: 'KeyList',
settings: 'Settings',
master_key: 'MasterKey') -> None:
"""Prompt user for new master password and derive new master key from that."""
try:
old_master_key = master_key.master_key[:]
master_key.new_master_key()
phase("Re-encrypting databases")
ensure_dir(DIR_USER_DATA)
file_name = f'{DIR_USER_DATA}{settings.software_operation}_logs'
if os.path.isfile(file_name):
re_encrypt(old_master_key, master_key.master_key, settings)
key_list.store_keys()
settings.store_settings()
contact_list.store_contacts()
group_list.store_groups()
phase(DONE)
box_print("Master key successfully changed.", head=1)
clear_screen(delay=1.5)
local_win = window_list.get_window(LOCAL_ID)
local_win.add_new(ts, "Changed RxM master key.")
except KeyboardInterrupt:
raise FunctionReturn("Password change aborted.", delay=1, head=3, tail_clear=True)
def change_nick(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList') -> None:
"""Change contact nick."""
account, nick = [f.decode() for f in cmd_data.split(US_BYTE)]
window = window_list.get_window(account)
window.name = nick
window.handle_dict[account] = (contact_list.get_contact(account).nick
if contact_list.has_contact(account) else account)
contact_list.get_contact(account).nick = nick
contact_list.store_contacts()
cmd_win = window_list.get_local_window()
cmd_win.add_new(ts, f"Changed {account} nick to '{nick}'", output=True)
def change_setting(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings') -> None:
"""Change TFC setting."""
setting, value = [f.decode() for f in cmd_data.split(US_BYTE)]
if setting not in settings.key_list:
raise FunctionReturn(f"Error: Invalid setting '{setting}'")
settings.change_setting(setting, value, contact_list, group_list)
local_win = window_list.get_local_window()
local_win.add_new(ts, f"Changed setting {setting} to '{value}'", output=True)
def contact_setting(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
header: bytes) -> None:
"""Change contact/group related setting."""
setting, win_uid = [f.decode() for f in cmd_data.split(US_BYTE)]
attr, desc, file_cmd = {CHANGE_LOGGING_HEADER: ('log_messages', 'Logging of messages', False),
CHANGE_FILE_R_HEADER: ('file_reception', 'Reception of files', True ),
CHANGE_NOTIFY_HEADER: ('notifications', 'Message notifications', False)}[header]
action, b_value = {ENABLE: ('enable', True),
DISABLE: ('disable', False)}[setting.lower().encode()]
if setting.isupper():
# Change settings for all contacts (and groups)
enabled = [getattr(c, attr) for c in contact_list.get_list_of_contacts()]
enabled += [getattr(g, attr) for g in group_list] if not file_cmd else []
status = "was already" if (( all(enabled) and b_value)
or (not any(enabled) and not b_value)) else 'has been'
specifier = 'every '
w_type = 'contact'
w_name = '.' if file_cmd else ' and group.'
# Set values
for c in contact_list.get_list_of_contacts():
setattr(c, attr, b_value)
contact_list.store_contacts()
if not file_cmd:
for g in group_list:
setattr(g, attr, b_value)
group_list.store_groups()
else:
# Change setting for contacts in specified window
if not window_list.has_window(win_uid):
raise FunctionReturn(f"Error: Found no window for '{win_uid}'")
window = window_list.get_window(win_uid)
group_window = window.type == WIN_TYPE_GROUP
contact_window = window.type == WIN_TYPE_CONTACT
if contact_window:
target = contact_list.get_contact(win_uid) # type: Union[Contact, Group]
else:
target = group_list.get_group(win_uid)
if file_cmd:
enabled = [getattr(m, attr) for m in window.window_contacts]
changed = not all(enabled) if b_value else any(enabled)
else:
changed = getattr(target, attr) != b_value
status = "has been" if changed else "was already"
specifier = 'members in ' if (file_cmd and group_window) else ''
w_type = window.type_print
w_name = f" {window.name}."
# Set values
if contact_window or (group_window and file_cmd):
for c in window.window_contacts:
setattr(c, attr, b_value)
contact_list.store_contacts()
elif window.type == WIN_TYPE_GROUP:
setattr(group_list.get_group(win_uid), attr, b_value)
group_list.store_groups()
message = f"{desc} {status} {action}d for {specifier}{w_type}{w_name}"
local_win = window_list.get_window(LOCAL_ID)
local_win.add_new(ts, message, output=True)
def remove_contact(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
key_list: 'KeyList') -> None:
"""Remove contact from RxM."""
rx_account = cmd_data.decode()
key_list.remove_keyset(rx_account)
window_list.remove_window(rx_account)
if not contact_list.has_contact(rx_account):
raise FunctionReturn(f"RxM has no account '{rx_account}' to remove.")
nick = contact_list.get_contact(rx_account).nick
contact_list.remove_contact(rx_account)
message = f"Removed {nick} from contacts."
box_print(message, head=1, tail=1)
local_win = window_list.get_local_window()
local_win.add_new(ts, message)
if any([g.remove_members([rx_account]) for g in group_list]):
box_print(f"Removed {rx_account} from group(s).", tail=1)
def wipe(exit_queue: 'Queue') -> None:
"""Reset terminals, wipe all user data on RxM and power off system.
No effective RAM-overwriting tool currently exists, but as long as
TxM/RxM use full disk encryption and DDR3 memory, recovery of user
data becomes impossible very quickly:
https://www1.cs.fau.de/filepool/projects/coldboot/fares_coldboot.pdf
"""
os.system('reset')
exit_queue.put(WIPE)

View File

@ -1,165 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import typing
from src.common.exceptions import FunctionReturn
from src.common.output import box_print, group_management_print
from src.common.statics import *
if typing.TYPE_CHECKING:
from datetime import datetime
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_settings import Settings
from src.rx.windows import WindowList
def group_create(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings') -> None:
"""Create a new group."""
fields = [f.decode() for f in cmd_data.split(US_BYTE)]
group_name = fields[0]
purp_accounts = set(fields[1:])
accounts = set(contact_list.get_list_of_accounts())
accepted = list(accounts & purp_accounts)
rejected = list(purp_accounts - accounts)
if len(accepted) > settings.max_number_of_group_members:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_group_members} members per group.")
if len(group_list) == settings.max_number_of_groups:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_groups} groups.")
accepted_contacts = [contact_list.get_contact(c) for c in accepted]
group_list.add_group(group_name,
settings.log_messages_by_default,
settings.show_notifications_by_default,
accepted_contacts)
window = window_list.get_window(group_name)
window.window_contacts = accepted_contacts
window.message_log = []
window.unread_messages = 0
window.create_handle_dict()
group_management_print(NEW_GROUP, accepted, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
local_win = window_list.get_window(LOCAL_ID)
local_win.add_new(ts, f"Created new group {group_name}.")
def group_add_member(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings') -> None:
"""Add member(s) to group."""
fields = [f.decode() for f in cmd_data.split(US_BYTE)]
group_name = fields[0]
purp_accounts = set(fields[1:])
accounts = set(contact_list.get_list_of_accounts())
before_adding = set(group_list.get_group(group_name).get_list_of_member_accounts())
ok_accounts = set(accounts & purp_accounts)
new_in_group_set = set(ok_accounts - before_adding)
end_assembly = list(before_adding | new_in_group_set)
rejected = list(purp_accounts - accounts)
already_in_g = list(before_adding & purp_accounts)
new_in_group = list(new_in_group_set)
if len(end_assembly) > settings.max_number_of_group_members:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_group_members} members per group.")
group = group_list.get_group(group_name)
group.add_members([contact_list.get_contact(a) for a in new_in_group])
window = window_list.get_window(group_name)
window.add_contacts(new_in_group)
window.create_handle_dict()
group_management_print(ADDED_MEMBERS, new_in_group, contact_list, group_name)
group_management_print(ALREADY_MEMBER, already_in_g, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
local_win = window_list.get_window(LOCAL_ID)
local_win.add_new(ts, f"Added members to group {group_name}.")
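The member bookkeeping in `group_add_member` is plain set algebra. A self-contained sketch of the same partitioning (illustrative names; it returns sets where the function above converts to lists):

```python
def partition_new_members(known: set, current: set, purported: set) -> dict:
    """Split purported member accounts the way group_add_member does:
    unknown accounts are rejected, known accounts already in the group
    are reported separately, and the rest are the genuinely new members."""
    ok          = purported & known     # accounts that exist as contacts
    new_members = ok - current          # not yet in the group
    return {'rejected':     purported - known,
            'already_in_g': current & purported,
            'new_in_group': new_members,
            'end_assembly': current | new_members}
```

The `end_assembly` set is what gets checked against `max_number_of_group_members` before any state is mutated.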
def group_rm_member(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
group_list: 'GroupList') -> None:
"""Remove member(s) from group."""
fields = [f.decode() for f in cmd_data.split(US_BYTE)]
group_name = fields[0]
purp_accounts = set(fields[1:])
accounts = set(contact_list.get_list_of_accounts())
before_removal = set(group_list.get_group(group_name).get_list_of_member_accounts())
ok_accounts_set = set(purp_accounts & accounts)
removable_set = set(before_removal & ok_accounts_set)
not_in_group = list(ok_accounts_set - before_removal)
rejected = list(purp_accounts - accounts)
removable = list(removable_set)
group = group_list.get_group(group_name)
group.remove_members(removable)
window = window_list.get_window(group_name)
window.remove_contacts(removable)
group_management_print(REMOVED_MEMBERS, removable, contact_list, group_name)
group_management_print(NOT_IN_GROUP, not_in_group, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
local_win = window_list.get_window(LOCAL_ID)
local_win.add_new(ts, f"Removed members from group {group_name}.")
def remove_group(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
group_list: 'GroupList') -> None:
"""Remove group."""
group_name = cmd_data.decode()
window_list.remove_window(group_name)
if group_name not in group_list.get_list_of_group_names():
raise FunctionReturn(f"RxM has no group '{group_name}' to remove.")
group_list.remove_group(group_name)
message = f"Removed group {group_name}."
box_print(message, head=1, tail=1)
local_win = window_list.get_window(LOCAL_ID)
local_win.add_new(ts, message)

View File

@ -1,150 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import base64
import binascii
import os.path
import typing
import zlib
import nacl.exceptions
from src.common.crypto import auth_and_decrypt
from src.common.encoding import bytes_to_str
from src.common.exceptions import FunctionReturn
from src.common.input import get_b58_key
from src.common.misc import ensure_dir
from src.common.output import box_print, c_print, phase, print_on_previous_line
from src.common.statics import *
if typing.TYPE_CHECKING:
from datetime import datetime
from src.common.db_settings import Settings
from src.rx.windows import WindowList
def store_unique(f_data: bytes, f_dir: str, f_name: str) -> str:
"""Store file under unique filename.
Add trailing counter .# to duplicate files.
"""
ensure_dir(f_dir)
if os.path.isfile(f_dir + f_name):
ctr = 1
while os.path.isfile(f_dir + f_name + f'.{ctr}'):
ctr += 1
f_name += f'.{ctr}'
with open(f_dir + f_name, 'wb+') as f:
f.write(f_data)
return f_name
def process_received_file(payload: bytes, nick: str) -> None:
"""Process received file assembly packets."""
try:
f_name_b, f_data = payload.split(US_BYTE)
except ValueError:
raise FunctionReturn("Error: Received file had invalid structure.")
try:
f_name = f_name_b.decode()
except UnicodeError:
raise FunctionReturn("Error: Received file name had invalid encoding.")
if not f_name.isprintable() or not f_name:
raise FunctionReturn("Error: Received file had an invalid name.")
try:
f_data = base64.b85decode(f_data)
except (binascii.Error, ValueError):
raise FunctionReturn("Error: Received file had invalid encoding.")
file_ct = f_data[:-KEY_LENGTH]
file_key = f_data[-KEY_LENGTH:]
if len(file_key) != KEY_LENGTH:
raise FunctionReturn("Error: Received file had an invalid key.")
try:
file_pt = auth_and_decrypt(file_ct, file_key, soft_e=True)
except nacl.exceptions.CryptoError:
raise FunctionReturn("Error: Decryption of file data failed.")
try:
file_dc = zlib.decompress(file_pt)
except zlib.error:
raise FunctionReturn("Error: Decompression of file data failed.")
file_dir = f'{DIR_RX_FILES}{nick}/'
final_name = store_unique(file_dc, file_dir, f_name)
box_print(f"Stored file from {nick} as '{final_name}'")
def process_imported_file(ts: 'datetime',
packet: bytes,
window_list: 'WindowList',
settings: 'Settings'):
"""Decrypt and store imported file."""
while True:
try:
print('')
key = get_b58_key(B58_FILE_KEY, settings)
except KeyboardInterrupt:
raise FunctionReturn("File import aborted.", head=2)
try:
phase("Decrypting file", head=1)
file_pt = auth_and_decrypt(packet[1:], key, soft_e=True)
phase(DONE)
break
except (nacl.exceptions.CryptoError, nacl.exceptions.ValueError):
phase('ERROR', done=True)
c_print("Invalid decryption key. Try again.")
print_on_previous_line(reps=7, delay=1.5)
except KeyboardInterrupt:
phase('ABORT', done=True)
raise FunctionReturn("File import aborted.")
try:
phase("Decompressing file")
file_dc = zlib.decompress(file_pt)
phase(DONE)
except zlib.error:
phase('ERROR', done=True)
raise FunctionReturn("Error: Decompression of file data failed.")
try:
f_name = bytes_to_str(file_dc[:PADDED_UTF32_STR_LEN])
except UnicodeError:
raise FunctionReturn("Error: Received file name had invalid encoding.")
if not f_name.isprintable() or not f_name:
raise FunctionReturn("Error: Received file had an invalid name.")
f_data = file_dc[PADDED_UTF32_STR_LEN:]
final_name = store_unique(f_data, DIR_IMPORTED, f_name)
message = f"Stored imported file as '{final_name}'"
box_print(message, head=1)
local_win = window_list.get_local_window()
local_win.add_new(ts, message)

View File

@ -1,286 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import os.path
import pipes
import subprocess
import typing
from typing import Dict
import nacl.exceptions
from src.common.crypto import argon2_kdf, auth_and_decrypt, csprng
from src.common.db_masterkey import MasterKey
from src.common.encoding import b58encode
from src.common.exceptions import FunctionReturn
from src.common.input import get_b58_key
from src.common.misc import split_string
from src.common.output import box_print, c_print, clear_screen, phase, print_key, print_on_previous_line
from src.common.path import ask_path_gui
from src.common.statics import *
if typing.TYPE_CHECKING:
from datetime import datetime
from src.common.db_contacts import ContactList
from src.common.db_keys import KeyList
from src.common.db_settings import Settings
from src.rx.windows import WindowList
# Local key
def process_local_key(ts: 'datetime',
packet: bytes,
window_list: 'WindowList',
contact_list: 'ContactList',
key_list: 'KeyList',
settings: 'Settings') -> None:
"""Decrypt local key packet and add local contact/keyset."""
bootstrap = not key_list.has_local_key()
try:
while True:
clear_screen()
box_print("Received encrypted local key", tail=1)
kdk = get_b58_key(B58_LOCAL_KEY, settings)
try:
pt = auth_and_decrypt(packet[1:], key=kdk, soft_e=True)
break
except nacl.exceptions.CryptoError:
if bootstrap:
raise FunctionReturn("Error: Incorrect key decryption key.", delay=1.5)
c_print("Incorrect key decryption key.", head=1)
clear_screen(delay=1.5)
key = pt[0:32]
hek = pt[32:64]
conf_code = pt[64:65]
# Add local contact to contact list database
contact_list.add_contact(LOCAL_ID, LOCAL_ID, LOCAL_ID,
bytes(FINGERPRINT_LEN), bytes(FINGERPRINT_LEN),
False, False, True)
# Add local keyset to keyset database
key_list.add_keyset(rx_account=LOCAL_ID,
tx_key=key,
rx_key=csprng(),
tx_hek=hek,
rx_hek=csprng())
box_print(f"Confirmation code for TxM: {conf_code.hex()}", head=1)
local_win = window_list.get_local_window()
local_win.add_new(ts, "Added new local key.")
if bootstrap:
window_list.active_win = local_win
except KeyboardInterrupt:
raise FunctionReturn("Local key setup aborted.", delay=1, head=3, tail_clear=True)
def local_key_installed(ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList') -> None:
"""Clear local key bootstrap process from screen."""
message = "Successfully completed local key exchange."
local_win = window_list.get_window(LOCAL_ID)
local_win.add_new(ts, message)
box_print(message)
clear_screen(delay=1)
if not contact_list.has_contacts():
c_print("Waiting for new contacts", head=1, tail=1)
# X25519
def process_public_key(ts: 'datetime',
packet: bytes,
window_list: 'WindowList',
settings: 'Settings',
pubkey_buf: Dict[str, bytes]) -> None:
"""Display contact's public key and add it to buffer."""
pub_key = packet[1:33]
origin = packet[33:34]
try:
account = packet[34:].decode()
except UnicodeError:
raise FunctionReturn("Error! Account for received public key had invalid encoding.")
if origin not in [ORIGIN_CONTACT_HEADER, ORIGIN_USER_HEADER]:
raise FunctionReturn("Error! Received public key had an invalid origin header.")
if origin == ORIGIN_CONTACT_HEADER:
pubkey_buf[account] = pub_key
print_key(f"Received public key from {account}:", pub_key, settings)
local_win = window_list.get_local_window()
pub_key_b58 = ' '.join(split_string(b58encode(pub_key), item_len=(51 if settings.local_testing_mode else 3)))
local_win.add_new(ts, f"Received public key from {account}: {pub_key_b58}")
elif origin == ORIGIN_USER_HEADER and account in pubkey_buf:
clear_screen()
print_key(f"Public key for {account}:", pubkey_buf[account], settings)
def add_x25519_keys(packet: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
key_list: 'KeyList',
settings: 'Settings',
pubkey_buf: Dict[str, bytes]) -> None:
"""Add contact and their X25519 keys."""
tx_key = packet[0:32]
tx_hek = packet[32:64]
rx_key = packet[64:96]
rx_hek = packet[96:128]
account, nick = [f.decode() for f in packet[128:].split(US_BYTE)]
contact_list.add_contact(account, DUMMY_USER, nick,
bytes(FINGERPRINT_LEN),
bytes(FINGERPRINT_LEN),
settings.log_messages_by_default,
settings.accept_files_by_default,
settings.show_notifications_by_default)
key_list.add_keyset(account, tx_key, rx_key, tx_hek, rx_hek)
pubkey_buf.pop(account, None)
message = f"Added X25519 keys for {nick} ({account})."
local_win = window_list.get_window(LOCAL_ID)
local_win.add_new(ts, message)
box_print(message)
clear_screen(delay=1)
# PSK
def add_psk_tx_keys(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
key_list: 'KeyList',
settings: 'Settings',
pubkey_buf: Dict[str, bytes]) -> None:
"""Add contact and Tx-PSKs."""
tx_key = cmd_data[0:32]
tx_hek = cmd_data[32:64]
account, nick = [f.decode() for f in cmd_data[64:].split(US_BYTE)]
contact_list.add_contact(account, DUMMY_USER, nick,
bytes(FINGERPRINT_LEN), bytes(FINGERPRINT_LEN),
settings.log_messages_by_default,
settings.accept_files_by_default,
settings.show_notifications_by_default)
# The Rx-side keys are set as null-byte strings to indicate they have not
# been added yet. This does not allow existential forgeries as
# decrypt_assembly_packet does not allow use of zero-keys for decryption.
key_list.add_keyset(account,
tx_key=tx_key,
rx_key=bytes(KEY_LENGTH),
tx_hek=tx_hek,
rx_hek=bytes(KEY_LENGTH))
pubkey_buf.pop(account, None)
message = f"Added Tx-PSK for {nick} ({account})."
local_win = window_list.get_window(LOCAL_ID)
local_win.add_new(ts, message)
box_print(message)
clear_screen(delay=1)
def import_psk_rx_keys(cmd_data: bytes,
ts: 'datetime',
window_list: 'WindowList',
contact_list: 'ContactList',
key_list: 'KeyList',
settings: 'Settings') -> None:
"""Import Rx-PSK of contact."""
account = cmd_data.decode()
if not contact_list.has_contact(account):
raise FunctionReturn(f"Error: Unknown account '{account}'")
contact = contact_list.get_contact(account)
psk_file = ask_path_gui(f"Select PSK for {contact.nick}", settings, get_file=True)
with open(psk_file, 'rb') as f:
psk_data = f.read()
if len(psk_data) != PSK_FILE_SIZE:
raise FunctionReturn("Error: Invalid PSK data in file.")
salt = psk_data[:ARGON2_SALT_LEN]
ct_tag = psk_data[ARGON2_SALT_LEN:]
while True:
try:
password = MasterKey.get_password("PSK password")
phase("Deriving key decryption key", head=2)
kdk, _ = argon2_kdf(password, salt, parallelism=1)
psk_pt = auth_and_decrypt(ct_tag, key=kdk, soft_e=True)
phase(DONE)
break
except nacl.exceptions.CryptoError:
print_on_previous_line()
c_print("Invalid password. Try again.", head=1)
print_on_previous_line(reps=5, delay=1.5)
except KeyboardInterrupt:
raise FunctionReturn("PSK import aborted.", head=2)
rx_key = psk_pt[0:32]
rx_hek = psk_pt[32:64]
if any(k == bytes(KEY_LENGTH) for k in [rx_key, rx_hek]):
raise FunctionReturn("Error: Received invalid keys from contact.")
keyset = key_list.get_keyset(account)
keyset.rx_key = rx_key
keyset.rx_hek = rx_hek
key_list.store_keys()
# pipes.quote protects against shell injection. The source of the
# command's parameter is the user's own RxM and is therefore trusted,
# but quoting is still good practice.
subprocess.Popen(f"shred -n 3 -z -u {pipes.quote(psk_file)}", shell=True).wait()
if os.path.isfile(psk_file):
box_print(f"Warning! Overwriting of PSK ({psk_file}) failed. Press <Enter> to continue.", manual_proceed=True)
local_win = window_list.get_local_window()
message = f"Added Rx-PSK for {contact.nick} ({account})."
local_win.add_new(ts, message)
box_print([message, '', "Warning!",
"Physically destroy the keyfile transmission ",
"media to ensure that no data escapes RxM!"], head=1, tail=1)


@ -1,225 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import typing
from typing import Any, List, Tuple
from src.common.db_logs import write_log_entry
from src.common.exceptions import FunctionReturn
from src.common.output import box_print
from src.common.statics import *
from src.rx.packet import decrypt_assembly_packet
if typing.TYPE_CHECKING:
from datetime import datetime
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_keys import KeyList
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
from src.rx.packet import PacketList
from src.rx.windows import WindowList
def process_message(ts: 'datetime',
assembly_packet_ct: bytes,
window_list: 'WindowList',
packet_list: 'PacketList',
contact_list: 'ContactList',
key_list: 'KeyList',
group_list: 'GroupList',
settings: 'Settings',
master_key: 'MasterKey') -> None:
"""Process received private / group message.
Group management messages are automatically formatted and
redirected to windows based on the group configuration managed by the user.
"""
assembly_packet, account, origin = decrypt_assembly_packet(assembly_packet_ct, window_list, contact_list, key_list)
p_type = FILE if assembly_packet[:1].isupper() else MESSAGE
packet = packet_list.get_packet(account, origin, p_type)
logging = contact_list.get_contact(account).log_messages
def log_masking_packets(completed: bool = False) -> None:
"""Add masking packets to log file.
If logging and logfile masking are enabled, in the case of an
erroneous transmission this function stores the correct number
of placeholder data packets to the log file, hiding the quantity
of communication that observation of the log file would reveal.
"""
if logging and settings.logfile_masking and (packet.log_masking_ctr or completed):
iterator = packet.assembly_pt_list if completed else range(packet.log_masking_ctr) # type: Any
for _ in iterator:
write_log_entry(PLACEHOLDER_DATA, account, settings, master_key, origin)
packet.log_masking_ctr = 0
try:
packet.add_packet(assembly_packet)
except FunctionReturn:
log_masking_packets()
raise
log_masking_packets()
if not packet.is_complete:
return None
try:
if p_type == FILE:
packet.assemble_and_store_file()
# Raise FunctionReturn for packets stored as placeholder data.
raise FunctionReturn("File storage complete.", output=False)
elif p_type == MESSAGE:
assembled = packet.assemble_message_packet()
header = assembled[:1]
assembled = assembled[1:]
if header == GROUP_MESSAGE_HEADER:
logging = process_group_message(assembled, ts, account, origin, group_list, window_list)
elif header == PRIVATE_MESSAGE_HEADER:
window = window_list.get_window(account)
window.add_new(ts, assembled.decode(), account, origin, output=True)
elif header == WHISPER_MESSAGE_HEADER:
window = window_list.get_window(account)
window.add_new(ts, assembled.decode(), account, origin, output=True, whisper=True)
raise FunctionReturn("Key message message complete.", output=False)
else:
process_group_management_message(header, assembled, ts, account, origin, contact_list, group_list, window_list)
raise FunctionReturn("Group management message complete.", output=False)
if logging:
for p in packet.assembly_pt_list:
write_log_entry(p, account, settings, master_key, origin)
except (FunctionReturn, UnicodeError):
log_masking_packets(completed=True)
raise
finally:
packet.clear_assembly_packets()
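The log_masking_ctr bookkeeping above can be illustrated with a toy model: failed assembly attempts accumulate a counter, and the next masking pass writes that many placeholder entries, so the log file reveals only the number of transmissions, not their outcome. PLACEHOLDER_DATA and the log store here are simplified stand-ins for TFC's real log machinery:

```python
PLACEHOLDER_DATA = b'\x00' * 255  # stand-in for the real padded placeholder packet

class ToyPacket:
    def __init__(self):
        self.assembly_pt_list = []
        self.log_masking_ctr  = 0

def log_masking_packets(packet, log_file, completed=False):
    """Write one placeholder log entry per received assembly packet."""
    iterator = packet.assembly_pt_list if completed else range(packet.log_masking_ctr)
    for _ in iterator:
        log_file.append(PLACEHOLDER_DATA)
    packet.log_masking_ctr = 0
```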
def process_group_message(assembled: bytes,
ts: 'datetime',
account: str,
origin: bytes,
group_list: 'GroupList',
window_list: 'WindowList') -> bool:
"""Process a group message."""
group_msg_id = assembled[:GROUP_MSG_ID_LEN]
group_packet = assembled[GROUP_MSG_ID_LEN:]
try:
group_name, group_message = [f.decode() for f in group_packet.split(US_BYTE)]
except (ValueError, UnicodeError):
raise FunctionReturn("Error: Received an invalid group message.")
if not group_list.has_group(group_name):
raise FunctionReturn("Error: Received message to unknown group.", output=False)
group = group_list.get_group(group_name)
window = window_list.get_window(group_name)
if not group.has_member(account):
raise FunctionReturn("Error: Account is not member of group.", output=False)
# All copies of a group message the user sends to members carry the same random group message id.
# This allows RxM to ignore the duplicate copies of the user's own outgoing message.
if origin == ORIGIN_USER_HEADER:
if window.group_msg_id != group_msg_id:
window.group_msg_id = group_msg_id
window.add_new(ts, group_message, account, origin, output=True)
elif origin == ORIGIN_CONTACT_HEADER:
window.add_new(ts, group_message, account, origin, output=True)
return group_list.get_group(group_name).log_messages
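The group_msg_id comparison above deduplicates the copies the user sends to each member: every copy of one outgoing group message carries the same id, so only the first copy is displayed, while contact messages are always shown. The pattern in isolation, with simplified stand-in origin headers and window:

```python
import os

ORIGIN_USER_HEADER    = b'u'  # stand-in header values
ORIGIN_CONTACT_HEADER = b'c'

class ToyWindow:
    def __init__(self):
        self.group_msg_id = os.urandom(16)
        self.messages     = []

def show_group_message(window, group_msg_id, message, origin):
    """Display contact messages always; user copies only once per id."""
    if origin == ORIGIN_USER_HEADER:
        if window.group_msg_id != group_msg_id:
            window.group_msg_id = group_msg_id
            window.messages.append(message)
    elif origin == ORIGIN_CONTACT_HEADER:
        window.messages.append(message)
```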
def process_group_management_message(header: bytes,
assembled: bytes,
ts: 'datetime',
account: str,
origin: bytes,
contact_list: 'ContactList',
group_list: 'GroupList',
window_list: 'WindowList') -> None:
"""Process group management message."""
local_win = window_list.get_local_window()
nick = contact_list.get_contact(account).nick
try:
group_name, *members = [f.decode() for f in assembled.split(US_BYTE)]
except UnicodeError:
raise FunctionReturn("Error: Received group management message had invalid encoding.")
if origin == ORIGIN_USER_HEADER:
raise FunctionReturn("Ignored group management message from user.", output=False)
account_in_group = group_list.has_group(group_name) and group_list.get_group(group_name).has_member(account)
def get_members() -> Tuple[List[str], str]:
known = [contact_list.get_contact(m).nick for m in members if contact_list.has_contact(m)]
unknown = [m for m in members if not contact_list.has_contact(m)]
just_len = len(max(known + unknown, key=len))
listed_m_ = [f" * {m.ljust(just_len)}" for m in (known + unknown)]
joined_m_ = ", ".join(known + unknown)
return listed_m_, joined_m_
if header == GROUP_MSG_INVITEJOIN_HEADER:
lw_msg = f"{nick} has {'joined' if account_in_group else 'invited you to'} group '{group_name}'"
message = [lw_msg]
if members:
listed_m, joined_m = get_members()
message[0] += " with following members:"
message += listed_m
lw_msg += " with members " + joined_m
box_print(message, head=1, tail=1)
local_win.add_new(ts, lw_msg)
elif header in [GROUP_MSG_MEMBER_ADD_HEADER, GROUP_MSG_MEMBER_REM_HEADER]:
if account_in_group:
action = {GROUP_MSG_MEMBER_ADD_HEADER: "added the following member(s) to",
GROUP_MSG_MEMBER_REM_HEADER: "removed the following member(s) from"}[header]
lw_msg = f"{nick} has {action} group {group_name}: "
message = [lw_msg]
if members:
listed_m, joined_m = get_members()
message += listed_m
lw_msg += joined_m
box_print(message, head=1, tail=1)
local_win.add_new(ts, lw_msg)
elif header == GROUP_MSG_EXIT_GROUP_HEADER:
if account_in_group:
box_print([f"{nick} has left group {group_name}.", '', "Warning",
"Unless you remove the contact from the group, they",
"can still read messages you send to the group."],
head=1, tail=1)
else:
raise FunctionReturn("Error: Message from contact had an invalid header.")


@ -1,123 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import os
import sys
import time
import typing
from typing import Dict, List, Tuple
from src.common.exceptions import FunctionReturn
from src.common.output import clear_screen
from src.common.statics import *
from src.rx.commands import process_command
from src.rx.files import process_imported_file
from src.rx.key_exchanges import process_local_key, process_public_key
from src.rx.messages import process_message
from src.rx.packet import PacketList
from src.rx.windows import WindowList
if typing.TYPE_CHECKING:
from datetime import datetime
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_keys import KeyList
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
def output_loop(queues: Dict[bytes, 'Queue'],
settings: 'Settings',
contact_list: 'ContactList',
key_list: 'KeyList',
group_list: 'GroupList',
master_key: 'MasterKey',
stdin_fd: int,
unittest: bool = False) -> None:
"""Process received packets according to their priority."""
l_queue = queues[LOCAL_KEY_PACKET_HEADER]
p_queue = queues[PUBLIC_KEY_PACKET_HEADER]
m_queue = queues[MESSAGE_PACKET_HEADER]
c_queue = queues[COMMAND_PACKET_HEADER]
i_queue = queues[IMPORTED_FILE_HEADER]
e_queue = queues[EXIT_QUEUE]
sys.stdin = os.fdopen(stdin_fd)
packet_buf = dict() # type: Dict[str, List[Tuple[datetime, bytes]]]
pubkey_buf = dict() # type: Dict[str, bytes]
packet_list = PacketList(settings, contact_list)
window_list = WindowList(settings, contact_list, group_list, packet_list)
clear_screen()
while True:
try:
if l_queue.qsize() != 0:
ts, packet = l_queue.get()
process_local_key(ts, packet, window_list, contact_list, key_list, settings)
if not contact_list.has_local_contact():
time.sleep(0.01)
continue
if c_queue.qsize() != 0:
ts, packet = c_queue.get()
process_command(ts, packet, window_list, packet_list, contact_list, key_list, group_list, settings, master_key, pubkey_buf, e_queue)
continue
if p_queue.qsize() != 0:
ts, packet = p_queue.get()
process_public_key(ts, packet, window_list, settings, pubkey_buf)
continue
if window_list.active_win is not None and window_list.active_win.uid == WIN_TYPE_FILE:
window_list.active_win.redraw_file_win()
# Prioritize buffered messages
for rx_account in packet_buf:
if contact_list.has_contact(rx_account) and key_list.has_rx_key(rx_account) and packet_buf[rx_account]:
ts, packet = packet_buf[rx_account].pop(0)
process_message(ts, packet, window_list, packet_list, contact_list, key_list, group_list, settings, master_key)
continue
if m_queue.qsize() != 0:
ts, packet = m_queue.get()
rx_account = packet[PACKET_LENGTH:].decode()
if contact_list.has_contact(rx_account) and key_list.has_rx_key(rx_account):
process_message(ts, packet, window_list, packet_list, contact_list, key_list, group_list, settings, master_key)
else:
packet_buf.setdefault(rx_account, []).append((ts, packet))
continue
if i_queue.qsize() != 0:
ts, packet = i_queue.get()
process_imported_file(ts, packet, window_list, settings)
continue
time.sleep(0.01)
if unittest and queues[UNITTEST_QUEUE].qsize() != 0:
break
except (FunctionReturn, KeyboardInterrupt):
pass
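output_loop above buffers messages that arrive before the contact or its Rx-key exists, then drains the buffer once the keys become available. The setdefault-based buffering pattern by itself; the key store here is a plain set standing in for KeyList:

```python
def route_message(packet_buf, known_keys, account, packet, deliver):
    """Deliver packet if keys exist for the account, else buffer it."""
    if account in known_keys:
        deliver(account, packet)
    else:
        packet_buf.setdefault(account, []).append(packet)

def drain_buffer(packet_buf, known_keys, deliver):
    """Deliver buffered packets, oldest first, for accounts with keys."""
    for account in list(packet_buf):
        while account in known_keys and packet_buf[account]:
            deliver(account, packet_buf[account].pop(0))
```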


@ -1,385 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import datetime
import struct
import typing
import zlib
from typing import Callable, Dict, Generator, Iterable, List, Sized, Tuple
import nacl.exceptions
from src.common.crypto import auth_and_decrypt, hash_chain, rm_padding_bytes
from src.common.encoding import bytes_to_int
from src.common.exceptions import FunctionReturn
from src.common.input import yes
from src.common.misc import readable_size
from src.common.output import box_print, c_print
from src.common.statics import *
from src.rx.files import process_received_file
if typing.TYPE_CHECKING:
from src.common.db_contacts import Contact, ContactList
from src.common.db_keys import KeyList
from src.common.db_settings import Settings
from src.rx.windows import RxWindow, WindowList
def get_packet_values(packet: bytes,
window: 'RxWindow',
contact_list: 'ContactList') -> Tuple[bytes, str, str, str, str, str]:
"""Load packet-related variables."""
if packet[:1] == COMMAND_PACKET_HEADER:
origin = ORIGIN_USER_HEADER
direction = "from"
key_dir = TX
p_type = "command"
account = LOCAL_ID
nick = "local TxM"
else:
origin = packet[345:346]
if origin not in [ORIGIN_USER_HEADER, ORIGIN_CONTACT_HEADER]:
raise FunctionReturn("Error: Received packet had an invalid origin-header.", window=window)
direction, key_dir = ("sent to", TX) if origin == ORIGIN_USER_HEADER else ("from", RX)
p_type = "packet"
account = packet[346:].decode()
nick = contact_list.get_contact(account).nick
if account == LOCAL_ID:
raise FunctionReturn("Warning! Received packet masqueraded as command.", window=window)
return origin, direction, key_dir, p_type, account, nick
def process_offset(offset: int,
origin: bytes,
direction: str,
nick: str,
window: 'RxWindow') -> None:
"""Display warnings about increased offsets.
If the offset exceeds the threshold, ask
the user to confirm hash ratchet catch-up.
"""
if offset > HARAC_WARN_THRESHOLD and origin == ORIGIN_CONTACT_HEADER:
box_print([f"Warning! {offset} packets from {nick} were not received.",
f"This might indicate that {offset} most recent packets were ",
f"lost during transmission, or that the contact is attempting ",
f"a DoS attack. You can wait for TFC to attempt to decrypt the ",
"packet, but it might take a very long time or even forever."])
if not yes("Proceed with the decryption?", tail=1):
raise FunctionReturn(f"Dropped packet from {nick}.", window=window)
elif offset:
box_print(f"Warning! {offset} packet{'s' if offset > 1 else ''} {direction} {nick} were not received.")
def decrypt_assembly_packet(packet: bytes,
window_list: 'WindowList',
contact_list: 'ContactList',
key_list: 'KeyList') -> Tuple[bytes, str, bytes]:
"""Decrypt assembly packet from contact/local TxM."""
enc_harac = packet[1:49]
enc_msg = packet[49:345]
window = window_list.get_local_window()
origin, direction, key_dir, p_type, account, nick = get_packet_values(packet, window, contact_list)
# Load keys
keyset = key_list.get_keyset(account)
header_key = getattr(keyset, f'{key_dir}_hek')
message_key = getattr(keyset, f'{key_dir}_key')
if any(k == bytes(KEY_LENGTH) for k in [header_key, message_key]):
raise FunctionReturn("Warning! Loaded zero-key for packet decryption.")
# Decrypt hash ratchet counter
try:
harac_bytes = auth_and_decrypt(enc_harac, header_key, soft_e=True)
except nacl.exceptions.CryptoError:
raise FunctionReturn(f"Warning! Received {p_type} {direction} {nick} had an invalid hash ratchet MAC.", window=window)
# Catch up with hash ratchet offset
purp_harac = bytes_to_int(harac_bytes)
stored_harac = getattr(keyset, f'{key_dir}_harac')
offset = purp_harac - stored_harac
if offset < 0:
raise FunctionReturn(f"Warning! Received {p_type} {direction} {nick} had an expired hash ratchet counter.", window=window)
process_offset(offset, origin, direction, nick, window)
for _ in range(offset):
message_key = hash_chain(message_key)
# Decrypt packet
try:
assembly_packet = auth_and_decrypt(enc_msg, message_key, soft_e=True)
except nacl.exceptions.CryptoError:
raise FunctionReturn(f"Warning! Received {p_type} {direction} {nick} had an invalid MAC.", window=window)
# Update keys in database
keyset.update_key(key_dir, hash_chain(message_key), offset + 1)
return assembly_packet, account, origin
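The offset loop above advances the receiving key through the hash chain until it matches the purported counter. A sketch of that catch-up, modeling hash_chain with SHA-256 purely for illustration (TFC's real hash_chain lives in src.common.crypto and differs):

```python
import hashlib

def hash_chain(key: bytes) -> bytes:
    """Stand-in one-way ratchet step; TFC's real function differs."""
    return hashlib.sha256(key).digest()

def catch_up_key(message_key: bytes, stored_harac: int, purp_harac: int) -> bytes:
    """Advance the message key by the hash ratchet counter offset."""
    offset = purp_harac - stored_harac
    if offset < 0:
        raise ValueError("Expired hash ratchet counter.")
    for _ in range(offset):
        message_key = hash_chain(message_key)
    return message_key
```

Because the chain is one-way, an expired (negative-offset) counter cannot be honored: old keys are unrecoverable by design.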
class Packet(object):
"""Packet objects collect and keep track of received assembly packets."""
def __init__(self,
account: str,
contact: 'Contact',
origin: bytes,
p_type: str,
settings: 'Settings') -> None:
"""Create a new Packet object."""
self.account = account
self.contact = contact
self.origin = origin
self.type = p_type
self.settings = settings
# File transmission metadata
self.packets = None # type: int
self.time = None # type: str
self.size = None # type: str
self.name = None # type: str
self.sh = dict(message=M_S_HEADER, file=F_S_HEADER, command=C_S_HEADER)[self.type]
self.lh = dict(message=M_L_HEADER, file=F_L_HEADER, command=C_L_HEADER)[self.type]
self.ah = dict(message=M_A_HEADER, file=F_A_HEADER, command=C_A_HEADER)[self.type]
self.eh = dict(message=M_E_HEADER, file=F_E_HEADER, command=C_E_HEADER)[self.type]
self.ch = dict(message=M_C_HEADER, file=F_C_HEADER, command=C_C_HEADER)[self.type]
self.nh = dict(message=P_N_HEADER, file=P_N_HEADER, command=C_N_HEADER)[self.type]
self.assembly_pt_list = [] # type: List[bytes]
self.log_masking_ctr = 0 # type: int
self.long_active = False
self.is_complete = False
def add_masking_packet_to_logfile(self, increase: int = 1) -> None:
"""Increase log_masking_ctr for message and file packets."""
if self.type in [MESSAGE, FILE]:
self.log_masking_ctr += increase
def clear_file_metadata(self) -> None:
"""Clear file metadata."""
self.packets = None
self.time = None
self.size = None
self.name = None
def clear_assembly_packets(self) -> None:
"""Clear packet state."""
self.assembly_pt_list = []
self.long_active = False
self.is_complete = False
def new_file_packet(self) -> None:
"""New file transmission handling logic."""
name = self.name
was_active = self.long_active
self.clear_file_metadata()
self.clear_assembly_packets()
if self.origin == ORIGIN_USER_HEADER:
self.add_masking_packet_to_logfile()
raise FunctionReturn("Ignored file from user.", output=False)
if not self.contact.file_reception:
self.add_masking_packet_to_logfile()
raise FunctionReturn(f"Alert! File transmission from {self.contact.nick} but reception is disabled.")
if was_active:
c_print(f"Alert! File '{name}' from {self.contact.nick} never completed.", head=1, tail=1)
def check_long_packet(self):
"""Check if long packet has permission to be extended."""
if not self.long_active:
self.add_masking_packet_to_logfile()
raise FunctionReturn("Missing start packet.", output=False)
if self.type == FILE and not self.contact.file_reception:
self.add_masking_packet_to_logfile(increase=len(self.assembly_pt_list) + 1)
self.clear_assembly_packets()
raise FunctionReturn("Alert! File reception disabled mid-transfer.")
def process_short_header(self, packet: bytes) -> None:
"""Process short packet."""
if self.long_active:
self.add_masking_packet_to_logfile(increase=len(self.assembly_pt_list))
if self.type == FILE:
self.new_file_packet()
packet = self.sh + packet[17:]
self.assembly_pt_list = [packet]
self.long_active = False
self.is_complete = True
def process_long_header(self, packet: bytes) -> None:
"""Process first packet of long transmission."""
if self.long_active:
self.add_masking_packet_to_logfile(increase=len(self.assembly_pt_list))
if self.type == FILE:
self.new_file_packet()
try:
self.packets = bytes_to_int(packet[1:9])
self.time = str(datetime.timedelta(seconds=bytes_to_int(packet[9:17])))
self.size = readable_size(bytes_to_int(packet[17:25]))
self.name = packet[25:].split(US_BYTE)[0].decode()
packet = self.lh + packet[25:]
box_print([f'Receiving file from {self.contact.nick}:',
f'{self.name} ({self.size})',
f'ETA {self.time} ({self.packets} packets)'])
except (struct.error, UnicodeError, ValueError):
self.add_masking_packet_to_logfile()
raise FunctionReturn("Error: Received file packet had an invalid header.")
self.assembly_pt_list = [packet]
self.long_active = True
self.is_complete = False
def process_append_header(self, packet: bytes) -> None:
"""Process consecutive packet(s) of long transmission."""
self.check_long_packet()
self.assembly_pt_list.append(packet)
def process_end_header(self, packet: bytes) -> None:
"""Process last packet of long transmission."""
self.check_long_packet()
self.assembly_pt_list.append(packet)
self.long_active = False
self.is_complete = True
def abort_packet(self, message: str) -> None:
"""Process cancel/noise packet."""
if self.type == FILE and self.origin == ORIGIN_CONTACT_HEADER and self.long_active:
c_print(message, head=1, tail=1)
self.clear_file_metadata()
self.add_masking_packet_to_logfile(increase=len(self.assembly_pt_list) + 1)
self.clear_assembly_packets()
def process_cancel_header(self, _: bytes) -> None:
"""Process cancel packet for long transmission."""
self.abort_packet(f"{self.contact.nick} cancelled file.")
def process_noise_header(self, _: bytes) -> None:
"""Process traffic masking noise packet."""
self.abort_packet(f"Alert! File '{self.name}' from {self.contact.nick} never completed.")
def add_packet(self, packet: bytes) -> None:
"""Add a new assembly packet to the object."""
try:
func_d = {self.sh: self.process_short_header,
self.lh: self.process_long_header,
self.ah: self.process_append_header,
self.eh: self.process_end_header,
self.ch: self.process_cancel_header,
self.nh: self.process_noise_header} # type: Dict[bytes, Callable]
func = func_d[packet[:1]]
except KeyError:
# Erroneous headers are ignored, but stored as placeholder data.
self.add_masking_packet_to_logfile()
raise FunctionReturn("Error: Received packet had an invalid assembly packet header.")
func(packet)
def assemble_message_packet(self) -> bytes:
"""Assemble message packet."""
padded = b''.join([p[1:] for p in self.assembly_pt_list])
payload = rm_padding_bytes(padded)
if len(self.assembly_pt_list) > 1:
msg_ct = payload[:-KEY_LENGTH]
msg_key = payload[-KEY_LENGTH:]
try:
payload = auth_and_decrypt(msg_ct, msg_key, soft_e=True)
except (nacl.exceptions.CryptoError, nacl.exceptions.ValueError):
raise FunctionReturn("Error: Decryption of message failed.")
try:
return zlib.decompress(payload)
except zlib.error:
raise FunctionReturn("Error: Decompression of message failed.")
def assemble_and_store_file(self) -> None:
"""Assemble file packet and store it."""
padded = b''.join([p[1:] for p in self.assembly_pt_list])
payload = rm_padding_bytes(padded)
process_received_file(payload, self.contact.nick)
def assemble_command_packet(self) -> bytes:
"""Assemble command packet."""
padded = b''.join([p[1:] for p in self.assembly_pt_list])
payload = rm_padding_bytes(padded)
if len(self.assembly_pt_list) > 1:
cmd_hash = payload[-KEY_LENGTH:]
payload = payload[:-KEY_LENGTH]
if hash_chain(payload) != cmd_hash:
raise FunctionReturn("Error: Received an invalid command.")
try:
return zlib.decompress(payload)
except zlib.error:
raise FunctionReturn("Error: Decompression of command failed.")
class PacketList(Iterable, Sized):
"""PacketList manages all file, message, and command packets."""
def __init__(self, settings: 'Settings', contact_list: 'ContactList') -> None:
"""Create a new PacketList object."""
self.settings = settings
self.contact_list = contact_list
self.packets = [] # type: List[Packet]
def __iter__(self) -> Generator:
"""Iterate over packet list."""
yield from self.packets
def __len__(self) -> int:
"""Return number of packets in packet list."""
return len(self.packets)
def has_packet(self, account: str, origin: bytes, p_type: str) -> bool:
"""Return True if packet with matching selectors exists, else False."""
return any(p for p in self.packets if (p.account == account
and p.origin == origin
and p.type == p_type))
def get_packet(self, account: str, origin: bytes, p_type: str) -> Packet:
"""Get packet based on account, origin and type.
If packet does not exist, create it.
"""
if not self.has_packet(account, origin, p_type):
contact = self.contact_list.get_contact(account)
self.packets.append(Packet(account, contact, origin, p_type, self.settings))
return next(p for p in self.packets if (p.account == account
and p.origin == origin
and p.type == p_type))


@ -1,68 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import time
import typing
from datetime import datetime
from typing import Dict
from src.common.misc import ignored
from src.common.output import box_print
from src.common.reed_solomon import ReedSolomonError, RSCodec
from src.common.statics import *
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_settings import Settings
def receiver_loop(queues: Dict[bytes, 'Queue'],
settings: 'Settings',
unittest: bool = False) -> None:
"""Decode received packets and forward them to packet queues.
This function also determines the timestamp for each received packet.
"""
rs = RSCodec(2 * settings.session_serial_error_correction)
gw_queue = queues[GATEWAY_QUEUE]
while True:
with ignored(EOFError, KeyboardInterrupt):
if gw_queue.qsize() == 0:
time.sleep(0.01)
packet = gw_queue.get()
timestamp = datetime.now()
try:
packet = bytes(rs.decode(packet))
except ReedSolomonError:
box_print("Error: Failed to correct errors in received packet.", head=1, tail=1)
continue
p_header = packet[:1]
if p_header in [PUBLIC_KEY_PACKET_HEADER, MESSAGE_PACKET_HEADER,
LOCAL_KEY_PACKET_HEADER, COMMAND_PACKET_HEADER,
IMPORTED_FILE_HEADER]:
queues[p_header].put((timestamp, packet))
if unittest:
break
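receiver_loop above routes each error-corrected packet to a per-type queue keyed by its first byte. The routing step by itself, with illustrative single-byte headers (the real values come from src.common.statics) and stdlib queues:

```python
import queue
from datetime import datetime

# Illustrative header bytes standing in for the real packet headers.
HEADERS = [b'L', b'P', b'M', b'C', b'I']

def route_packet(queues, packet, timestamp):
    """Forward packet to the queue matching its 1-byte header, if any."""
    p_header = packet[:1]
    if p_header in queues:
        queues[p_header].put((timestamp, packet))
        return True
    return False
```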


@ -1,327 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import os
import sys
import textwrap
import typing
from datetime import datetime
from typing import Dict, Generator, Iterable, List, Tuple
from src.common.exceptions import FunctionReturn
from src.common.misc import get_terminal_width
from src.common.output import c_print, clear_screen, print_on_previous_line
from src.common.statics import *
if typing.TYPE_CHECKING:
from src.common.db_contacts import Contact, ContactList
from src.common.db_groups import GroupList
from src.common.db_settings import Settings
from src.rx.packet import PacketList
class RxWindow(Iterable):
"""RxWindow is an ephemeral message log for contact or group.
In addition, command history and file transfers have
their own windows, accessible with separate commands.
"""
def __init__(self,
uid: str,
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
packet_list: 'PacketList' = None) -> None:
"""Create a new RxWindow object."""
self.uid = uid
self.contact_list = contact_list
self.group_list = group_list
self.settings = settings
self.packet_list = packet_list
self.is_active = False
self.group_msg_id = os.urandom(GROUP_MSG_ID_LEN)
self.window_contacts = [] # type: List[Contact]
self.message_log = [] # type: List[Tuple[datetime, str, str, bytes, bool]]
self.handle_dict = dict() # type: Dict[str, str]
self.previous_msg_ts = datetime.now()
self.unread_messages = 0
if self.uid == LOCAL_ID:
self.type = WIN_TYPE_COMMAND
self.type_print = 'system messages'
self.window_contacts = [self.contact_list.get_contact(LOCAL_ID)]
self.name = self.type_print
elif self.uid == WIN_TYPE_FILE:
self.type = WIN_TYPE_FILE
self.packet_list = packet_list
elif self.uid in self.contact_list.get_list_of_accounts():
self.type = WIN_TYPE_CONTACT
self.type_print = 'contact'
self.window_contacts = [self.contact_list.get_contact(uid)]
self.name = self.contact_list.get_contact(uid).nick
elif self.uid in self.group_list.get_list_of_group_names():
self.type = WIN_TYPE_GROUP
self.type_print = 'group'
self.window_contacts = self.group_list.get_group_members(self.uid)
self.name = self.group_list.get_group(self.uid).name
else:
raise FunctionReturn(f"Invalid window '{uid}'")
def __len__(self) -> int:
"""Return number of message tuples in message log."""
return len(self.message_log)
def __iter__(self) -> Generator:
"""Iterate over window's message log."""
yield from self.message_log
def add_contacts(self, accounts: List[str]) -> None:
"""Add contact objects to window."""
self.window_contacts += [self.contact_list.get_contact(a) for a in accounts
if not self.has_contact(a) and self.contact_list.has_contact(a)]
def remove_contacts(self, accounts: List[str]) -> None:
"""Remove contact objects from window."""
to_remove = set(accounts) & set([m.rx_account for m in self.window_contacts])
if to_remove:
self.window_contacts = [c for c in self.window_contacts if c.rx_account not in to_remove]
def reset_window(self) -> None:
"""Reset window."""
self.message_log = []
def has_contact(self, account: str) -> bool:
"""Return True if contact with specified account is in window, else False."""
return any(c.rx_account == account for c in self.window_contacts)
def create_handle_dict(self, message_log: List[Tuple['datetime', str, str, bytes, bool]] = None) -> None:
"""Pre-generate {account: handle} dictionary.
This allows `self.print()` to indent accounts and nicks without
having to loop over entire message list for every message.
"""
accounts = set(c.rx_account for c in self.window_contacts)
if message_log is not None:
accounts |= set(a for ts, ma, a, o, w in message_log)
for a in accounts:
self.handle_dict[a] = self.contact_list.get_contact(a).nick if self.contact_list.has_contact(a) else a
def get_handle(self, time_stamp: 'datetime', account: str, origin: bytes, whisper: bool = False) -> str:
"""Return indented handle complete with headers and trailers."""
if self.type == WIN_TYPE_COMMAND:
handle = "-!- "
else:
handle = self.handle_dict[account] if origin == ORIGIN_CONTACT_HEADER else "Me"
handles = list(self.handle_dict.values()) + ["Me"]
indent = len(max(handles, key=len)) - len(handle) if self.is_active else 0
handle = indent * ' ' + handle
handle = time_stamp.strftime('%H:%M') + ' ' + handle
if not self.is_active:
handle += {WIN_TYPE_GROUP: f" (group {self.name})",
WIN_TYPE_CONTACT: f" (private message)" }.get(self.type, '')
if self.type != WIN_TYPE_COMMAND:
if whisper:
handle += " (whisper)"
handle += ": "
return handle
def print(self, msg_tuple: Tuple['datetime', str, str, bytes, bool], file=None) -> None:
"""Print new message to window."""
bold_on, bold_off, f_name = (BOLD_ON, NORMAL_TEXT, sys.stdout) if file is None else ('', '', file)
ts, message, account, origin, whisper = msg_tuple
if not self.is_active and not self.settings.new_message_notify_preview and self.type != WIN_TYPE_COMMAND:
message = BOLD_ON + f"{self.unread_messages + 1} unread message{'s' if self.unread_messages > 0 else ''}" + NORMAL_TEXT
handle = self.get_handle(ts, account, origin, whisper)
wrapper = textwrap.TextWrapper(get_terminal_width(), initial_indent=handle, subsequent_indent=len(handle)*' ')
wrapped = wrapper.fill(message)
if wrapped == '':
wrapped = handle
wrapped = bold_on + wrapped[:len(handle)] + bold_off + wrapped[len(handle):]
if self.is_active:
if self.previous_msg_ts.date() != ts.date():
print(bold_on + f"00:00 -!- Day changed to {str(ts.date())}" + bold_off, file=f_name)
print(wrapped, file=f_name)
else:
self.unread_messages += 1
if (self.type == WIN_TYPE_CONTACT and self.contact_list.get_contact(account).notifications) \
or (self.type == WIN_TYPE_GROUP and self.group_list.get_group(self.uid).notifications) \
or (self.type == WIN_TYPE_COMMAND):
if len(wrapped.split('\n')) > 1:
# Preview only first line of long message
print(wrapped.split('\n')[0][:-3] + "...")
else:
print(wrapped)
print_on_previous_line(delay=self.settings.new_message_notify_duration, flush=True)
self.previous_msg_ts = ts
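The hanging-indent layout `print()` builds above (handle as `initial_indent`, spaces of the same width as `subsequent_indent`) can be sketched in isolation; the handle string and width below are illustrative:

```python
import textwrap

def wrap_message(handle: str, message: str, width: int = 40) -> str:
    """Wrap message so continuation lines align under the text, not the handle."""
    wrapper = textwrap.TextWrapper(width,
                                   initial_indent=handle,
                                   subsequent_indent=len(handle) * ' ')
    return wrapper.fill(message)

print(wrap_message("12:00 Alice: ", "a fairly long message that spills onto a second line"))
```

Continuation lines start with as many spaces as the handle is wide, so the message body forms one aligned column.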
def add_new(self,
timestamp: 'datetime',
message: str,
account: str = LOCAL_ID,
origin: bytes = ORIGIN_USER_HEADER,
output: bool = False,
whisper: bool = False) -> None:
"""Add message tuple to message log and optionally print it."""
msg_tuple = (timestamp, message, account, origin, whisper)
self.message_log.append(msg_tuple)
self.handle_dict[account] = (self.contact_list.get_contact(account).nick
if self.contact_list.has_contact(account) else account)
if output:
self.print(msg_tuple)
def redraw(self, file=None) -> None:
"""Print all messages received to window."""
self.unread_messages = 0
if file is None:
clear_screen()
if self.message_log:
self.previous_msg_ts = self.message_log[0][0]
self.create_handle_dict(self.message_log)
for msg_tuple in self.message_log:
self.print(msg_tuple, file)
else:
c_print(f"This window for {self.name} is currently empty.", head=1, tail=1)
def redraw_file_win(self) -> None:
"""Draw file transmission window progress bars."""
# Columns
c1 = ['File name']
c2 = ['Size']
c3 = ['Sender']
c4 = ['Complete']
for i, p in enumerate(self.packet_list):
if p.type == FILE and len(p.assembly_pt_list) > 0:
c1.append(p.name)
c2.append(p.size)
c3.append(p.contact.nick)
c4.append(f"{len(p.assembly_pt_list) / p.packets * 100:.2f}%")
if len(c1) == 1:
c_print("No file transmissions currently in progress.", head=1, tail=1)
print_on_previous_line(reps=3, delay=0.1)
return None
lst = []
for name, size, sender, percent in zip(c1, c2, c3, c4):
lst.append('{0:{1}} {2:{3}} {4:{5}} {6:{7}}'.format(
name, max(len(v) for v in c1) + CONTACT_LIST_INDENT,
size, max(len(v) for v in c2) + CONTACT_LIST_INDENT,
sender, max(len(v) for v in c3) + CONTACT_LIST_INDENT,
percent, max(len(v) for v in c4) + CONTACT_LIST_INDENT))
lst.insert(1, get_terminal_width() * '─')
print('\n' + '\n'.join(lst) + '\n')
print_on_previous_line(reps=len(lst)+2, delay=0.1)
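The column layout in `redraw_file_win()` relies on `str.format` with field widths computed from the longest value in each column. A standalone sketch of that technique (column data and padding are illustrative):

```python
def tabulate(columns, padding=4):
    """Render parallel column lists as aligned rows; index 0 of each list is the header."""
    widths = [max(len(v) for v in col) + padding for col in columns]
    rows = []
    for values in zip(*columns):
        # '{0:{1}}' left-pads each value to its column's computed width
        rows.append(''.join('{0:{1}}'.format(v, w) for v, w in zip(values, widths)))
    return rows

for row in tabulate([['File name', 'a.txt'], ['Size', '1.2KB'], ['Complete', '42.00%']]):
    print(row)
```

Because the widths are derived from the data, every value in a column starts at the same offset regardless of row.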
class WindowList(Iterable):
"""WindowList manages a list of Window objects."""
def __init__(self,
settings: 'Settings',
contact_list: 'ContactList',
group_list: 'GroupList',
packet_list: 'PacketList') -> None:
"""Create a new WindowList object."""
self.settings = settings
self.contact_list = contact_list
self.group_list = group_list
self.packet_list = packet_list
self.active_win = None # type: RxWindow
self.windows = [RxWindow(uid, self.contact_list, self.group_list, self.settings, self.packet_list)
for uid in ([WIN_TYPE_FILE]
+ self.contact_list.get_list_of_accounts()
+ self.group_list.get_list_of_group_names())]
if self.contact_list.has_local_contact():
self.select_rx_window(LOCAL_ID)
def __len__(self) -> int:
"""Return number of windows in window list."""
return len(self.windows)
def __iter__(self) -> Generator:
"""Iterate over window list."""
yield from self.windows
def get_group_windows(self) -> List[RxWindow]:
"""Return list of group windows."""
return [w for w in self.windows if w.type == WIN_TYPE_GROUP]
def has_window(self, uid: str) -> bool:
"""Return True if window with matching UID exists, else False."""
return uid in [w.uid for w in self.windows]
def remove_window(self, uid: str) -> None:
"""Remove window based on it's UID."""
for i, w in enumerate(self.windows):
if uid == w.uid:
del self.windows[i]
break
def select_rx_window(self, uid: str) -> None:
"""Select new active window."""
if self.active_win is not None:
self.active_win.is_active = False
self.active_win = self.get_window(uid)
self.active_win.is_active = True
if self.active_win.type == WIN_TYPE_FILE:
self.active_win.redraw_file_win()
else:
self.active_win.redraw()
def get_local_window(self) -> 'RxWindow':
"""Return command window."""
return self.get_window(LOCAL_ID)
def get_window(self, uid: str) -> 'RxWindow':
"""Return window that matches the specified UID.
Create window if it does not exist.
"""
if not self.has_window(uid):
self.windows.append(RxWindow(uid, self.contact_list, self.group_list, self.settings, self.packet_list))
return next(w for w in self.windows if w.uid == uid)

src/transmitter/commands.py Executable file
@ -0,0 +1,691 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import readline
import struct
import textwrap
import time
import typing
from multiprocessing import Queue
from typing import Any, Dict, List, Tuple, Union
from src.common.db_logs import access_logs, change_log_db_key, remove_logs
from src.common.encoding import b58decode, b58encode, bool_to_bytes, int_to_bytes, onion_address_to_pub_key
from src.common.exceptions import FunctionReturn
from src.common.input import yes
from src.common.misc import ensure_dir, get_terminal_width, validate_onion_addr
from src.common.output import clear_screen, m_print, phase, print_on_previous_line
from src.common.statics import *
from src.transmitter.commands_g import process_group_command
from src.transmitter.contact import add_new_contact, change_nick, contact_setting, remove_contact
from src.transmitter.key_exchanges import export_onion_service_data, new_local_key, rxp_load_psk, verify_fingerprints
from src.transmitter.packet import cancel_packet, queue_command, queue_message, queue_to_nc
from src.transmitter.user_input import UserInput
from src.transmitter.windows import select_window
if typing.TYPE_CHECKING:
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_masterkey import MasterKey
from src.common.db_onion import OnionService
from src.common.db_settings import Settings
from src.common.gateway import Gateway
from src.transmitter.windows import TxWindow
QueueDict = Dict[bytes, Queue]
def process_command(user_input: 'UserInput',
window: 'TxWindow',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
master_key: 'MasterKey',
onion_service: 'OnionService',
gateway: 'Gateway'
) -> None:
"""\
Select function based on the first keyword of the
issued command, and pass relevant parameters to it.
"""
# Keyword Function to run ( Parameters )
# -----------------------------------------------------------------------------------------------------------------------------------------
d = {'about': (print_about, ),
'add': (add_new_contact, contact_list, group_list, settings, queues, onion_service ),
'cf': (cancel_packet, user_input, window, settings, queues ),
'cm': (cancel_packet, user_input, window, settings, queues ),
'clear': (clear_screens, user_input, window, settings, queues ),
'cmd': (rxp_show_sys_win, user_input, window, settings, queues ),
'connect': (send_onion_service_key, contact_list, settings, onion_service, gateway),
'exit': (exit_tfc, settings, queues, gateway),
'export': (log_command, user_input, window, contact_list, group_list, settings, queues, master_key ),
'fw': (rxp_show_sys_win, user_input, window, settings, queues ),
'group': (process_group_command, user_input, contact_list, group_list, settings, queues, master_key ),
'help': (print_help, settings ),
'history': (log_command, user_input, window, contact_list, group_list, settings, queues, master_key ),
'localkey': (new_local_key, contact_list, settings, queues, ),
'logging': (contact_setting, user_input, window, contact_list, group_list, settings, queues ),
'msg': (select_window, user_input, window, settings, queues, onion_service, gateway),
'names': (print_recipients, contact_list, group_list, ),
'nick': (change_nick, user_input, window, contact_list, group_list, settings, queues ),
'notify': (contact_setting, user_input, window, contact_list, group_list, settings, queues ),
'passwd': (change_master_key, user_input, contact_list, group_list, settings, queues, master_key, onion_service ),
'psk': (rxp_load_psk, window, contact_list, settings, queues ),
'reset': (clear_screens, user_input, window, settings, queues ),
'rm': (remove_contact, user_input, window, contact_list, group_list, settings, queues, master_key ),
'rmlogs': (remove_log, user_input, contact_list, group_list, settings, queues, master_key ),
'set': (change_setting, user_input, window, contact_list, group_list, settings, queues, gateway),
'settings': (print_settings, settings, gateway),
'store': (contact_setting, user_input, window, contact_list, group_list, settings, queues ),
'unread': (rxp_display_unread, settings, queues ),
'verify': (verify, window, contact_list ),
'whisper': (whisper, user_input, window, settings, queues ),
'whois': (whois, user_input, contact_list, group_list ),
'wipe': (wipe, settings, queues, gateway)
} # type: Dict[str, Any]
try:
cmd_key = user_input.plaintext.split()[0]
except (IndexError, UnboundLocalError):
raise FunctionReturn("Error: Invalid command.", head_clear=True)
try:
from_dict = d[cmd_key]
except KeyError:
raise FunctionReturn(f"Error: Invalid command '{cmd_key}'.", head_clear=True)
func = from_dict[0]
parameters = from_dict[1:]
func(*parameters)
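The dispatch pattern `process_command()` uses above — a dict mapping the first keyword of the input to a `(function, *args)` tuple — can be sketched standalone (the handlers and table here are illustrative, not TFC's):

```python
def greet(name):
    return f"Hello, {name}!"

def add(a, b):
    return a + b

def dispatch(command: str, table):
    """Run the handler bound to the command's first keyword."""
    try:
        keyword = command.split()[0]
    except IndexError:
        raise ValueError("Error: Invalid command.")
    try:
        entry = table[keyword]
    except KeyError:
        raise ValueError(f"Error: Invalid command '{keyword}'.")
    func, *parameters = entry           # unpack handler and its bound arguments
    return func(*parameters)

table = {'greet': (greet, 'Alice'),
         'sum':   (add, 2, 3)}
print(dispatch('greet now', table))  # → Hello, Alice!
```

Binding the arguments in the table keeps the dispatcher itself free of per-command logic, which is why `process_command()` can stay a single lookup plus two error paths.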
def print_about() -> None:
"""Print URLs that direct to TFC's project site and documentation."""
clear_screen()
print(f"\n Tinfoil Chat {VERSION}\n\n"
" Website: https://github.com/maqp/tfc/\n"
" Wikipage: https://github.com/maqp/tfc/wiki\n")
def clear_screens(user_input: 'UserInput',
window: 'TxWindow',
settings: 'Settings',
queues: 'QueueDict'
) -> None:
"""Clear/reset screen of Source, Destination, and Networked Computer.
Only send an unencrypted command to Networked Computer if traffic
masking is disabled.
With the clear command, sending only the command header is enough.
However, as the reset command removes the ephemeral message log on
the Receiver Program, the Transmitter Program must define the window
to reset (in case, e.g., a previous window selection command packet
was dropped and the active window state is inconsistent between the
TCB programs).
"""
clear = user_input.plaintext.split()[0] == CLEAR
command = CLEAR_SCREEN if clear else RESET_SCREEN + window.uid
queue_command(command, settings, queues)
clear_screen()
if not settings.traffic_masking:
pt_cmd = UNENCRYPTED_SCREEN_CLEAR if clear else UNENCRYPTED_SCREEN_RESET
packet = UNENCRYPTED_DATAGRAM_HEADER + pt_cmd
queue_to_nc(packet, queues[RELAY_PACKET_QUEUE])
if not clear:
readline.clear_history()
os.system(RESET)
def rxp_show_sys_win(user_input: 'UserInput',
window: 'TxWindow',
settings: 'Settings',
queues: 'QueueDict',
) -> None:
"""\
Display a system window on Receiver Program until the user presses
Enter.
Receiver Program has a dedicated window, WIN_UID_LOCAL, for system
messages that shows information about received commands, status
messages etc.
Receiver Program also has another window, WIN_UID_FILE, that shows
progress of file transmission from contacts that have traffic
masking enabled.
"""
cmd = user_input.plaintext.split()[0]
win_uid = dict(cmd=WIN_UID_LOCAL, fw=WIN_UID_FILE)[cmd]
command = WIN_SELECT + win_uid
queue_command(command, settings, queues)
try:
m_print(f"<Enter> returns Receiver to {window.name}'s window", manual_proceed=True, box=True)
except (EOFError, KeyboardInterrupt):
pass
print_on_previous_line(reps=4, flush=True)
command = WIN_SELECT + window.uid
queue_command(command, settings, queues)
def exit_tfc(settings: 'Settings',
queues: 'QueueDict',
gateway: 'Gateway'
) -> None:
"""Exit TFC on all three computers.
To exit TFC as fast as possible, this function starts by clearing
all command queues before sending the exit command to Receiver
Program. It then sends an unencrypted exit command to Relay Program
on Networked Computer. As the `sender_loop` process loads the
unencrypted exit command from queue, it detects the user's
intention, and after outputting the packet, sends the EXIT signal to
Transmitter Program's main() method that's running the
`monitor_processes` loop. Upon receiving the EXIT signal,
`monitor_processes` kills all Transmitter Program's processes and
exits the program.
During local testing, this function adds some delays to prevent TFC
programs from dying when sockets disconnect.
"""
for q in [COMMAND_PACKET_QUEUE, RELAY_PACKET_QUEUE]:
while queues[q].qsize() > 0:
queues[q].get()
queue_command(EXIT_PROGRAM, settings, queues)
if not settings.traffic_masking:
if settings.local_testing_mode:
time.sleep(LOCAL_TESTING_PACKET_DELAY)
time.sleep(gateway.settings.data_diode_sockets * 1.5)
else:
time.sleep(gateway.settings.race_condition_delay)
relay_command = UNENCRYPTED_DATAGRAM_HEADER + UNENCRYPTED_EXIT_COMMAND
queue_to_nc(relay_command, queues[RELAY_PACKET_QUEUE])
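The queue-clearing loop at the top of `exit_tfc()` can also be written with `get_nowait()`, which avoids the `qsize()` check entirely. A standalone sketch using `queue.Queue` (the program itself uses `multiprocessing.Queue`, whose `qsize()` is not available on every platform):

```python
from queue import Queue, Empty

def drain(q: Queue) -> int:
    """Discard every item currently in the queue; return how many were dropped."""
    dropped = 0
    while True:
        try:
            q.get_nowait()
        except Empty:
            return dropped
        dropped += 1

q = Queue()
for item in ('cmd1', 'cmd2', 'cmd3'):
    q.put(item)
print(drain(q))  # → 3
```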
def log_command(user_input: 'UserInput',
window: 'TxWindow',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
master_key: 'MasterKey'
) -> None:
"""Display message logs or export them to plaintext file on TCBs.
The Transmitter Program processes messages it has sent; the Receiver
Program processes messages sent and received by all participants in
the active window.
"""
cmd = user_input.plaintext.split()[0]
export, header = dict(export =(True, LOG_EXPORT),
history=(False, LOG_DISPLAY))[cmd]
try:
msg_to_load = int(user_input.plaintext.split()[1])
except ValueError:
raise FunctionReturn("Error: Invalid number of messages.", head_clear=True)
except IndexError:
msg_to_load = 0
try:
command = header + int_to_bytes(msg_to_load) + window.uid
except struct.error:
raise FunctionReturn("Error: Invalid number of messages.", head_clear=True)
if export:
if not yes(f"Export logs for '{window.name}' in plaintext?", abort=False):
raise FunctionReturn("Log file export aborted.", tail_clear=True, head=0, delay=1)
queue_command(command, settings, queues)
access_logs(window, contact_list, group_list, settings, master_key, msg_to_load, export)
if export:
raise FunctionReturn(f"Exported log file of {window.type} '{window.name}'.", head_clear=True)
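`log_command()` packs the message count into a fixed-width field with `int_to_bytes` and catches `struct.error` for out-of-range values. Assuming the helper wraps `struct.pack` with a big-endian unsigned 64-bit format (an assumption — the real helper lives in `src.common.encoding`), the error path covers negative or oversized counts:

```python
import struct

def int_to_bytes(value: int) -> bytes:
    # Assumed encoding: big-endian unsigned 64-bit fixed-width field.
    return struct.pack('>Q', value)

print(int_to_bytes(150).hex())  # 8 bytes → 16 hex chars

try:
    int_to_bytes(-1)            # negative counts don't fit an unsigned field
except struct.error as e:
    print("rejected:", e)
```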
def send_onion_service_key(contact_list: 'ContactList',
settings: 'Settings',
onion_service: 'OnionService',
gateway: 'Gateway'
) -> None:
"""Resend Onion Service key to Relay Program on Networked Computer.
This command is used in cases where the Relay Program had to be
restarted for some reason (e.g. due to system updates).
"""
try:
if settings.traffic_masking:
m_print(["Warning!",
"Exporting Onion Service data to Networked Computer ",
"during traffic masking can reveal to an adversary ",
"TFC is being used at the moment. You should only do ",
"this if you've had to restart the Relay Program."], bold=True, head=1, tail=1)
if not yes("Proceed with the Onion Service data export?", abort=False):
raise FunctionReturn("Onion Service data export canceled.", tail_clear=True, delay=1, head=0)
export_onion_service_data(contact_list, settings, onion_service, gateway)
except (EOFError, KeyboardInterrupt):
raise FunctionReturn("Onion Service data export canceled.", tail_clear=True, delay=1, head=2)
def print_help(settings: 'Settings') -> None:
"""Print the list of commands."""
def help_printer(tuple_list: List[Tuple[str, str, bool]]) -> None:
"""Print list of commands and their descriptions.
Style in which commands are printed depends on terminal width.
Depending on whether traffic masking is enabled, some commands
are either displayed or hidden.
"""
len_longest_command = max(len(t[0]) for t in tuple_list) + 1 # Add one for spacing
wrapper = textwrap.TextWrapper(width=max(1, terminal_width - len_longest_command))
for help_cmd, description, display in tuple_list:
if not display:
continue
desc_lines = wrapper.fill(description).split('\n')
desc_indent = (len_longest_command - len(help_cmd)) * ' '
print(help_cmd + desc_indent + desc_lines[0])
# Print wrapped description lines with indent
if len(desc_lines) > 1:
for line in desc_lines[1:]:
print(len_longest_command * ' ' + line)
print('')
# ------------------------------------------------------------------------------------------------------------------
y_tm = settings.traffic_masking
n_tm = not settings.traffic_masking
common_commands = [("/about", "Show links to project resources", True),
("/add", "Add new contact", n_tm),
("/cf", "Cancel file transmission to active contact/group", y_tm),
("/cm", "Cancel message transmission to active contact/group", True),
("/clear, ' '", "Clear TFC screens", True),
("/cmd, '//'", "Display command window on Receiver", True),
("/connect", "Resend Onion Service data to Relay", True),
("/exit", "Exit TFC on all three computers", True),
("/export (n)", "Export (n) messages from recipient's log file", True),
("/file", "Send file to active contact/group", True),
("/fw", "Display file reception window on Receiver", y_tm),
("/help", "Display this list of commands", True),
("/history (n)", "Print (n) messages from recipient's log file", True),
("/localkey", "Generate new local key pair", n_tm),
("/logging {on,off}(' all')", "Change message log setting (for all contacts)", True),
("/msg {A,N,G}", "Change recipient to Account, Nick, or Group", n_tm),
("/names", "List contacts and groups", True),
("/nick N", "Change nickname of active recipient/group to N", True),
("/notify {on,off} (' all')", "Change notification settings (for all contacts)", True),
("/passwd {tx,rx}", "Change master password on target system", n_tm),
("/psk", "Open PSK import dialog on Receiver", n_tm),
("/reset", "Reset ephemeral session log for active window", True),
("/rm {A,N}", "Remove contact specified by account A or nick N", n_tm),
("/rmlogs {A,N}", "Remove log entries for account A or nick N", True),
("/set S V", "Change setting S to value V", True),
("/settings", "List setting names, values and descriptions", True),
("/store {on,off} (' all')", "Change file reception (for all contacts)", True),
("/unread, ' '", "List windows with unread messages on Receiver", True),
("/verify", "Verify fingerprints with active contact", True),
("/whisper M", "Send message M, asking it not to be logged", True),
("/whois {A,N}", "Check which A corresponds to N or vice versa", True),
("/wipe", "Wipe all TFC user data and power off systems", True),
("Shift + PgUp/PgDn", "Scroll terminal up/down", True)]
group_commands = [("/group create G A₁..Aₙ", "Create group G and add accounts A₁..Aₙ", n_tm),
("/group join ID G A₁..Aₙ", "Join group ID, call it G and add accounts A₁..Aₙ", n_tm),
("/group add G A₁..Aₙ", "Add accounts A₁..Aₙ to group G", n_tm),
("/group rm G A₁..Aₙ", "Remove accounts A₁..Aₙ from group G", n_tm),
("/group rm G", "Remove group G", n_tm)]
terminal_width = get_terminal_width()
clear_screen()
print(textwrap.fill("List of commands:", width=terminal_width))
print('')
help_printer(common_commands)
print(terminal_width * '─')
if settings.traffic_masking:
print('')
else:
print(textwrap.fill("Group management:", width=terminal_width))
print('')
help_printer(group_commands)
print(terminal_width * '─' + '\n')
def print_recipients(contact_list: 'ContactList', group_list: 'GroupList') -> None:
"""Print the list of contacts and groups."""
contact_list.print_contacts()
group_list.print_groups()
def change_master_key(user_input: 'UserInput',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
master_key: 'MasterKey',
onion_service: 'OnionService'
) -> None:
"""Change the master key on Transmitter/Receiver Program."""
try:
if settings.traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.", head_clear=True)
try:
device = user_input.plaintext.split()[1].lower()
except IndexError:
raise FunctionReturn(f"Error: No target-system ('{TX}' or '{RX}') specified.", head_clear=True)
if device not in [TX, RX]:
raise FunctionReturn(f"Error: Invalid target system '{device}'.", head_clear=True)
if device == RX:
queue_command(CH_MASTER_KEY, settings, queues)
return None
old_master_key = master_key.master_key[:]
new_master_key = master_key.master_key = master_key.new_master_key()
phase("Re-encrypting databases")
queues[KEY_MANAGEMENT_QUEUE].put((KDB_CHANGE_MASTER_KEY_HEADER, master_key))
ensure_dir(DIR_USER_DATA)
if os.path.isfile(f'{DIR_USER_DATA}{settings.software_operation}_logs'):
change_log_db_key(old_master_key, new_master_key, settings)
contact_list.store_contacts()
group_list.store_groups()
settings.store_settings()
onion_service.store_onion_service_private_key()
phase(DONE)
m_print("Master key successfully changed.", bold=True, tail_clear=True, delay=1, head=1)
except (EOFError, KeyboardInterrupt):
raise FunctionReturn("Password change aborted.", tail_clear=True, delay=1, head=2)
def remove_log(user_input: 'UserInput',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
master_key: 'MasterKey'
) -> None:
"""Remove log entries for contact or group."""
try:
selection = user_input.plaintext.split()[1]
except IndexError:
raise FunctionReturn("Error: No contact/group specified.", head_clear=True)
if not yes(f"Remove logs for {selection}?", abort=False, head=1):
raise FunctionReturn("Log file removal aborted.", tail_clear=True, delay=1, head=0)
# Determine selector (group ID or Onion Service public key) from command parameters
if selection in contact_list.contact_selectors():
selector = contact_list.get_contact_by_address_or_nick(selection).onion_pub_key
elif selection in group_list.get_list_of_group_names():
selector = group_list.get_group(selection).group_id
elif len(selection) == ONION_ADDRESS_LENGTH:
if validate_onion_addr(selection):
raise FunctionReturn("Error: Invalid account.", head_clear=True)
selector = onion_address_to_pub_key(selection)
elif len(selection) == GROUP_ID_ENC_LENGTH:
try:
selector = b58decode(selection)
except ValueError:
raise FunctionReturn("Error: Invalid group ID.", head_clear=True)
else:
raise FunctionReturn("Error: Unknown selector.", head_clear=True)
# Remove logs that match the selector
command = LOG_REMOVE + selector
queue_command(command, settings, queues)
remove_logs(contact_list, group_list, settings, master_key, selector)
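`remove_log()` resolves the user's selector by trying each identifier form in turn: nick/address, group name, then length-based checks for raw addresses and encoded group IDs. The same fall-through classification can be sketched standalone (the length constants are illustrative, not TFC's statics):

```python
ONION_ADDRESS_LENGTH = 56   # illustrative: v3 onion addresses are 56 base32 chars
GROUP_ID_ENC_LENGTH = 13    # illustrative: Base58-encoded group ID length

def classify_selector(selection: str, nicks, group_names) -> str:
    """Return which identifier form the selector matches."""
    if selection in nicks:
        return 'nick'
    if selection in group_names:
        return 'group name'
    if len(selection) == ONION_ADDRESS_LENGTH:
        return 'onion address'
    if len(selection) == GROUP_ID_ENC_LENGTH:
        return 'group id'
    raise ValueError("Error: Unknown selector.")

print(classify_selector('alice', ['alice'], []))  # → nick
```

Checking the known-contact and known-group sets before falling back to length checks means a nick that happens to be 56 characters long still resolves to the contact, not to a raw address.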
def change_setting(user_input: 'UserInput',
window: 'TxWindow',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
gateway: 'Gateway'
) -> None:
"""Change setting on Transmitter and Receiver Program."""
# Validate the KV-pair
try:
setting = user_input.plaintext.split()[1]
except IndexError:
raise FunctionReturn("Error: No setting specified.", head_clear=True)
if setting not in (settings.key_list + gateway.settings.key_list):
raise FunctionReturn(f"Error: Invalid setting '{setting}'.", head_clear=True)
try:
value = user_input.plaintext.split()[2]
except IndexError:
raise FunctionReturn("Error: No value for setting specified.", head_clear=True)
# Check if the setting can be changed
relay_settings = dict(serial_error_correction=UNENCRYPTED_EC_RATIO,
serial_baudrate =UNENCRYPTED_BAUDRATE,
allow_contact_requests =UNENCRYPTED_MANAGE_CONTACT_REQ)
if settings.traffic_masking and (setting in relay_settings or setting == 'max_number_of_contacts'):
raise FunctionReturn("Error: Can't change this setting during traffic masking.", head_clear=True)
if setting in ['use_serial_usb_adapter', 'built_in_serial_interface']:
raise FunctionReturn("Error: Serial interface setting can only be changed manually.", head_clear=True)
# Change the setting
if setting in gateway.settings.key_list:
gateway.settings.change_setting(setting, value)
else:
settings.change_setting(setting, value, contact_list, group_list)
receiver_command = CH_SETTING + setting.encode() + US_BYTE + value.encode()
queue_command(receiver_command, settings, queues)
if setting in relay_settings:
if setting == 'allow_contact_requests':
value = bool_to_bytes(settings.allow_contact_requests).decode()
relay_command = UNENCRYPTED_DATAGRAM_HEADER + relay_settings[setting] + value.encode()
queue_to_nc(relay_command, queues[RELAY_PACKET_QUEUE])
# Propagate the effects of the setting
if setting == 'max_number_of_contacts':
contact_list.store_contacts()
queues[KEY_MANAGEMENT_QUEUE].put((KDB_UPDATE_SIZE_HEADER, settings))
if setting in ['max_number_of_group_members', 'max_number_of_groups']:
group_list.store_groups()
if setting == 'traffic_masking':
queues[SENDER_MODE_QUEUE].put(settings)
queues[TRAFFIC_MASKING_QUEUE].put(settings.traffic_masking)
window.deselect()
if setting == 'log_file_masking':
queues[LOGFILE_MASKING_QUEUE].put(settings.log_file_masking)
def print_settings(settings: 'Settings',
gateway: 'Gateway') -> None:
"""Print settings and gateway settings."""
settings.print_settings()
gateway.settings.print_settings()
def rxp_display_unread(settings: 'Settings', queues: 'QueueDict') -> None:
"""\
Display the list of windows that contain unread messages on Receiver
Program.
"""
queue_command(WIN_ACTIVITY, settings, queues)
def verify(window: 'TxWindow', contact_list: 'ContactList') -> None:
"""Verify fingerprints with contact."""
if window.type == WIN_TYPE_GROUP or window.contact is None:
raise FunctionReturn("Error: A group is selected.", head_clear=True)
if window.contact.uses_psk():
raise FunctionReturn("Pre-shared keys have no fingerprints.", head_clear=True)
try:
verified = verify_fingerprints(window.contact.tx_fingerprint,
window.contact.rx_fingerprint)
except (EOFError, KeyboardInterrupt):
raise FunctionReturn("Fingerprint verification aborted.", delay=1, head=2, tail_clear=True)
status_hr, status = {True: ("Verified", KEX_STATUS_VERIFIED),
False: ("Unverified", KEX_STATUS_UNVERIFIED)}[verified]
window.contact.kex_status = status
contact_list.store_contacts()
m_print(f"Marked fingerprints with {window.name} as '{status_hr}'.", bold=True, tail_clear=True, delay=1, tail=1)
def whisper(user_input: 'UserInput',
window: 'TxWindow',
settings: 'Settings',
queues: 'QueueDict',
) -> None:
"""\
Send a message to the contact that overrides their enabled logging
setting for that message.
The functionality of this feature is impossible to enforce, but if
the recipient can be trusted and they do not modify their client,
this feature can be used to send the message off-the-record.
"""
try:
message = user_input.plaintext.strip().split(' ', 1)[1]
except IndexError:
raise FunctionReturn("Error: No whisper message specified.", head_clear=True)
queue_message(user_input=UserInput(message, MESSAGE),
window=window,
settings=settings,
queues=queues,
whisper=True,
log_as_ph=True)
def whois(user_input: 'UserInput',
contact_list: 'ContactList',
group_list: 'GroupList'
) -> None:
"""Do a lookup for a contact or group selector."""
try:
selector = user_input.plaintext.split()[1]
except IndexError:
raise FunctionReturn("Error: No account or nick specified.", head_clear=True)
# Contacts
if selector in contact_list.get_list_of_addresses():
m_print([f"Nick of '{selector}' is",
f"{contact_list.get_contact_by_address_or_nick(selector).nick}"], bold=True)
elif selector in contact_list.get_list_of_nicks():
m_print([f"Account of '{selector}' is",
f"{contact_list.get_contact_by_address_or_nick(selector).onion_address}"], bold=True)
# Groups
elif selector in group_list.get_list_of_group_names():
m_print([f"Group ID of group '{selector}' is",
f"{b58encode(group_list.get_group(selector).group_id)}"], bold=True)
elif selector in group_list.get_list_of_hr_group_ids():
m_print([f"Name of group with ID '{selector}' is",
f"{group_list.get_group_by_id(b58decode(selector)).name}"], bold=True)
else:
raise FunctionReturn("Error: Unknown selector.", head_clear=True)
def wipe(settings: 'Settings',
queues: 'QueueDict',
gateway: 'Gateway'
) -> None:
"""\
Reset terminals, wipe all TFC user data from Source, Networked, and
Destination Computer, and power all three systems off.
The purpose of the wipe command is to provide additional protection
against physical attackers, e.g. in a situation where a dissident gets
a knock on their door. By overwriting and deleting user data the
program prevents access to encrypted databases. Additional security
should be sought with full disk encryption (FDE).
Unfortunately, no effective tool for overwriting RAM currently exists.
However, as long as Source and Destination Computers use FDE and
DDR3 memory, recovery of sensitive data becomes impossible very fast:
https://www1.cs.fau.de/filepool/projects/coldboot/fares_coldboot.pdf
"""
if not yes("Wipe all user data and power off systems?", abort=False):
raise FunctionReturn("Wipe command aborted.", head_clear=True)
clear_screen()
for q in [COMMAND_PACKET_QUEUE, RELAY_PACKET_QUEUE]:
while queues[q].qsize() != 0:
queues[q].get()
queue_command(WIPE_USR_DATA, settings, queues)
if not settings.traffic_masking:
if settings.local_testing_mode:
time.sleep(0.8)
time.sleep(gateway.settings.data_diode_sockets * 2.2)
else:
time.sleep(gateway.settings.race_condition_delay)
relay_command = UNENCRYPTED_DATAGRAM_HEADER + UNENCRYPTED_WIPE_COMMAND
queue_to_nc(relay_command, queues[RELAY_PACKET_QUEUE])
os.system(RESET)

@ -0,0 +1,327 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import typing
from typing import Callable, Dict, List, Optional
from src.common.db_logs import remove_logs
from src.common.encoding import b58decode, int_to_bytes
from src.common.exceptions import FunctionReturn
from src.common.input import yes
from src.common.misc import ignored, validate_group_name
from src.common.output import group_management_print, m_print
from src.common.statics import *
from src.transmitter.packet import queue_command, queue_to_nc
from src.transmitter.user_input import UserInput
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
from src.transmitter.windows import TxWindow
QueueDict = Dict[bytes, Queue]
def process_group_command(user_input: 'UserInput',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
master_key: 'MasterKey'
) -> None:
"""Parse a group command and process it accordingly."""
if settings.traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.", head_clear=True)
input_parameters = user_input.plaintext.split() # type: List[str]
try:
command_type = input_parameters[1]
except IndexError:
raise FunctionReturn("Error: Invalid group command.", head_clear=True)
if command_type not in ['create', 'join', 'add', 'rm']:
raise FunctionReturn("Error: Invalid group command.")
group_id = None # type: Optional[bytes]
if command_type == 'join':
try:
group_id_s = input_parameters[2]
except IndexError:
raise FunctionReturn("Error: No group ID specified.", head_clear=True)
try:
group_id = b58decode(group_id_s)
except ValueError:
raise FunctionReturn("Error: Invalid group ID.", head_clear=True)
if group_id in group_list.get_list_of_group_ids():
raise FunctionReturn("Error: Group with matching ID already exists.", head_clear=True)
try:
name_index = 3 if command_type == 'join' else 2
group_name = input_parameters[name_index]
except IndexError:
raise FunctionReturn("Error: No group name specified.", head_clear=True)
member_index = 4 if command_type == 'join' else 3
purp_members = input_parameters[member_index:]
# Map the specified selectors (nicks/addresses) to public keys
selectors = contact_list.contact_selectors()
pub_keys = [contact_list.get_contact_by_address_or_nick(m).onion_pub_key for m in purp_members if m in selectors]
func = dict(create=group_create,
join =group_create,
add =group_add_member,
rm =group_rm_member)[command_type] # type: Callable
func(group_name, pub_keys, contact_list, group_list, settings, queues, master_key, group_id)
print('')
def group_create(group_name: str,
purp_members: List[bytes],
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
_: 'MasterKey',
group_id: Optional[bytes] = None
) -> None:
"""Create a new group.
Validate the group name and determine which members can be added.
"""
error_msg = validate_group_name(group_name, contact_list, group_list)
if error_msg:
raise FunctionReturn(error_msg, head_clear=True)
public_keys = set(contact_list.get_list_of_pub_keys())
purp_pub_keys = set(purp_members)
accepted = list(purp_pub_keys & public_keys)
rejected = list(purp_pub_keys - public_keys)
if len(accepted) > settings.max_number_of_group_members:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_group_members} "
f"members per group.", head_clear=True)
if len(group_list) == settings.max_number_of_groups:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_groups} groups.", head_clear=True)
header = GROUP_MSG_INVITE_HEADER if group_id is None else GROUP_MSG_JOIN_HEADER
if group_id is None:
while True:
group_id = os.urandom(GROUP_ID_LENGTH)
if group_id not in group_list.get_list_of_group_ids():
break
group_list.add_group(group_name,
group_id,
settings.log_messages_by_default,
settings.show_notifications_by_default,
members=[contact_list.get_contact_by_pub_key(k) for k in accepted])
command = GROUP_CREATE + group_id + group_name.encode() + US_BYTE + b''.join(accepted)
queue_command(command, settings, queues)
group_management_print(NEW_GROUP, accepted, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
if accepted:
if yes("Publish the list of group members to participants?", abort=False):
create_packet = header + group_id + b''.join(accepted)
queue_to_nc(create_packet, queues[RELAY_PACKET_QUEUE])
else:
m_print(f"Created an empty group '{group_name}'.", bold=True, head=1)
def group_add_member(group_name: str,
purp_members: List['bytes'],
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
master_key: 'MasterKey',
_: Optional[bytes] = None
) -> None:
"""Add new member(s) to a specified group."""
if group_name not in group_list.get_list_of_group_names():
if yes(f"Group {group_name} was not found. Create new group?", abort=False, head=1):
group_create(group_name, purp_members, contact_list, group_list, settings, queues, master_key)
return None
else:
raise FunctionReturn("Group creation aborted.", head=0, delay=1, tail_clear=True)
purp_pub_keys = set(purp_members)
pub_keys = set(contact_list.get_list_of_pub_keys())
before_adding = set(group_list.get_group(group_name).get_list_of_member_pub_keys())
ok_pub_keys_set = set(pub_keys & purp_pub_keys)
new_in_group_set = set(ok_pub_keys_set - before_adding)
end_assembly = list(before_adding | new_in_group_set)
rejected = list(purp_pub_keys - pub_keys)
already_in_g = list(before_adding & purp_pub_keys)
new_in_group = list(new_in_group_set)
ok_pub_keys = list(ok_pub_keys_set)
if len(end_assembly) > settings.max_number_of_group_members:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_group_members} "
f"members per group.", head_clear=True)
group = group_list.get_group(group_name)
group.add_members([contact_list.get_contact_by_pub_key(k) for k in new_in_group])
command = GROUP_ADD + group.group_id + b''.join(ok_pub_keys)
queue_command(command, settings, queues)
group_management_print(ADDED_MEMBERS, new_in_group, contact_list, group_name)
group_management_print(ALREADY_MEMBER, already_in_g, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
if new_in_group:
if yes("Publish the list of new members to involved?", abort=False):
add_packet = (GROUP_MSG_MEMBER_ADD_HEADER
+ group.group_id
+ int_to_bytes(len(before_adding))
+ b''.join(before_adding)
+ b''.join(new_in_group))
queue_to_nc(add_packet, queues[RELAY_PACKET_QUEUE])
def group_rm_member(group_name: str,
purp_members: List[bytes],
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
master_key: 'MasterKey',
_: Optional[bytes] = None
) -> None:
"""Remove member(s) from the specified group or remove the group itself."""
if not purp_members:
group_rm_group(group_name, contact_list, group_list, settings, queues, master_key)
if group_name not in group_list.get_list_of_group_names():
raise FunctionReturn(f"Group '{group_name}' does not exist.", head_clear=True)
purp_pub_keys = set(purp_members)
pub_keys = set(contact_list.get_list_of_pub_keys())
before_removal = set(group_list.get_group(group_name).get_list_of_member_pub_keys())
ok_pub_keys_set = set(purp_pub_keys & pub_keys)
removable_set = set(before_removal & ok_pub_keys_set)
remaining = list(before_removal - removable_set)
not_in_group = list(ok_pub_keys_set - before_removal)
rejected = list(purp_pub_keys - pub_keys)
removable = list(removable_set)
ok_pub_keys = list(ok_pub_keys_set)
group = group_list.get_group(group_name)
group.remove_members(removable)
command = GROUP_REMOVE + group.group_id + b''.join(ok_pub_keys)
queue_command(command, settings, queues)
group_management_print(REMOVED_MEMBERS, removable, contact_list, group_name)
group_management_print(NOT_IN_GROUP, not_in_group, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
if removable and remaining and yes("Publish the list of removed members to remaining members?", abort=False):
rem_packet = (GROUP_MSG_MEMBER_REM_HEADER
+ group.group_id
+ int_to_bytes(len(remaining))
+ b''.join(remaining)
+ b''.join(removable))
queue_to_nc(rem_packet, queues[RELAY_PACKET_QUEUE])
def group_rm_group(group_name: str,
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
master_key: 'MasterKey'
) -> None:
"""Remove the group with its members."""
if not yes(f"Remove group '{group_name}'?", abort=False):
raise FunctionReturn("Group removal aborted.", head=0, delay=1, tail_clear=True)
if group_name in group_list.get_list_of_group_names():
group_id = group_list.get_group(group_name).group_id
else:
try:
group_id = b58decode(group_name)
except ValueError:
raise FunctionReturn("Error: Invalid group name/ID.", head_clear=True)
command = LOG_REMOVE + group_id
queue_command(command, settings, queues)
command = GROUP_DELETE + group_id
queue_command(command, settings, queues)
if group_list.has_group(group_name):
with ignored(FunctionReturn):
remove_logs(contact_list, group_list, settings, master_key, group_id)
else:
raise FunctionReturn(f"Transmitter has no group '{group_name}' to remove.")
group = group_list.get_group(group_name)
if not group.empty() and yes("Notify members about leaving the group?", abort=False):
exit_packet = (GROUP_MSG_EXIT_GROUP_HEADER
+ group.group_id
+ b''.join(group.get_list_of_member_pub_keys()))
queue_to_nc(exit_packet, queues[RELAY_PACKET_QUEUE])
group_list.remove_group_by_name(group_name)
raise FunctionReturn(f"Removed group '{group_name}'.", head=0, delay=1, tail_clear=True, bold=True)
def rename_group(new_name: str,
window: 'TxWindow',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
) -> None:
"""Rename the active group."""
if window.type == WIN_TYPE_CONTACT or window.group is None:
raise FunctionReturn("Error: Selected window is not a group window.", head_clear=True)
error_msg = validate_group_name(new_name, contact_list, group_list)
if error_msg:
raise FunctionReturn(error_msg, head_clear=True)
command = GROUP_RENAME + window.uid + new_name.encode()
queue_command(command, settings, queues)
old_name = window.group.name
window.group.name = new_name
group_list.store_groups()
raise FunctionReturn(f"Renamed group '{old_name}' to '{new_name}'.", delay=1, tail_clear=True)

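The member filtering in `group_create` and `group_add_member` above is plain set arithmetic over public keys. A minimal, standalone sketch of that partitioning logic (the function name and the byte-string keys are illustrative, not TFC's API):

```python
def partition_members(purported, known, current):
    """Split purported members into accepted/rejected/new/duplicate sets."""
    purported = set(purported)
    known     = set(known)
    current   = set(current)

    accepted     = purported & known    # known contacts among the input
    rejected     = purported - known    # unknown accounts
    new_in_group = accepted - current   # accepted contacts not yet members
    already_in   = purported & current  # duplicates of existing members
    return accepted, rejected, new_in_group, already_in

acc, rej, new, dup = partition_members(
    purported={b'alice', b'bob', b'mallory'},
    known={b'alice', b'bob', b'carol'},
    current={b'bob'})
# acc == {b'alice', b'bob'}, rej == {b'mallory'}
# new == {b'alice'},         dup == {b'bob'}
```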
src/transmitter/contact.py Normal file

@@ -0,0 +1,287 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import typing
from typing import Dict
from src.common.db_logs import remove_logs
from src.common.encoding import onion_address_to_pub_key
from src.common.exceptions import FunctionReturn
from src.common.input import box_input, yes
from src.common.misc import ignored, validate_key_exchange, validate_nick, validate_onion_addr
from src.common.output import m_print
from src.common.statics import *
from src.transmitter.commands_g import rename_group
from src.transmitter.key_exchanges import create_pre_shared_key, start_key_exchange
from src.transmitter.packet import queue_command, queue_to_nc
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_masterkey import MasterKey
from src.common.db_onion import OnionService
from src.common.db_settings import Settings
from src.transmitter.user_input import UserInput
from src.transmitter.windows import TxWindow
QueueDict = Dict[bytes, Queue]
def add_new_contact(contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
onion_service: 'OnionService'
) -> None:
"""Prompt for contact account details and initialize desired key exchange.
This function requests as little data about the recipient as
possible. The TFC account of a contact is the same as the Onion URL
of the contact's v3 Tor Onion Service. Since these accounts are
random and hard to remember, the user has to choose a nickname for
the contact. Finally, the user must select the key exchange method:
ECDHE for convenience in a pre-quantum world, or PSK for situations
where physical key exchange is possible and the ciphertext must
remain secure even after sufficient QTMs are available to adversaries.
Before starting the key exchange, Transmitter Program exports the
public key of contact's Onion Service to Relay Program on their
Networked Computer so that a connection to the contact can be
established.
"""
try:
if settings.traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.", head_clear=True)
if len(contact_list) >= settings.max_number_of_contacts:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_contacts} accounts.",
head_clear=True)
m_print("Add new contact", head=1, bold=True, head_clear=True)
m_print(["Your TFC account is",
onion_service.user_onion_address,
'', "Warning!",
"Anyone who knows this account",
"can see when your TFC is online"], box=True)
contact_address = box_input("Contact account",
expected_len=ONION_ADDRESS_LENGTH,
validator=validate_onion_addr,
validator_args=onion_service.user_onion_address).strip()
onion_pub_key = onion_address_to_pub_key(contact_address)
contact_nick = box_input("Contact nick",
expected_len=ONION_ADDRESS_LENGTH, # Limited to 255, but such a long nick is impractical.
validator=validate_nick,
validator_args=(contact_list, group_list, onion_pub_key)).strip()
key_exchange = box_input(f"Key exchange ([{ECDHE}],PSK) ",
default=ECDHE,
expected_len=28,
validator=validate_key_exchange).strip()
relay_command = UNENCRYPTED_DATAGRAM_HEADER + UNENCRYPTED_ADD_NEW_CONTACT + onion_pub_key
queue_to_nc(relay_command, queues[RELAY_PACKET_QUEUE])
if key_exchange.upper() in ECDHE:
start_key_exchange(onion_pub_key, contact_nick, contact_list, settings, queues)
elif key_exchange.upper() in PSK:
create_pre_shared_key(onion_pub_key, contact_nick, contact_list, settings, onion_service, queues)
except (EOFError, KeyboardInterrupt):
raise FunctionReturn("Contact creation aborted.", head=2, delay=1, tail_clear=True)
def remove_contact(user_input: 'UserInput',
window: 'TxWindow',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict',
master_key: 'MasterKey') -> None:
"""Remove contact from TFC."""
if settings.traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.", head_clear=True)
try:
selection = user_input.plaintext.split()[1]
except IndexError:
raise FunctionReturn("Error: No account specified.", head_clear=True)
if not yes(f"Remove contact '{selection}'?", abort=False, head=1):
raise FunctionReturn("Removal of contact aborted.", head=0, delay=1, tail_clear=True)
if selection in contact_list.contact_selectors():
onion_pub_key = contact_list.get_contact_by_address_or_nick(selection).onion_pub_key
else:
if validate_onion_addr(selection):
raise FunctionReturn("Error: Invalid selection.", head=0, delay=1, tail_clear=True)
else:
onion_pub_key = onion_address_to_pub_key(selection)
receiver_command = CONTACT_REM + onion_pub_key
queue_command(receiver_command, settings, queues)
with ignored(FunctionReturn):
remove_logs(contact_list, group_list, settings, master_key, onion_pub_key)
queues[KEY_MANAGEMENT_QUEUE].put((KDB_REMOVE_ENTRY_HEADER, onion_pub_key))
relay_command = UNENCRYPTED_DATAGRAM_HEADER + UNENCRYPTED_REM_CONTACT + onion_pub_key
queue_to_nc(relay_command, queues[RELAY_PACKET_QUEUE])
if onion_pub_key in contact_list.get_list_of_pub_keys():
contact = contact_list.get_contact_by_pub_key(onion_pub_key)
target = f"{contact.nick} ({contact.short_address})"
contact_list.remove_contact_by_pub_key(onion_pub_key)
m_print(f"Removed {target} from contacts.", head=1, tail=1)
else:
target = f"{selection[:TRUNC_ADDRESS_LENGTH]}"
m_print(f"Transmitter has no {target} to remove.", head=1, tail=1)
if any([g.remove_members([onion_pub_key]) for g in group_list]):
m_print(f"Removed {target} from group(s).", tail=1)
if window.type == WIN_TYPE_CONTACT:
if onion_pub_key == window.uid:
window.deselect()
if window.type == WIN_TYPE_GROUP:
for c in window:
if c.onion_pub_key == onion_pub_key:
window.update_window(group_list)
# If the last member of the group is removed, deselect
# the group. Deselection is not done in
# update_group_win_members because it would prevent
# selecting the empty group for group related commands
# such as notifications.
if not window.window_contacts:
window.deselect()
def change_nick(user_input: 'UserInput',
window: 'TxWindow',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict') -> None:
"""Change nick of contact."""
try:
nick = user_input.plaintext.split()[1]
except IndexError:
raise FunctionReturn("Error: No nick specified.", head_clear=True)
if window.type == WIN_TYPE_GROUP:
rename_group(nick, window, contact_list, group_list, settings, queues)
assert window.contact is not None
onion_pub_key = window.contact.onion_pub_key
error_msg = validate_nick(nick, (contact_list, group_list, onion_pub_key))
if error_msg:
raise FunctionReturn(error_msg, head_clear=True)
window.contact.nick = nick
window.name = nick
contact_list.store_contacts()
command = CH_NICKNAME + onion_pub_key + nick.encode()
queue_command(command, settings, queues)
def contact_setting(user_input: 'UserInput',
window: 'TxWindow',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: 'QueueDict'
) -> None:
"""\
Change logging, file reception, or notification setting of a group
or (all) contact(s).
"""
try:
parameters = user_input.plaintext.split()
cmd_key = parameters[0]
cmd_header = {LOGGING: CH_LOGGING,
STORE: CH_FILE_RECV,
NOTIFY: CH_NOTIFY}[cmd_key]
setting, b_value = dict(on=(ENABLE, True),
off=(DISABLE, False))[parameters[1]]
except (IndexError, KeyError):
raise FunctionReturn("Error: Invalid command.", head_clear=True)
# If second parameter 'all' is included, apply setting for all contacts and groups
try:
win_uid = b''
if parameters[2] == ALL:
cmd_value = setting.upper()
else:
raise FunctionReturn("Error: Invalid command.", head_clear=True)
except IndexError:
win_uid = window.uid
cmd_value = setting + win_uid
if win_uid:
if window.type == WIN_TYPE_CONTACT and window.contact is not None:
if cmd_key == LOGGING: window.contact.log_messages = b_value
if cmd_key == STORE: window.contact.file_reception = b_value
if cmd_key == NOTIFY: window.contact.notifications = b_value
contact_list.store_contacts()
if window.type == WIN_TYPE_GROUP and window.group is not None:
if cmd_key == LOGGING: window.group.log_messages = b_value
if cmd_key == STORE:
for c in window:
c.file_reception = b_value
if cmd_key == NOTIFY: window.group.notifications = b_value
group_list.store_groups()
else:
for contact in contact_list:
if cmd_key == LOGGING: contact.log_messages = b_value
if cmd_key == STORE: contact.file_reception = b_value
if cmd_key == NOTIFY: contact.notifications = b_value
contact_list.store_contacts()
for group in group_list:
if cmd_key == LOGGING: group.log_messages = b_value
if cmd_key == NOTIFY: group.notifications = b_value
group_list.store_groups()
command = cmd_header + cmd_value
if settings.traffic_masking and cmd_key == LOGGING:
# Send `log_writer_loop` the new logging setting that is loaded
# when the next noise packet is loaded from `noise_packet_loop`.
queues[LOG_SETTING_QUEUE].put(b_value)
window.update_log_messages()
queue_command(command, settings, queues)

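The `contact_setting` function above resolves a command string to a header byte and a boolean via dictionary dispatch, so every malformed input funnels into a single `(IndexError, KeyError)` catch. A minimal sketch of the same pattern, with made-up single-byte headers standing in for the constants from `src.common.statics`:

```python
# Illustrative stand-ins for TFC's header constants (not the real values).
CH_LOGGING, CH_FILE_RECV, CH_NOTIFY = b'L', b'F', b'N'
ENABLE, DISABLE = b'e', b'd'

def parse_setting(tokens):
    """Map ['logging'|'store'|'notify', 'on'|'off'] to (header, setting, bool)."""
    try:
        header = {'logging': CH_LOGGING,
                  'store':   CH_FILE_RECV,
                  'notify':  CH_NOTIFY}[tokens[0]]
        setting, b_value = dict(on=(ENABLE, True),
                                off=(DISABLE, False))[tokens[1]]
    except (IndexError, KeyError):
        # Missing tokens and unknown keywords are handled identically.
        raise ValueError("Error: Invalid command.")
    return header, setting, b_value

print(parse_setting(['logging', 'on']))  # (b'L', b'e', True)
```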
src/transmitter/files.py Executable file

@@ -0,0 +1,159 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import datetime
import os
import typing
import zlib
from typing import Tuple
from src.common.crypto import byte_padding, csprng, encrypt_and_sign
from src.common.encoding import int_to_bytes
from src.common.exceptions import FunctionReturn
from src.common.misc import readable_size, split_byte_string
from src.common.statics import *
if typing.TYPE_CHECKING:
from src.common.db_settings import Settings
from src.transmitter.windows import TxWindow
class File(object):
"""File object wraps methods around file data/header processing.
The File object is only used when sending a file during traffic
masking.
"""
def __init__(self,
path: str,
window: 'TxWindow',
settings: 'Settings'
) -> None:
"""Load file data from specified path and add headers."""
self.window = window
self.settings = settings
self.name = self.get_name(path)
data = self.load_file_data(path)
size, self.size_hr = self.get_size(path)
processed = self.process_file_data(data)
self.time_hr, self.plaintext = self.finalize(size, processed)
@staticmethod
def get_name(path: str) -> bytes:
"""Parse and validate file name."""
name = (path.split('/')[-1]).encode()
File.name_length_check(name)
return name
@staticmethod
def name_length_check(name: bytes) -> None:
"""Ensure that file header fits the first packet."""
full_header_length = (FILE_PACKET_CTR_LENGTH
+ FILE_ETA_FIELD_LENGTH
+ FILE_SIZE_FIELD_LENGTH
+ len(name) + len(US_BYTE))
if full_header_length >= PADDING_LENGTH:
raise FunctionReturn("Error: File name is too long.", head_clear=True)
@staticmethod
def load_file_data(path: str) -> bytes:
"""Load file name, size, and data from the specified path."""
if not os.path.isfile(path):
raise FunctionReturn("Error: File not found.", head_clear=True)
with open(path, 'rb') as f:
data = f.read()
return data
@staticmethod
def get_size(path: str) -> Tuple[bytes, str]:
"""Get size of file in bytes and in human readable form."""
byte_size = os.path.getsize(path)
if byte_size == 0:
raise FunctionReturn("Error: Target file is empty.", head_clear=True)
size = int_to_bytes(byte_size)
size_hr = readable_size(byte_size)
return size, size_hr
@staticmethod
def process_file_data(data: bytes) -> bytes:
"""Compress, encrypt and encode file data.
Compress file to reduce data transmission time. Add an inner
layer of encryption to provide sender-based control over partial
transmission.
"""
compressed = zlib.compress(data, level=COMPRESSION_LEVEL)
file_key = csprng()
processed = encrypt_and_sign(compressed, key=file_key)
processed += file_key
return processed
def finalize(self, size: bytes, processed: bytes) -> Tuple[str, bytes]:
"""Finalize packet and generate plaintext."""
time_bytes, time_print = self.update_delivery_time(self.name, size, processed, self.settings, self.window)
packet_data = time_bytes + size + self.name + US_BYTE + processed
return time_print, packet_data
@staticmethod
def update_delivery_time(name: bytes,
size: bytes,
processed: bytes,
settings: 'Settings',
window: 'TxWindow'
) -> Tuple[bytes, str]:
"""Calculate transmission time.
Transmission time depends on delay settings, file size and
number of members if the recipient is a group.
"""
time_bytes = bytes(FILE_ETA_FIELD_LENGTH)
no_packets = File.count_number_of_packets(name, size, processed, time_bytes)
avg_delay = settings.tm_static_delay + (settings.tm_random_delay / 2)
total_time = len(window) * no_packets * avg_delay
total_time *= 2 # Accommodate command packets between file packets
total_time += no_packets * TRAFFIC_MASKING_QUEUE_CHECK_DELAY
# Update delivery time
time_bytes = int_to_bytes(int(total_time))
time_hr = str(datetime.timedelta(seconds=int(total_time)))
return time_bytes, time_hr
@staticmethod
def count_number_of_packets(name: bytes,
size: bytes,
processed: bytes,
time_bytes: bytes
) -> int:
"""Count number of packets needed for file delivery."""
packet_data = time_bytes + size + name + US_BYTE + processed
if len(packet_data) < PADDING_LENGTH:
return 1
else:
packet_data += bytes(FILE_PACKET_CTR_LENGTH)
packet_data = byte_padding(packet_data)
return len(split_byte_string(packet_data, item_len=PADDING_LENGTH))

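The `count_number_of_packets` method above decides whether the padded file data fits one assembly packet or several. The same arithmetic can be sketched standalone; the constants below are assumptions for illustration, not values read from `src.common.statics`:

```python
PADDING_LENGTH = 255            # assumed padded-message length
FILE_PACKET_CTR_LENGTH = 8      # assumed packet-counter field length

def packets_needed(packet_data: bytes) -> int:
    """Return the number of assembly packets a file transmission needs."""
    if len(packet_data) < PADDING_LENGTH:
        return 1                # short file: a single padded packet
    total = len(packet_data) + FILE_PACKET_CTR_LENGTH
    # PKCS#7-style padding always appends at least one byte, so the padded
    # length is the next multiple of PADDING_LENGTH strictly above `total`.
    return total // PADDING_LENGTH + 1

assert packets_needed(bytes(100)) == 1
assert packets_needed(bytes(300)) == 2
```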

@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,7 +16,7 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
@@ -23,69 +24,74 @@ import readline
import sys
import typing
from typing import Dict
from typing import Dict, NoReturn
from src.common.exceptions import FunctionReturn
from src.common.misc import get_tab_completer, ignored
from src.common.statics import *
from src.tx.commands import process_command
from src.tx.contact import add_new_contact
from src.tx.key_exchanges import new_local_key
from src.tx.packet import queue_file, queue_message
from src.tx.user_input import get_input
from src.tx.windows import TxWindow
from src.transmitter.commands import process_command
from src.transmitter.contact import add_new_contact
from src.transmitter.key_exchanges import export_onion_service_data, new_local_key
from src.transmitter.packet import queue_file, queue_message
from src.transmitter.user_input import get_input
from src.transmitter.windows import TxWindow
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_masterkey import MasterKey
from src.common.db_onion import OnionService
from src.common.db_settings import Settings
from src.common.gateway import Gateway
def input_loop(queues: Dict[bytes, 'Queue'],
settings: 'Settings',
gateway: 'Gateway',
contact_list: 'ContactList',
group_list: 'GroupList',
master_key: 'MasterKey',
stdin_fd: int) -> None:
def input_loop(queues: Dict[bytes, 'Queue'],
settings: 'Settings',
gateway: 'Gateway',
contact_list: 'ContactList',
group_list: 'GroupList',
master_key: 'MasterKey',
onion_service: 'OnionService',
stdin_fd: int
) -> NoReturn:
"""Get input from user and process it accordingly.
Tx side of TFC runs two processes -- input and sender loop -- separate
from one another. This allows prioritized output of queued assembly
packets. input_loop handles Tx-side functions excluding assembly packet
encryption, output and logging, and hash ratchet key/counter updates in
key_list database.
Running this loop as a process allows handling different functions
including inputs, key exchanges, file loading and assembly packet
generation, separate from assembly packet output.
"""
sys.stdin = os.fdopen(stdin_fd)
window = TxWindow(contact_list, group_list)
while True:
with ignored(EOFError, FunctionReturn, KeyboardInterrupt):
readline.set_completer(get_tab_completer(contact_list, group_list, settings))
readline.set_completer(get_tab_completer(contact_list, group_list, settings, gateway))
readline.parse_and_bind('tab: complete')
window.update_group_win_members(group_list)
window.update_window(group_list)
while not onion_service.is_delivered:
export_onion_service_data(contact_list, settings, onion_service, gateway)
while not contact_list.has_local_contact():
new_local_key(contact_list, settings, queues)
while not contact_list.has_contacts():
add_new_contact(contact_list, group_list, settings, queues)
add_new_contact(contact_list, group_list, settings, queues, onion_service)
while not window.is_selected():
window.select_tx_window(settings, queues)
window.select_tx_window(settings, queues, onion_service, gateway)
user_input = get_input(window, settings)
if user_input.type == MESSAGE:
queue_message(user_input, window, settings, queues[MESSAGE_PACKET_QUEUE])
queue_message(user_input, window, settings, queues)
elif user_input.type == FILE:
queue_file(window, settings, queues[FILE_PACKET_QUEUE], gateway)
queue_file(window, settings, queues)
elif user_input.type == COMMAND:
process_command(user_input, window, settings, queues, contact_list, group_list, master_key)
process_command(
user_input, window, contact_list, group_list, settings, queues, master_key, onion_service, gateway)

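The docstring above describes running input handling and assembly-packet output as separate processes connected by queues. A minimal, hypothetical sketch of that producer/consumer split (these are not TFC's queue constants or actual process setup):

```python
# Sketch: one process enqueues packets, another drains the queue and outputs
# them. A None sentinel signals the consumer to stop.
import multiprocessing

def producer(queue):
    """Stand-in for the input loop: queue a few packets, then a sentinel."""
    for msg in (b'hello', b'world'):
        queue.put(msg)
    queue.put(None)

def consumer(queue):
    """Stand-in for the sender loop: drain the queue until the sentinel."""
    while (item := queue.get()) is not None:
        print(item.decode())

if __name__ == '__main__':
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=producer, args=(q,))
    p.start()
    consumer(q)   # prints "hello" then "world"
    p.join()
```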

@@ -0,0 +1,543 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import time
import typing
from typing import Dict
from src.common.crypto import argon2_kdf, blake2b, csprng, encrypt_and_sign, X448
from src.common.db_masterkey import MasterKey
from src.common.encoding import bool_to_bytes, int_to_bytes, pub_key_to_short_address, str_to_bytes
from src.common.exceptions import FunctionReturn
from src.common.input import ask_confirmation_code, get_b58_key, nc_bypass_msg, yes
from src.common.output import m_print, phase, print_fingerprint, print_key, print_on_previous_line
from src.common.path import ask_path_gui
from src.common.statics import *
from src.transmitter.packet import queue_command, queue_to_nc
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_onion import OnionService
from src.common.db_settings import Settings
from src.common.gateway import Gateway
from src.transmitter.windows import TxWindow
QueueDict = Dict[bytes, Queue]
def export_onion_service_data(contact_list: 'ContactList',
settings: 'Settings',
onion_service: 'OnionService',
gateway: 'Gateway'
) -> None:
"""\
Send the Tor Onion Service's private key and list of Onion Service
public keys of contacts to Relay Program on Networked Computer.
This private key is not intended to be used by the Transmitter
Program. Because the Networked Computer we are exporting it to
might not store data, we use the trusted Source Computer to generate
the private key and store it safely. The private key is needed by
Tor on Networked Computer to start the Onion Service.
Exporting this private key does not endanger message confidentiality
because TFC uses a separate key exchange with separate private key
to create the symmetric keys that protect the messages. That private
key is never exported to the Networked Computer.
Access to this key does not give the attacker any information other
than the v3 Onion Address. However, an attacker who has compromised
the Relay Program to gain access to the key can see its public part
anyway.
This key is used by Tor to sign Diffie-Hellman public keys used when
clients of contacts establish a secure connection to the Onion
Service. This key can't be used to decrypt traffic retrospectively.
The worst case in the event of key compromise is that the key
allows the attacker to start their own copy of the user's Onion
Service.
This does not, however, allow impersonating the user, because the
attacker does not possess the keys needed to create valid
ciphertexts. Even if they inject TFC public keys to conduct a MITM
attack, that attack will be detected during fingerprint comparison.
In addition to the private key, the Onion Service data packet also
transmits the list of Onion Service public keys of existing and
pending contacts to the Relay Program, as well as the setting that
determines whether contact requests are allowed. Bundling all this
data in a single packet is useful because a single confirmation
code can then ensure that the Relay Program has all the
information necessary to perform its duties.
"""
m_print("Onion Service setup", bold=True, head_clear=True, head=1, tail=1)
pending_contacts = b''.join(contact_list.get_list_of_pending_pub_keys())
existing_contacts = b''.join(contact_list.get_list_of_existing_pub_keys())
no_pending = int_to_bytes(len(contact_list.get_list_of_pending_pub_keys()))
contact_data = no_pending + pending_contacts + existing_contacts
relay_command = (UNENCRYPTED_DATAGRAM_HEADER
+ UNENCRYPTED_ONION_SERVICE_DATA
+ onion_service.onion_private_key
+ onion_service.conf_code
+ bool_to_bytes(settings.allow_contact_requests)
+ contact_data)
gateway.write(relay_command)
while True:
purp_code = ask_confirmation_code('Relay')
if purp_code == onion_service.conf_code.hex():
onion_service.is_delivered = True
onion_service.new_confirmation_code()
break
elif purp_code == '':
phase("Resending Onion Service data", head=2)
gateway.write(relay_command)
phase(DONE)
print_on_previous_line(reps=5)
else:
m_print(["Incorrect confirmation code. If Relay Program did not",
"receive Onion Service data, resend it by pressing <Enter>."], head=1)
print_on_previous_line(reps=5, delay=2)
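The contact-data bundling described in the docstring above can be sketched as follows. The 32-byte key length and 8-byte big-endian count field are illustrative stand-ins for TFC's actual constants:

```python
# Sketch of bundling/parsing the Onion Service data packet's contact
# section. Constants are illustrative, not TFC's actual values.
from typing import List, Tuple

PUB_KEY_LEN     = 32  # assumed v3 Onion Service public key length
ENCODED_INT_LEN = 8   # assumed big-endian length-field size

def serialize_contact_data(pending: List[bytes], existing: List[bytes]) -> bytes:
    """Bundle pending-key count, pending keys and existing keys."""
    no_pending = len(pending).to_bytes(ENCODED_INT_LEN, 'big')
    return no_pending + b''.join(pending) + b''.join(existing)

def parse_contact_data(data: bytes) -> Tuple[List[bytes], List[bytes]]:
    """Recover both key lists on the receiving side."""
    no_pending = int.from_bytes(data[:ENCODED_INT_LEN], 'big')
    keys       = data[ENCODED_INT_LEN:]
    cut        = no_pending * PUB_KEY_LEN
    pending    = [keys[i:i+PUB_KEY_LEN] for i in range(0,   cut,       PUB_KEY_LEN)]
    existing   = [keys[i:i+PUB_KEY_LEN] for i in range(cut, len(keys), PUB_KEY_LEN)]
    return pending, existing
```

Because the count of pending keys is prepended, the receiver can split one flat byte string back into the two lists without any delimiters.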
def new_local_key(contact_list: 'ContactList',
settings: 'Settings',
queues: 'QueueDict'
) -> None:
"""Run local key exchange protocol.
The local key encrypts commands and data sent from the Source
Computer to the user's Destination Computer. The key is delivered
to the Destination Computer in a packet encrypted with an
ephemeral, symmetric key encryption key.
The check-summed, Base58-encoded key decryption key is typed into
the Receiver Program manually. This prevents the local key from
leaking in the following scenarios:
1. CT is intercepted by an adversary on compromised Networked
Computer, but no visual eavesdropping takes place.
2. CT is not intercepted by an adversary on Networked Computer,
but visual eavesdropping records key decryption key.
3. CT is delivered from Source Computer to Destination Computer
directly (bypassing compromised Networked Computer), and
visual eavesdropping records key decryption key.
Once the correct key decryption key is entered into the Receiver
Program, it displays the two-character hexadecimal confirmation
code generated by the Transmitter Program. The code is entered
back into the Transmitter Program to confirm the user has
successfully delivered the key decryption key.
The protocol completes with the Transmitter Program sending the
LOCAL_KEY_RDY signal to the Receiver Program, which then waits
for public keys from contacts.
"""
try:
if settings.traffic_masking and contact_list.has_local_contact():
raise FunctionReturn("Error: Command is disabled during traffic masking.", head_clear=True)
m_print("Local key setup", bold=True, head_clear=True, head=1, tail=1)
if not contact_list.has_local_contact():
time.sleep(0.5)
key = csprng()
hek = csprng()
kek = csprng()
c_code = os.urandom(CONFIRM_CODE_LENGTH)
local_key_packet = LOCAL_KEY_DATAGRAM_HEADER + encrypt_and_sign(plaintext=key + hek + c_code, key=kek)
# Deliver local key to Destination computer
nc_bypass_msg(NC_BYPASS_START, settings)
queue_to_nc(local_key_packet, queues[RELAY_PACKET_QUEUE])
while True:
print_key("Local key decryption key (to Receiver)", kek, settings)
purp_code = ask_confirmation_code('Receiver')
if purp_code == c_code.hex():
nc_bypass_msg(NC_BYPASS_STOP, settings)
break
elif purp_code == '':
phase("Resending local key", head=2)
queue_to_nc(local_key_packet, queues[RELAY_PACKET_QUEUE])
phase(DONE)
print_on_previous_line(reps=(9 if settings.local_testing_mode else 10))
else:
m_print(["Incorrect confirmation code. If Receiver did not receive",
"the encrypted local key, resend it by pressing <Enter>."], head=1)
print_on_previous_line(reps=(9 if settings.local_testing_mode else 10), delay=2)
# Add local contact to contact list database
contact_list.add_contact(LOCAL_PUBKEY,
LOCAL_NICK,
bytes(FINGERPRINT_LENGTH),
bytes(FINGERPRINT_LENGTH),
KEX_STATUS_LOCAL_KEY,
False, False, False)
# Add local contact to keyset database
queues[KEY_MANAGEMENT_QUEUE].put((KDB_ADD_ENTRY_HEADER,
LOCAL_PUBKEY,
key, csprng(),
hek, csprng()))
# Notify Receiver that confirmation code was successfully entered
queue_command(LOCAL_KEY_RDY, settings, queues)
m_print("Successfully completed the local key exchange.", bold=True, tail_clear=True, delay=1, head=1)
os.system(RESET)
except (EOFError, KeyboardInterrupt):
raise FunctionReturn("Local key setup aborted.", tail_clear=True, delay=1, head=2)
def verify_fingerprints(tx_fp: bytes, # User's fingerprint
rx_fp: bytes # Contact's fingerprint
) -> bool: # True if fingerprints match, else False
"""\
Verify fingerprints over an authenticated out-of-band channel to
detect MITM attacks against TFC's key exchange.
A MITM (man-in-the-middle) attack exploits an inherent problem in
cryptography:
Cryptography is math, nothing more. During key exchange, public
keys are just very large numbers. There is no way to tell by
looking if a
number (received from an untrusted network / Networked Computer) is
the same number the contact generated.
Public key fingerprints are values designed to be compared by humans
either visually or audibly (or sometimes by using semi-automatic
means such as QR-codes). By comparing the fingerprint over an
authenticated channel it's possible to verify that the correct key
was received from the network.
"""
m_print("To verify received public key was not replaced by an attacker "
"call the contact over an end-to-end encrypted line, preferably Signal "
"(https://signal.org/). Make sure Signal's safety numbers have been "
"verified, and then verbally compare the key fingerprints below.",
head_clear=True, max_width=49, head=1, tail=1)
print_fingerprint(tx_fp, " Your fingerprint (you read) ")
print_fingerprint(rx_fp, "Purported fingerprint for contact (they read)")
return yes("Is the contact's fingerprint correct?")
def start_key_exchange(onion_pub_key: bytes, # Public key of contact's v3 Onion Service
nick: str, # Contact's nickname
contact_list: 'ContactList', # Contact list object
settings: 'Settings', # Settings object
queues: 'QueueDict' # Dictionary of multiprocessing queues
) -> None:
"""Start X448 key exchange with the recipient.
This function first creates the X448 key pair. It then outputs the
public key to the Relay Program on the Networked Computer, which
passes the public key to the contact's Relay Program. When the
contact's public key reaches the user's Relay Program, the user
manually copies the key into their Transmitter Program.
The X448 shared secret is used to create unidirectional message
and header keys, which will be used in forward secret
communication. This is followed by fingerprint verification, where
the user manually authenticates the public key.
Once the fingerprint has been accepted, this function will add the
contact/key data to contact/key databases, and export that data to
the Receiver Program on Destination Computer. The transmission is
encrypted with the local key.
---
TFC provides proactive security by making fingerprint verification
part of the key exchange. This prevents the situation where the
users don't know about the feature, and thus helps minimize the risk
of MITM attack.
Fingerprint verification can be skipped by pressing Ctrl+C. This
feature is not advertised, however, because verifying fingerprints
is the only strong way to be sure TFC is not under a MITM attack.
When verification is skipped, TFC marks the contact's X448 keys as
"Unverified". The fingerprints can later be verified with the
`/verify` command: answering `yes` to the question of whether the
fingerprints match marks the X448 keys as "Verified".
Variable naming:
tx = user's key rx = contact's key fp = fingerprint
mk = message key hk = header key
"""
if not contact_list.has_pub_key(onion_pub_key):
contact_list.add_contact(onion_pub_key, nick,
bytes(FINGERPRINT_LENGTH), bytes(FINGERPRINT_LENGTH),
KEX_STATUS_PENDING,
settings.log_messages_by_default,
settings.accept_files_by_default,
settings.show_notifications_by_default)
contact = contact_list.get_contact_by_pub_key(onion_pub_key)
# Generate new private key or load cached private key
if contact.tfc_private_key is None:
tfc_private_key_user = X448.generate_private_key()
else:
tfc_private_key_user = contact.tfc_private_key
try:
tfc_public_key_user = X448.derive_public_key(tfc_private_key_user)
# Import public key of contact
while True:
public_key_packet = PUBLIC_KEY_DATAGRAM_HEADER + onion_pub_key + tfc_public_key_user
queue_to_nc(public_key_packet, queues[RELAY_PACKET_QUEUE])
tfc_public_key_contact = get_b58_key(B58_PUBLIC_KEY, settings, contact.short_address)
if tfc_public_key_contact != b'':
break
# Validate public key of contact
if len(tfc_public_key_contact) != TFC_PUBLIC_KEY_LENGTH:
m_print(["Warning!",
"Received invalid size public key.",
"Aborting key exchange for your safety."], bold=True, tail=1)
raise FunctionReturn("Error: Invalid public key length", output=False)
if tfc_public_key_contact == bytes(TFC_PUBLIC_KEY_LENGTH):
# A contact's public key is all-zero only with negligible
# probability, so we assume such a key is malicious and attempts
# to set the shared secret to zero.
m_print(["Warning!",
"Received a malicious zero-public key.",
"Aborting key exchange for your safety."], bold=True, tail=1)
raise FunctionReturn("Error: Zero public key", output=False)
# Derive shared key
dh_shared_key = X448.shared_key(tfc_private_key_user, tfc_public_key_contact)
# Domain separate unidirectional keys from shared key by using public
# keys as message and the context variable as personalization string.
tx_mk = blake2b(tfc_public_key_contact, dh_shared_key, person=b'message_key', digest_size=SYMMETRIC_KEY_LENGTH)
rx_mk = blake2b(tfc_public_key_user, dh_shared_key, person=b'message_key', digest_size=SYMMETRIC_KEY_LENGTH)
tx_hk = blake2b(tfc_public_key_contact, dh_shared_key, person=b'header_key', digest_size=SYMMETRIC_KEY_LENGTH)
rx_hk = blake2b(tfc_public_key_user, dh_shared_key, person=b'header_key', digest_size=SYMMETRIC_KEY_LENGTH)
# Domain separate fingerprints of public keys by using the
# shared secret as key and the context variable as
# personalization string. This way, entities who might monitor
# the fingerprint verification channel are unable to correlate
# spoken values with public keys they might see in the RAM or
# on the screen of the Networked Computer: public keys cannot
# be derived from the fingerprints due to the preimage
# resistance of BLAKE2b, and fingerprints cannot be derived
# from public keys without the X448 shared secret. Using the
# context variable ensures fingerprints are distinct from the
# derived message and header keys.
tx_fp = blake2b(tfc_public_key_user, dh_shared_key, person=b'fingerprint', digest_size=FINGERPRINT_LENGTH)
rx_fp = blake2b(tfc_public_key_contact, dh_shared_key, person=b'fingerprint', digest_size=FINGERPRINT_LENGTH)
# Verify fingerprints
try:
if not verify_fingerprints(tx_fp, rx_fp):
m_print(["Warning!",
"Possible man-in-the-middle attack detected.",
"Aborting key exchange for your safety."], bold=True, tail=1)
raise FunctionReturn("Error: Fingerprint mismatch", delay=2.5, output=False)
kex_status = KEX_STATUS_VERIFIED
except (EOFError, KeyboardInterrupt):
m_print(["Skipping fingerprint verification.",
'', "Warning!",
"Man-in-the-middle attacks can not be detected",
"unless fingerprints are verified! To re-verify",
"the contact, use the command '/verify'.",
'', "Press <enter> to continue."],
manual_proceed=True, box=True, head=2)
kex_status = KEX_STATUS_UNVERIFIED
# Send keys to the Receiver Program
c_code = blake2b(onion_pub_key, digest_size=CONFIRM_CODE_LENGTH)
command = (KEY_EX_ECDHE
+ onion_pub_key
+ tx_mk + rx_mk
+ tx_hk + rx_hk
+ str_to_bytes(nick))
queue_command(command, settings, queues)
while True:
purp_code = ask_confirmation_code('Receiver')
if purp_code == c_code.hex():
break
elif purp_code == '':
phase("Resending contact data", head=2)
queue_command(command, settings, queues)
phase(DONE)
print_on_previous_line(reps=5)
else:
m_print("Incorrect confirmation code.", head=1)
print_on_previous_line(reps=4, delay=2)
# Store contact data into databases
contact.tfc_private_key = None
contact.tx_fingerprint = tx_fp
contact.rx_fingerprint = rx_fp
contact.kex_status = kex_status
contact_list.store_contacts()
queues[KEY_MANAGEMENT_QUEUE].put((KDB_ADD_ENTRY_HEADER,
onion_pub_key,
tx_mk, csprng(),
tx_hk, csprng()))
m_print(f"Successfully added {nick}.", bold=True, tail_clear=True, delay=1, head=1)
except (EOFError, KeyboardInterrupt):
contact.tfc_private_key = tfc_private_key_user
raise FunctionReturn("Key exchange interrupted.", tail_clear=True, delay=1, head=2)
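The key/fingerprint domain separation above can be demonstrated with hashlib's BLAKE2b keyed mode. The all-zero shared secret and hard-coded public keys below are placeholders for illustration only:

```python
# Sketch of BLAKE2b domain separation: public key as message, X448
# shared secret as key, context string as personalization parameter.
import hashlib

def derive(public_key: bytes, shared_key: bytes, context: bytes) -> bytes:
    return hashlib.blake2b(public_key, key=shared_key,
                           person=context, digest_size=32).digest()

shared  = bytes(32)                 # placeholder for the X448 shared secret
pk_user = bytes.fromhex('aa' * 32)  # placeholder public keys
pk_peer = bytes.fromhex('bb' * 32)

tx_mk = derive(pk_peer, shared, b'message_key')
tx_hk = derive(pk_peer, shared, b'header_key')
tx_fp = derive(pk_user, shared, b'fingerprint')

# Same inputs, different context strings: three unrelated outputs.
assert len({tx_mk, tx_hk, tx_fp}) == 3
```

Note that hashlib's `person` parameter is limited to 16 bytes, which the context strings above satisfy.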
def create_pre_shared_key(onion_pub_key: bytes, # Public key of contact's v3 Onion Service
nick: str, # Nick of contact
contact_list: 'ContactList', # Contact list object
settings: 'Settings', # Settings object
onion_service: 'OnionService', # OnionService object
queues: 'QueueDict' # Dictionary of multiprocessing queues
) -> None:
"""Generate a new pre-shared key for manual key delivery.
Pre-shared keys offer a low-tech solution against the slowly
emerging threat of quantum computers. PSKs are less convenient and
not usable in every scenario, but until a quantum-safe key exchange
algorithm with reasonably short keys is standardized, TFC can't
provide a better alternative against quantum computers.
The generated keys are protected by a key encryption key, derived
from a 256-bit salt and a password (that is to be shared with the
recipient) using Argon2d key derivation function.
The encrypted message and header keys are stored together with salt
on a removable media. This media must be a never-before-used device
from sealed packaging. Re-using an old device might infect Source
Computer, and the malware could either copy sensitive data on that
removable media, or Source Computer might start transmitting the
sensitive data covertly over the serial interface to malware on
Networked Computer.
Once the key has been exported to the clean drive, contact data and
keys are exported to the Receiver Program on Destination computer.
The transmission is encrypted with the local key.
"""
try:
tx_mk = csprng()
tx_hk = csprng()
salt = csprng()
password = MasterKey.new_password("password for PSK")
phase("Deriving key encryption key", head=2)
kek = argon2_kdf(password, salt, rounds=ARGON2_ROUNDS, memory=ARGON2_MIN_MEMORY)
phase(DONE)
ct_tag = encrypt_and_sign(tx_mk + tx_hk, key=kek)
while True:
trunc_addr = pub_key_to_short_address(onion_pub_key)
store_d = ask_path_gui(f"Select removable media for {nick}", settings)
f_name = f"{store_d}/{onion_service.user_short_address}.psk - Give to {trunc_addr}"
try:
with open(f_name, 'wb+') as f:
f.write(salt + ct_tag)
break
except PermissionError:
m_print("Error: Did not have permission to write to the directory.", delay=0.5)
continue
command = (KEY_EX_PSK_TX
+ onion_pub_key
+ tx_mk + csprng()
+ tx_hk + csprng()
+ str_to_bytes(nick))
queue_command(command, settings, queues)
contact_list.add_contact(onion_pub_key, nick,
bytes(FINGERPRINT_LENGTH), bytes(FINGERPRINT_LENGTH),
KEX_STATUS_NO_RX_PSK,
settings.log_messages_by_default,
settings.accept_files_by_default,
settings.show_notifications_by_default)
queues[KEY_MANAGEMENT_QUEUE].put((KDB_ADD_ENTRY_HEADER,
onion_pub_key,
tx_mk, csprng(),
tx_hk, csprng()))
m_print(f"Successfully added {nick}.", bold=True, tail_clear=True, delay=1, head=1)
except (EOFError, KeyboardInterrupt):
raise FunctionReturn("PSK generation aborted.", tail_clear=True, delay=1, head=2)
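The password-based key derivation described above can be sketched with the standard library. hashlib.scrypt stands in for Argon2d here (TFC itself uses Argon2), and the cost parameters and salt are illustrative:

```python
# Sketch of deriving the PSK key encryption key from a password and a
# 256-bit salt. scrypt is a stand-in for Argon2d; parameters are
# illustrative, not TFC's actual values.
import hashlib

def derive_kek(password: str, salt: bytes) -> bytes:
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**12, r=8, p=1, dklen=32)

salt = bytes(32)  # placeholder; TFC uses csprng() output
kek  = derive_kek("correct horse battery staple", salt)

assert derive_kek("correct horse battery staple", salt) == kek  # deterministic
assert derive_kek("wrong password", salt) != kek
```

The salt is stored in plaintext next to the ciphertext on the removable media, so the recipient can re-derive the same key encryption key from the shared password alone.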
def rxp_load_psk(window: 'TxWindow',
contact_list: 'ContactList',
settings: 'Settings',
queues: 'QueueDict',
) -> None:
"""Send command to Receiver Program to load PSK for active contact."""
if settings.traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.", head_clear=True)
if window.type == WIN_TYPE_GROUP or window.contact is None:
raise FunctionReturn("Error: Group is selected.", head_clear=True)
if not contact_list.get_contact_by_pub_key(window.uid).uses_psk():
raise FunctionReturn(f"Error: The current key was exchanged with {ECDHE}.", head_clear=True)
c_code = blake2b(window.uid, digest_size=CONFIRM_CODE_LENGTH)
command = KEY_EX_PSK_RX + c_code + window.uid
queue_command(command, settings, queues)
while True:
try:
purp_code = ask_confirmation_code('Receiver')
if purp_code == c_code.hex():
window.contact.kex_status = KEX_STATUS_HAS_RX_PSK
contact_list.store_contacts()
raise FunctionReturn(f"Removed PSK reminder for {window.name}.", tail_clear=True, delay=1)
else:
m_print("Incorrect confirmation code.", head=1)
print_on_previous_line(reps=4, delay=2)
except (EOFError, KeyboardInterrupt):
raise FunctionReturn("PSK verification aborted.", tail_clear=True, delay=1, head=2)

src/transmitter/packet.py (new executable file, 521 lines)
@@ -0,0 +1,521 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import base64
import os
import typing
import zlib
from typing import Dict, List, Optional, Union
from src.common.crypto import blake2b, byte_padding, csprng, encrypt_and_sign
from src.common.encoding import bool_to_bytes, int_to_bytes, str_to_bytes
from src.common.exceptions import CriticalError, FunctionReturn
from src.common.input import yes
from src.common.misc import split_byte_string
from src.common.output import m_print, phase, print_on_previous_line
from src.common.path import ask_path_gui
from src.common.statics import *
from src.transmitter.files import File
from src.transmitter.user_input import UserInput
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_keys import KeyList
from src.common.db_settings import Settings
from src.common.gateway import Gateway
from src.transmitter.windows import TxWindow, MockWindow
QueueDict = Dict[bytes, Queue]
def queue_to_nc(packet: bytes,
nc_queue: 'Queue',
) -> None:
"""Queue unencrypted command/exported file to Networked Computer.
This function queues unencrypted packets intended for Relay Program
on Networked Computer. These packets are processed in the order of
priority by the `sender_loop` process of src.transmitter.sender_loop
module.
"""
nc_queue.put(packet)
def queue_command(command: bytes,
settings: 'Settings',
queues: 'QueueDict'
) -> None:
"""Split command to assembly packets and queue them for sender_loop()."""
assembly_packets = split_to_assembly_packets(command, COMMAND)
queue_assembly_packets(assembly_packets, COMMAND, settings, queues)
def queue_message(user_input: 'UserInput',
window: Union['MockWindow', 'TxWindow'],
settings: 'Settings',
queues: 'QueueDict',
header: bytes = b'',
whisper: bool = False,
log_as_ph: bool = False
) -> None:
"""\
Prepend header to message, split the message into assembly packets,
and queue the assembly packets.
In this function the Transmitter Program adds the headers that allow
the recipient's Receiver Program to redirect the received message to
the correct window.
Each message packet starts with a 1-byte whisper-header that
determines whether the packet should be logged by the recipient.
For private messages, no information is required beyond the
PRIVATE_MESSAGE_HEADER, which tells the Receiver Program to use
the sender's window.
For group messages, the GROUP_MESSAGE_HEADER tells the Receiver
Program that the header is followed by two additional headers:
1) A 4-byte group ID that tells which group the message is
intended for. If the Receiver Program has not whitelisted the
group ID, the group message is ignored. The group ID space was
chosen so that the birthday bound is 65536, because it's
unlikely a user will ever have that many groups.
2) A 16-byte group message ID. This random ID is not important
for the receiver. Instead, it is used by the sender's own
Receiver Program to detect which group messages are copies
sent to other members of the group (these are excluded from
the ephemeral and persistent message logs). The message ID
space was chosen so that the birthday bound is 2^64 (the same
as the hash ratchet counter space).
Once the headers are determined, the message is split into assembly
packets, that are then queued for encryption and transmission by the
`sender_loop` process.
"""
if not header:
if window.type == WIN_TYPE_GROUP and window.group is not None:
header = GROUP_MESSAGE_HEADER + window.group.group_id + os.urandom(GROUP_MSG_ID_LENGTH)
else:
header = PRIVATE_MESSAGE_HEADER
payload = bool_to_bytes(whisper) + header + user_input.plaintext.encode()
assembly_packets = split_to_assembly_packets(payload, MESSAGE)
queue_assembly_packets(assembly_packets, MESSAGE, settings, queues, window, log_as_ph)
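The payload layout described in the docstring above (whisper byte, message header, plaintext) can be sketched as follows. The 1-byte header values and lengths are illustrative stand-ins for TFC's constants:

```python
# Sketch of the message payload layout: whisper byte + header + text.
# Header byte values and lengths are illustrative.
import os
from typing import Optional

PRIVATE_MESSAGE_HEADER = b'p'
GROUP_MESSAGE_HEADER   = b'g'
GROUP_MSG_ID_LENGTH    = 16

def build_payload(plaintext: str,
                  whisper: bool = False,
                  group_id: Optional[bytes] = None) -> bytes:
    if group_id is not None:
        # Group message: group ID + random per-message ID follow the header.
        header = GROUP_MESSAGE_HEADER + group_id + os.urandom(GROUP_MSG_ID_LENGTH)
    else:
        header = PRIVATE_MESSAGE_HEADER
    return (b'\x01' if whisper else b'\x00') + header + plaintext.encode()
```

A private message thus carries two bytes of overhead, while a group message adds the 4-byte group ID and 16-byte message ID on top.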
def queue_file(window: 'TxWindow',
settings: 'Settings',
queues: 'QueueDict'
) -> None:
"""Ask file path and load file data.
In TFC there are two ways to send a file.
For traffic masking, the file is loaded and sent inside normal
messages using assembly packet headers dedicated for file
transmission. This transmission is much slower, so the File object
will determine metadata about the transmission: the estimated
transfer time, the number of packets, and the name and size of
the file. This information is inserted into the first assembly
packet so that the recipient can observe the transmission
progress from the file transfer window.
When traffic masking is disabled, file transmission is much faster
as the file is only encrypted and transferred over serial once
before the Relay Program multi-casts the ciphertext to each
specified recipient. See the send_file docstring (below) for more
details.
"""
path = ask_path_gui("Select file to send...", settings, get_file=True)
if path.endswith(('tx_contacts', 'tx_groups', 'tx_keys', 'tx_login_data', 'tx_settings',
'rx_contacts', 'rx_groups', 'rx_keys', 'rx_login_data', 'rx_settings',
'tx_serial_settings.json', 'nc_serial_settings.json',
'rx_serial_settings.json', 'tx_onion_db')):
raise FunctionReturn("Error: Can't send TFC database.", head_clear=True)
if not settings.traffic_masking:
send_file(path, settings, queues, window)
return
file = File(path, window, settings)
assembly_packets = split_to_assembly_packets(file.plaintext, FILE)
if settings.confirm_sent_files:
try:
if not yes(f"Send {file.name.decode()} ({file.size_hr}) to {window.type_print} {window.name} "
f"({len(assembly_packets)} packets, time: {file.time_hr})?"):
raise FunctionReturn("File selection aborted.", head_clear=True)
except (EOFError, KeyboardInterrupt):
raise FunctionReturn("File selection aborted.", head_clear=True)
queue_assembly_packets(assembly_packets, FILE, settings, queues, window, log_as_ph=True)
def send_file(path: str,
settings: 'Settings',
queues: 'QueueDict',
window: 'TxWindow'
) -> None:
"""Send file to window members in a single transmission.
This is the default mode for file transmission, used when traffic
masking is not enabled. The file is loaded and compressed before it
is encrypted. The encrypted file is then exported to Networked
Computer along with a list of Onion Service public keys (members in
window) of all recipients to whom the Relay Program will
multi-cast the file.
Once the file ciphertext has been exported, this function will
multi-cast the file decryption key to each recipient inside an
automated key delivery message that uses a special FILE_KEY_HEADER
in place of the standard PRIVATE_MESSAGE_HEADER. To indicate which
file ciphertext the key is for, an identifier must be added to the
key delivery message. The identifier in this case is the BLAKE2b
digest of the ciphertext itself. The reason for using the digest
as the identifier is that it authenticates both the ciphertext and
its origin.
To understand this, consider the following attack scenario:
Let the file ciphertext identifier be just a random 32-byte value "ID".
1) Alice sends Bob and Chuck (a malicious common peer) a file
ciphertext and identifier CT|ID (where | denotes concatenation).
2) Chuck who has compromised Bob's Networked Computer interdicts the
CT|ID from Alice.
3) Chuck decrypts CT in his end, makes edits to the plaintext PT to
create PT'.
4) Chuck re-encrypts PT' with the same symmetric key to produce CT'.
5) Chuck re-uses the ID and produces CT'|ID.
6) Chuck uploads the CT'|ID to Bob's Networked Computer and replaces
the interdicted CT|ID with it.
7) When Bob's Receiver Program receives the automated key delivery
message from Alice, it uses the bundled ID to conclude that the
key is for CT'.
8) Bob's Receiver decrypts CT' using the newly received key and
obtains Chuck's PT', that appears to come from Alice.
Now, consider a situation where the ID is instead calculated as
ID = BLAKE2b(CT). If Chuck edits PT, CT' will by definition differ
from CT, and the BLAKE2b digest will also differ.
In order to make Bob decrypt CT', Chuck needs to also change the
hash in Alice's key delivery message, which means Chuck needs to
create an existential forgery of the TFC message. Since the Poly1305
tag prevents this, the calculated ID is enough to authenticate the
ciphertext.
If Chuck attempts to send their own key delivery message, Chuck's
own Onion Service public key used to identify the TFC message key
(decryption key for the key delivery message) will be permanently
associated with the file hash, so if they inject a file CT, and Bob
has decided to enable file reception for Chuck, the file CT will
appear to come from Chuck, and not from Alice. From the perspective
of Bob, it's as if Chuck had dropped Alice's file and sent him
another file instead.
"""
from src.transmitter.windows import MockWindow # Avoid circular import
if settings.traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.", head_clear=True)
name = path.split('/')[-1]
data = bytearray()
data.extend(str_to_bytes(name))
if not os.path.isfile(path):
raise FunctionReturn("Error: File not found.", head_clear=True)
if os.path.getsize(path) == 0:
raise FunctionReturn("Error: Target file is empty.", head_clear=True)
phase("Reading data")
with open(path, 'rb') as f:
data.extend(f.read())
phase(DONE)
print_on_previous_line(flush=True)
phase("Compressing data")
comp = bytes(zlib.compress(bytes(data), level=COMPRESSION_LEVEL))
phase(DONE)
print_on_previous_line(flush=True)
phase("Encrypting data")
file_key = csprng()
file_ct = encrypt_and_sign(comp, file_key)
ct_hash = blake2b(file_ct)
phase(DONE)
print_on_previous_line(flush=True)
phase("Exporting data")
no_contacts = int_to_bytes(len(window))
ser_contacts = b''.join([c.onion_pub_key for c in window])
file_packet = FILE_DATAGRAM_HEADER + no_contacts + ser_contacts + file_ct
queue_to_nc(file_packet, queues[RELAY_PACKET_QUEUE])
key_delivery_msg = base64.b85encode(ct_hash + file_key).decode()
for contact in window:
queue_message(user_input=UserInput(key_delivery_msg, MESSAGE),
window =MockWindow(contact.onion_pub_key, [contact]),
settings =settings,
queues =queues,
header =FILE_KEY_HEADER,
log_as_ph =True)
phase(DONE)
print_on_previous_line(flush=True)
m_print(f"Sent file '{name}' to {window.type_print} {window.name}.")
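The attack scenario above reduces to a few lines: since the identifier is the digest of the ciphertext itself, Chuck's substituted CT' no longer matches the ID in Alice's key delivery message. The digest size is illustrative:

```python
# Sketch of the ID = BLAKE2b(CT) argument: any edit to the ciphertext
# changes the identifier, so a substituted CT' is detected.
import hashlib

def file_id(ciphertext: bytes) -> bytes:
    return hashlib.blake2b(ciphertext, digest_size=32).digest()

ct       = b"Alice's original file ciphertext"
ct_prime = b"Chuck's re-encrypted file ciphertext"

assert file_id(ct) == file_id(ct)        # deterministic lookup key
assert file_id(ct) != file_id(ct_prime)  # substitution breaks the match
```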
def split_to_assembly_packets(payload: bytes, p_type: str) -> List[bytes]:
"""Split payload to assembly packets.
Messages and commands are compressed to reduce transmission time.
Files directed to this function during traffic masking have been
compressed at an earlier point.
If the compressed message cannot be sent over one packet, it is
split into multiple assembly packets. Long messages are encrypted
with an inner layer of XChaCha20-Poly1305 to provide sender-based
control over partially transmitted data. Regardless of packet size,
files always have an inner layer of encryption, and it is added
before the file data is passed to this function. Commands do not
need sender-based control, so they are only delivered with a hash
that makes integrity checking easy.
The first assembly packet of a file transmission is prepended
with an 8-byte packet counter header that tells the sender and
receiver how many packets the file transmission requires.
Each assembly packet is prepended with a header that tells the
Receiver Program if the packet is a short (single packet)
transmission or if it's the start packet, a continuation packet, or
the last packet of a multi-packet transmission.
"""
s_header = {MESSAGE: M_S_HEADER, FILE: F_S_HEADER, COMMAND: C_S_HEADER}[p_type]
l_header = {MESSAGE: M_L_HEADER, FILE: F_L_HEADER, COMMAND: C_L_HEADER}[p_type]
a_header = {MESSAGE: M_A_HEADER, FILE: F_A_HEADER, COMMAND: C_A_HEADER}[p_type]
e_header = {MESSAGE: M_E_HEADER, FILE: F_E_HEADER, COMMAND: C_E_HEADER}[p_type]
if p_type in [MESSAGE, COMMAND]:
payload = zlib.compress(payload, level=COMPRESSION_LEVEL)
if len(payload) < PADDING_LENGTH:
padded = byte_padding(payload)
packet_list = [s_header + padded]
else:
if p_type == MESSAGE:
msg_key = csprng()
payload = encrypt_and_sign(payload, msg_key)
payload += msg_key
elif p_type == FILE:
payload = bytes(FILE_PACKET_CTR_LENGTH) + payload
elif p_type == COMMAND:
payload += blake2b(payload)
padded = byte_padding(payload)
p_list = split_byte_string(padded, item_len=PADDING_LENGTH)
if p_type == FILE:
p_list[0] = int_to_bytes(len(p_list)) + p_list[0][FILE_PACKET_CTR_LENGTH:]
packet_list = ([l_header + p_list[0]] +
[a_header + p for p in p_list[1:-1]] +
[e_header + p_list[-1]])
return packet_list
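The padding and splitting steps above can be sketched as follows; 255 is used as an illustrative padded-packet length:

```python
# Sketch of PKCS#7-style padding to a multiple of a fixed length,
# followed by the fixed-size split. PADDING_LENGTH is illustrative.
from typing import List

PADDING_LENGTH = 255

def byte_padding(data: bytes) -> bytes:
    pad_len = PADDING_LENGTH - len(data) % PADDING_LENGTH
    return data + pad_len * bytes([pad_len])

def split_byte_string(data: bytes, item_len: int) -> List[bytes]:
    return [data[i:i + item_len] for i in range(0, len(data), item_len)]

short = byte_padding(b'short payload')
assert len(short) == PADDING_LENGTH            # fits a single packet

long = split_byte_string(byte_padding(bytes(600)), PADDING_LENGTH)
assert len(long) == 3                          # multi-packet transmission
assert all(len(p) == PADDING_LENGTH for p in long)
```

Padding every packet to the same length hides the true payload size from anyone observing the serial interface.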
def queue_assembly_packets(assembly_packet_list: List[bytes],
p_type: str,
settings: 'Settings',
queues: 'QueueDict',
window: Optional[Union['TxWindow', 'MockWindow']] = None,
log_as_ph: bool = False
) -> None:
"""Queue assembly packets for sender_loop().
This function is the last function on Transmitter Program's
`input_loop` process. It feeds the assembly packets to
multiprocessing queues along with metadata required for transmission
and message logging. The data put into these queues is read by the
`sender_loop` process in src.transmitter.sender_loop module.
"""
if p_type in [MESSAGE, FILE] and window is not None:
if settings.traffic_masking:
queue = queues[TM_MESSAGE_PACKET_QUEUE] if p_type == MESSAGE else queues[TM_FILE_PACKET_QUEUE]
for assembly_packet in assembly_packet_list:
queue.put((assembly_packet, window.log_messages, log_as_ph))
else:
queue = queues[MESSAGE_PACKET_QUEUE]
for c in window:
for assembly_packet in assembly_packet_list:
queue.put((assembly_packet, c.onion_pub_key, window.log_messages, log_as_ph, window.uid))
elif p_type == COMMAND:
queue = queues[TM_COMMAND_PACKET_QUEUE] if settings.traffic_masking else queues[COMMAND_PACKET_QUEUE]
for assembly_packet in assembly_packet_list:
queue.put(assembly_packet)
def send_packet(key_list: 'KeyList', # Key list object
gateway: 'Gateway', # Gateway object
log_queue: 'Queue', # Multiprocessing queue for logged messages
assembly_packet: bytes, # Padded plaintext assembly packet
onion_pub_key: Optional[bytes] = None, # Recipient v3 Onion Service address
log_messages: Optional[bool] = None, # When True, log the message assembly packet
log_as_ph: Optional[bool] = None # When True, log assembly packet as placeholder data
) -> None:
"""Encrypt and send assembly packet.
The assembly packets are encrypted using a symmetric message key.
TFC provides forward secrecy via a hash ratchet: each previous
message key is replaced by its BLAKE2b hash. The preimage
resistance of the hash function prevents retrospective decryption of
ciphertexts in case of physical compromise.
The hash ratchet state (the number of times the initial message key
has been passed through BLAKE2b) is delivered to the recipient inside
the hash ratchet counter (harac). This counter is encrypted with a
static symmetric key called the header key.
The encrypted assembly packet and the encrypted harac are prepended
with a datagram header that tells whether the encrypted assembly
packet is a command or a message. Packets with MESSAGE_DATAGRAM_HEADER
also contain a second header, the public key of the recipient's Onion
Service, which allows the correct contact to request the ciphertext
from the Relay Program's server.
Once the encrypted_packet has been output, the hash ratchet advances
to the next state, and the assembly packet is pushed to log_queue,
which is read by the `log_writer_loop` process (that can be found
at src.common.db_logs). This approach prevents IO delays caused by
`input_loop` reading the log file from affecting the `sender_loop`
process, which could reveal schedule information under traffic
masking mode.
"""
if len(assembly_packet) != ASSEMBLY_PACKET_LENGTH:
raise CriticalError("Invalid assembly packet PT length.")
if onion_pub_key is None:
keyset = key_list.get_keyset(LOCAL_PUBKEY)
header = COMMAND_DATAGRAM_HEADER
else:
keyset = key_list.get_keyset(onion_pub_key)
header = MESSAGE_DATAGRAM_HEADER + onion_pub_key
harac_in_bytes = int_to_bytes(keyset.tx_harac)
encrypted_harac = encrypt_and_sign(harac_in_bytes, keyset.tx_hk)
encrypted_message = encrypt_and_sign(assembly_packet, keyset.tx_mk)
encrypted_packet = header + encrypted_harac + encrypted_message
gateway.write(encrypted_packet)
keyset.rotate_tx_mk()
log_queue.put((onion_pub_key, assembly_packet, log_messages, log_as_ph, key_list.master_key))
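The hash ratchet described in the docstring above can be sketched in isolation. This is a simplified illustration using `hashlib.blake2b` directly, not TFC's actual key rotation code, and the initial key below is a hypothetical placeholder:

```python
import hashlib

def ratchet(message_key: bytes) -> bytes:
    """Replace the message key with its BLAKE2b hash (one ratchet step)."""
    return hashlib.blake2b(message_key, digest_size=32).digest()

# A hypothetical initial 256-bit message key:
k0 = bytes(32)
k1 = ratchet(k0)  # key after sending the first packet
k2 = ratchet(k1)  # key after sending the second packet

# Forward secrecy: an attacker who captures k2 cannot recover k1 or k0,
# because that would require inverting BLAKE2b (a preimage attack).
```

The harac simply counts how many times `ratchet` has been applied, letting the recipient catch up if packets are dropped.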
def cancel_packet(user_input: 'UserInput',
window: 'TxWindow',
settings: 'Settings',
queues: 'QueueDict'
) -> None:
"""Cancel sent message/file to contact/group.
In cases where the assembly packets have not yet been encrypted or
output to the Networked Computer, the queued messages or files to the
active window can be cancelled. Any single-packet message or file
this function removes from the queue/transfer buffer is unavailable
to the recipient. However, in the case of multi-packet transmissions,
if only the last assembly packet is cancelled, the recipient might
obtain a large enough section of the key that protects the inner
encryption layer to brute force the rest of the key, and thus decrypt
the packet. There is simply no way to prevent this kind of attack
without making TFC proprietary and re-writing it in a compiled
language (which is very bad for users' rights).
"""
header, p_type = dict(cm=(M_C_HEADER, 'messages'),
cf=(F_C_HEADER, 'files' ))[user_input.plaintext]
if settings.traffic_masking:
queue = queues[TM_MESSAGE_PACKET_QUEUE] if header == M_C_HEADER else queues[TM_FILE_PACKET_QUEUE]
else:
if header == F_C_HEADER:
raise FunctionReturn("Files are only queued during traffic masking.", head_clear=True)
queue = queues[MESSAGE_PACKET_QUEUE]
cancel_pt = header + bytes(PADDING_LENGTH)
log_as_ph = False # Never log cancel assembly packets as placeholder data
cancel = False
if settings.traffic_masking:
if queue.qsize() != 0:
cancel = True
# Get most recent log_messages setting status in queue
log_messages = False
while queue.qsize() != 0:
log_messages = queue.get()[1]
queue.put((cancel_pt, log_messages, log_as_ph))
m_print(f"Cancelled queued {p_type}." if cancel else f"No {p_type} to cancel.", head=1, tail=1)
else:
p_buffer = []
while queue.qsize() != 0:
queue_data = queue.get()
window_uid = queue_data[4]
# Put messages unrelated to the active window into the buffer
if window_uid != window.uid:
p_buffer.append(queue_data)
else:
cancel = True
# Put cancel packets for each window contact to queue first
if cancel:
for c in window:
queue.put((cancel_pt, c.onion_pub_key, c.log_messages, log_as_ph, window.uid))
# Put buffered tuples back to the queue
for p in p_buffer:
queue.put(p)
if cancel:
message = f"Cancelled queued {p_type} to {window.type_print} {window.name}."
else:
message = f"No {p_type} queued for {window.type_print} {window.name}."
raise FunctionReturn(message, head_clear=True)

src/transmitter/sender_loop.py Executable file

@ -0,0 +1,276 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import time
import typing
from typing import Dict, List, Optional, Tuple
from src.common.misc import ignored
from src.common.statics import *
from src.transmitter.packet import send_packet
from src.transmitter.traffic_masking import HideRunTime
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_keys import KeyList
from src.common.db_settings import Settings
from src.common.gateway import Gateway
QueueDict = Dict[bytes, Queue]
Message_buffer = Dict[bytes, List[Tuple[bytes, bytes, bool, bool, bytes]]]
def sender_loop(queues: 'QueueDict',
settings: 'Settings',
gateway: 'Gateway',
key_list: 'KeyList',
unittest: bool = False
) -> None:
"""Output packets from queues based on queue priority.
Depending on the traffic masking setting adjusted by the user, run
either the traffic masking or the standard sender loop for packet
output.
"""
m_buffer = dict() # type: Message_buffer
while True:
if settings.traffic_masking:
settings = traffic_masking_loop(queues, settings, gateway, key_list)
else:
settings, m_buffer = standard_sender_loop(queues, gateway, key_list, m_buffer)
if unittest:
break
def traffic_masking_loop(queues: 'QueueDict',
settings: 'Settings',
gateway: 'Gateway',
key_list: 'KeyList',
) -> 'Settings':
"""Run Transmitter Program in traffic masking mode.
The traffic masking loop loads assembly packets from a set of queues.
As Python's multiprocessing lacks priority queues, several queues are
prioritized based on their status.
Files are only transmitted when messages are not being output: file
transmission is usually very slow and the user might need to send
messages in the meantime. Command datagrams are output from the
Source Computer between each message datagram. This frequency of
output allows commands to take effect as soon as possible, but it
unfortunately halves message/file delivery speed. Each contact in the
window is cycled in order.
When this loop is active, making changes to the recipient list is
prevented to protect the user from accidentally revealing the use of
TFC.
The traffic is masked the following way: If both m_queue and f_queue
are empty, a noise assembly packet is loaded from np_queue. If no
command packet is available in c_queue, a noise command packet is
loaded from nc_queue. Both noise queues are filled with independent
processes that ensure both noise queues always have packets to
output.
TFC does its best to hide the assembly packet loading times and
encryption duration by using a constant-time context manager with
CSPRNG-spawned jitter, constant-time queue status lookups, and the
constant-time XChaCha20 cipher. However, since TFC is written in a
high-level language, it is impossible to guarantee the Source
Computer never reveals to the Networked Computer when the user
operates the Source Computer.
"""
ws_queue = queues[WINDOW_SELECT_QUEUE]
m_queue = queues[TM_MESSAGE_PACKET_QUEUE]
f_queue = queues[TM_FILE_PACKET_QUEUE]
c_queue = queues[TM_COMMAND_PACKET_QUEUE]
np_queue = queues[TM_NOISE_PACKET_QUEUE]
nc_queue = queues[TM_NOISE_COMMAND_QUEUE]
rp_queue = queues[RELAY_PACKET_QUEUE]
log_queue = queues[LOG_PACKET_QUEUE]
sm_queue = queues[SENDER_MODE_QUEUE]
while True:
with ignored(EOFError, KeyboardInterrupt):
while ws_queue.qsize() == 0:
time.sleep(0.01)
window_contacts = ws_queue.get()
# Window selection command to Receiver Program.
while c_queue.qsize() == 0:
time.sleep(0.01)
send_packet(key_list, gateway, log_queue, c_queue.get())
break
while True:
with ignored(EOFError, KeyboardInterrupt):
# Load message/file assembly packet.
with HideRunTime(settings, duration=TRAFFIC_MASKING_QUEUE_CHECK_DELAY):
# Choosing an element from a list is constant time.
#
# The outer index evaluates m_queue: if m_queue has data in it,
# False is evaluated as 0 and the first nested list is selected, so
# we load from m_queue regardless of f_queue's state. The inner
# index evaluates f_queue: if m_queue is empty but f_queue has data,
# False (0) selects f_queue; if both are empty, True (1) selects
# np_queue.
queue = [[m_queue, m_queue], [f_queue, np_queue]][m_queue.qsize() == 0][f_queue.qsize() == 0]
# Regardless of the queue, each .get() returns a tuple with the same
# amount of data: a 256-byte bytestring and two booleans.
assembly_packet, log_messages, log_as_ph = queue.get() # type: bytes, bool, bool
for c in window_contacts:
# Message/file assembly packet to window contact.
with HideRunTime(settings, delay_type=TRAFFIC_MASKING):
send_packet(key_list, gateway, log_queue, assembly_packet, c.onion_pub_key, log_messages)
# Send a command between each assembly packet for each contact.
with HideRunTime(settings, delay_type=TRAFFIC_MASKING):
# Choosing element from list is constant time.
queue = [c_queue, nc_queue][c_queue.qsize() == 0]
# Each loaded command and noise command is a 256-byte bytestring.
command = queue.get() # type: bytes
send_packet(key_list, gateway, log_queue, command)
# The two queues below are empty until the user is willing to reveal to
# Networked Computer they are either disabling Traffic masking or exiting
# TFC. Until that happens, queue status check takes constant time.
# Check for unencrypted commands that close TFC.
if rp_queue.qsize() != 0:
packet = rp_queue.get()
command = packet[DATAGRAM_HEADER_LENGTH:]
if command in [UNENCRYPTED_EXIT_COMMAND, UNENCRYPTED_WIPE_COMMAND]:
gateway.write(packet)
queues[EXIT_QUEUE].put(command)
# If traffic masking has been disabled, move all packets to standard_sender_loop queues.
if sm_queue.qsize() != 0 and all(q.qsize() == 0 for q in (m_queue, f_queue, c_queue)):
settings = queues[SENDER_MODE_QUEUE].get()
return settings
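The branch-free queue selection used in the loop above relies on Python booleans indexing as 0/1. A stand-alone sketch of the same double-indexing trick, with strings standing in for the queues:

```python
def pick_queue(m_queue, f_queue, np_queue, m_empty: bool, f_empty: bool):
    """Select a queue without branching: False indexes as 0, True as 1.

    The outer index checks m_queue; if it has data, m_queue is chosen
    regardless of f_queue. Otherwise the inner index picks f_queue when
    it has data, and the noise packet queue np_queue when it does not.
    """
    return [[m_queue, m_queue], [f_queue, np_queue]][m_empty][f_empty]

# Messages waiting -> m_queue wins regardless of file queue state:
assert pick_queue('m', 'f', 'np', m_empty=False, f_empty=True) == 'm'
# Only files waiting -> f_queue:
assert pick_queue('m', 'f', 'np', m_empty=True, f_empty=False) == 'f'
# Both queues empty -> noise packets fill the gap:
assert pick_queue('m', 'f', 'np', m_empty=True, f_empty=True) == 'np'
```

Because indexing takes the same time either way, an observer timing the loop cannot tell whether real traffic or noise was selected.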
def standard_sender_loop(queues: 'QueueDict',
gateway: 'Gateway',
key_list: 'KeyList',
m_buffer: Optional['Message_buffer'] = None
) -> Tuple['Settings', 'Message_buffer']:
"""Run Transmitter program in standard send mode.
The standard sender loop loads assembly packets from a set of queues.
As Python's multiprocessing lacks priority queues, several queues are
prioritized based on their status:
KEY_MANAGEMENT_QUEUE has the highest priority. This ensures that no
queued message/command is encrypted with an expired keyset.
COMMAND_PACKET_QUEUE has the second highest priority, to ensure
commands are issued swiftly to the Receiver Program. Some commands,
like screen clearing, might need to be issued quickly.
RELAY_PACKET_QUEUE has the third highest priority. These are also
commands, but since the Relay Program does not handle sensitive data,
issuing commands to that device does not take priority.
Buffered messages have the fourth highest priority. This ensures
that if for whatever reason the keyset is removed, buffered messages
do not get lost. Packets are loaded from the buffer on a FIFO basis,
ensuring packets arrive at the recipient in order.
MESSAGE_PACKET_QUEUE has the fifth highest priority. Buffered
messages need to arrive first, so new messages are prioritized after
the buffered ones.
SENDER_MODE_QUEUE has the sixth highest priority. This prevents
outgoing packets from being left in the queues used by this loop. The
queue returns an up-to-date settings object for the `sender_loop`
parent loop, which in turn uses it to start `traffic_masking_loop`.
Along with settings, this function returns the m_buffer status so
that assembly packets that could not be sent due to a missing key can
be output later, if the user returns to standard_sender_loop and adds
new keys for the contact.
"""
km_queue = queues[KEY_MANAGEMENT_QUEUE]
c_queue = queues[COMMAND_PACKET_QUEUE]
rp_queue = queues[RELAY_PACKET_QUEUE]
m_queue = queues[MESSAGE_PACKET_QUEUE]
sm_queue = queues[SENDER_MODE_QUEUE]
log_queue = queues[LOG_PACKET_QUEUE]
if m_buffer is None:
m_buffer = dict()
while True:
with ignored(EOFError, KeyboardInterrupt):
if km_queue.qsize() != 0:
key_list.manage(*km_queue.get())
continue
# Commands to Receiver
if c_queue.qsize() != 0:
if key_list.has_local_keyset():
send_packet(key_list, gateway, log_queue, c_queue.get())
continue
# Commands/files to Networked Computer
if rp_queue.qsize() != 0:
packet = rp_queue.get()
gateway.write(packet)
command = packet[DATAGRAM_HEADER_LENGTH:]
if command in [UNENCRYPTED_EXIT_COMMAND, UNENCRYPTED_WIPE_COMMAND]:
time.sleep(gateway.settings.local_testing_mode * 0.1)
time.sleep(gateway.settings.data_diode_sockets * 1.5)
signal = WIPE if command == UNENCRYPTED_WIPE_COMMAND else EXIT
queues[EXIT_QUEUE].put(signal)
continue
# Buffered messages
for onion_pub_key in m_buffer:
if key_list.has_keyset(onion_pub_key) and m_buffer[onion_pub_key]:
send_packet(key_list, gateway, log_queue, *m_buffer[onion_pub_key].pop(0)[:-1])
continue
# New messages
if m_queue.qsize() != 0:
queue_data = m_queue.get() # type: Tuple[bytes, bytes, bool, bool, bytes]
onion_pub_key = queue_data[1]
if key_list.has_keyset(onion_pub_key):
send_packet(key_list, gateway, log_queue, *queue_data[:-1])
else:
m_buffer.setdefault(onion_pub_key, []).append(queue_data)
continue
# If traffic masking has been enabled, switch send mode when all queues are empty.
if sm_queue.qsize() != 0 and all(q.qsize() == 0 for q in (km_queue, c_queue, rp_queue, m_queue)):
settings = sm_queue.get()
return settings, m_buffer
time.sleep(0.01)
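The per-contact buffering described by the docstring's fourth and fifth priorities reduces to a dict of FIFO lists. A minimal sketch of the same `setdefault`/`pop(0)` pattern, with hypothetical packet tuples:

```python
from typing import Dict, List, Tuple

m_buffer = {}  # type: Dict[bytes, List[Tuple[str, ...]]]

def buffer_packet(onion_pub_key: bytes, queue_data: tuple) -> None:
    """Buffer a packet for a contact whose keyset is missing."""
    m_buffer.setdefault(onion_pub_key, []).append(queue_data)

buffer_packet(b'alice_pub_key', ('packet_1',))
buffer_packet(b'alice_pub_key', ('packet_2',))

# Once a keyset exists for the contact, packets drain oldest-first,
# mirroring `m_buffer[onion_pub_key].pop(0)` in the loop above:
first = m_buffer[b'alice_pub_key'].pop(0)
```

Draining with `pop(0)` preserves submission order, so messages composed before the key exchange completed still reach the recipient in sequence.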


@ -0,0 +1,104 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import random
import threading
import time
import typing
from typing import Any, Dict, Optional, Tuple, Union
from src.common.misc import ignored
from src.common.statics import *
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_settings import Settings
QueueDict = Dict[bytes, Queue]
class HideRunTime(object):
"""Runtime hiding time context manager.
By joining a thread that sleeps for a longer time than it takes for
the function to run, this context manager hides the actual running
time of the function.
Note that random.SystemRandom() uses the Kernel CSPRNG (/dev/urandom),
not Python's weak PRNG based on Mersenne Twister:
https://docs.python.org/2/library/random.html#random.SystemRandom
"""
def __init__(self,
settings: 'Settings',
delay_type: str = STATIC,
duration: float = 0.0
) -> None:
if delay_type == TRAFFIC_MASKING:
self.length = settings.tm_static_delay
self.length += random.SystemRandom().uniform(0, settings.tm_random_delay)
elif delay_type == STATIC:
self.length = duration
def __enter__(self) -> None:
self.timer = threading.Thread(target=time.sleep, args=(self.length,))
self.timer.start()
def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
self.timer.join()
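The effect of the context manager above can be shown in isolation. This simplified stand-alone variant pads a block to a fixed duration; it omits the Settings object and CSPRNG jitter that the real HideRunTime uses:

```python
import threading
import time

class FixedRunTime:
    """Simplified stand-in for HideRunTime: pad a block to a fixed duration."""

    def __init__(self, duration: float) -> None:
        self.duration = duration

    def __enter__(self) -> None:
        # Start the sleep concurrently with the wrapped block.
        self.timer = threading.Thread(target=time.sleep, args=(self.duration,))
        self.timer.start()

    def __exit__(self, *exc) -> None:
        self.timer.join()  # Block until the full duration has elapsed.

start = time.monotonic()
with FixedRunTime(0.2):
    pass  # A near-instant operation...
elapsed = time.monotonic() - start  # ...still appears to take ~0.2 s.
```

As long as the wrapped block finishes before the sleeping thread does, an external observer sees a constant runtime regardless of what the block did.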
def noise_loop(queues: 'QueueDict',
contact_list: Optional['ContactList'] = None,
unittest: bool = False
) -> None:
"""Generate noise packets for traffic masking.
This process ensures the noise packet / noise command queue always
has noise assembly packets available.
"""
log_messages = True # This setting is ignored: settings.log_file_masking controls logging of noise packets.
log_as_ph = True
header = C_N_HEADER if contact_list is None else P_N_HEADER
noise_assembly_packet = header + bytes(PADDING_LENGTH)
if contact_list is None:
# Noise command
queue = queues[TM_NOISE_COMMAND_QUEUE]
content = noise_assembly_packet # type: Union[bytes, Tuple[bytes, bool, bool]]
else:
# Noise packet
queue = queues[TM_NOISE_PACKET_QUEUE]
content = (noise_assembly_packet, log_messages, log_as_ph)
while True:
with ignored(EOFError, KeyboardInterrupt):
while queue.qsize() < NOISE_PACKET_BUFFER:
queue.put(content)
time.sleep(0.1)
if unittest:
break


@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@ -15,7 +16,7 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import typing
@ -24,11 +25,14 @@ from src.common.output import print_on_previous_line
from src.common.statics import *
if typing.TYPE_CHECKING:
from src.common.db_settings import Settings
from src.tx.windows import TxWindow
from src.common.db_settings import Settings
from src.transmitter.windows import TxWindow
def process_aliases(plaintext: str, settings: 'Settings', window: 'TxWindow') -> str:
def process_aliases(plaintext: str,
settings: 'Settings',
window: 'TxWindow'
) -> str:
"""Check if plaintext is an alias for another command."""
aliases = [(' ', '/unread' ),
(' ', '/exit' if settings.double_space_exits else '/clear'),
@ -37,6 +41,8 @@ def process_aliases(plaintext: str, settings: 'Settings', window: 'TxWindow') ->
for a in aliases:
if plaintext == a[0]:
plaintext = a[1]
# Replace what the user typed
print_on_previous_line()
print(f"Msg to {window.type_print} {window.name}: {plaintext}")
break
@ -45,7 +51,7 @@ def process_aliases(plaintext: str, settings: 'Settings', window: 'TxWindow') ->
def get_input(window: 'TxWindow', settings: 'Settings') -> 'UserInput':
"""Read and process input from user."""
"""Read and process input from the user and determine its type."""
while True:
try:
plaintext = input(f"Msg to {window.type_print} {window.name}: ")
@ -60,18 +66,21 @@ def get_input(window: 'TxWindow', settings: 'Settings') -> 'UserInput':
# Determine plaintext type
pt_type = MESSAGE
if plaintext == '/file':
pt_type = FILE
elif plaintext.startswith('/'):
plaintext = plaintext[1:]
plaintext = plaintext[len('/'):]
pt_type = COMMAND
# Check if group was empty
if pt_type in [MESSAGE, FILE] and window.type == WIN_TYPE_GROUP and not window.group.has_members():
print_on_previous_line()
print(f"Msg to {window.type_print} {window.name}: Error: Group is empty.")
print_on_previous_line(delay=0.5)
continue
# Check if the group was empty
if pt_type in [MESSAGE, FILE] and window.type == WIN_TYPE_GROUP:
if window.group is not None and window.group.empty():
print_on_previous_line()
print(f"Msg to {window.type_print} {window.name}: Error: The group is empty.")
print_on_previous_line(delay=0.5)
continue
return UserInput(plaintext, pt_type)
@ -80,10 +89,10 @@ class UserInput(object):
"""UserInput objects are messages, files or commands.
The type of created UserInput object is determined based on input
by user. Commands start with slash, but as files are a special case
of command, /file commands are interpreted as file type. The 'type'
attribute allows tx_loop to determine what function should process
the user input.
by the user. Commands start with a slash, but as files are a special
case of a command, /file commands are interpreted as the file type.
The 'type' attribute allows tx_loop to determine what function
should process the user input.
"""
def __init__(self, plaintext: str, type_: str) -> None:

src/transmitter/windows.py Executable file

@ -0,0 +1,265 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import typing
from typing import Dict, Generator, Iterable, List, Optional, Sized
from src.common.exceptions import FunctionReturn
from src.common.input import yes
from src.common.output import clear_screen
from src.common.statics import *
from src.transmitter.contact import add_new_contact
from src.transmitter.key_exchanges import export_onion_service_data, start_key_exchange
from src.transmitter.packet import queue_command
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_contacts import Contact, ContactList
from src.common.db_groups import Group, GroupList
from src.common.db_onion import OnionService
from src.common.db_settings import Settings
from src.common.gateway import Gateway
from src.transmitter.user_input import UserInput
QueueDict = Dict[bytes, Queue]
class MockWindow(Iterable):
"""\
Mock window simplifies queueing of message assembly packets for
automatically generated group management and key delivery messages.
"""
def __init__(self, uid: bytes, contacts: List['Contact']) -> None:
"""Create a new MockWindow object."""
self.window_contacts = contacts
self.type = WIN_TYPE_CONTACT
self.group = None # type: Optional[Group]
self.name = None # type: Optional[str]
self.uid = uid
self.log_messages = self.window_contacts[0].log_messages
def __iter__(self) -> Generator:
"""Iterate over contact objects in the window."""
yield from self.window_contacts
class TxWindow(Iterable, Sized):
"""\
TxWindow object contains data about the active recipient (contact or
group).
"""
def __init__(self,
contact_list: 'ContactList',
group_list: 'GroupList'
) -> None:
"""Create a new TxWindow object."""
self.contact_list = contact_list
self.group_list = group_list
self.window_contacts = [] # type: List[Contact]
self.contact = None # type: Optional[Contact]
self.group = None # type: Optional[Group]
self.name = '' # type: str
self.uid = b'' # type: bytes
self.group_id = None # type: Optional[bytes]
self.log_messages = None # type: Optional[bool]
self.type = '' # type: str
self.type_print = None # type: Optional[str]
def __iter__(self) -> Generator:
"""Iterate over Contact objects in the window."""
yield from self.window_contacts
def __len__(self) -> int:
"""Return the number of contacts in the window."""
return len(self.window_contacts)
def select_tx_window(self,
settings: 'Settings', # Settings object
queues: 'QueueDict', # Dictionary of Queues
onion_service: 'OnionService', # OnionService object
gateway: 'Gateway', # Gateway object
selection: Optional[str] = None, # Selector for window
cmd: bool = False # True when `/msg` command is used to switch window
) -> None:
"""Select specified window or ask the user to specify one."""
if selection is None:
self.contact_list.print_contacts()
self.group_list.print_groups()
if self.contact_list.has_only_pending_contacts():
print("\n'/connect' sends Onion Service/contact data to Relay"
"\n'/add' adds another contact."
"\n'/rm <Nick>' removes an existing contact.\n")
selection = input("Select recipient: ").strip()
if selection in self.group_list.get_list_of_group_names():
if cmd and settings.traffic_masking and selection != self.name:
raise FunctionReturn("Error: Can't change window during traffic masking.", head_clear=True)
self.contact = None
self.group = self.group_list.get_group(selection)
self.window_contacts = self.group.members
self.name = self.group.name
self.uid = self.group.group_id
self.group_id = self.group.group_id
self.log_messages = self.group.log_messages
self.type = WIN_TYPE_GROUP
self.type_print = 'group'
elif selection in self.contact_list.contact_selectors():
if cmd and settings.traffic_masking:
contact = self.contact_list.get_contact_by_address_or_nick(selection)
if contact.onion_pub_key != self.uid:
raise FunctionReturn("Error: Can't change window during traffic masking.", head_clear=True)
self.contact = self.contact_list.get_contact_by_address_or_nick(selection)
if self.contact.kex_status == KEX_STATUS_PENDING:
start_key_exchange(self.contact.onion_pub_key,
self.contact.nick,
self.contact_list,
settings, queues)
self.group = None
self.group_id = None
self.window_contacts = [self.contact]
self.name = self.contact.nick
self.uid = self.contact.onion_pub_key
self.log_messages = self.contact.log_messages
self.type = WIN_TYPE_CONTACT
self.type_print = 'contact'
elif selection.startswith('/'):
self.window_selection_command(selection, settings, queues, onion_service, gateway)
else:
raise FunctionReturn("Error: No contact/group was found.")
if settings.traffic_masking:
queues[WINDOW_SELECT_QUEUE].put(self.window_contacts)
packet = WIN_SELECT + self.uid
queue_command(packet, settings, queues)
clear_screen()
def window_selection_command(self,
selection: str,
settings: 'Settings',
queues: 'QueueDict',
onion_service: 'OnionService',
gateway: 'Gateway'
) -> None:
"""Commands for adding and removing contacts from contact selection menu.
In situations where only pending contacts are available and
those contacts are not online, these commands ensure the user
is still able to add new contacts.
"""
if selection == '/add':
add_new_contact(self.contact_list, self.group_list, settings, queues, onion_service)
raise FunctionReturn("New contact added.", output=False)
elif selection == '/connect':
export_onion_service_data(self.contact_list, settings, onion_service, gateway)
elif selection.startswith('/rm'):
try:
selection = selection.split()[1]
except IndexError:
raise FunctionReturn("Error: No account specified.", delay=1)
if not yes(f"Remove contact '{selection}'?", abort=False, head=1):
raise FunctionReturn("Removal of contact aborted.", head=0, delay=1)
if selection in self.contact_list.contact_selectors():
onion_pub_key = self.contact_list.get_contact_by_address_or_nick(selection).onion_pub_key
self.contact_list.remove_contact_by_pub_key(onion_pub_key)
self.contact_list.store_contacts()
raise FunctionReturn(f"Removed contact '{selection}'.", delay=1)
else:
raise FunctionReturn(f"Error: Unknown contact '{selection}'.", delay=1)
else:
raise FunctionReturn("Error: Invalid command.", delay=1)
def deselect(self) -> None:
"""Deselect active window."""
self.window_contacts = []
self.contact = None # type: Contact
self.group = None # type: Group
self.name = '' # type: str
self.uid = b'' # type: bytes
self.log_messages = None # type: bool
self.type = '' # type: str
self.type_print = None # type: str
def is_selected(self) -> bool:
"""Return True if a window is selected, else False."""
return self.name != ''
def update_log_messages(self) -> None:
"""Update window's logging setting."""
if self.type == WIN_TYPE_CONTACT and self.contact is not None:
self.log_messages = self.contact.log_messages
if self.type == WIN_TYPE_GROUP and self.group is not None:
self.log_messages = self.group.log_messages
def update_window(self, group_list: 'GroupList') -> None:
"""Update window.
Since previous input may have changed the window data, reload
window data before prompting for UserInput.
"""
if self.type == WIN_TYPE_GROUP:
if self.group_id is not None and group_list.has_group_id(self.group_id):
self.group = group_list.get_group_by_id(self.group_id)
self.window_contacts = self.group.members
self.name = self.group.name
self.uid = self.group.group_id
else:
self.deselect()
elif self.type == WIN_TYPE_CONTACT:
if self.contact is not None and self.contact_list.has_pub_key(self.contact.onion_pub_key):
# Reload window contact in case keys were re-exchanged.
self.contact = self.contact_list.get_contact_by_pub_key(self.contact.onion_pub_key)
self.window_contacts = [self.contact]
def select_window(user_input: 'UserInput',
window: 'TxWindow',
settings: 'Settings',
queues: 'QueueDict',
onion_service: 'OnionService',
gateway: 'Gateway'
) -> None:
"""Select a new window to send messages/files."""
try:
selection = user_input.plaintext.split()[1]
except (IndexError, TypeError):
raise FunctionReturn("Error: Invalid recipient.", head_clear=True)
window.select_tx_window(settings, queues, onion_service, gateway, selection=selection, cmd=True)


@ -1,537 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import os
import struct
import textwrap
import time
import typing
import zlib
from multiprocessing import Queue
from typing import Any, Dict, List, Tuple, Union
from src.common.crypto import csprng, encrypt_and_sign
from src.common.db_logs import access_logs, re_encrypt, remove_logs
from src.common.encoding import int_to_bytes, str_to_bytes
from src.common.exceptions import FunctionReturn
from src.common.input import yes
from src.common.misc import ensure_dir, get_terminal_width
from src.common.output import box_print, clear_screen, phase, print_key, print_on_previous_line
from src.common.path import ask_path_gui
from src.common.statics import *
from src.tx.commands_g import process_group_command
from src.tx.contact import add_new_contact, change_nick, contact_setting, show_fingerprints, remove_contact
from src.tx.key_exchanges import new_local_key, rxm_load_psk
from src.tx.packet import cancel_packet, queue_command, queue_message, queue_to_nh
from src.tx.user_input import UserInput
from src.tx.windows import select_window
if typing.TYPE_CHECKING:
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
from src.tx.windows import TxWindow
def process_command(user_input: 'UserInput',
window: 'TxWindow',
settings: 'Settings',
queues: Dict[bytes, 'Queue'],
contact_list: 'ContactList',
group_list: 'GroupList',
master_key: 'MasterKey') -> None:
"""\
Select function based on first keyword of issued
command and pass relevant parameters to it.
"""
c = COMMAND_PACKET_QUEUE
m = MESSAGE_PACKET_QUEUE
n = NH_PACKET_QUEUE
# Keyword Function to run ( Parameters )
# ------------------------------------------------------------------------------------------------------------------------
d = {'about': (print_about, ),
'add': (add_new_contact, contact_list, group_list, settings, queues ),
'clear': (clear_screens, user_input, window, settings, queues ),
'cmd': (rxm_show_sys_win, user_input, window, settings, queues[c] ),
'cm': (cancel_packet, user_input, window, settings, queues ),
'cf': (cancel_packet, user_input, window, settings, queues ),
'exit': (exit_tfc, settings, queues ),
'export': (log_command, user_input, window, contact_list, group_list, settings, queues[c], master_key),
'fingerprints': (show_fingerprints, window ),
'fe': (export_file, settings, queues[n] ),
'fi': (import_file, settings, queues[n] ),
'fw': (rxm_show_sys_win, user_input, window, settings, queues[c] ),
'group': (process_group_command, user_input, contact_list, group_list, settings, queues, master_key),
'help': (print_help, settings ),
'history': (log_command, user_input, window, contact_list, group_list, settings, queues[c], master_key),
'localkey': (new_local_key, contact_list, settings, queues, ),
'logging': (contact_setting, user_input, window, contact_list, group_list, settings, queues[c] ),
'msg': (select_window, user_input, window, settings, queues ),
'names': (print_recipients, contact_list, group_list, ),
'nick': (change_nick, user_input, window, contact_list, group_list, settings, queues[c] ),
'notify': (contact_setting, user_input, window, contact_list, group_list, settings, queues[c] ),
'passwd': (change_master_key, user_input, contact_list, group_list, settings, queues, master_key),
'psk': (rxm_load_psk, window, contact_list, settings, queues[c] ),
'reset': (clear_screens, user_input, window, settings, queues ),
'rm': (remove_contact, user_input, window, contact_list, group_list, settings, queues, master_key),
'rmlogs': (remove_log, user_input, contact_list, settings, queues[c], master_key),
'set': (change_setting, user_input, contact_list, group_list, settings, queues ),
'settings': (settings.print_settings, ),
'store': (contact_setting, user_input, window, contact_list, group_list, settings, queues[c] ),
'unread': (rxm_display_unread, settings, queues[c] ),
'whisper': (whisper, user_input, window, settings, queues[m] ),
'wipe': (wipe, settings, queues )} # type: Dict[str, Any]
try:
cmd_key = user_input.plaintext.split()[0]
from_dict = d[cmd_key]
except KeyError:
raise FunctionReturn(f"Error: Invalid command '{cmd_key}'")
except (IndexError, UnboundLocalError):
        raise FunctionReturn("Error: Invalid command.")
func = from_dict[0]
parameters = from_dict[1:]
func(*parameters)
def print_about() -> None:
"""Print URLs that direct to TFC's project site and documentation."""
clear_screen()
print(f"\n Tinfoil Chat {VERSION}\n\n"
" Website: https://github.com/maqp/tfc/\n"
" Wikipage: https://github.com/maqp/tfc/wiki\n"
" White paper: https://cs.helsinki.fi/u/oottela/tfc.pdf\n")
def clear_screens(user_input: 'UserInput',
window: 'TxWindow',
settings: 'Settings',
queues: Dict[bytes, 'Queue']) -> None:
"""Clear/reset TxM, RxM and NH screens.
    The unencrypted command is sent to NH only if traffic masking is
    disabled and a related IM account can be bound to the active window.
    Since the reset command removes the ephemeral message log on RxM, TxM
    decides which window to reset (in case e.g. a previous window selection
    command packet was dropped and the active window state is inconsistent
    between TxM and RxM).
"""
cmd = user_input.plaintext.split()[0]
command = CLEAR_SCREEN_HEADER if cmd == CLEAR else RESET_SCREEN_HEADER + window.uid.encode()
queue_command(command, settings, queues[COMMAND_PACKET_QUEUE])
clear_screen()
if not settings.session_traffic_masking and window.imc_name is not None:
im_window = window.imc_name.encode()
pt_cmd = UNENCRYPTED_SCREEN_CLEAR if cmd == CLEAR else UNENCRYPTED_SCREEN_RESET
packet = UNENCRYPTED_PACKET_HEADER + pt_cmd + im_window
queue_to_nh(packet, settings, queues[NH_PACKET_QUEUE])
if cmd == RESET:
os.system('reset')
def rxm_show_sys_win(user_input: 'UserInput',
window: 'TxWindow',
settings: 'Settings',
c_queue: 'Queue') -> None:
"""Display system window on RxM until user presses Enter."""
cmd = user_input.plaintext.split()[0]
win_name = dict(cmd=LOCAL_ID, fw=WIN_TYPE_FILE)[cmd]
command = WINDOW_SELECT_HEADER + win_name.encode()
queue_command(command, settings, c_queue)
box_print(f"<Enter> returns RxM to {window.name}'s window", manual_proceed=True)
print_on_previous_line(reps=4, flush=True)
command = WINDOW_SELECT_HEADER + window.uid.encode()
queue_command(command, settings, c_queue)
def exit_tfc(settings: 'Settings', queues: Dict[bytes, 'Queue']) -> None:
"""Exit TFC on TxM/RxM/NH."""
for q in [COMMAND_PACKET_QUEUE, NH_PACKET_QUEUE]:
while queues[q].qsize() != 0:
queues[q].get()
queue_command(EXIT_PROGRAM_HEADER, settings, queues[COMMAND_PACKET_QUEUE])
if not settings.session_traffic_masking:
if settings.local_testing_mode:
time.sleep(0.8)
if settings.data_diode_sockets:
time.sleep(2.2)
else:
time.sleep(settings.race_condition_delay)
queue_to_nh(UNENCRYPTED_PACKET_HEADER + UNENCRYPTED_EXIT_COMMAND, settings, queues[NH_PACKET_QUEUE])
def log_command(user_input: 'UserInput',
window: 'TxWindow',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
c_queue: 'Queue',
master_key: 'MasterKey') -> None:
"""Display message logs or export them to plaintext file on TxM/RxM.
    TxM processes sent messages; RxM processes both sent and
    received messages for all participants in the active window.
"""
cmd = user_input.plaintext.split()[0]
export, header = dict(export =(True, LOG_EXPORT_HEADER),
history=(False, LOG_DISPLAY_HEADER))[cmd]
try:
msg_to_load = int(user_input.plaintext.split()[1])
except ValueError:
raise FunctionReturn("Error: Invalid number of messages.")
except IndexError:
msg_to_load = 0
if export and not yes(f"Export logs for '{window.name}' in plaintext?", head=1, tail=1):
raise FunctionReturn("Logfile export aborted.")
try:
command = header + window.uid.encode() + US_BYTE + int_to_bytes(msg_to_load)
except struct.error:
raise FunctionReturn("Error: Invalid number of messages.")
queue_command(command, settings, c_queue)
access_logs(window, contact_list, group_list, settings, master_key, msg_to_load, export)
def export_file(settings: 'Settings', nh_queue: 'Queue') -> None:
"""Encrypt and export file to NH.
    This is a faster method for sending large files. It is used together
    with the file import (/fi) command, which uploads the ciphertext to RxM
    for RxM-side decryption. The key is generated automatically so that
    weak passwords chosen by users do not affect the security of ciphertexts.
"""
if settings.session_traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.")
path = ask_path_gui("Select file to export...", settings, get_file=True)
name = path.split('/')[-1]
data = bytearray()
data.extend(str_to_bytes(name))
if not os.path.isfile(path):
raise FunctionReturn("Error: File not found.")
if os.path.getsize(path) == 0:
raise FunctionReturn("Error: Target file is empty.")
phase("Reading data")
with open(path, 'rb') as f:
data.extend(f.read())
phase(DONE)
phase("Compressing data")
comp = bytes(zlib.compress(bytes(data), level=COMPRESSION_LEVEL))
phase(DONE)
phase("Encrypting data")
file_key = csprng()
file_ct = encrypt_and_sign(comp, key=file_key)
phase(DONE)
phase("Exporting data")
queue_to_nh(EXPORTED_FILE_HEADER + file_ct, settings, nh_queue)
phase(DONE)
print_key(f"Decryption key for file '{name}':", file_key, settings, no_split=True, file_key=True)
def import_file(settings: 'Settings', nh_queue: 'Queue') -> None:
"""\
Send unencrypted command to NH that tells it to open
RxM upload prompt for received (exported) file.
"""
if settings.session_traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.")
queue_to_nh(UNENCRYPTED_PACKET_HEADER + UNENCRYPTED_IMPORT_COMMAND, settings, nh_queue)
def print_help(settings: 'Settings') -> None:
"""Print the list of commands."""
    def help_printer(tuple_list: List[Tuple[str, str, bool]]) -> None:
"""Print list of commands.
Style depends on terminal width and settings.
"""
len_longest_command = max(len(t[0]) for t in tuple_list) + 1 # Add one for spacing
for help_cmd, description, display in tuple_list:
if not display:
continue
wrapper = textwrap.TextWrapper(width=max(1, terminal_width - len_longest_command))
desc_lines = wrapper.fill(description).split('\n')
desc_indent = (len_longest_command - len(help_cmd)) * ' '
print(help_cmd + desc_indent + desc_lines[0])
# Print wrapped description lines with indent
if len(desc_lines) > 1:
for line in desc_lines[1:]:
print(len_longest_command * ' ' + line)
print('')
notm = not settings.session_traffic_masking
common = [("/about", "Show links to project resources", True),
("/add", "Add new contact", notm),
("/cf", "Cancel file transmission to active contact/group", True),
("/cm", "Cancel message transmission to active contact/group", True),
("/clear, ' '", "Clear screens from TxM, RxM and IM client", True),
("/cmd, '//'", "Display command window on RxM", True),
("/exit", "Exit TFC on TxM, NH and RxM", True),
("/export (n)", "Export (n) messages from recipient's logfile", True),
("/file", "Send file to active contact/group", True),
("/fingerprints", "Print public key fingerprints of user and contact", True),
("/fe", "Encrypt and export file to NH", notm),
("/fi", "Import file from NH to RxM", notm),
("/fw", "Display file reception window on RxM", True),
("/help", "Display this list of commands", True),
("/history (n)", "Print (n) messages from recipient's logfile", True),
("/localkey", "Generate new local key pair", notm),
("/logging {on,off}(' all')", "Change message log setting (for all contacts)", True),
("/msg {A,N}", "Change active recipient to account A or nick N", notm),
("/names", "List contacts and groups", True),
("/nick N", "Change nickname of active recipient to N", True),
("/notify {on,off} (' all')", "Change notification settings (for all contacts)", True),
("/passwd {tx,rx}", "Change master password on TxM/RxM", notm),
("/psk", "Open PSK import dialog on RxM", notm),
("/reset", "Reset ephemeral session log on TxM/RxM/IM client", True),
("/rm {A,N}", "Remove account A or nick N from TxM and RxM", notm),
("/rmlogs {A,N}", "Remove log entries for A/N on TxM and RxM", True),
("/set S V", "Change setting S to value V on TxM/RxM(/NH)", True),
("/settings", "List setting names, values and descriptions", True),
("/store {on,off} (' all')", "Change file reception (for all contacts)", True),
("/unread, ' '", "List windows with unread messages on RxM", True),
("/whisper M", "Send message M, asking it not to be logged", True),
("/wipe", "Wipe all TFC/IM user data and power off systems", True),
("Shift + PgUp/PgDn", "Scroll terminal up/down", True),]
groupc = [("/group create G A₁ .. Aₙ ", "Create group G and add accounts A₁ .. Aₙ", notm),
("/group add G A₁ .. Aₙ", "Add accounts A₁ .. Aₙ to group G", notm),
("/group rm G A₁ .. Aₙ", "Remove accounts A₁ .. Aₙ from group G", notm),
("/group rm G", "Remove group G", notm)]
terminal_width = get_terminal_width()
clear_screen()
print(textwrap.fill("List of commands:", width=terminal_width))
print('')
help_printer(common)
    print(terminal_width * '─')
if settings.session_traffic_masking:
print('')
else:
print("Group management:\n")
help_printer(groupc)
    print(terminal_width * '─' + '\n')
def print_recipients(contact_list: 'ContactList', group_list: 'GroupList') -> None:
"""Print list of contacts and groups."""
contact_list.print_contacts()
group_list.print_groups()
def change_master_key(user_input: 'UserInput',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: Dict[bytes, 'Queue'],
master_key: 'MasterKey') -> None:
"""Change master key on TxM/RxM."""
try:
if settings.session_traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.")
try:
device = user_input.plaintext.split()[1].lower()
except IndexError:
raise FunctionReturn("Error: No target system specified.")
if device not in [TX, RX]:
raise FunctionReturn("Error: Invalid target system.")
if device == RX:
queue_command(CHANGE_MASTER_K_HEADER, settings, queues[COMMAND_PACKET_QUEUE])
return None
old_master_key = master_key.master_key[:]
master_key.new_master_key()
new_master_key = master_key.master_key
phase("Re-encrypting databases")
queues[KEY_MANAGEMENT_QUEUE].put((KDB_CHANGE_MASTER_KEY_HEADER, master_key))
ensure_dir(DIR_USER_DATA)
file_name = f'{DIR_USER_DATA}{settings.software_operation}_logs'
if os.path.isfile(file_name):
re_encrypt(old_master_key, new_master_key, settings)
settings.store_settings()
contact_list.store_contacts()
group_list.store_groups()
phase(DONE)
box_print("Master key successfully changed.", head=1)
clear_screen(delay=1.5)
except KeyboardInterrupt:
raise FunctionReturn("Password change aborted.", delay=1, head=3, tail_clear=True)
def remove_log(user_input: 'UserInput',
contact_list: 'ContactList',
settings: 'Settings',
c_queue: 'Queue',
master_key: 'MasterKey') -> None:
"""Remove log entries for contact."""
try:
selection = user_input.plaintext.split()[1]
except IndexError:
raise FunctionReturn("Error: No contact/group specified.")
if not yes(f"Remove logs for {selection}?", head=1):
raise FunctionReturn("Logfile removal aborted.")
# Swap specified nick to rx_account
if selection in contact_list.get_list_of_nicks():
selection = contact_list.get_contact(selection).rx_account
command = LOG_REMOVE_HEADER + selection.encode()
queue_command(command, settings, c_queue)
remove_logs(selection, settings, master_key)
def change_setting(user_input: 'UserInput',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: Dict[bytes, 'Queue']) -> None:
"""Change setting on TxM / RxM."""
try:
setting = user_input.plaintext.split()[1]
except IndexError:
raise FunctionReturn("Error: No setting specified.")
if setting not in settings.key_list:
raise FunctionReturn(f"Error: Invalid setting '{setting}'")
try:
value = user_input.plaintext.split()[2]
except IndexError:
raise FunctionReturn("Error: No value for setting specified.")
pt_cmd = dict(serial_error_correction=UNENCRYPTED_EC_RATIO,
serial_baudrate =UNENCRYPTED_BAUDRATE,
disable_gui_dialog =UNENCRYPTED_GUI_DIALOG)
if setting in pt_cmd:
if settings.session_traffic_masking:
raise FunctionReturn("Error: Can't change this setting during traffic masking.")
settings.change_setting(setting, value, contact_list, group_list)
command = CHANGE_SETTING_HEADER + setting.encode() + US_BYTE + value.encode()
queue_command(command, settings, queues[COMMAND_PACKET_QUEUE])
if setting in pt_cmd:
packet = UNENCRYPTED_PACKET_HEADER + pt_cmd[setting] + value.encode()
queue_to_nh(packet, settings, queues[NH_PACKET_QUEUE])
def rxm_display_unread(settings: 'Settings', c_queue: 'Queue') -> None:
"""Temporarily display list of windows with unread messages on RxM."""
queue_command(SHOW_WINDOW_ACTIVITY_HEADER, settings, c_queue)
def whisper(user_input: 'UserInput', window: 'TxWindow', settings: 'Settings', m_queue: 'Queue') -> None:
    """Send a message that overrides the contact's enabled logging setting.
    This feature cannot be technically enforced, but if the recipient can
    be trusted, it can be used to send keys for to-be-imported files, as
    well as off-the-record messages, without the risk that they are stored
    in log files, which would ruin forward secrecy of imported (and later
    deleted) files.
"""
message = user_input.plaintext[len('whisper '):]
queue_message(user_input=UserInput(message, MESSAGE),
window =window,
settings =settings,
m_queue =m_queue,
header =WHISPER_MESSAGE_HEADER,
log_as_ph =True)
def wipe(settings: 'Settings', queues: Dict[bytes, 'Queue']) -> None:
"""Reset terminals, wipe all user data from TxM/RxM/NH and power off systems.
    No effective RAM overwriting tool currently exists, so as long as TxM/RxM
    use FDE and DDR3 memory, recovery of user data becomes impossible very quickly:
https://www1.cs.fau.de/filepool/projects/coldboot/fares_coldboot.pdf
"""
if not yes("Wipe all user data and power off systems?"):
raise FunctionReturn("Wipe command aborted.")
clear_screen()
for q in [COMMAND_PACKET_QUEUE, NH_PACKET_QUEUE]:
while queues[q].qsize() != 0:
queues[q].get()
queue_command(WIPE_USER_DATA_HEADER, settings, queues[COMMAND_PACKET_QUEUE])
if not settings.session_traffic_masking:
if settings.local_testing_mode:
time.sleep(0.8)
if settings.data_diode_sockets:
time.sleep(2.2)
else:
time.sleep(settings.race_condition_delay)
queue_to_nh(UNENCRYPTED_PACKET_HEADER + UNENCRYPTED_WIPE_COMMAND, settings, queues[NH_PACKET_QUEUE])
os.system('reset')

#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import re
import typing
from typing import Callable, Dict, List
from src.common.db_logs import remove_logs
from src.common.exceptions import FunctionReturn
from src.common.input import yes
from src.common.misc import ignored
from src.common.output import box_print, group_management_print
from src.common.statics import *
from src.tx.user_input import UserInput
from src.tx.packet import queue_command, queue_message
from src.tx.windows import MockWindow
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
def process_group_command(user_input: 'UserInput',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: Dict[bytes, 'Queue'],
master_key: 'MasterKey') -> None:
"""Parse group command and process it accordingly."""
if settings.session_traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.")
try:
command_type = user_input.plaintext.split()[1] # type: str
except IndexError:
raise FunctionReturn("Error: Invalid group command.")
if command_type not in ['create', 'add', 'rm', 'join']:
raise FunctionReturn("Error: Invalid group command.")
try:
group_name = user_input.plaintext.split()[2] # type: str
except IndexError:
raise FunctionReturn("Error: No group name specified.")
purp_members = user_input.plaintext.split()[3:] # type: List[str]
# Swap specified nicks to rx_accounts
for i, m in enumerate(purp_members):
if m in contact_list.get_list_of_nicks():
purp_members[i] = contact_list.get_contact(m).rx_account
func_d = dict(create=group_create,
join =group_create,
add =group_add_member,
rm =group_rm_member) # type: Dict[str, Callable]
func = func_d[command_type]
func(group_name, purp_members, group_list, contact_list, settings, queues, master_key)
print('')
def validate_group_name(group_name: str, contact_list: 'ContactList', group_list: 'GroupList') -> None:
"""Check that group name is valid."""
# Avoids collision with delimiters
if not group_name.isprintable():
raise FunctionReturn("Error: Group name must be printable.")
# Length limited by database's unicode padding
if len(group_name) >= PADDING_LEN:
raise FunctionReturn("Error: Group name must be less than 255 chars long.")
if group_name == DUMMY_GROUP:
raise FunctionReturn("Error: Group name can't use name reserved for database padding.")
if re.match(ACCOUNT_FORMAT, group_name):
raise FunctionReturn("Error: Group name can't have format of an account.")
if group_name in contact_list.get_list_of_nicks():
raise FunctionReturn("Error: Group name can't be nick of contact.")
if group_name in group_list.get_list_of_group_names():
if not yes(f"Group with name '{group_name}' already exists. Overwrite?", head=1):
raise FunctionReturn("Group creation aborted.")
def group_create(group_name: str,
purp_members: List[str],
group_list: 'GroupList',
contact_list: 'ContactList',
settings: 'Settings',
queues: Dict[bytes, 'Queue'],
_: 'MasterKey') -> None:
"""Create a new group.
    Validate the group name and determine which members can be added.
"""
validate_group_name(group_name, contact_list, group_list)
accounts = set(contact_list.get_list_of_accounts())
purp_accounts = set(purp_members)
accepted = list(accounts & purp_accounts)
rejected = list(purp_accounts - accounts)
if len(accepted) > settings.max_number_of_group_members:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_group_members} members per group.")
if len(group_list) == settings.max_number_of_groups:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_groups} groups.")
group_list.add_group(group_name,
settings.log_messages_by_default,
settings.show_notifications_by_default,
members=[contact_list.get_contact(c) for c in accepted])
fields = [f.encode() for f in ([group_name] + accepted)]
command = GROUP_CREATE_HEADER + US_BYTE.join(fields)
queue_command(command, settings, queues[COMMAND_PACKET_QUEUE])
group_management_print(NEW_GROUP, accepted, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
if accepted:
if yes("Publish list of group members to participants?"):
for member in accepted:
m_list = [m for m in accepted if m != member]
queue_message(user_input=UserInput(US_STR.join([group_name] + m_list), MESSAGE),
window =MockWindow(member, [contact_list.get_contact(member)]),
settings =settings,
m_queue =queues[MESSAGE_PACKET_QUEUE],
header =GROUP_MSG_INVITEJOIN_HEADER,
log_as_ph =True)
else:
box_print(f"Created an empty group '{group_name}'", head=1)
def group_add_member(group_name: str,
purp_members: List['str'],
group_list: 'GroupList',
contact_list: 'ContactList',
settings: 'Settings',
queues: Dict[bytes, 'Queue'],
master_key: 'MasterKey') -> None:
"""Add new member(s) to group."""
if group_name not in group_list.get_list_of_group_names():
if yes(f"Group {group_name} was not found. Create new group?", head=1):
group_create(group_name, purp_members, group_list, contact_list, settings, queues, master_key)
return None
else:
raise FunctionReturn("Group creation aborted.")
purp_accounts = set(purp_members)
accounts = set(contact_list.get_list_of_accounts())
before_adding = set(group_list.get_group(group_name).get_list_of_member_accounts())
ok_accounts_set = set(accounts & purp_accounts)
new_in_group_set = set(ok_accounts_set - before_adding)
end_assembly = list(before_adding | new_in_group_set)
rejected = list(purp_accounts - accounts)
already_in_g = list(before_adding & purp_accounts)
new_in_group = list(new_in_group_set)
ok_accounts = list(ok_accounts_set)
if len(end_assembly) > settings.max_number_of_group_members:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_group_members} members per group.")
group = group_list.get_group(group_name)
group.add_members([contact_list.get_contact(a) for a in new_in_group])
fields = [f.encode() for f in ([group_name] + ok_accounts)]
command = GROUP_ADD_HEADER + US_BYTE.join(fields)
queue_command(command, settings, queues[COMMAND_PACKET_QUEUE])
group_management_print(ADDED_MEMBERS, new_in_group, contact_list, group_name)
group_management_print(ALREADY_MEMBER, already_in_g, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
if new_in_group:
if yes("Publish new list of members to involved?"):
for member in before_adding:
queue_message(user_input=UserInput(US_STR.join([group_name] + new_in_group), MESSAGE),
window =MockWindow(member, [contact_list.get_contact(member)]),
settings =settings,
m_queue =queues[MESSAGE_PACKET_QUEUE],
header =GROUP_MSG_MEMBER_ADD_HEADER,
log_as_ph =True)
for member in new_in_group:
m_list = [m for m in end_assembly if m != member]
queue_message(user_input=UserInput(US_STR.join([group_name] + m_list), MESSAGE),
window =MockWindow(member, [contact_list.get_contact(member)]),
settings =settings,
m_queue =queues[MESSAGE_PACKET_QUEUE],
header =GROUP_MSG_INVITEJOIN_HEADER,
log_as_ph =True)
def group_rm_member(group_name: str,
purp_members: List[str],
group_list: 'GroupList',
contact_list: 'ContactList',
settings: 'Settings',
queues: Dict[bytes, 'Queue'],
master_key: 'MasterKey') -> None:
"""Remove member(s) from group or group itself."""
if not purp_members:
group_rm_group(group_name, group_list, settings, queues, master_key)
if group_name not in group_list.get_list_of_group_names():
raise FunctionReturn(f"Group '{group_name}' does not exist.")
purp_accounts = set(purp_members)
accounts = set(contact_list.get_list_of_accounts())
before_removal = set(group_list.get_group(group_name).get_list_of_member_accounts())
ok_accounts_set = set(purp_accounts & accounts)
removable_set = set(before_removal & ok_accounts_set)
end_assembly = list(before_removal - removable_set)
not_in_group = list(ok_accounts_set - before_removal)
rejected = list(purp_accounts - accounts)
removable = list(removable_set)
ok_accounts = list(ok_accounts_set)
group = group_list.get_group(group_name)
group.remove_members(removable)
fields = [f.encode() for f in ([group_name] + ok_accounts)]
command = GROUP_REMOVE_M_HEADER + US_BYTE.join(fields)
queue_command(command, settings, queues[COMMAND_PACKET_QUEUE])
group_management_print(REMOVED_MEMBERS, removable, contact_list, group_name)
group_management_print(NOT_IN_GROUP, not_in_group, contact_list, group_name)
group_management_print(UNKNOWN_ACCOUNTS, rejected, contact_list, group_name)
if removable and end_assembly and yes("Publish list of removed members to remaining members?"):
for member in end_assembly:
queue_message(user_input=UserInput(US_STR.join([group_name] + removable), MESSAGE),
window =MockWindow(member, [contact_list.get_contact(member)]),
settings =settings,
m_queue =queues[MESSAGE_PACKET_QUEUE],
header =GROUP_MSG_MEMBER_REM_HEADER,
log_as_ph =True)
def group_rm_group(group_name: str,
group_list: 'GroupList',
settings: 'Settings',
queues: Dict[bytes, 'Queue'],
master_key: 'MasterKey'):
    """Remove the group along with its members."""
if not yes(f"Remove group '{group_name}'?", head=1):
raise FunctionReturn("Group removal aborted.")
rm_logs = yes("Also remove logs for the group?", head=1)
command = GROUP_DELETE_HEADER + group_name.encode()
queue_command(command, settings, queues[COMMAND_PACKET_QUEUE])
if rm_logs:
command = LOG_REMOVE_HEADER + group_name.encode()
queue_command(command, settings, queues[COMMAND_PACKET_QUEUE])
with ignored(FunctionReturn):
remove_logs(group_name, settings, master_key)
if group_name not in group_list.get_list_of_group_names():
raise FunctionReturn(f"TxM has no group '{group_name}' to remove.")
group = group_list.get_group(group_name)
if group.has_members() and yes("Notify members about leaving the group?"):
for member in group:
queue_message(user_input=UserInput(group_name, MESSAGE),
window =MockWindow(member.rx_account, [member]),
settings =settings,
m_queue =queues[MESSAGE_PACKET_QUEUE],
header =GROUP_MSG_EXIT_GROUP_HEADER,
log_as_ph =True)
group_list.remove_group(group_name)
raise FunctionReturn(f"Removed group '{group_name}'")

#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import typing
from typing import Dict
from src.common.db_logs import remove_logs
from src.common.exceptions import FunctionReturn
from src.common.input import box_input, yes
from src.common.misc import ignored, validate_account, validate_key_exchange, validate_nick
from src.common.output import box_print, c_print, clear_screen, print_fingerprint
from src.common.statics import *
from src.tx.key_exchanges import create_pre_shared_key, start_key_exchange
from src.tx.packet import queue_command
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_groups import GroupList
from src.common.db_masterkey import MasterKey
from src.common.db_settings import Settings
from src.tx.user_input import UserInput
from src.tx.windows import TxWindow
def add_new_contact(contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: Dict[bytes, 'Queue']) -> None:
"""Prompt for contact account details and initialize desired key exchange."""
try:
if settings.session_traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.")
if len(contact_list) >= settings.max_number_of_contacts:
raise FunctionReturn(f"Error: TFC settings only allow {settings.max_number_of_contacts} accounts.")
clear_screen()
c_print("Add new contact", head=1)
contact_account = box_input("Contact account", validator=validate_account).strip()
user_account = box_input("Your account", validator=validate_account).strip()
default_nick = contact_account.split('@')[0].capitalize()
contact_nick = box_input(f"Contact nick [{default_nick}]", default=default_nick, validator=validate_nick,
validator_args=(contact_list, group_list, contact_account)).strip()
key_exchange = box_input("Key exchange ([X25519],PSK) ", default=X25519, validator=validate_key_exchange).strip()
if key_exchange.lower() in X25519:
start_key_exchange(contact_account, user_account, contact_nick, contact_list, settings, queues)
elif key_exchange.lower() in PSK:
create_pre_shared_key(contact_account, user_account, contact_nick, contact_list, settings, queues)
except KeyboardInterrupt:
raise FunctionReturn("Contact creation aborted.", head_clear=True)
def remove_contact(user_input: 'UserInput',
window: 'TxWindow',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
queues: Dict[bytes, 'Queue'],
master_key: 'MasterKey') -> None:
"""Remove contact on TxM/RxM."""
if settings.session_traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.")
try:
selection = user_input.plaintext.split()[1]
except IndexError:
raise FunctionReturn("Error: No account specified.")
if not yes(f"Remove {selection} completely?", head=1):
raise FunctionReturn("Removal of contact aborted.")
rm_logs = yes(f"Also remove logs for {selection}?", head=1)
# Load account if selector was nick
if selection in contact_list.get_list_of_nicks():
selection = contact_list.get_contact(selection).rx_account
packet = CONTACT_REMOVE_HEADER + selection.encode()
queue_command(packet, settings, queues[COMMAND_PACKET_QUEUE])
if rm_logs:
packet = LOG_REMOVE_HEADER + selection.encode()
queue_command(packet, settings, queues[COMMAND_PACKET_QUEUE])
with ignored(FunctionReturn):
remove_logs(selection, settings, master_key)
queues[KEY_MANAGEMENT_QUEUE].put((KDB_REMOVE_ENTRY_HEADER, selection))
if selection in contact_list.get_list_of_accounts():
contact_list.remove_contact(selection)
box_print(f"Removed {selection} from contacts.", head=1, tail=1)
else:
box_print(f"TxM has no {selection} to remove.", head=1, tail=1)
if any([g.remove_members([selection]) for g in group_list]):
box_print(f"Removed {selection} from group(s).", tail=1)
if window.type == WIN_TYPE_CONTACT:
if selection == window.uid:
window.deselect_window()
if window.type == WIN_TYPE_GROUP:
for c in window:
if selection == c.rx_account:
window.update_group_win_members(group_list)
# If the last member of the group is removed, deselect the group.
# Deselection is not done in update_group_win_members
# because it would prevent selecting the empty group
# for group-related commands such as notifications.
if not window.window_contacts:
window.deselect_window()
def change_nick(user_input: 'UserInput',
window: 'TxWindow',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
c_queue: 'Queue') -> None:
"""Change nick of contact."""
if window.type == WIN_TYPE_GROUP:
raise FunctionReturn("Error: Group is selected.")
try:
nick = user_input.plaintext.split()[1]
except IndexError:
raise FunctionReturn("Error: No nick specified.")
rx_account = window.contact.rx_account
error_msg = validate_nick(nick, (contact_list, group_list, rx_account))
if error_msg:
raise FunctionReturn(error_msg)
window.contact.nick = nick
window.name = nick
contact_list.store_contacts()
packet = CHANGE_NICK_HEADER + rx_account.encode() + US_BYTE + nick.encode()
queue_command(packet, settings, c_queue)
def contact_setting(user_input: 'UserInput',
window: 'TxWindow',
contact_list: 'ContactList',
group_list: 'GroupList',
settings: 'Settings',
c_queue: 'Queue') -> None:
"""\
Change logging, file reception, or received message
notification setting of group or (all) contact(s).
"""
try:
parameters = user_input.plaintext.split()
cmd_key = parameters[0]
cmd_header = {LOGGING: CHANGE_LOGGING_HEADER,
STORE: CHANGE_FILE_R_HEADER,
NOTIFY: CHANGE_NOTIFY_HEADER}[cmd_key]
s_value, b_value = dict(on =(ENABLE, True),
off=(DISABLE, False))[parameters[1]]
except (IndexError, KeyError):
raise FunctionReturn("Error: Invalid command.")
# If the 'all' argument is included, apply the setting to all contacts and groups
try:
target = b''
if parameters[2] == ALL:
cmd_value = s_value.upper() + US_BYTE
else:
raise FunctionReturn("Error: Invalid command.")
except IndexError:
target = window.uid.encode()
cmd_value = s_value + US_BYTE + target
if target:
if window.type == WIN_TYPE_CONTACT:
if cmd_key == LOGGING: window.contact.log_messages = b_value
if cmd_key == STORE: window.contact.file_reception = b_value
if cmd_key == NOTIFY: window.contact.notifications = b_value
contact_list.store_contacts()
if window.type == WIN_TYPE_GROUP:
if cmd_key == LOGGING: window.group.log_messages = b_value
if cmd_key == STORE:
for c in window:
c.file_reception = b_value
if cmd_key == NOTIFY: window.group.notifications = b_value
group_list.store_groups()
else:
for contact in contact_list:
if cmd_key == LOGGING: contact.log_messages = b_value
if cmd_key == STORE: contact.file_reception = b_value
if cmd_key == NOTIFY: contact.notifications = b_value
contact_list.store_contacts()
for group in group_list:
if cmd_key == LOGGING: group.log_messages = b_value
if cmd_key == NOTIFY: group.notifications = b_value
group_list.store_groups()
packet = cmd_header + cmd_value
if settings.session_traffic_masking and cmd_key == LOGGING:
window.update_log_messages()
queue_command(packet, settings, c_queue, window)
else:
window.update_log_messages()
queue_command(packet, settings, c_queue)
def show_fingerprints(window: 'TxWindow') -> None:
"""Print domain separated fingerprints of public keys on TxM.
Comparison of fingerprints over authenticated channel can be
used to verify users are not under man-in-the-middle attack.
"""
if window.type == WIN_TYPE_GROUP:
raise FunctionReturn("Error: Group is selected.")
if window.contact.tx_fingerprint == bytes(FINGERPRINT_LEN):
raise FunctionReturn(f"Pre-shared keys have no fingerprints.")
clear_screen()
print_fingerprint(window.contact.tx_fingerprint, " Your fingerprint (you read) ")
print_fingerprint(window.contact.rx_fingerprint, "Contact's fingerprint (they read)")
print('')

View File

@ -1,161 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import base64
import datetime
import os
import typing
import zlib
from src.common.crypto import byte_padding, csprng, encrypt_and_sign
from src.common.encoding import int_to_bytes
from src.common.exceptions import FunctionReturn
from src.common.misc import readable_size, split_byte_string
from src.common.reed_solomon import RSCodec
from src.common.statics import *
if typing.TYPE_CHECKING:
from src.common.db_settings import Settings
from src.common.gateway import Gateway
from src.tx.windows import TxWindow
class File(object):
"""File object wraps methods around file data/header processing."""
def __init__(self,
path: str,
window: 'TxWindow',
settings: 'Settings',
gateway: 'Gateway') -> None:
"""Load file data from specified path and add headers."""
self.path = path
self.window = window
self.settings = settings
self.gateway = gateway
self.name = None # type: bytes
self.size = None # type: bytes
self.data = None # type: bytes
self.time_bytes = bytes(FILE_ETA_FIELD_LEN)
self.time_print = ''
self.size_print = ''
self.plaintext = b''
self.load_file_data()
self.process_file_data()
self.finalize()
def load_file_data(self) -> None:
"""Load file name, size and data from specified path."""
if not os.path.isfile(self.path):
raise FunctionReturn("Error: File not found.")
self.name = (self.path.split('/')[-1]).encode()
self.name_length_check()
byte_size = os.path.getsize(self.path)
if byte_size == 0:
raise FunctionReturn("Error: Target file is empty.")
self.size = int_to_bytes(byte_size)
self.size_print = readable_size(byte_size)
with open(self.path, 'rb') as f:
self.data = f.read()
def process_file_data(self) -> None:
"""Compress, encrypt and encode file data.
Compress the file to reduce data transmission time. Add an inner
layer of encryption to provide sender-based control over
partial transmission. Encode the data with Base85: this prevents the
inner ciphertext from colliding with file header delimiters.
"""
compressed = zlib.compress(self.data, level=COMPRESSION_LEVEL)
file_key = csprng()
encrypted = encrypt_and_sign(compressed, key=file_key)
encrypted += file_key
self.data = base64.b85encode(encrypted)
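The delimiter-safety property mentioned in the docstring can be checked directly: Python's `base64.b85encode` emits only printable ASCII, so the encoded ciphertext can never contain the `0x1f` Unit Separator used as the header delimiter. A minimal sketch (random bytes stand in for real inner-layer ciphertext):

```python
import base64
import os

# Random bytes stand in for the inner-layer ciphertext.
ciphertext = os.urandom(1024)
encoded    = base64.b85encode(ciphertext)

# The Base85 alphabet is printable ASCII, so the encoding can never
# collide with the 0x1f Unit Separator used as a header delimiter.
assert b'\x1f' not in encoded
```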
def finalize(self) -> None:
"""Finalize packet and generate plaintext."""
self.update_delivery_time()
self.plaintext = self.time_bytes + self.size + self.name + US_BYTE + self.data
def name_length_check(self) -> None:
"""Ensure that file header fits the first packet."""
header = bytes(FILE_PACKET_CTR_LEN + FILE_ETA_FIELD_LEN + FILE_SIZE_FIELD_LEN)
header += self.name + US_BYTE
if len(header) >= PADDING_LEN:
raise FunctionReturn("Error: File name is too long.")
def count_number_of_packets(self) -> int:
"""Count number of packets needed for file delivery."""
packet_data = self.time_bytes + self.size + self.name + US_BYTE + self.data
if len(packet_data) < PADDING_LEN:
return 1
else:
packet_data += bytes(FILE_PACKET_CTR_LEN)
packet_data = byte_padding(packet_data)
return len(split_byte_string(packet_data, item_len=PADDING_LEN))
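Once the payload exceeds one padded packet, the count above reduces to a ceiling division over the fixed per-packet payload size. A simplified sketch (the `PADDING_LEN` value is assumed for illustration, and the counter/padding overheads are omitted):

```python
import math

PADDING_LEN = 254  # illustrative value, not necessarily TFC's constant

def packets_needed(payload_len: int) -> int:
    # One packet if the payload fits; otherwise a ceiling division
    # over the fixed per-packet payload size.
    if payload_len < PADDING_LEN:
        return 1
    return math.ceil(payload_len / PADDING_LEN)
```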
def update_delivery_time(self) -> None:
"""Calculate transmission time.
Transmission time is based on average delays and settings.
"""
no_packets = self.count_number_of_packets()
if self.settings.session_traffic_masking:
avg_delay = self.settings.traffic_masking_static_delay + (self.settings.traffic_masking_random_delay / 2)
if self.settings.multi_packet_random_delay:
avg_delay += (self.settings.max_duration_of_random_delay / 2)
total_time = len(self.window) * no_packets * avg_delay
total_time *= 2 # Accommodate command packets between file packets
total_time += no_packets * TRAFFIC_MASKING_QUEUE_CHECK_DELAY
else:
# Determine total data to be transmitted over serial
rs = RSCodec(2 * self.settings.session_serial_error_correction)
total_data = 0
for c in self.window:
data = os.urandom(PACKET_LENGTH) + c.rx_account.encode() + c.tx_account.encode()
enc_data = rs.encode(data)
total_data += no_packets * len(enc_data)
# Determine time required to send all data
total_time = 0.0
if self.settings.local_testing_mode:
total_time += no_packets * LOCAL_TESTING_PACKET_DELAY
else:
total_bauds = total_data * BAUDS_PER_BYTE
total_time += total_bauds / self.settings.session_serial_baudrate
total_time += no_packets * self.settings.txm_inter_packet_delay
if self.settings.multi_packet_random_delay:
total_time += no_packets * (self.settings.max_duration_of_random_delay / 2)
# Update delivery time
self.time_bytes = int_to_bytes(int(total_time))
self.time_print = str(datetime.timedelta(seconds=int(total_time)))
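The serial-time arithmetic above can be sketched in isolation, assuming the common 10-bauds-per-byte UART framing (8 data bits plus start/stop bits):

```python
import datetime

BAUDS_PER_BYTE = 10  # 8 data bits + start/stop bits (assumed framing)

def serial_transfer_time(total_data: int, baudrate: int) -> str:
    """Return a human-readable estimate for pushing total_data bytes."""
    seconds = (total_data * BAUDS_PER_BYTE) / baudrate
    return str(datetime.timedelta(seconds=int(seconds)))

print(serial_transfer_time(19200, 19200))  # 0:00:10
```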

View File

@ -1,331 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import os
import time
import typing
from typing import Dict
import nacl.bindings
import nacl.encoding
import nacl.public
from src.common.crypto import argon2_kdf, csprng, encrypt_and_sign, hash_chain
from src.common.db_masterkey import MasterKey
from src.common.exceptions import FunctionReturn
from src.common.input import ask_confirmation_code, get_b58_key, nh_bypass_msg, yes
from src.common.output import box_print, c_print, clear_screen, message_printer, print_key
from src.common.output import phase, print_fingerprint, print_on_previous_line
from src.common.path import ask_path_gui
from src.common.statics import *
from src.tx.packet import queue_command, queue_to_nh
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_settings import Settings
from src.tx.windows import TxWindow
def new_local_key(contact_list: 'ContactList',
settings: 'Settings',
queues: Dict[bytes, 'Queue']) -> None:
"""Run Tx-side local key exchange protocol.
The local key encrypts commands and data sent from TxM to RxM. The key is
delivered to RxM in a packet encrypted with an ephemeral symmetric key.
The checksummed, Base58-encoded key decryption key is typed on RxM manually.
This prevents a local key leak in the following scenarios:
1. CT is intercepted by adversary on compromised NH but no visual
eavesdropping takes place.
2. CT is not intercepted by adversary on NH but visual eavesdropping
records decryption key.
3. CT is delivered from TxM to RxM (compromised NH is bypassed) and
visual eavesdropping records decryption key.
Once the correct key decryption key is entered on RxM, the Receiver program
will display the 1-byte confirmation code generated by the Transmitter program.
The code is entered on TxM to confirm the user has successfully delivered
the key decryption key.
The protocol is completed by the Transmitter program sending an ACK message
to the Receiver program, which then waits for public keys from the contact.
"""
try:
if settings.session_traffic_masking and contact_list.has_local_contact:
raise FunctionReturn("Error: Command is disabled during traffic masking.")
clear_screen()
c_print("Local key setup", head=1, tail=1)
c_code = os.urandom(1)
key = csprng()
hek = csprng()
kek = csprng()
packet = LOCAL_KEY_PACKET_HEADER + encrypt_and_sign(key + hek + c_code, key=kek)
nh_bypass_msg(NH_BYPASS_START, settings)
queue_to_nh(packet, settings, queues[NH_PACKET_QUEUE])
while True:
print_key("Local key decryption key (to RxM)", kek, settings)
purp_code = ask_confirmation_code()
if purp_code == c_code.hex():
break
elif purp_code == RESEND:
phase("Resending local key", head=2)
queue_to_nh(packet, settings, queues[NH_PACKET_QUEUE])
phase(DONE)
print_on_previous_line(reps=(9 if settings.local_testing_mode else 10))
else:
box_print(["Incorrect confirmation code. If RxM did not receive",
"encrypted local key, resend it by typing 'resend'."], head=1)
print_on_previous_line(reps=(11 if settings.local_testing_mode else 12), delay=2)
nh_bypass_msg(NH_BYPASS_STOP, settings)
# Add local contact to contact list database
contact_list.add_contact(LOCAL_ID, LOCAL_ID, LOCAL_ID,
bytes(FINGERPRINT_LEN), bytes(FINGERPRINT_LEN),
False, False, False)
# Add local contact to keyset database
queues[KEY_MANAGEMENT_QUEUE].put((KDB_ADD_ENTRY_HEADER, LOCAL_ID,
key, csprng(),
hek, csprng()))
# Notify RxM that confirmation code was successfully entered
queue_command(LOCAL_KEY_INSTALLED_HEADER, settings, queues[COMMAND_PACKET_QUEUE])
box_print("Successfully added a new local key.")
clear_screen(delay=1)
except KeyboardInterrupt:
raise FunctionReturn("Local key setup aborted.", delay=1, head=3, tail_clear=True)
def verify_fingerprints(tx_fp: bytes, rx_fp: bytes) -> bool:
"""\
Verify fingerprints over an out-of-band channel to
detect MITM attacks against TFC's key exchange.
:param tx_fp: User's fingerprint
:param rx_fp: Contact's fingerprint
:return: True if fingerprints match, else False
"""
clear_screen()
message_printer("To verify received public key was not replaced by attacker in network, "
"call the contact over end-to-end encrypted line, preferably Signal "
"(https://signal.org/). Make sure Signal's safety numbers have been "
"verified, and then verbally compare the key fingerprints below.", head=1, tail=1)
print_fingerprint(tx_fp, " Your fingerprint (you read) ")
print_fingerprint(rx_fp, "Purported fingerprint for contact (they read)")
return yes("Is the contact's fingerprint correct?")
def start_key_exchange(account: str,
user: str,
nick: str,
contact_list: 'ContactList',
settings: 'Settings',
queues: Dict[bytes, 'Queue']) -> None:
"""Start X25519 key exchange with recipient.
Variable naming:
tx = user's key rx = contact's key
sk = private (secret) key pk = public key
key = message key hek = header key
dh_ssk = X25519 shared secret
:param account: The contact's account name (e.g. alice@jabber.org)
:param user: The user's account name (e.g. bob@jabber.org)
:param nick: Contact's nickname
:param contact_list: Contact list object
:param settings: Settings object
:param queues: Dictionary of multiprocessing queues
:return: None
"""
try:
tx_sk = nacl.public.PrivateKey(csprng())
tx_pk = bytes(tx_sk.public_key)
while True:
queue_to_nh(PUBLIC_KEY_PACKET_HEADER
+ tx_pk
+ user.encode()
+ US_BYTE
+ account.encode(),
settings, queues[NH_PACKET_QUEUE])
rx_pk = get_b58_key(B58_PUB_KEY, settings)
if rx_pk != RESEND.encode():
break
if rx_pk == bytes(KEY_LENGTH):
# A valid public key is all-zero only with negligible probability, so
# we assume such a key is malicious and attempts either to force a
# zero shared key (pointless given the implementation) or to DoS the
# key exchange, as libsodium does not accept zero keys.
box_print(["Warning!",
"Received a malicious public key from network.",
"Aborting key exchange for your safety."], tail=1)
raise FunctionReturn("Error: Zero public key", output=False)
dh_box = nacl.public.Box(tx_sk, nacl.public.PublicKey(rx_pk))
dh_ssk = dh_box.shared_key()
# Domain separate each key with key-type specific context variable
# and with public keys that both clients know which way to place.
tx_key = hash_chain(dh_ssk + rx_pk + b'message_key')
rx_key = hash_chain(dh_ssk + tx_pk + b'message_key')
tx_hek = hash_chain(dh_ssk + rx_pk + b'header_key')
rx_hek = hash_chain(dh_ssk + tx_pk + b'header_key')
# Domain separate fingerprints of public keys by using the shared
secret as salt. This way, entities who might monitor the fingerprint
verification channel are unable to correlate spoken values with
public keys that transit through a compromised IM server. This
protects against de-anonymization of IM accounts in cases where
clients connect to the compromised server via Tor. The preimage
resistance of the hash chain protects the shared secret from leaking.
tx_fp = hash_chain(dh_ssk + tx_pk + b'fingerprint')
rx_fp = hash_chain(dh_ssk + rx_pk + b'fingerprint')
if not verify_fingerprints(tx_fp, rx_fp):
box_print(["Warning!",
"Possible man-in-the-middle attack detected.",
"Aborting key exchange for your safety."], tail=1)
raise FunctionReturn("Error: Fingerprint mismatch", output=False)
packet = KEY_EX_X25519_HEADER \
+ tx_key + tx_hek \
+ rx_key + rx_hek \
+ account.encode() + US_BYTE + nick.encode()
queue_command(packet, settings, queues[COMMAND_PACKET_QUEUE])
contact_list.add_contact(account, user, nick,
tx_fp, rx_fp,
settings.log_messages_by_default,
settings.accept_files_by_default,
settings.show_notifications_by_default)
# Use random values as Rx-keys to prevent decryption if they're accidentally used.
queues[KEY_MANAGEMENT_QUEUE].put((KDB_ADD_ENTRY_HEADER, account,
tx_key, csprng(),
tx_hek, csprng()))
box_print(f"Successfully added {nick}.")
clear_screen(delay=1)
except KeyboardInterrupt:
raise FunctionReturn("Key exchange aborted.", delay=1, head=2, tail_clear=True)
def create_pre_shared_key(account: str,
user: str,
nick: str,
contact_list: 'ContactList',
settings: 'Settings',
queues: Dict[bytes, 'Queue']) -> None:
"""Generate new pre-shared key for manual key delivery.
:param account: The contact's account name (e.g. alice@jabber.org)
:param user: The user's account name (e.g. bob@jabber.org)
:param nick: Nick of contact
:param contact_list: Contact list object
:param settings: Settings object
:param queues: Dictionary of multiprocessing queues
:return: None
"""
try:
tx_key = csprng()
tx_hek = csprng()
salt = csprng()
password = MasterKey.new_password("password for PSK")
phase("Deriving key encryption key", head=2)
kek, _ = argon2_kdf(password, salt, parallelism=1)
phase(DONE)
ct_tag = encrypt_and_sign(tx_key + tx_hek, key=kek)
while True:
store_d = ask_path_gui(f"Select removable media for {nick}", settings)
f_name = f"{store_d}/{user}.psk - Give to {account}"
try:
with open(f_name, 'wb+') as f:
f.write(salt + ct_tag)
break
except PermissionError:
c_print("Error: Did not have permission to write to directory.")
time.sleep(0.5)
continue
packet = KEY_EX_PSK_TX_HEADER \
+ tx_key \
+ tx_hek \
+ account.encode() + US_BYTE + nick.encode()
queue_command(packet, settings, queues[COMMAND_PACKET_QUEUE])
contact_list.add_contact(account, user, nick,
bytes(FINGERPRINT_LEN), bytes(FINGERPRINT_LEN),
settings.log_messages_by_default,
settings.accept_files_by_default,
settings.show_notifications_by_default)
queues[KEY_MANAGEMENT_QUEUE].put((KDB_ADD_ENTRY_HEADER, account,
tx_key, csprng(),
tx_hek, csprng()))
box_print(f"Successfully added {nick}.", head=1)
clear_screen(delay=1)
except KeyboardInterrupt:
raise FunctionReturn("PSK generation aborted.", delay=1, head=2, tail_clear=True)
def rxm_load_psk(window: 'TxWindow',
contact_list: 'ContactList',
settings: 'Settings',
c_queue: 'Queue') -> None:
"""Load PSK for selected contact on RxM."""
if settings.session_traffic_masking:
raise FunctionReturn("Error: Command is disabled during traffic masking.")
if window.type == WIN_TYPE_GROUP:
raise FunctionReturn("Error: Group is selected.")
if contact_list.get_contact(window.uid).tx_fingerprint != bytes(FINGERPRINT_LEN):
raise FunctionReturn("Error: Current key was exchanged with X25519.")
packet = KEY_EX_PSK_RX_HEADER + window.uid.encode()
queue_command(packet, settings, c_queue)

View File

@ -1,312 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import os
import random
import time
import typing
import zlib
from typing import Dict, List, Union
from src.common.crypto import byte_padding, csprng, encrypt_and_sign, hash_chain
from src.common.encoding import int_to_bytes
from src.common.exceptions import CriticalError, FunctionReturn
from src.common.input import yes
from src.common.misc import split_byte_string
from src.common.output import c_print
from src.common.path import ask_path_gui
from src.common.reed_solomon import RSCodec
from src.common.statics import *
from src.tx.files import File
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_keys import KeyList
from src.common.db_settings import Settings
from src.common.gateway import Gateway
from src.tx.user_input import UserInput
from src.tx.windows import MockWindow, TxWindow
def queue_message(user_input: 'UserInput',
window: Union['MockWindow', 'TxWindow'],
settings: 'Settings',
m_queue: 'Queue',
header: bytes = b'',
log_as_ph: bool = False) -> None:
"""Prepend header, split to assembly packets and queue them."""
if not header:
if window.type == WIN_TYPE_GROUP:
group_msg_id = os.urandom(GROUP_MSG_ID_LEN)
header = GROUP_MESSAGE_HEADER + group_msg_id + window.name.encode() + US_BYTE
else:
header = PRIVATE_MESSAGE_HEADER
payload = header + user_input.plaintext.encode()
packet_list = split_to_assembly_packets(payload, MESSAGE)
queue_packets(packet_list, MESSAGE, settings, m_queue, window, log_as_ph)
def queue_file(window: 'TxWindow',
settings: 'Settings',
f_queue: 'Queue',
gateway: 'Gateway') -> None:
"""Ask file path and load file data."""
path = ask_path_gui("Select file to send...", settings, get_file=True)
file = File(path, window, settings, gateway)
packet_list = split_to_assembly_packets(file.plaintext, FILE)
if settings.confirm_sent_files:
try:
if not yes(f"Send {file.name.decode()} ({file.size_print}) to {window.type_print} {window.name} "
f"({len(packet_list)} packets, time: {file.time_print})?"):
raise FunctionReturn("File selection aborted.")
except KeyboardInterrupt:
raise FunctionReturn("File selection aborted.", head=3)
queue_packets(packet_list, FILE, settings, f_queue, window, log_as_ph=True)
def queue_command(command: bytes,
settings: 'Settings',
c_queue: 'Queue',
window: 'TxWindow' = None) -> None:
"""Split command to assembly packets and queue them for sender_loop()."""
packet_list = split_to_assembly_packets(command, COMMAND)
queue_packets(packet_list, COMMAND, settings, c_queue, window)
def queue_to_nh(packet: bytes,
settings: 'Settings',
nh_queue: 'Queue',
delay: bool = False) -> None:
"""Queue unencrypted command/exported file to NH."""
nh_queue.put((packet, delay, settings))
def split_to_assembly_packets(payload: bytes, p_type: str) -> List[bytes]:
"""Split payload to assembly packets.
Messages and commands are compressed to reduce transmission time.
Files have been compressed at an earlier phase, before Base85 encoding.
If the compressed message cannot fit in one packet, it is
split into multiple assembly packets with headers. Long messages
are encrypted with an inner layer of XSalsa20-Poly1305 to provide
sender-based control over partially transmitted data. Regardless
of packet size, files always have an inner layer of encryption,
added in an earlier phase. Commands do not need
sender-based control, so they are delivered with only a hash that
makes integrity checking easy.
The first assembly packet of a file transmission is prepended with an
8-byte packet counter that tells the sender and receiver how many
packets the file transmission requires.
"""
s_header = {MESSAGE: M_S_HEADER, FILE: F_S_HEADER, COMMAND: C_S_HEADER}[p_type]
l_header = {MESSAGE: M_L_HEADER, FILE: F_L_HEADER, COMMAND: C_L_HEADER}[p_type]
a_header = {MESSAGE: M_A_HEADER, FILE: F_A_HEADER, COMMAND: C_A_HEADER}[p_type]
e_header = {MESSAGE: M_E_HEADER, FILE: F_E_HEADER, COMMAND: C_E_HEADER}[p_type]
if p_type in [MESSAGE, COMMAND]:
payload = zlib.compress(payload, level=COMPRESSION_LEVEL)
if len(payload) < PADDING_LEN:
padded = byte_padding(payload)
packet_list = [s_header + padded]
else:
if p_type == MESSAGE:
msg_key = csprng()
payload = encrypt_and_sign(payload, msg_key)
payload += msg_key
elif p_type == FILE:
payload = bytes(FILE_PACKET_CTR_LEN) + payload
elif p_type == COMMAND:
payload += hash_chain(payload)
padded = byte_padding(payload)
p_list = split_byte_string(padded, item_len=PADDING_LEN)
if p_type == FILE:
p_list[0] = int_to_bytes(len(p_list)) + p_list[0][FILE_PACKET_CTR_LEN:]
packet_list = ([l_header + p_list[0]] +
[a_header + p for p in p_list[1:-1]] +
[e_header + p_list[-1]])
return packet_list
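The padding and splitting helpers used above can be sketched as follows (PKCS7-style padding assumed; TFC's `byte_padding` may differ in detail):

```python
def byte_padding(data: bytes, block: int = 255) -> bytes:
    # Pad to the next multiple of `block`; always adds 1..block bytes.
    pad_len = block - (len(data) % block)
    return data + pad_len * bytes([pad_len])

def split_byte_string(data: bytes, item_len: int = 255) -> list:
    # Chop the padded data into fixed-length assembly packet payloads.
    return [data[i:i + item_len] for i in range(0, len(data), item_len)]

padded = byte_padding(b'x' * 300)
chunks = split_byte_string(padded)
```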
def queue_packets(packet_list: List[bytes],
p_type: str,
settings: 'Settings',
queue: 'Queue',
window: Union['MockWindow', 'TxWindow'] = None,
log_as_ph: bool = False) -> None:
"""Queue assembly packets for sender_loop()."""
if p_type in [MESSAGE, FILE] and window is not None:
if settings.session_traffic_masking:
for p in packet_list:
queue.put((p, window.log_messages, log_as_ph))
else:
for c in window:
for p in packet_list:
queue.put((p, settings, c.rx_account, c.tx_account, window.log_messages, log_as_ph, window.uid))
elif p_type == COMMAND:
if settings.session_traffic_masking:
for p in packet_list:
if window is None:
log_setting = None
else:
log_setting = window.log_messages
queue.put((p, log_setting))
else:
for p in packet_list:
queue.put((p, settings))
def send_packet(key_list: 'KeyList',
gateway: 'Gateway',
log_queue: 'Queue',
packet: bytes,
settings: 'Settings',
rx_account: str = None,
tx_account: str = None,
logging: bool = None,
log_as_ph: bool = None) -> None:
"""Encrypt and send assembly packet.
:param packet: Padded plaintext assembly packet
:param key_list: Key list object
:param settings: Settings object
:param gateway: Gateway object
:param log_queue: Multiprocessing queue for logged messages
:param rx_account: Recipient account
:param tx_account: Sender's account associated with recipient's account
:param logging: When True, log the assembly packet
:param log_as_ph: When True, log assembly packet as placeholder data
:return: None
"""
if len(packet) != ASSEMBLY_PACKET_LEN:
raise CriticalError("Invalid assembly packet PT length.")
if rx_account is None:
keyset = key_list.get_keyset(LOCAL_ID)
header = COMMAND_PACKET_HEADER
trailer = b''
else:
keyset = key_list.get_keyset(rx_account)
header = MESSAGE_PACKET_HEADER
trailer = tx_account.encode() + US_BYTE + rx_account.encode()
harac_in_bytes = int_to_bytes(keyset.tx_harac)
encrypted_harac = encrypt_and_sign(harac_in_bytes, keyset.tx_hek)
encrypted_message = encrypt_and_sign(packet, keyset.tx_key)
encrypted_packet = header + encrypted_harac + encrypted_message + trailer
transmit(encrypted_packet, settings, gateway)
keyset.rotate_tx_key()
log_queue.put((logging, log_as_ph, packet, rx_account, settings, key_list.master_key))
def transmit(packet: bytes,
settings: 'Settings',
gateway: 'Gateway',
delay: bool = True) -> None:
"""Add Reed-Solomon erasure code and output packet via gateway.
Note that random.SystemRandom() uses the kernel CSPRNG (/dev/urandom),
not Python's weak default RNG based on the Mersenne Twister:
https://docs.python.org/2/library/random.html#random.SystemRandom
"""
rs = RSCodec(2 * settings.session_serial_error_correction)
packet = rs.encode(packet)
gateway.write(packet)
if settings.local_testing_mode:
time.sleep(LOCAL_TESTING_PACKET_DELAY)
if not settings.session_traffic_masking:
if settings.multi_packet_random_delay and delay:
random_delay = random.SystemRandom().uniform(0, settings.max_duration_of_random_delay)
time.sleep(random_delay)
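The kernel-CSPRNG delay drawn above can be sketched in isolation:

```python
import random

def multi_packet_delay(max_duration: float) -> float:
    # random.SystemRandom() draws from the kernel CSPRNG (/dev/urandom)
    # rather than Python's default Mersenne Twister generator.
    return random.SystemRandom().uniform(0, max_duration)

delay = multi_packet_delay(1.0)
```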
def cancel_packet(user_input: 'UserInput',
window: 'TxWindow',
settings: 'Settings',
queues: Dict[bytes, 'Queue']) -> None:
"""Cancel sent message/file to contact/group."""
queue, header, p_type = dict(cm=(queues[MESSAGE_PACKET_QUEUE], M_C_HEADER, 'messages'),
cf=(queues[FILE_PACKET_QUEUE], F_C_HEADER, 'files' ))[user_input.plaintext]
cancel_pt = header + bytes(PADDING_LEN)
cancel = False
if settings.session_traffic_masking:
if queue.qsize() != 0:
cancel = True
while queue.qsize() != 0:
queue.get()
log_m_dictionary = dict((c.rx_account, c.log_messages) for c in window)
queue.put((cancel_pt, log_m_dictionary, True))
message = f"Cancelled queues {p_type}." if cancel else f"No {p_type} to cancel."
c_print(message, head=1, tail=1)
else:
p_buffer = []
while queue.qsize() != 0:
q_data = queue.get()
win_uid = q_data[6]
# Put messages unrelated to active window into buffer
if win_uid != window.uid:
p_buffer.append(q_data)
else:
cancel = True
# Put cancel packets for each window contact to queue first
if cancel:
for c in window:
queue.put((cancel_pt, settings, c.rx_account, c.tx_account, c.log_messages, window.uid))
# Put buffered tuples back to queue
for p in p_buffer:
queue.put(p)
if cancel:
message = f"Cancelled queued {p_type} to {window.type_print} {window.name}."
else:
message = f"No {p_type} queued for {window.type_print} {window.name}."
c_print(message, head=1, tail=1)

View File

@ -1,189 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import time
import typing
from typing import Dict, List, Tuple
from src.common.misc import ignored
from src.common.statics import *
from src.tx.packet import send_packet, transmit
from src.tx.traffic_masking import ConstantTime
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_keys import KeyList
from src.common.db_settings import Settings
from src.common.gateway import Gateway
def sender_loop(queues: Dict[bytes, 'Queue'],
settings: 'Settings',
gateway: 'Gateway',
key_list: 'KeyList',
unittest: bool = False) -> None:
"""Output packets from queues based on queue priority.
Sender loop loads assembly packets from a set of queues. As
Python's multiprocessing lacks priority queues, several queues are
prioritized based on their status. Whether or not traffic masking
is enabled, files are only transmitted when no messages are being
output. This is because file transmission is usually very slow, and the
user might need to send messages in the meantime. When traffic
masking is disabled, commands take the highest priority, as they are not
output all the time. When traffic masking is enabled, commands are
output between each message packet. This allows commands to
take effect as soon as possible but slows message/file
delivery down by half. Each contact in the window is cycled in order.
Making changes to the recipient list during use is prevented, to protect
the user from accidentally revealing the use of TFC. When traffic masking
is enabled, if no packets are available in either m_queue or f_queue,
a noise assembly packet is loaded from np_queue. If no command packet
is available in c_queue, a noise command packet is loaded from
nc_queue. TFC does it's best to hide the loading times and encryption
duration by using constant time context manager with CSPRNG spawned
jitter, constant time queue status lookup, and constant time XSalsa20
cipher. However, since TFC is written with in a high-level language,
it is impossible to guarantee TxM never reveals it's user-operation
schedule to NH.
"""
m_queue = queues[MESSAGE_PACKET_QUEUE]
f_queue = queues[FILE_PACKET_QUEUE]
c_queue = queues[COMMAND_PACKET_QUEUE]
n_queue = queues[NH_PACKET_QUEUE]
l_queue = queues[LOG_PACKET_QUEUE]
km_queue = queues[KEY_MANAGEMENT_QUEUE]
np_queue = queues[NOISE_PACKET_QUEUE]
nc_queue = queues[NOISE_COMMAND_QUEUE]
ws_queue = queues[WINDOW_SELECT_QUEUE]
m_buffer = dict() # type: Dict[str, List[Tuple[bytes, Settings, str, str, bool]]]
f_buffer = dict() # type: Dict[str, List[Tuple[bytes, Settings, str, str, bool]]]
if settings.session_traffic_masking:
while ws_queue.qsize() == 0:
time.sleep(0.01)
window, log_messages = ws_queue.get()
while True:
with ignored(EOFError, KeyboardInterrupt):
with ConstantTime(settings, length=TRAFFIC_MASKING_QUEUE_CHECK_DELAY):
queue = [[m_queue, m_queue], [f_queue, np_queue]][m_queue.qsize()==0][f_queue.qsize()==0]
packet, lm, log_as_ph = queue.get()
if lm is not None:  # Ignore the None sent by noise_loop, which does not alter the log setting
log_messages = lm
for c in window:
with ConstantTime(settings, d_type=TRAFFIC_MASKING):
send_packet(key_list, gateway, l_queue, packet, settings, c.rx_account, c.tx_account, log_messages, log_as_ph)
with ConstantTime(settings, d_type=TRAFFIC_MASKING):
queue = [c_queue, nc_queue][c_queue.qsize()==0]
command, lm = queue.get()
if lm is not None: # Log setting is only updated with 'logging' command
log_messages = lm
send_packet(key_list, gateway, l_queue, command, settings)
if n_queue.qsize() != 0:
packet, delay, settings = n_queue.get()
transmit(packet, settings, gateway, delay)
if packet[1:] == UNENCRYPTED_EXIT_COMMAND:
queues[EXIT_QUEUE].put(EXIT)
elif packet[1:] == UNENCRYPTED_WIPE_COMMAND:
queues[EXIT_QUEUE].put(WIPE)
if unittest:
break
else:
while True:
try:
if km_queue.qsize() != 0:
key_list.manage(*km_queue.get())
continue
# Commands to RxM
if c_queue.qsize() != 0:
if key_list.has_local_key():
send_packet(key_list, gateway, l_queue, *c_queue.get())
continue
# Commands/exported files to NH
if n_queue.qsize() != 0:
packet, delay, settings = n_queue.get()
transmit(packet, settings, gateway, delay)
if packet[1:] == UNENCRYPTED_EXIT_COMMAND:
queues[EXIT_QUEUE].put(EXIT)
elif packet[1:] == UNENCRYPTED_WIPE_COMMAND:
queues[EXIT_QUEUE].put(WIPE)
continue
# Buffered messages
for rx_account in m_buffer:
if key_list.has_keyset(rx_account) and m_buffer[rx_account]:
send_packet(key_list, gateway, l_queue, *m_buffer[rx_account].pop(0)[:-1]) # Strip window UID as it's only used to cancel packets
continue
# New messages
if m_queue.qsize() != 0:
q_data = m_queue.get()
rx_account = q_data[2]
if key_list.has_keyset(rx_account):
send_packet(key_list, gateway, l_queue, *q_data[:-1])
else:
m_buffer.setdefault(rx_account, []).append(q_data)
continue
# Buffered files
for rx_account in f_buffer:
if key_list.has_keyset(rx_account) and f_buffer[rx_account]:
send_packet(key_list, gateway, l_queue, *f_buffer[rx_account].pop(0)[:-1])
continue
# New files
if f_queue.qsize() != 0:
q_data = f_queue.get()
rx_account = q_data[2]
if key_list.has_keyset(rx_account):
send_packet(key_list, gateway, l_queue, *q_data[:-1])
else:
f_buffer.setdefault(rx_account, []).append(q_data)
if unittest and queues[UNITTEST_QUEUE].qsize() != 0:
break
time.sleep(0.01)
except (EOFError, KeyboardInterrupt):
pass
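The queue-priority scheme described in the sender_loop docstring can be sketched independently of TFC. This is a minimal sketch, assuming plain `queue.Queue` objects stand in for the multiprocessing queues and strings stand in for assembly packets; the function name is hypothetical:

```python
import queue


def poll_by_priority(queues):
    """Return the next item from the first non-empty queue.

    Queues are checked in fixed priority order; returns None when
    every queue is empty (the point at which a noise packet would
    be loaded instead).
    """
    for q in queues:
        try:
            return q.get_nowait()
        except queue.Empty:
            continue
    return None


# Commands outrank messages, which outrank files.
c_q, m_q, f_q = queue.Queue(), queue.Queue(), queue.Queue()
m_q.put('message packet')
f_q.put('file packet')

assert poll_by_priority([c_q, m_q, f_q]) == 'message packet'
assert poll_by_priority([c_q, m_q, f_q]) == 'file packet'
assert poll_by_priority([c_q, m_q, f_q]) is None
```

Note this sketch drops items from the queues unconditionally; the real loop additionally buffers packets for contacts whose keys are not yet available.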

View File

@@ -1,90 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import random
import threading
import time
import typing
from typing import Tuple, Union
from src.common.misc import ignored
from src.common.statics import *
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_contacts import ContactList
from src.common.db_settings import Settings
class ConstantTime(object):
"""Constant time context manager.
By joining a thread that sleeps for a longer time than the managed
function takes to run, this context manager hides the actual
running time of the function.
Note that random.SystemRandom() uses the kernel CSPRNG
(/dev/urandom), not Python's weak RNG based on the Mersenne Twister:
https://docs.python.org/2/library/random.html#random.SystemRandom
"""
def __init__(self,
settings: 'Settings',
d_type: str = STATIC,
length: float = 0.0) -> None:
if d_type == TRAFFIC_MASKING:
self.length = settings.traffic_masking_static_delay
self.length += random.SystemRandom().uniform(0, settings.traffic_masking_random_delay)
if settings.multi_packet_random_delay:
self.length += random.SystemRandom().uniform(0, settings.max_duration_of_random_delay)
elif d_type == STATIC:
self.length = length
def __enter__(self) -> None:
self.timer = threading.Thread(target=time.sleep, args=(self.length,))
self.timer.start()
def __exit__(self, exc_type, exc_value, traceback) -> None:
self.timer.join()
def noise_loop(header: bytes,
queue: 'Queue',
contact_list: 'ContactList' = None,
unittest: bool = False) -> None:
"""Generate noise packets and keep noise queues filled."""
packet = header + bytes(PADDING_LEN)
if contact_list is None:
content = (packet, None) # type: Union[Tuple[bytes, None], Tuple[bytes, None, bool]]
else:
content = (packet, None, True)
while True:
with ignored(EOFError, KeyboardInterrupt):
while queue.qsize() < 100:
queue.put(content)
time.sleep(0.1)
if unittest:
break
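The timing-hiding idea behind ConstantTime above can be demonstrated with a standalone sketch using only the standard library (the class name here is hypothetical): a sleeper thread is started before the body runs and joined afterwards, so the elapsed wall-clock time is at least the fixed length no matter how quickly the body finished.

```python
import threading
import time


class FixedTime:
    """Minimal constant-time context manager: the managed body
    appears to take at least `length` seconds of wall-clock time."""

    def __init__(self, length: float) -> None:
        self.length = length

    def __enter__(self) -> None:
        self.timer = threading.Thread(target=time.sleep, args=(self.length,))
        self.timer.start()

    def __exit__(self, exc_type, exc_value, traceback) -> None:
        self.timer.join()


start = time.monotonic()
with FixedTime(0.2):
    pass                      # fast operation whose duration is hidden
assert time.monotonic() - start >= 0.2
```

As the docstring in the real module notes, this only sets a lower bound: a body that runs longer than `length` still leaks its duration, which is why TFC adds CSPRNG-spawned jitter on top of the static delay.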

View File

@@ -1,181 +0,0 @@
#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
This file is part of TFC.
TFC is free software: you can redistribute it and/or modify it under the terms
of the GNU General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
TFC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
"""
import typing
from typing import Dict, Generator, Iterable, List, Sized
from src.common.exceptions import FunctionReturn
from src.common.output import clear_screen
from src.common.statics import *
from src.tx.packet import queue_command
if typing.TYPE_CHECKING:
from multiprocessing import Queue
from src.common.db_contacts import Contact, ContactList
from src.common.db_groups import Group, GroupList
from src.common.db_settings import Settings
from src.tx.user_input import UserInput
class MockWindow(Iterable):
"""Mock window simplifies queueing of message assembly packets."""
def __init__(self, uid: str, contacts: List['Contact']) -> None:
"""Create new mock window."""
self.uid = uid
self.window_contacts = contacts
self.log_messages = self.window_contacts[0].log_messages
self.type = WIN_TYPE_CONTACT
self.group = None # type: Group
self.name = None # type: str
def __iter__(self) -> Generator:
"""Iterate over contact objects in window."""
yield from self.window_contacts
class TxWindow(Iterable, Sized):
"""
A TxWindow object manages the ephemeral communication
data associated with the selected contact or group.
"""
def __init__(self,
contact_list: 'ContactList',
group_list: 'GroupList') -> None:
"""Create a new TxWindow object."""
self.contact_list = contact_list
self.group_list = group_list
self.window_contacts = [] # type: List[Contact]
self.group = None # type: Group
self.contact = None # type: Contact
self.name = None # type: str
self.type = None # type: str
self.type_print = None # type: str
self.uid = None # type: str
self.imc_name = None # type: str
self.log_messages = None # type: bool
def __iter__(self) -> Generator:
"""Iterate over Contact objects in window."""
yield from self.window_contacts
def __len__(self) -> int:
"""Return the number of contacts in window."""
return len(self.window_contacts)
def select_tx_window(self,
settings: 'Settings',
queues: Dict[bytes, 'Queue'],
selection: str = None,
cmd: bool = False) -> None:
"""Select specified window or ask the user to specify one."""
if selection is None:
self.contact_list.print_contacts()
self.group_list.print_groups()
selection = input("Select recipient: ").strip()
if selection in self.group_list.get_list_of_group_names():
if cmd and settings.session_traffic_masking and selection != self.uid:
raise FunctionReturn("Error: Can't change window during traffic masking.")
self.group = self.group_list.get_group(selection)
self.window_contacts = self.group.members
self.name = self.group.name
self.uid = self.name
self.log_messages = self.group.log_messages
self.type = WIN_TYPE_GROUP
self.type_print = 'group'
if self.window_contacts:
self.imc_name = self.window_contacts[0].rx_account
elif selection in self.contact_list.contact_selectors():
if cmd and settings.session_traffic_masking:
contact = self.contact_list.get_contact(selection)
if contact.rx_account != self.uid:
raise FunctionReturn("Error: Can't change window during traffic masking.")
self.contact = self.contact_list.get_contact(selection)
self.window_contacts = [self.contact]
self.name = self.contact.nick
self.uid = self.contact.rx_account
self.imc_name = self.contact.rx_account
self.log_messages = self.contact.log_messages
self.type = WIN_TYPE_CONTACT
self.type_print = 'contact'
else:
raise FunctionReturn("Error: No contact/group was found.")
if settings.session_traffic_masking and not cmd:
queues[WINDOW_SELECT_QUEUE].put((self.window_contacts, self.log_messages))
packet = WINDOW_SELECT_HEADER + self.uid.encode()
queue_command(packet, settings, queues[COMMAND_PACKET_QUEUE])
clear_screen()
def deselect_window(self) -> None:
"""Deselect active window."""
self.window_contacts = []
self.group = None # type: Group
self.contact = None # type: Contact
self.name = None # type: str
self.type = None # type: str
self.uid = None # type: str
self.imc_name = None # type: str
def is_selected(self) -> bool:
"""Return True if window is selected, else False."""
return self.name is not None
def update_log_messages(self) -> None:
"""Update window's logging setting."""
if self.type == WIN_TYPE_CONTACT:
self.log_messages = self.contact.log_messages
if self.type == WIN_TYPE_GROUP:
self.log_messages = self.group.log_messages
def update_group_win_members(self, group_list: 'GroupList') -> None:
"""Update window's group members list."""
if self.type == WIN_TYPE_GROUP:
if group_list.has_group(self.name):
self.group = group_list.get_group(self.name)
self.window_contacts = self.group.members
if self.window_contacts:
self.imc_name = self.window_contacts[0].rx_account
else:
self.deselect_window()
def select_window(user_input: 'UserInput',
window: 'TxWindow',
settings: 'Settings',
queues: Dict[bytes, 'Queue']) -> None:
"""Select new window to send messages/files to."""
try:
selection = user_input.plaintext.split()[1]
except (IndexError, TypeError):
raise FunctionReturn("Error: Invalid recipient.")
window.select_tx_window(settings, queues, selection, cmd=True)

View File

@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,344 +16,325 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import binascii
import multiprocessing
import os
import unittest
import nacl.bindings
from unittest import mock
import argon2
import nacl.exceptions
import nacl.public
import nacl.utils
import argon2
from cryptography.hazmat.primitives.asymmetric.x448 import X448PrivateKey
from src.common.crypto import sha3_256, blake2s, sha256, hash_chain, argon2_kdf
from src.common.crypto import encrypt_and_sign, auth_and_decrypt
from src.common.crypto import byte_padding, rm_padding_bytes, xor
from src.common.crypto import csprng, check_kernel_entropy, check_kernel_version
from src.common.crypto import argon2_kdf, auth_and_decrypt, blake2b, byte_padding, check_kernel_entropy
from src.common.crypto import check_kernel_version, csprng, encrypt_and_sign, rm_padding_bytes, X448
from src.common.statics import *
class TestSHA3256(unittest.TestCase):
class TestBLAKE2b(unittest.TestCase):
def test_SHA3_256_KAT(self):
"""Run sanity check with official SHA3-256 KAT:
csrc.nist.gov/groups/ST/toolkit/documents/Examples/SHA3-256_Msg0.pdf
def test_blake2b_kat(self):
"""Run sanity check with an official BLAKE2b KAT:
https://github.com/BLAKE2/BLAKE2/blob/master/testvectors/blake2b-kat.txt#L259
in: 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f
key: 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f
hash: 65676d800617972fbd87e4b9514e1c67402b7a331096d3bfac22f1abb95374ab
c942f16e9ab0ead33b87c91968a6e509e119ff07787b3ef483e1dcdccf6e3022
"""
self.assertEqual(sha3_256(b''),
binascii.unhexlify('a7ffc6f8bf1ed76651c14756a061d662'
'f580ff4de43b49fa82d80a4b80f8434a'))
message = key = bytes.fromhex(
'000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f'
'202122232425262728292a2b2c2d2e2f303132333435363738393a3b3c3d3e3f')
digest = bytes.fromhex(
'65676d800617972fbd87e4b9514e1c67402b7a331096d3bfac22f1abb95374ab'
'c942f16e9ab0ead33b87c91968a6e509e119ff07787b3ef483e1dcdccf6e3022')
class TestBlake2s(unittest.TestCase):
def test_blake2s_KAT(self):
"""Run sanity check with official Blake2s KAT:
https://github.com/BLAKE2/BLAKE2/blob/master/testvectors/blake2s-kat.txt#L131
in: 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
key: 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
hash: c03bc642b20959cbe133a0303e0c1abff3e31ec8e1a328ec8565c36decff5265
"""
message = key = binascii.unhexlify('000102030405060708090a0b0c0d0e0f'
'101112131415161718191a1b1c1d1e1f')
self.assertEqual(blake2s(message, key),
binascii.unhexlify('c03bc642b20959cbe133a0303e0c1abf'
'f3e31ec8e1a328ec8565c36decff5265'))
class TestSHA256(unittest.TestCase):
def test_SHA256_KAT(self):
"""Run sanity check with official SHA256 KAT:
http://csrc.nist.gov/groups/ST/toolkit/documents/Examples/SHA_All.pdf // page 14
"""
self.assertEqual(sha256(b'abc'),
binascii.unhexlify('ba7816bf8f01cfea414140de5dae2223'
'b00361a396177a9cb410ff61f20015ad'))
class TestHashChain(unittest.TestCase):
def test_chain(self):
"""Sanity check after verifying function. No official test vectors exist."""
self.assertEqual(hash_chain(bytes(32)),
binascii.unhexlify('8d8c36497eb93a6355112e253f705a32'
'85f3e2d82b9ac29461cd8d4f764e5d41'))
self.assertEqual(blake2b(message, key, digest_size=len(digest)),
digest)
class TestArgon2KDF(unittest.TestCase):
def test_Argon2_KAT(self):
"""The official Argon2 implementation is at
https://github.com/P-H-C/phc-winner-argon2#command-line-utility
def test_argon2d_kat(self):
"""Run sanity check with an official Argon2 KAT:
The official Argon2 implementation is at
https://github.com/P-H-C/phc-winner-argon2#command-line-utility
To reproduce the test vector, run
$ wget https://github.com/P-H-C/phc-winner-argon2/archive/master.zip
$ unzip master.zip
$ cd phc-winner-argon2-master/
$ make
$ echo -n "password" | ./argon2 somesalt -t 1 -m 16 -p 4 -l 32 -d
Expected output
Type: Argon2d
Iterations: 1
Memory: 65536 KiB
Parallelism: 4
Hash: 7e12cb75695277c0ab974e4ae943b87da08e36dd065aca8de3ca009125ae8953
Encoded: $argon2d$v=19$m=65536,t=1,p=4$c29tZXNhbHQ$fhLLdWlSd8Crl05K6UO4faCONt0GWsqN48oAkSWuiVM
0.231 seconds
Verification ok
"""
key = argon2.low_level.hash_secret_raw(secret=b'password', salt=b'somesalt', time_cost=1,
memory_cost=65536, parallelism=4, hash_len=32, type=argon2.Type.D)
self.assertEqual(binascii.hexlify(key), b'7e12cb75695277c0ab974e4ae943b87da08e36dd065aca8de3ca009125ae8953')
key = argon2.low_level.hash_secret_raw(secret=b'password',
salt=b'somesalt',
time_cost=1,
memory_cost=65536,
parallelism=4,
hash_len=32,
type=argon2.Type.D)
def test_argon2_kdf(self):
key, parallelism = argon2_kdf('password', ARGON2_SALT_LEN*b'a')
self.assertEqual(key.hex(), '7e12cb75695277c0ab974e4ae943b87da08e36dd065aca8de3ca009125ae8953')
def test_argon2d_kdf(self):
key = argon2_kdf('password', ARGON2_SALT_LENGTH*b'a', rounds=1, memory=100)
self.assertIsInstance(key, bytes)
self.assertEqual(len(key), KEY_LENGTH)
self.assertEqual(parallelism, multiprocessing.cpu_count())
self.assertEqual(len(key), SYMMETRIC_KEY_LENGTH)
def test_argon2_kdf_local_testing(self):
key, parallelism = argon2_kdf('password', ARGON2_SALT_LEN*b'a', local_test=True)
self.assertIsInstance(key, bytes)
self.assertEqual(len(key), KEY_LENGTH)
self.assertEqual(parallelism, max(multiprocessing.cpu_count()//2, 1))
def test_invalid_salt_length_raises_assertion_error(self):
for salt_length in [v for v in range(1000) if v != ARGON2_SALT_LENGTH]:
with self.assertRaises(AssertionError):
argon2_kdf('password', salt_length * b'a')
class TestXSalsa20Poly1305(unittest.TestCase):
"""Test vectors:
https://cr.yp.to/highspeed/naclcrypto-20090310.pdf // page 35
class TestX448(unittest.TestCase):
"""
key_tv = binascii.unhexlify('1b27556473e985d4'
'62cd51197a9a46c7'
'6009549eac6474f2'
'06c4ee0844f68389')
X448 test vectors
https://tools.ietf.org/html/rfc7748#section-6.2
"""
sk_alice = bytes.fromhex(
'9a8f4925d1519f5775cf46b04b5800d4ee9ee8bae8bc5565d498c28d'
'd9c9baf574a9419744897391006382a6f127ab1d9ac2d8c0a598726b')
nonce_tv = binascii.unhexlify('69696ee955b62b73'
'cd62bda875fc73d6'
'8219e0036b7a0b37')
pk_alice = bytes.fromhex(
'9b08f7cc31b7e3e67d22d5aea121074a273bd2b83de09c63faa73d2c'
'22c5d9bbc836647241d953d40c5b12da88120d53177f80e532c41fa0')
pt_tv = binascii.unhexlify('be075fc53c81f2d5'
'cf141316ebeb0c7b'
'5228c52a4c62cbd4'
'4b66849b64244ffc'
'e5ecbaaf33bd751a'
'1ac728d45e6c6129'
'6cdc3c01233561f4'
'1db66cce314adb31'
'0e3be8250c46f06d'
'ceea3a7fa1348057'
'e2f6556ad6b1318a'
'024a838f21af1fde'
'048977eb48f59ffd'
'4924ca1c60902e52'
'f0a089bc76897040'
'e082f93776384864'
'5e0705')
sk_bob = bytes.fromhex(
'1c306a7ac2a0e2e0990b294470cba339e6453772b075811d8fad0d1d'
'6927c120bb5ee8972b0d3e21374c9c921b09d1b0366f10b65173992d')
ct_tv = binascii.unhexlify('f3ffc7703f9400e5'
'2a7dfb4b3d3305d9'
'8e993b9f48681273'
'c29650ba32fc76ce'
'48332ea7164d96a4'
'476fb8c531a1186a'
'c0dfc17c98dce87b'
'4da7f011ec48c972'
'71d2c20f9b928fe2'
'270d6fb863d51738'
'b48eeee314a7cc8a'
'b932164548e526ae'
'90224368517acfea'
'bd6bb3732bc0e9da'
'99832b61ca01b6de'
'56244a9e88d5f9b3'
'7973f622a43d14a6'
'599b1f654cb45a74'
'e355a5')
pk_bob = bytes.fromhex(
'3eb7a829b0cd20f5bcfc0b599b6feccf6da4627107bdb0d4f345b430'
'27d8b972fc3e34fb4232a13ca706dcb57aec3dae07bdc1c67bf33609')
def test_encrypt_and_sign_with_kat(self):
"""Test encryption with official test vectors."""
# Setup
o_nacl_utils_random = nacl.utils.random
nacl.utils.random = lambda _: self.nonce_tv
shared_secret = bytes.fromhex(
'07fff4181ac6cc95ec1c16a94a0f74d12da232ce40a77552281d282b'
'b60c0b56fd2464c335543936521c24403085d59a449a5037514a879d')
# Test
self.assertEqual(encrypt_and_sign(self.pt_tv, self.key_tv), self.nonce_tv + self.ct_tv)
def test_private_key_generation(self):
self.assertIsInstance(X448.generate_private_key(), X448PrivateKey)
# Teardown
nacl.utils.random = o_nacl_utils_random
def test_x448(self):
sk_alice_ = X448PrivateKey.from_private_bytes(TestX448.sk_alice)
sk_bob_ = X448PrivateKey.from_private_bytes(TestX448.sk_bob)
def test_auth_and_decrypt_with_kat(self):
"""Test decryption with official test vectors."""
self.assertEqual(auth_and_decrypt(self.nonce_tv + self.ct_tv, self.key_tv), self.pt_tv)
self.assertEqual(X448.derive_public_key(sk_alice_), TestX448.pk_alice)
self.assertEqual(X448.derive_public_key(sk_bob_), TestX448.pk_bob)
def test_invalid_decryption_raises_critical_error(self):
shared_secret1 = X448.shared_key(sk_alice_, TestX448.pk_bob)
shared_secret2 = X448.shared_key(sk_bob_, TestX448.pk_alice)
self.assertEqual(shared_secret1, blake2b(TestX448.shared_secret))
self.assertEqual(shared_secret2, blake2b(TestX448.shared_secret))
class TestXChaCha20Poly1305(unittest.TestCase):
"""Libsodium test vectors:
Message: https://github.com/jedisct1/libsodium/blob/master/test/default/aead_xchacha20poly1305.c#L22
Ad: https://github.com/jedisct1/libsodium/blob/master/test/default/aead_xchacha20poly1305.c#L28
Nonce: https://github.com/jedisct1/libsodium/blob/master/test/default/aead_xchacha20poly1305.c#L25
Key: https://github.com/jedisct1/libsodium/blob/master/test/default/aead_xchacha20poly1305.c#L14
CT+tag: https://github.com/jedisct1/libsodium/blob/master/test/default/aead_xchacha20poly1305.exp#L1
IETF test vectors:
https://tools.ietf.org/html/draft-arciszewski-xchacha-02#appendix-A.1
"""
plaintext = \
b"Ladies and Gentlemen of the class of '99: If I could offer you " \
b"only one tip for the future, sunscreen would be it."
ad = bytes.fromhex(
'50515253c0c1c2c3c4c5c6c7')
nonce = bytes.fromhex(
'070000004041424344454647'
'48494a4b4c4d4e4f50515253')
key = bytes.fromhex(
'8081828384858687'
'88898a8b8c8d8e8f'
'9091929394959697'
'98999a9b9c9d9e9f')
ct_tag = bytes.fromhex(
'f8ebea4875044066'
'fc162a0604e171fe'
'ecfb3d2042524856'
'3bcfd5a155dcc47b'
'bda70b86e5ab9b55'
'002bd1274c02db35'
'321acd7af8b2e2d2'
'5015e136b7679458'
'e9f43243bf719d63'
'9badb5feac03f80a'
'19a96ef10cb1d153'
'33a837b90946ba38'
'54ee74da3f2585ef'
'c7e1e170e17e15e5'
'63e77601f4f85caf'
'a8e5877614e143e6'
'8420')
nonce_ct_tag = nonce + ct_tag
# ---
ietf_nonce = bytes.fromhex(
"404142434445464748494a4b4c4d4e4f"
"5051525354555657")
ietf_ct = bytes.fromhex(
"bd6d179d3e83d43b9576579493c0e939"
"572a1700252bfaccbed2902c21396cbb"
"731c7f1b0b4aa6440bf3a82f4eda7e39"
"ae64c6708c54c216cb96b72e1213b452"
"2f8c9ba40db5d945b11b69b982c1bb9e"
"3f3fac2bc369488f76b2383565d3fff9"
"21f9664c97637da9768812f615c68b13"
"b52e")
ietf_tag = bytes.fromhex(
"c0875924c1c7987947deafd8780acf49")
ietf_nonce_ct_tag = ietf_nonce + ietf_ct + ietf_tag
@mock.patch('src.common.crypto.csprng', side_effect=[nonce, ietf_nonce])
def test_encrypt_and_sign_with_official_test_vectors(self, mock_csprng):
self.assertEqual(encrypt_and_sign(self.plaintext, self.key, self.ad),
self.nonce_ct_tag)
self.assertEqual(encrypt_and_sign(self.plaintext, self.key, self.ad),
self.ietf_nonce_ct_tag)
mock_csprng.assert_called_with(XCHACHA20_NONCE_LENGTH)
def test_auth_and_decrypt_with_official_test_vectors(self):
self.assertEqual(auth_and_decrypt(self.nonce_ct_tag, self.key, ad=self.ad),
self.plaintext)
self.assertEqual(auth_and_decrypt(self.ietf_nonce_ct_tag, self.key, ad=self.ad),
self.plaintext)
def test_database_decryption_error_raises_critical_error(self):
with self.assertRaises(SystemExit):
self.assertEqual(auth_and_decrypt(self.nonce_tv + self.ct_tv, key=bytes(KEY_LENGTH)), self.pt_tv)
auth_and_decrypt(self.nonce_ct_tag, key=bytes(SYMMETRIC_KEY_LENGTH), database='path/database_filename')
def test_invalid_decryption_raises_soft_error(self):
def test_error_in_decryption_of_data_from_contact_raises_nacl_crypto_error(self):
with self.assertRaises(nacl.exceptions.CryptoError):
self.assertEqual(auth_and_decrypt(self.nonce_tv + self.ct_tv, key=bytes(KEY_LENGTH), soft_e=True), self.pt_tv)
auth_and_decrypt(self.nonce_ct_tag, key=bytes(SYMMETRIC_KEY_LENGTH))
class TestBytePadding(unittest.TestCase):
def test_padding(self):
for s in range(0, PADDING_LEN):
string = s * b'm'
def test_padding_length_is_divisible_by_packet_length(self):
for length in range(1000):
string = length * b'm'
padded = byte_padding(string)
self.assertEqual(len(padded), PADDING_LEN)
self.assertIsInstance(padded, bytes)
self.assertEqual(len(padded) % PADDING_LENGTH, 0)
# Verify removal of padding doesn't alter the string
self.assertEqual(string, padded[:-ord(padded[-1:])])
for s in range(PADDING_LEN, 1000):
string = s * b'm'
padded = byte_padding(string)
self.assertEqual(len(padded) % PADDING_LEN, 0)
self.assertEqual(string, padded[:-ord(padded[-1:])])
def test_packet_length_equal_to_padding_size_adds_dummy_block(self):
string = PADDING_LENGTH * b'm'
padded = byte_padding(string)
self.assertEqual(len(padded), 2*PADDING_LENGTH)
class TestRmPaddingBytes(unittest.TestCase):
def test_padding_removal(self):
for i in range(0, 1000):
string = os.urandom(i)
length = PADDING_LEN - (len(string) % PADDING_LEN)
padded = string + length * bytes([length])
def test_removal_of_padding_does_not_alter_original_string(self):
for length in range(1000):
string = os.urandom(length)
padded = byte_padding(string)
self.assertEqual(rm_padding_bytes(padded), string)
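The byte_padding/rm_padding_bytes pair exercised above follows the PKCS#7 scheme. A minimal sketch, assuming a 255-byte block size (hypothetical helper names; TFC's actual PADDING_LENGTH constant is defined in src.common.statics):

```python
def pkcs7_pad(data: bytes, block: int = 255) -> bytes:
    """Append 1..block bytes, each equal to the number of bytes added.
    A full dummy block is appended when len(data) is already a multiple
    of the block size, so padding is always unambiguously removable."""
    n = block - (len(data) % block)
    return data + n * bytes([n])


def pkcs7_unpad(data: bytes) -> bytes:
    """Remove padding: the last byte states how many bytes to strip."""
    return data[:-data[-1]]


padded = pkcs7_pad(b'm' * 255)
assert len(padded) == 2 * 255          # full-block input gains a dummy block
assert pkcs7_unpad(pkcs7_pad(b'hello')) == b'hello'
```

Padding every message to a fixed block size is what hides the plaintext length from anyone observing the ciphertext.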
class TestXOR(unittest.TestCase):
def test_length_mismatch_raises_critical_error(self):
with self.assertRaises(SystemExit):
xor(bytes(32), bytes(31))
def test_xor_of_byte_strings(self):
b1 = b'\x00\x01\x00\x01\x01'
b2 = b'\x00\x00\x01\x01\x02'
b3 = b'\x00\x01\x01\x00\x03'
self.assertEqual(xor(b2, b3), b1)
self.assertEqual(xor(b3, b2), b1)
self.assertEqual(xor(b1, b3), b2)
self.assertEqual(xor(b3, b1), b2)
self.assertEqual(xor(b1, b2), b3)
self.assertEqual(xor(b2, b1), b3)
class TestCSPRNG(unittest.TestCase):
def test_travis_mock(self):
# Setup
o_environ = os.environ
os.environ = dict(TRAVIS='true')
# Test
self.assertEqual(len(csprng()), KEY_LENGTH)
self.assertIsInstance(csprng(), bytes)
# Teardown
os.environ = o_environ
entropy = SYMMETRIC_KEY_LENGTH * b'a'
def test_key_generation(self):
self.assertEqual(len(csprng()), KEY_LENGTH)
self.assertIsInstance(csprng(), bytes)
key = csprng()
self.assertEqual(len(key), SYMMETRIC_KEY_LENGTH)
self.assertIsInstance(key, bytes)
@mock.patch('os.getrandom', return_value=entropy)
def test_function_calls_getrandom_with_correct_parameters_and_hashes_with_blake2b(self, mock_get_random):
key = csprng()
mock_get_random.assert_called_with(SYMMETRIC_KEY_LENGTH, flags=0)
self.assertEqual(key, blake2b(self.entropy))
def test_function_returns_specified_amount_of_entropy(self):
for key_size in [16, 24, 32, 56, 64]:
key = csprng(key_size)
self.assertEqual(len(key), key_size)
def test_exceeding_hash_function_max_digest_size_raises_assertion_error(self):
with self.assertRaises(AssertionError):
csprng(BLAKE2_DIGEST_LENGTH_MAX + 1)
class TestCheckKernelEntropy(unittest.TestCase):
def test_entropy_collection(self):
self.assertIsNone(check_kernel_entropy())
@mock.patch('time.sleep', return_value=None)
def test_large_enough_entropy_pool_state_returns_none(self, _):
with mock.patch('builtins.open', mock.mock_open(read_data=str(ENTROPY_THRESHOLD))):
self.assertIsNone(check_kernel_entropy())
with mock.patch('builtins.open', mock.mock_open(read_data=str(ENTROPY_THRESHOLD+1))):
self.assertIsNone(check_kernel_entropy())
@mock.patch('time.sleep', return_value=None)
def test_insufficient_entropy_pool_state_does_not_return(self, _):
with unittest.mock.patch('builtins.open', unittest.mock.mock_open(read_data=str(ENTROPY_THRESHOLD-1))):
p = multiprocessing.Process(target=check_kernel_entropy)
try:
p.start()
p.join(timeout=0.1)
self.assertTrue(p.is_alive())
finally:
p.terminate()
p.join()
self.assertFalse(p.is_alive())
class TestCheckKernelVersion(unittest.TestCase):
def setUp(self):
self.o_uname = os.uname
def tearDown(self):
os.uname = self.o_uname
def test_invalid_kernel_versions_raise_critical_error(self):
for version in ['3.9.0-52-generic', '4.7.0-52-generic']:
os.uname = lambda: ['', '', version]
invalid_versions = ['3.9.11', '3.19.8', '4.7.10']
valid_versions = ['4.8.1', '4.10.1', '5.0.0']
@mock.patch('os.uname', side_effect=[['', '', f'{i}-0-generic'] for i in invalid_versions])
def test_invalid_kernel_versions_raise_critical_error(self, _):
for _ in self.invalid_versions:
with self.assertRaises(SystemExit):
check_kernel_version()
def test_valid_kernel_versions(self):
for version in ['4.8.0-52-generic', '4.10.0-52-generic', '5.0.0-52-generic']:
os.uname = lambda: ['', '', version]
@mock.patch('os.uname', side_effect=[['', '', f'{v}-0-generic'] for v in valid_versions])
def test_valid_kernel_versions(self, _):
for _ in self.valid_versions:
self.assertIsNone(check_kernel_version())
class TestX25519(unittest.TestCase):
"""\
This test does not utilize functions in the src.common.crypto
module; instead it tests PyNaCl's X25519 implementation used in key exchanges.
Test vectors for X25519
https://tools.ietf.org/html/rfc7748#section-6.1
Alice's private key, a:
77076d0a7318a57d3c16c17251b26645df4c2f87ebc0992ab177fba51db92c2a
Alice's public key, X25519(a, 9):
8520f0098930a754748b7ddcb43ef75a0dbf3a0d26381af4eba4a98eaa9b4e6a
Bob's private key, b:
5dab087e624a8a4b79e17f8b83800ee66f3bb1292618b6fd1c2f8b27ff88e0eb
Bob's public key, X25519(b, 9):
de9edb7d7b7dc1b4d35b61c2ece435373f8343c85b78674dadfc7e146f882b4f
Their shared secret, K:
4a5d9d5ba4ce2de1728e3bf480350f25e07e21c947d19e3376f09b3c1e161742
Quoting PyNaCl tests:
"Since libNaCl/libsodium shared key generation adds an HSalsa20
key derivation pass on the raw shared Diffie-Hellman key, which
is not exposed by itself, we just check the shared key for equality."
A TOFU-style, unofficial KAT / sanity-check test vector for the shared secret is
1b27556473e985d462cd51197a9a46c76009549eac6474f206c4ee0844f68389
"""
def test_x25519(self):
# Setup
tv_sk_a = binascii.unhexlify('77076d0a7318a57d3c16c17251b26645df4c2f87ebc0992ab177fba51db92c2a')
tv_pk_a = binascii.unhexlify('8520f0098930a754748b7ddcb43ef75a0dbf3a0d26381af4eba4a98eaa9b4e6a')
tv_sk_b = binascii.unhexlify('5dab087e624a8a4b79e17f8b83800ee66f3bb1292618b6fd1c2f8b27ff88e0eb')
tv_pk_b = binascii.unhexlify('de9edb7d7b7dc1b4d35b61c2ece435373f8343c85b78674dadfc7e146f882b4f')
ssk = binascii.unhexlify('1b27556473e985d462cd51197a9a46c76009549eac6474f206c4ee0844f68389')
# Generate known key pair for Alice
sk_alice = nacl.public.PrivateKey(tv_sk_a)
self.assertEqual(sk_alice._private_key, tv_sk_a)
self.assertEqual(bytes(sk_alice.public_key), tv_pk_a)
# Generate known key pair for Bob
sk_bob = nacl.public.PrivateKey(tv_sk_b)
self.assertEqual(sk_bob._private_key, tv_sk_b)
self.assertEqual(bytes(sk_bob.public_key), tv_pk_b)
# Test shared secrets are equal
dh_box_a = nacl.public.Box(sk_alice, sk_bob.public_key)
dh_ssk_a = dh_box_a.shared_key()
dh_box_b = nacl.public.Box(sk_bob, sk_alice.public_key)
dh_ssk_b = dh_box_b.shared_key()
self.assertEqual(dh_ssk_a, ssk)
self.assertEqual(dh_ssk_b, ssk)
if __name__ == '__main__':
unittest.main(exit=False)

View File

@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,7 +16,7 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
@@ -25,159 +26,321 @@ from src.common.db_contacts import Contact, ContactList
from src.common.statics import *
from tests.mock_classes import create_contact, MasterKey, Settings
from tests.utils import cleanup, TFCTestCase
from tests.utils import cd_unittest, cleanup, nick_to_onion_address, nick_to_pub_key, tamper_file, TFCTestCase
class TestContact(unittest.TestCase):
def setUp(self):
self.contact = Contact(nick_to_pub_key('Bob'),
'Bob',
FINGERPRINT_LENGTH * b'\x01',
FINGERPRINT_LENGTH * b'\x02',
KEX_STATUS_UNVERIFIED,
log_messages =True,
file_reception=True,
notifications =True)
def test_contact_serialization_length_and_type(self):
serialized = create_contact().serialize_c()
serialized = self.contact.serialize_c()
self.assertEqual(len(serialized), CONTACT_LENGTH)
self.assertIsInstance(serialized, bytes)
def test_uses_psk(self):
for kex_status in [KEX_STATUS_NO_RX_PSK, KEX_STATUS_HAS_RX_PSK]:
self.contact.kex_status = kex_status
self.assertTrue(self.contact.uses_psk())
for kex_status in [KEX_STATUS_NONE, KEX_STATUS_PENDING, KEX_STATUS_UNVERIFIED,
KEX_STATUS_VERIFIED, KEX_STATUS_LOCAL_KEY]:
self.contact.kex_status = kex_status
self.assertFalse(self.contact.uses_psk())
class TestContactList(TFCTestCase):
def setUp(self):
self.unittest_dir = cd_unittest()
self.master_key = MasterKey()
self.settings = Settings()
self.file_name = f'{DIR_USER_DATA}{self.settings.software_operation}_contacts'
self.contact_list = ContactList(self.master_key, self.settings)
self.contact_list.contacts = list(map(create_contact, ['Alice', 'Benny', 'Charlie', 'David', 'Eric']))
self.full_contact_list = ['Alice', 'Bob', 'Charlie', 'David', 'Eric', LOCAL_ID]
self.contact_list.contacts = list(map(create_contact, self.full_contact_list))
self.real_contact_list = self.full_contact_list[:]
self.real_contact_list.remove(LOCAL_ID)
def tearDown(self):
cleanup()
cleanup(self.unittest_dir)
def test_contact_list_iterates_over_contact_objects(self):
for c in self.contact_list:
self.assertIsInstance(c, Contact)
def test_len_returns_number_of_contacts(self):
self.assertEqual(len(self.contact_list), 5)
def test_len_returns_the_number_of_contacts_and_excludes_the_local_key(self):
self.assertEqual(len(self.contact_list),
len(self.real_contact_list))
def test_storing_and_loading_of_contacts(self):
# Test store
self.contact_list.store_contacts()
self.assertTrue(os.path.isfile(f'{DIR_USER_DATA}ut_contacts'))
self.assertEqual(os.path.getsize(f'{DIR_USER_DATA}ut_contacts'),
XSALSA20_NONCE_LEN
+ self.settings.max_number_of_contacts * CONTACT_LENGTH
+ POLY1305_TAG_LEN)
self.assertEqual(os.path.getsize(self.file_name),
XCHACHA20_NONCE_LENGTH
+ (self.settings.max_number_of_contacts + 1) * CONTACT_LENGTH
+ POLY1305_TAG_LENGTH)
# Test load
contact_list2 = ContactList(self.master_key, self.settings)
self.assertEqual(len(contact_list2), 5)
self.assertEqual(len(contact_list2), len(self.real_contact_list))
self.assertEqual(len(contact_list2.contacts), len(self.full_contact_list))
for c in contact_list2:
self.assertIsInstance(c, Contact)
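The size assertion reflects a deliberate design property: before encryption, the contact database is padded with dummy contacts up to `max_number_of_contacts` (plus one record for the local key), so the ciphertext length is constant and the file size leaks nothing about how many contacts actually exist. The arithmetic can be sketched as follows (the 24-byte nonce and 16-byte tag are the standard XChaCha20-Poly1305 sizes; the record length and cap below are illustrative assumptions, not TFC's real constants):

```python
XCHACHA20_NONCE_LENGTH = 24  # standard XChaCha20 nonce size
POLY1305_TAG_LENGTH    = 16  # standard Poly1305 authenticator size

def padded_db_size(record_length: int, max_records: int) -> int:
    """Ciphertext size of a database padded to a fixed record count:
    nonce + (constant number of records) + authenticator tag."""
    return XCHACHA20_NONCE_LENGTH + max_records * record_length + POLY1305_TAG_LENGTH

# Illustrative values only (not TFC's real CONTACT_LENGTH or contact cap):
assumed_contact_length = 1124
size = padded_db_size(assumed_contact_length, max_records=50 + 1)  # +1 for local key

# Whether one contact or fifty exist, the stored file is the same size.
assert size == 24 + 51 * assumed_contact_length + 16
```

The padding also explains `_dummy_contacts()` below: it generates exactly `max_number_of_contacts - len(real_contacts)` filler records.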
def test_load_of_modified_database_raises_critical_error(self):
self.contact_list.store_contacts()
# Test reading works normally
self.assertIsInstance(ContactList(self.master_key, self.settings), ContactList)
# Test loading of tampered database raises CriticalError
tamper_file(self.file_name, tamper_size=1)
with self.assertRaises(SystemExit):
ContactList(self.master_key, self.settings)
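`tamper_file` flips bytes of the stored ciphertext, so the Poly1305 authenticator no longer verifies and decryption is rejected, which the loader escalates to a `CriticalError` (hence the `SystemExit`). The authenticate-before-use principle can be sketched with the standard library's HMAC (an analogy only; TFC uses XChaCha20-Poly1305, not HMAC-SHA256):

```python
import hashlib
import hmac
import os

key  = os.urandom(32)
data = b'serialized contact database'
tag  = hmac.new(key, data, hashlib.sha256).digest()

# Untampered data verifies...
assert hmac.compare_digest(hmac.new(key, data, hashlib.sha256).digest(), tag)

# ...but flipping a single bit anywhere invalidates the authenticator,
# so tampered data is rejected before it is ever parsed.
tampered = bytes([data[0] ^ 0x01]) + data[1:]
assert not hmac.compare_digest(hmac.new(key, tampered, hashlib.sha256).digest(), tag)
```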
def test_generate_dummy_contact(self):
dummy_contact = ContactList.generate_dummy_contact()
self.assertIsInstance(dummy_contact, Contact)
self.assertEqual(len(dummy_contact.serialize_c()), CONTACT_LENGTH)
def test_dummy_contacts(self):
dummies = self.contact_list._dummy_contacts()
self.assertEqual(len(dummies), self.settings.max_number_of_contacts - len(self.real_contact_list))
for c in dummies:
self.assertIsInstance(c, Contact)
def test_add_contact(self):
self.assertIsNone(self.contact_list.add_contact(f'faye@jabber.org', 'bob@jabber.org', f'Faye',
FINGERPRINT_LEN * b'\x03',
FINGERPRINT_LEN * b'\x04',
True, True, True))
tx_fingerprint = FINGERPRINT_LENGTH * b'\x03'
rx_fingerprint = FINGERPRINT_LENGTH * b'\x04'
self.assertIsNone(self.contact_list.add_contact(nick_to_pub_key('Faye'),
'Faye',
tx_fingerprint,
rx_fingerprint,
KEX_STATUS_UNVERIFIED,
self.settings.log_messages_by_default,
self.settings.accept_files_by_default,
self.settings.show_notifications_by_default))
# Test new contact was stored by loading
# the database from file to another object
contact_list2 = ContactList(MasterKey(), Settings())
c_alice = contact_list2.get_contact('Alice')
c_faye = contact_list2.get_contact('Faye')
faye = contact_list2.get_contact_by_pub_key(nick_to_pub_key('Faye'))
self.assertEqual(len(self.contact_list), 6)
self.assertIsInstance(c_alice, Contact)
self.assertEqual(c_alice.tx_fingerprint, FINGERPRINT_LEN * b'\x01')
self.assertEqual(c_faye.tx_fingerprint, FINGERPRINT_LEN * b'\x03')
self.assertEqual(len(self.contact_list), len(self.real_contact_list)+1)
self.assertIsInstance(faye, Contact)
def test_replace_existing_contact(self):
c_alice = self.contact_list.get_contact('Alice')
self.assertEqual(c_alice.tx_fingerprint, FINGERPRINT_LEN * b'\x01')
self.assertEqual(faye.tx_fingerprint, tx_fingerprint)
self.assertEqual(faye.rx_fingerprint, rx_fingerprint)
self.assertEqual(faye.kex_status, KEX_STATUS_UNVERIFIED)
self.assertIsNone(self.contact_list.add_contact(f'alice@jabber.org', 'bob@jabber.org', f'Alice',
FINGERPRINT_LEN * b'\x03',
FINGERPRINT_LEN * b'\x04',
True, True, True))
self.assertEqual(faye.log_messages, self.settings.log_messages_by_default)
self.assertEqual(faye.file_reception, self.settings.accept_files_by_default)
self.assertEqual(faye.notifications, self.settings.show_notifications_by_default)
def test_add_contact_that_replaces_an_existing_contact(self):
alice = self.contact_list.get_contact_by_pub_key(nick_to_pub_key('Alice'))
new_nick = 'Alice2'
new_tx_fingerprint = FINGERPRINT_LENGTH * b'\x03'
new_rx_fingerprint = FINGERPRINT_LENGTH * b'\x04'
# Verify that existing nick, kex status and fingerprints are
# different from those that will replace the existing data
self.assertNotEqual(alice.nick, new_nick)
self.assertNotEqual(alice.tx_fingerprint, new_tx_fingerprint)
self.assertNotEqual(alice.rx_fingerprint, new_rx_fingerprint)
self.assertNotEqual(alice.kex_status, KEX_STATUS_UNVERIFIED)
# Make sure each contact setting is opposite from default value
alice.log_messages = not self.settings.log_messages_by_default
alice.file_reception = not self.settings.accept_files_by_default
alice.notifications = not self.settings.show_notifications_by_default
# Replace the existing contact
self.assertIsNone(self.contact_list.add_contact(nick_to_pub_key('Alice'),
new_nick,
new_tx_fingerprint,
new_rx_fingerprint,
KEX_STATUS_UNVERIFIED,
self.settings.log_messages_by_default,
self.settings.accept_files_by_default,
self.settings.show_notifications_by_default))
# Load database to another object from
# file to verify new contact was stored
contact_list2 = ContactList(MasterKey(), Settings())
c_alice = contact_list2.get_contact('Alice')
alice = contact_list2.get_contact_by_pub_key(nick_to_pub_key('Alice'))
self.assertEqual(len(self.contact_list), 5)
self.assertIsInstance(c_alice, Contact)
self.assertEqual(c_alice.tx_fingerprint, FINGERPRINT_LEN * b'\x03')
# Verify the content of loaded data
self.assertEqual(len(contact_list2), len(self.real_contact_list))
self.assertIsInstance(alice, Contact)
def test_remove_contact(self):
self.assertTrue(self.contact_list.has_contact('Benny'))
self.assertTrue(self.contact_list.has_contact('Charlie'))
# Test replaced contact replaced nick, fingerprints and kex status
self.assertEqual(alice.nick, new_nick)
self.assertEqual(alice.tx_fingerprint, new_tx_fingerprint)
self.assertEqual(alice.rx_fingerprint, new_rx_fingerprint)
self.assertEqual(alice.kex_status, KEX_STATUS_UNVERIFIED)
self.contact_list.remove_contact('benny@jabber.org')
self.assertFalse(self.contact_list.has_contact('Benny'))
# Test replaced contact kept settings set
# to be opposite from default settings
self.assertNotEqual(alice.log_messages, self.settings.log_messages_by_default)
self.assertNotEqual(alice.file_reception, self.settings.accept_files_by_default)
self.assertNotEqual(alice.notifications, self.settings.show_notifications_by_default)
self.contact_list.remove_contact('Charlie')
self.assertFalse(self.contact_list.has_contact('Charlie'))
def test_remove_contact_by_pub_key(self):
# Verify both contacts exist
self.assertTrue(self.contact_list.has_pub_key(nick_to_pub_key('Bob')))
self.assertTrue(self.contact_list.has_pub_key(nick_to_pub_key('Charlie')))
def test_get_contact(self):
for selector in ['benny@jabber.org', 'Benny']:
contact = self.contact_list.get_contact(selector)
self.assertIsInstance(contact, Contact)
self.assertEqual(contact.rx_account, 'benny@jabber.org')
self.assertIsNone(self.contact_list.remove_contact_by_pub_key(nick_to_pub_key('Bob')))
self.assertFalse(self.contact_list.has_pub_key(nick_to_pub_key('Bob')))
self.assertTrue(self.contact_list.has_pub_key(nick_to_pub_key('Charlie')))
def test_remove_contact_by_address_or_nick(self):
# Verify both contacts exist
self.assertTrue(self.contact_list.has_pub_key(nick_to_pub_key('Bob')))
self.assertTrue(self.contact_list.has_pub_key(nick_to_pub_key('Charlie')))
# Test removal with address
self.assertIsNone(self.contact_list.remove_contact_by_address_or_nick(nick_to_onion_address('Bob')))
self.assertFalse(self.contact_list.has_pub_key(nick_to_pub_key('Bob')))
self.assertTrue(self.contact_list.has_pub_key(nick_to_pub_key('Charlie')))
# Test removal with nick
self.assertIsNone(self.contact_list.remove_contact_by_address_or_nick('Charlie'))
self.assertFalse(self.contact_list.has_pub_key(nick_to_pub_key('Bob')))
self.assertFalse(self.contact_list.has_pub_key(nick_to_pub_key('Charlie')))
def test_get_contact_by_pub_key(self):
self.assertIs(self.contact_list.get_contact_by_pub_key(nick_to_pub_key('Bob')),
self.contact_list.get_contact_by_address_or_nick('Bob'))
def test_get_contact_by_address_or_nick_returns_same_contact_with_address_and_nick(self):
for selector in [nick_to_onion_address('Bob'), 'Bob']:
self.assertIsInstance(self.contact_list.get_contact_by_address_or_nick(selector), Contact)
self.assertIs(self.contact_list.get_contact_by_address_or_nick('Bob'),
self.contact_list.get_contact_by_address_or_nick(nick_to_onion_address('Bob')))
def test_get_list_of_contacts(self):
self.assertEqual(len(self.contact_list.get_list_of_contacts()),
len(self.real_contact_list))
for c in self.contact_list.get_list_of_contacts():
self.assertIsInstance(c, Contact)
def test_get_list_of_accounts(self):
self.assertEqual(self.contact_list.get_list_of_accounts(),
['alice@jabber.org', 'benny@jabber.org',
'charlie@jabber.org', 'david@jabber.org',
'eric@jabber.org'])
def test_get_list_of_addresses(self):
self.assertEqual(self.contact_list.get_list_of_addresses(),
[nick_to_onion_address('Alice'),
nick_to_onion_address('Bob'),
nick_to_onion_address('Charlie'),
nick_to_onion_address('David'),
nick_to_onion_address('Eric')])
def test_get_list_of_nicks(self):
self.assertEqual(self.contact_list.get_list_of_nicks(),
['Alice', 'Benny', 'Charlie', 'David', 'Eric'])
['Alice', 'Bob', 'Charlie', 'David', 'Eric'])
def test_get_list_of_users_accounts(self):
self.assertEqual(self.contact_list.get_list_of_users_accounts(), ['user@jabber.org'])
def test_get_list_of_pub_keys(self):
self.assertEqual(self.contact_list.get_list_of_pub_keys(),
[nick_to_pub_key('Alice'),
nick_to_pub_key('Bob'),
nick_to_pub_key('Charlie'),
nick_to_pub_key('David'),
nick_to_pub_key('Eric')])
def test_get_list_of_pending_pub_keys(self):
# Set key exchange statuses to pending
for nick in ['Alice', 'Bob']:
contact = self.contact_list.get_contact_by_address_or_nick(nick)
contact.kex_status = KEX_STATUS_PENDING
# Test pending contacts are returned
self.assertEqual(self.contact_list.get_list_of_pending_pub_keys(),
[nick_to_pub_key('Alice'),
nick_to_pub_key('Bob')])
def test_get_list_of_existing_pub_keys(self):
self.contact_list.get_contact_by_address_or_nick('Alice').kex_status = KEX_STATUS_UNVERIFIED
self.contact_list.get_contact_by_address_or_nick('Bob').kex_status = KEX_STATUS_VERIFIED
self.contact_list.get_contact_by_address_or_nick('Charlie').kex_status = KEX_STATUS_HAS_RX_PSK
self.contact_list.get_contact_by_address_or_nick('David').kex_status = KEX_STATUS_NO_RX_PSK
self.contact_list.get_contact_by_address_or_nick('Eric').kex_status = KEX_STATUS_PENDING
self.assertEqual(self.contact_list.get_list_of_existing_pub_keys(),
[nick_to_pub_key('Alice'),
nick_to_pub_key('Bob'),
nick_to_pub_key('Charlie'),
nick_to_pub_key('David')])
def test_contact_selectors(self):
self.assertEqual(self.contact_list.contact_selectors(),
['alice@jabber.org', 'benny@jabber.org', 'charlie@jabber.org',
'david@jabber.org', 'eric@jabber.org',
'Alice', 'Benny', 'Charlie', 'David', 'Eric'])
[nick_to_onion_address('Alice'),
nick_to_onion_address('Bob'),
nick_to_onion_address('Charlie'),
nick_to_onion_address('David'),
nick_to_onion_address('Eric'),
'Alice', 'Bob', 'Charlie', 'David', 'Eric'])
def test_has_contacts(self):
self.assertTrue(self.contact_list.has_contacts())
self.contact_list.contacts = []
self.assertFalse(self.contact_list.has_contacts())
def test_has_contact(self):
def test_has_only_pending_contacts(self):
# Change all to pending
for contact in self.contact_list.get_list_of_contacts():
contact.kex_status = KEX_STATUS_PENDING
self.assertTrue(self.contact_list.has_only_pending_contacts())
# Change one from pending
alice = self.contact_list.get_contact_by_address_or_nick('Alice')
alice.kex_status = KEX_STATUS_UNVERIFIED
self.assertFalse(self.contact_list.has_only_pending_contacts())
def test_has_pub_key(self):
self.contact_list.contacts = []
self.assertFalse(self.contact_list.has_contact('Benny'))
self.assertFalse(self.contact_list.has_contact('bob@jabber.org'))
self.assertFalse(self.contact_list.has_pub_key(nick_to_pub_key('Bob')))
self.assertFalse(self.contact_list.has_pub_key(nick_to_pub_key('Bob')))
self.contact_list.contacts = list(map(create_contact, ['Bob', 'Charlie']))
self.assertTrue(self.contact_list.has_contact('Bob'))
self.assertTrue(self.contact_list.has_contact('charlie@jabber.org'))
self.assertTrue(self.contact_list.has_pub_key(nick_to_pub_key('Bob')))
self.assertTrue(self.contact_list.has_pub_key(nick_to_pub_key('Charlie')))
def test_has_local_contact(self):
self.contact_list.contacts = []
self.assertFalse(self.contact_list.has_local_contact())
self.contact_list.contacts.append(create_contact(LOCAL_ID))
self.contact_list.contacts = [create_contact(LOCAL_ID)]
self.assertTrue(self.contact_list.has_local_contact())
def test_contact_printing(self):
def test_print_contacts(self):
self.contact_list.contacts.append(create_contact(LOCAL_ID))
self.contact_list.get_contact('Alice').log_messages = False
self.contact_list.get_contact('Benny').notifications = False
self.contact_list.get_contact('Charlie').file_reception = False
self.contact_list.get_contact('David').tx_fingerprint = bytes(FINGERPRINT_LEN)
self.assertPrints(CLEAR_ENTIRE_SCREEN + CURSOR_LEFT_UP_CORNER + """\
self.contact_list.get_contact_by_pub_key(nick_to_pub_key('Alice')).log_messages = False
self.contact_list.get_contact_by_pub_key(nick_to_pub_key('Alice')).kex_status = KEX_STATUS_PENDING
self.contact_list.get_contact_by_pub_key(nick_to_pub_key('Bob')).notifications = False
self.contact_list.get_contact_by_pub_key(nick_to_pub_key('Charlie')).kex_status = KEX_STATUS_UNVERIFIED
self.contact_list.get_contact_by_pub_key(nick_to_pub_key('Bob')).file_reception = False
self.contact_list.get_contact_by_pub_key(nick_to_pub_key('Bob')).kex_status = KEX_STATUS_VERIFIED
self.contact_list.get_contact_by_pub_key(nick_to_pub_key('David')).rx_fingerprint = bytes(FINGERPRINT_LENGTH)
self.contact_list.get_contact_by_pub_key(nick_to_pub_key('David')).kex_status = bytes(KEX_STATUS_NO_RX_PSK)
self.assert_prints(CLEAR_ENTIRE_SCREEN + CURSOR_LEFT_UP_CORNER + f"""\
Contact Logging Notify Files Key Ex Account
Contact Account Logging Notify Files Key Ex
Alice No Yes Accept X25519 alice@jabber.org
Benny Yes No Accept X25519 benny@jabber.org
Charlie Yes Yes Reject X25519 charlie@jabber.org
David Yes Yes Accept PSK david@jabber.org
Eric Yes Yes Accept X25519 eric@jabber.org
Alice hpcra No Yes Accept {ECDHE} (Pending)
Bob zwp3d Yes No Reject {ECDHE} (Verified)
Charlie n2a3c Yes Yes Accept {ECDHE} (Unverified)
David u22uy Yes Yes Accept {PSK} (No contact key)
Eric jszzy Yes Yes Accept {ECDHE} (Verified)
""", self.contact_list.print_contacts)


@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
Copyright (C) 2013-2017 Markus Ottela
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,7 +16,7 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <http://www.gnu.org/licenses/>.
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
@@ -23,223 +24,331 @@ import unittest
from src.common.db_contacts import Contact, ContactList
from src.common.db_groups import Group, GroupList
from src.common.encoding import b58encode
from src.common.misc import ensure_dir
from src.common.statics import *
from tests.mock_classes import create_contact, MasterKey, Settings
from tests.utils import cleanup, TFCTestCase
from tests.mock_classes import create_contact, group_name_to_group_id, MasterKey, nick_to_pub_key, Settings
from tests.utils import cd_unittest, cleanup, tamper_file, TFCTestCase
class TestGroup(unittest.TestCase):
def setUp(self):
members = list(map(create_contact, ['Alice', 'Bob', 'Charlie']))
self.settings = Settings()
self.group = Group('testgroup', False, False, members, self.settings, lambda: None)
self.unittest_dir = cd_unittest()
self.nicks = ['Alice', 'Bob', 'Charlie']
members = list(map(create_contact, self.nicks))
self.settings = Settings()
self.group = Group(name ='test_group',
group_id =group_name_to_group_id('test_group'),
log_messages =False,
notifications=False,
members =members,
settings =self.settings,
store_groups =lambda: None)
ensure_dir(DIR_USER_DATA)
def tearDown(self):
cleanup()
cleanup(self.unittest_dir)
def test_group_iterates_over_contact_objects(self):
for c in self.group:
self.assertIsInstance(c, Contact)
def test_len_returns_number_of_members(self):
self.assertEqual(len(self.group), 3)
def test_len_returns_the_number_of_members(self):
self.assertEqual(len(self.group), len(self.nicks))
def test_serialize_g(self):
def test_group_serialization_length_and_type(self):
serialized = self.group.serialize_g()
self.assertIsInstance(serialized, bytes)
self.assertEqual(len(serialized),
PADDED_UTF32_STR_LEN
+ (2 * BOOLEAN_SETTING_LEN)
+ (self.settings.max_number_of_group_members * PADDED_UTF32_STR_LEN))
self.assertEqual(len(serialized), GROUP_STATIC_LENGTH + (self.settings.max_number_of_group_members
* ONION_SERVICE_PUBLIC_KEY_LENGTH))
def test_add_members(self):
self.group.members = []
self.assertFalse(self.group.has_member('david@jabber.org'))
self.assertFalse(self.group.has_member('eric@jabber.org'))
# Test members to be added are not already in group
self.assertFalse(self.group.has_member(nick_to_pub_key('David')))
self.assertFalse(self.group.has_member(nick_to_pub_key('Eric')))
self.group.add_members([create_contact(n) for n in ['David', 'Eric']])
self.assertTrue(self.group.has_member('david@jabber.org'))
self.assertTrue(self.group.has_member('eric@jabber.org'))
self.assertIsNone(self.group.add_members(list(map(create_contact, ['Alice', 'David', 'Eric']))))
# Test new members were added
self.assertTrue(self.group.has_member(nick_to_pub_key('David')))
self.assertTrue(self.group.has_member(nick_to_pub_key('Eric')))
# Test Alice was not added twice
self.assertEqual(len(self.group), len(['Alice', 'Bob', 'Charlie', 'David', 'Eric']))
def test_remove_members(self):
self.assertTrue(self.group.has_member('alice@jabber.org'))
self.assertTrue(self.group.has_member('bob@jabber.org'))
self.assertTrue(self.group.has_member('charlie@jabber.org'))
# Test members to be removed are part of group
self.assertTrue(self.group.has_member(nick_to_pub_key('Alice')))
self.assertTrue(self.group.has_member(nick_to_pub_key('Bob')))
self.assertTrue(self.group.has_member(nick_to_pub_key('Charlie')))
self.assertTrue(self.group.remove_members(['charlie@jabber.org', 'eric@jabber.org']))
self.assertFalse(self.group.remove_members(['charlie@jabber.org', 'eric@jabber.org']))
# Test first attempt to remove returns True (because Charlie was removed)
self.assertTrue(self.group.remove_members([nick_to_pub_key('Charlie'), nick_to_pub_key('Unknown')]))
self.assertTrue(self.group.has_member('alice@jabber.org'))
self.assertTrue(self.group.has_member('bob@jabber.org'))
self.assertFalse(self.group.has_member('charlie@jabber.org'))
# Test second attempt to remove returns False (because no-one was removed)
self.assertFalse(self.group.remove_members([nick_to_pub_key('Charlie'), nick_to_pub_key('Unknown')]))
def test_get_list_of_member_accounts(self):
self.assertEqual(self.group.get_list_of_member_accounts(),
['alice@jabber.org', 'bob@jabber.org', 'charlie@jabber.org'])
# Test Charlie was removed
self.assertFalse(self.group.has_member(nick_to_pub_key('Charlie')))
def test_get_list_of_member_nicks(self):
self.assertEqual(self.group.get_list_of_member_nicks(), ['Alice', 'Bob', 'Charlie'])
# Test no other members were removed
self.assertTrue(self.group.has_member(nick_to_pub_key('Alice')))
self.assertTrue(self.group.has_member(nick_to_pub_key('Bob')))
def test_get_list_of_member_pub_keys(self):
self.assertEqual(first=self.group.get_list_of_member_pub_keys(),
second=[nick_to_pub_key('Alice'),
nick_to_pub_key('Bob'),
nick_to_pub_key('Charlie')])
def test_has_member(self):
self.assertTrue(self.group.has_member('charlie@jabber.org'))
self.assertFalse(self.group.has_member('david@jabber.org'))
self.assertTrue(self.group.has_member(nick_to_pub_key('Charlie')))
self.assertFalse(self.group.has_member(nick_to_pub_key('David')))
def test_has_members(self):
self.assertTrue(self.group.has_members())
self.assertFalse(self.group.empty())
self.group.members = []
self.assertFalse(self.group.has_members())
self.assertTrue(self.group.empty())
class TestGroupList(TFCTestCase):
def setUp(self):
self.unittest_dir = cd_unittest()
self.master_key = MasterKey()
self.settings = Settings()
self.file_name = f'{DIR_USER_DATA}{self.settings.software_operation}_groups'
self.contact_list = ContactList(self.master_key, self.settings)
self.group_list = GroupList(self.master_key, self.settings, self.contact_list)
members = [create_contact(n) for n in ['Alice', 'Bob', 'Charlie', 'David', 'Eric',
'Fido', 'Guido', 'Heidi', 'Ivan', 'Joana', 'Karol']]
self.nicks = ['Alice', 'Bob', 'Charlie', 'David', 'Eric',
'Fido', 'Guido', 'Heidi', 'Ivan', 'Joana', 'Karol']
self.group_names = ['test_group_1', 'test_group_2', 'test_group_3', 'test_group_4', 'test_group_5',
'test_group_6', 'test_group_7', 'test_group_8', 'test_group_9', 'test_group_10',
'test_group_11']
members = list(map(create_contact, self.nicks))
self.contact_list.contacts = members
groups = [Group(n, False, False, members, self.settings, self.group_list.store_groups)
for n in ['testgroup_1', 'testgroup_2', 'testgroup_3', 'testgroup_4', 'testgroup_5',
'testgroup_6', 'testgroup_7', 'testgroup_8', 'testgroup_9', 'testgroup_10',
'testgroup_11']]
self.group_list.groups = \
[Group(name =name,
group_id =group_name_to_group_id(name),
log_messages =False,
notifications=False,
members =members,
settings =self.settings,
store_groups =self.group_list.store_groups)
for name in self.group_names]
self.group_list.groups = groups
self.group_list.store_groups()
self.single_member_data = (PADDED_UTF32_STR_LEN
+ (2 * BOOLEAN_SETTING_LEN)
+ (self.settings.max_number_of_group_members * PADDED_UTF32_STR_LEN))
self.single_member_data_len = (GROUP_STATIC_LENGTH
+ self.settings.max_number_of_group_members * ONION_SERVICE_PUBLIC_KEY_LENGTH)
def tearDown(self):
cleanup()
cleanup(self.unittest_dir)
def test_group_list_iterates_over_group_objects(self):
for g in self.group_list:
self.assertIsInstance(g, Group)
def test_len_returns_number_of_groups(self):
self.assertEqual(len(self.group_list), 11)
def test_len_returns_the_number_of_groups(self):
self.assertEqual(len(self.group_list), len(self.group_names))
def test_database_size(self):
self.assertTrue(os.path.isfile(f'{DIR_USER_DATA}ut_groups'))
self.assertEqual(os.path.getsize(f'{DIR_USER_DATA}ut_groups'),
XSALSA20_NONCE_LEN
+ GROUP_DB_HEADER_LEN
+ self.settings.max_number_of_groups * self.single_member_data
+ POLY1305_TAG_LEN)
def test_storing_and_loading_of_groups(self):
self.group_list.store_groups()
self.assertTrue(os.path.isfile(self.file_name))
self.assertEqual(os.path.getsize(self.file_name),
XCHACHA20_NONCE_LENGTH
+ GROUP_DB_HEADER_LENGTH
+ self.settings.max_number_of_groups * self.single_member_data_len
+ POLY1305_TAG_LENGTH)
# Reduce setting values from 20 to 10
self.settings.max_number_of_groups = 10
self.settings.max_number_of_group_members = 10
group_list2 = GroupList(self.master_key, self.settings, self.contact_list)
self.assertEqual(len(group_list2), 11)
# Check that load_groups() function increases setting values with larger db
self.assertEqual(self.settings.max_number_of_groups, 20)
# Check that `_load_groups()` increased setting values back to 20 so it fits the 11 groups
self.assertEqual(self.settings.max_number_of_groups, 20)
self.assertEqual(self.settings.max_number_of_group_members, 20)
# Check that removed contact from contact list updates group
self.contact_list.remove_contact('Alice')
self.contact_list.remove_contact_by_address_or_nick('Alice')
group_list3 = GroupList(self.master_key, self.settings, self.contact_list)
self.assertEqual(len(group_list3.get_group('testgroup_1').members), 10)
self.assertEqual(len(group_list3.get_group('test_group_1').members), 10)
group_list4 = GroupList(self.master_key, self.settings, self.contact_list)
self.assertEqual(len(group_list4.get_group('testgroup_2').members), 10)
def test_load_of_modified_database_raises_critical_error(self):
self.group_list.store_groups()
# Test reading works normally
self.assertIsInstance(GroupList(self.master_key, self.settings, self.contact_list), GroupList)
# Test loading of the tampered database raises CriticalError
tamper_file(self.file_name, tamper_size=1)
with self.assertRaises(SystemExit):
GroupList(self.master_key, self.settings, self.contact_list)
def test_check_db_settings(self):
self.assertFalse(self.group_list._check_db_settings(
number_of_actual_groups=self.settings.max_number_of_groups,
members_in_largest_group=self.settings.max_number_of_group_members))
self.assertTrue(self.group_list._check_db_settings(
number_of_actual_groups=self.settings.max_number_of_groups + 1,
members_in_largest_group=self.settings.max_number_of_group_members))
self.assertTrue(self.group_list._check_db_settings(
number_of_actual_groups=self.settings.max_number_of_groups,
members_in_largest_group=self.settings.max_number_of_group_members + 1))
def test_generate_group_db_header(self):
header = self.group_list.generate_group_db_header()
self.assertEqual(len(header), GROUP_DB_HEADER_LEN)
header = self.group_list._generate_group_db_header()
self.assertEqual(len(header), GROUP_DB_HEADER_LENGTH)
self.assertIsInstance(header, bytes)
def test_generate_dummy_group(self):
dummy_group = self.group_list.generate_dummy_group()
self.assertEqual(len(dummy_group.serialize_g()), self.single_member_data)
dummy_group = self.group_list._generate_dummy_group()
self.assertIsInstance(dummy_group, Group)
self.assertEqual(len(dummy_group.serialize_g()), self.single_member_data_len)
def test_dummy_groups(self):
dummies = self.group_list._dummy_groups()
self.assertEqual(len(dummies), self.settings.max_number_of_contacts - len(self.nicks))
for g in dummies:
self.assertIsInstance(g, Group)
def test_add_group(self):
members = [create_contact('Laura')]
self.group_list.add_group('testgroup_12', False, False, members)
self.group_list.add_group('testgroup_12', False, True, members)
self.assertTrue(self.group_list.get_group('testgroup_12').notifications)
self.assertEqual(len(self.group_list), 12)
self.group_list.add_group('test_group_12', bytes(GROUP_ID_LENGTH), False, False, members)
self.group_list.add_group('test_group_12', bytes(GROUP_ID_LENGTH), False, True, members)
self.assertTrue(self.group_list.get_group('test_group_12').notifications)
self.assertEqual(len(self.group_list), len(self.group_names)+1)
def test_remove_group(self):
self.assertEqual(len(self.group_list), 11)
def test_remove_group_by_name(self):
self.assertEqual(len(self.group_list), len(self.group_names))
self.assertIsNone(self.group_list.remove_group('testgroup_12'))
self.assertEqual(len(self.group_list), 11)
# Remove non-existing group
self.assertIsNone(self.group_list.remove_group_by_name('test_group_12'))
self.assertEqual(len(self.group_list), len(self.group_names))
self.assertIsNone(self.group_list.remove_group('testgroup_11'))
self.assertEqual(len(self.group_list), 10)
# Remove existing group
self.assertIsNone(self.group_list.remove_group_by_name('test_group_11'))
self.assertEqual(len(self.group_list), len(self.group_names)-1)
def test_get_list_of_group_names(self):
g_names = ['testgroup_1', 'testgroup_2', 'testgroup_3', 'testgroup_4', 'testgroup_5', 'testgroup_6',
'testgroup_7', 'testgroup_8', 'testgroup_9', 'testgroup_10', 'testgroup_11']
self.assertEqual(self.group_list.get_list_of_group_names(), g_names)
def test_remove_group_by_id(self):
self.assertEqual(len(self.group_list), len(self.group_names))
# Remove non-existing group
self.assertIsNone(self.group_list.remove_group_by_id(group_name_to_group_id('test_group_12')))
self.assertEqual(len(self.group_list), len(self.group_names))
# Remove existing group
self.assertIsNone(self.group_list.remove_group_by_id(group_name_to_group_id('test_group_11')))
self.assertEqual(len(self.group_list), len(self.group_names)-1)
def test_get_group(self):
self.assertEqual(self.group_list.get_group('testgroup_3').name, 'testgroup_3')
self.assertEqual(self.group_list.get_group('test_group_3').name, 'test_group_3')
def test_get_group_by_id(self):
members = [create_contact('Laura')]
group_id = os.urandom(GROUP_ID_LENGTH)
self.group_list.add_group('test_group_12', group_id, False, False, members)
self.assertEqual(self.group_list.get_group_by_id(group_id).name, 'test_group_12')
def test_get_list_of_group_names(self):
self.assertEqual(self.group_list.get_list_of_group_names(), self.group_names)
def test_get_list_of_group_ids(self):
self.assertEqual(self.group_list.get_list_of_group_ids(),
list(map(group_name_to_group_id, self.group_names)))
def test_get_list_of_hr_group_ids(self):
self.assertEqual(self.group_list.get_list_of_hr_group_ids(),
[b58encode(gid) for gid in list(map(group_name_to_group_id, self.group_names))])
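`get_list_of_hr_group_ids` Base58-encodes the random group IDs into the short human-readable strings shown by `print_groups` below. A minimal plain Base58 encoder over the Bitcoin alphabet can be sketched in pure Python; this is an illustration only, and TFC's own `b58encode` may differ in details such as checksumming:

```python
B58_ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58encode_plain(data: bytes) -> str:
    """Plain Base58 (no checksum): big-endian base-58 conversion,
    with one leading '1' emitted per leading zero byte."""
    n = int.from_bytes(data, 'big')
    encoded = ''
    while n:
        n, rem = divmod(n, 58)
        encoded = B58_ALPHABET[rem] + encoded
    pad = len(data) - len(data.lstrip(b'\x00'))
    return '1' * pad + encoded

assert b58encode_plain(b'\x61') == '2g'       # well-known Base58 test vector
assert b58encode_plain(b'\x00\x61') == '12g'  # leading zero byte -> leading '1'
```

Because the alphabet drops the visually ambiguous characters `0`, `O`, `I`, and `l`, the encoded IDs are suited to being read aloud or compared by eye.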
def test_get_group_members(self):
members = self.group_list.get_group_members('testgroup_1')
members = self.group_list.get_group_members(group_name_to_group_id('test_group_1'))
for c in members:
self.assertIsInstance(c, Contact)
def test_has_group(self):
self.assertTrue(self.group_list.has_group('testgroup_11'))
self.assertFalse(self.group_list.has_group('testgroup_12'))
self.assertTrue(self.group_list.has_group('test_group_11'))
self.assertFalse(self.group_list.has_group('test_group_12'))
def test_has_groups(self):
self.assertTrue(self.group_list.has_groups())
self.group_list.groups = []
self.assertFalse(self.group_list.has_groups())
def test_has_group_id(self):
members = [create_contact('Laura')]
group_id = os.urandom(GROUP_ID_LENGTH)
self.assertFalse(self.group_list.has_group_id(group_id))
self.group_list.add_group('test_group_12', group_id, False, False, members)
self.assertTrue(self.group_list.has_group_id(group_id))
def test_largest_group(self):
self.assertEqual(self.group_list.largest_group(), 11)
self.assertEqual(self.group_list.largest_group(), len(self.nicks))
def test_print_group(self):
self.group_list.get_group("test_group_1").name = "group"
self.group_list.get_group("test_group_2").log_messages = True
self.group_list.get_group("test_group_3").notifications = True
self.group_list.get_group("test_group_4").log_messages = True
self.group_list.get_group("test_group_4").notifications = True
self.group_list.get_group("test_group_5").members = []
self.group_list.get_group("test_group_6").members = list(map(create_contact, ['Alice', 'Bob', 'Charlie',
'David', 'Eric', 'Fido']))
self.assert_prints("""\
Group Group ID Logging Notify Members
group 2drs4c4VcDdrP No No Alice, Bob, Charlie,
David, Eric, Fido,
Guido, Heidi, Ivan,
Joana, Karol
test_group_2 2dnGTyhkThmPi Yes No Alice, Bob, Charlie,
David, Eric, Fido,
Guido, Heidi, Ivan,
Joana, Karol
test_group_3 2df7s3LZhwLDw No Yes Alice, Bob, Charlie,
David, Eric, Fido,
Guido, Heidi, Ivan,
Joana, Karol
test_group_4 2djy3XwUQVR8q Yes Yes Alice, Bob, Charlie,
David, Eric, Fido,
Guido, Heidi, Ivan,
Joana, Karol
test_group_5 2dvbcgnjiLLMo No No <Empty group>
test_group_6 2dwBRWAqWKHWv No No Alice, Bob, Charlie,
David, Eric, Fido
test_group_7 2eDPg5BAM6qF4 No No Alice, Bob, Charlie,
David, Eric, Fido,
Guido, Heidi, Ivan,
Joana, Karol
test_group_8 2dqdayy5TJKcf No No Alice, Bob, Charlie,
David, Eric, Fido,
Guido, Heidi, Ivan,
Joana, Karol
test_group_9 2e45bLYvSX3C8 No No Alice, Bob, Charlie,
David, Eric, Fido,
Guido, Heidi, Ivan,
Joana, Karol
test_group_10 2dgkncX9xRibh No No Alice, Bob, Charlie,
David, Eric, Fido,
Guido, Heidi, Ivan,
Joana, Karol
test_group_11 2e6vAGmHmSEEJ No No Alice, Bob, Charlie,
David, Eric, Fido,
Guido, Heidi, Ivan,
Joana, Karol
""", self.group_list.print_groups)


@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,134 +16,227 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os.path
import unittest
from src.common.crypto import blake2b
from src.common.db_keys import KeyList, KeySet
from src.common.encoding import int_to_bytes
from src.common.statics import *
from tests.mock_classes import create_keyset, MasterKey, nick_to_pub_key, Settings
from tests.utils import cd_unittest, cleanup, tamper_file
class TestKeySet(unittest.TestCase):
def setUp(self):
self.keyset = KeySet(onion_pub_key=nick_to_pub_key('Alice'),
tx_mk=bytes(SYMMETRIC_KEY_LENGTH),
rx_mk=bytes(SYMMETRIC_KEY_LENGTH),
tx_hk=bytes(SYMMETRIC_KEY_LENGTH),
rx_hk=bytes(SYMMETRIC_KEY_LENGTH),
tx_harac=INITIAL_HARAC,
rx_harac=INITIAL_HARAC,
store_keys=lambda: None)
def test_keyset_serialization_length_and_type(self):
serialized = self.keyset.serialize_k()
self.assertEqual(len(serialized), KEYSET_LENGTH)
self.assertIsInstance(serialized, bytes)
def test_rotate_tx_mk(self):
self.assertIsNone(self.keyset.rotate_tx_mk())
self.assertEqual(self.keyset.tx_mk, blake2b(bytes(SYMMETRIC_KEY_LENGTH) + int_to_bytes(INITIAL_HARAC),
digest_size=SYMMETRIC_KEY_LENGTH))
self.assertEqual(self.keyset.rx_mk, bytes(SYMMETRIC_KEY_LENGTH))
self.assertEqual(self.keyset.tx_hk, bytes(SYMMETRIC_KEY_LENGTH))
self.assertEqual(self.keyset.rx_hk, bytes(SYMMETRIC_KEY_LENGTH))
self.assertEqual(self.keyset.tx_harac, 1)
self.assertEqual(self.keyset.rx_harac, INITIAL_HARAC)
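The assertion in test_rotate_tx_mk describes the hash ratchet: the next transmission message key is the BLAKE2b digest of the current key concatenated with the serialized hash ratchet counter (harac). A sketch of that derivation, assuming a 32-byte SYMMETRIC_KEY_LENGTH and that int_to_bytes yields an 8-byte big-endian integer:

```python
import hashlib

SYMMETRIC_KEY_LENGTH = 32  # assumed constant value
HARAC_SERIALIZED_LENGTH = 8  # assumed: harac serialized as 8-byte big-endian int

def rotate_mk(message_key: bytes, harac: int) -> bytes:
    """Derive the next message key: new_mk = BLAKE2b(mk || harac_bytes)."""
    harac_bytes = harac.to_bytes(HARAC_SERIALIZED_LENGTH, byteorder='big')
    return hashlib.blake2b(message_key + harac_bytes,
                           digest_size=SYMMETRIC_KEY_LENGTH).digest()

# One ratchet step from the all-zero key at INITIAL_HARAC == 0:
tx_mk = rotate_mk(bytes(SYMMETRIC_KEY_LENGTH), 0)
```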
def test_update_tx_mk(self):
self.keyset.update_mk(TX, SYMMETRIC_KEY_LENGTH * b'\x01', 2)
self.assertEqual(self.keyset.tx_mk, SYMMETRIC_KEY_LENGTH * b'\x01')
self.assertEqual(self.keyset.rx_mk, bytes(SYMMETRIC_KEY_LENGTH))
self.assertEqual(self.keyset.tx_hk, bytes(SYMMETRIC_KEY_LENGTH))
self.assertEqual(self.keyset.rx_hk, bytes(SYMMETRIC_KEY_LENGTH))
self.assertEqual(self.keyset.tx_harac, 2)
self.assertEqual(self.keyset.rx_harac, INITIAL_HARAC)
def test_update_rx_mk(self):
self.keyset.update_mk(RX, SYMMETRIC_KEY_LENGTH * b'\x01', 2)
self.assertEqual(self.keyset.tx_mk, bytes(SYMMETRIC_KEY_LENGTH))
self.assertEqual(self.keyset.rx_mk, SYMMETRIC_KEY_LENGTH * b'\x01')
self.assertEqual(self.keyset.tx_hk, bytes(SYMMETRIC_KEY_LENGTH))
self.assertEqual(self.keyset.rx_hk, bytes(SYMMETRIC_KEY_LENGTH))
self.assertEqual(self.keyset.tx_harac, INITIAL_HARAC)
self.assertEqual(self.keyset.rx_harac, 2)
def test_invalid_direction_raises_critical_error(self):
invalid_direction = 'sx'
with self.assertRaises(SystemExit):
self.keyset.update_mk(invalid_direction, SYMMETRIC_KEY_LENGTH * b'\x01', 2)
class TestKeyList(unittest.TestCase):
def setUp(self):
self.unittest_dir = cd_unittest()
self.master_key = MasterKey()
self.settings = Settings()
self.file_name = f'{DIR_USER_DATA}{self.settings.software_operation}_keys'
self.keylist = KeyList(self.master_key, self.settings)
self.full_contact_list = ['Alice', 'Bob', 'Charlie', LOCAL_ID]
self.keylist.keysets = [create_keyset(n, store_f=self.keylist.store_keys) for n in self.full_contact_list]
def tearDown(self):
cleanup(self.unittest_dir)
def test_storing_and_loading_of_keysets(self):
# Test store
self.keylist.store_keys()
self.assertEqual(os.path.getsize(self.file_name),
XCHACHA20_NONCE_LENGTH
+ (self.settings.max_number_of_contacts+1) * KEYSET_LENGTH
+ POLY1305_TAG_LENGTH)
# Test load
key_list2 = KeyList(MasterKey(), Settings())
self.assertEqual(len(key_list2.keysets), len(self.full_contact_list))
def test_load_of_modified_database_raises_critical_error(self):
self.keylist.store_keys()
# Test reading works normally
self.assertIsInstance(KeyList(self.master_key, self.settings), KeyList)
# Test loading of the tampered database raises CriticalError
tamper_file(self.file_name, tamper_size=1)
with self.assertRaises(SystemExit):
KeyList(self.master_key, self.settings)
def test_dummy_keysets(self):
dummies = self.keylist._dummy_keysets()
self.assertEqual(len(dummies), (self.settings.max_number_of_contacts+1) - len(self.full_contact_list))
for c in dummies:
self.assertIsInstance(c, KeySet)
def test_add_keyset(self):
new_key = bytes(SYMMETRIC_KEY_LENGTH)
self.keylist.keysets = [create_keyset(LOCAL_ID)]
# Check that KeySet exists and that its keys are different
self.assertNotEqual(self.keylist.keysets[0].rx_hk, new_key)
# Replace existing KeySet
self.assertIsNone(self.keylist.add_keyset(LOCAL_PUBKEY,
new_key, new_key,
new_key, new_key))
# Check that new KeySet replaced the old one
self.assertEqual(self.keylist.keysets[0].onion_pub_key, LOCAL_PUBKEY)
self.assertEqual(self.keylist.keysets[0].rx_hk, new_key)
def test_remove_keyset(self):
# Test KeySet for Bob exists
self.assertTrue(self.keylist.has_keyset(nick_to_pub_key('Bob')))
# Remove KeySet for Bob
self.assertIsNone(self.keylist.remove_keyset(nick_to_pub_key('Bob')))
# Test KeySet was removed
self.assertFalse(self.keylist.has_keyset(nick_to_pub_key('Bob')))
def test_change_master_key(self):
key = SYMMETRIC_KEY_LENGTH * b'\x01'
master_key2 = MasterKey(master_key=key)
# Test that new key is different from existing one
self.assertNotEqual(key, self.master_key.master_key)
# Change master key
self.assertIsNone(self.keylist.change_master_key(master_key2))
# Test that master key has changed
self.assertEqual(self.keylist.master_key.master_key, key)
# Test that loading of the database with new key succeeds
self.assertIsInstance(KeyList(master_key2, self.settings), KeyList)
def test_update_database(self):
self.assertEqual(os.path.getsize(self.file_name), 9016)
self.assertIsNone(self.keylist.manage(KDB_UPDATE_SIZE_HEADER, Settings(max_number_of_contacts=100)))
self.assertEqual(os.path.getsize(self.file_name), 17816)
self.assertEqual(self.keylist.settings.max_number_of_contacts, 100)
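The database sizes asserted above follow from the constant-size padding scheme: the ciphertext is a nonce, one fixed-length keyset slot per allowed contact plus one slot for the local keyset, and a Poly1305 tag. A sketch of the arithmetic, assuming a 24-byte XChaCha20 nonce, a 16-byte Poly1305 tag, a 176-byte keyset (32-byte Onion Service public key, four 32-byte keys, two 8-byte harac counters) and a default max_number_of_contacts of 50:

```python
XCHACHA20_NONCE_LENGTH = 24           # assumed constant values
POLY1305_TAG_LENGTH    = 16
KEYSET_LENGTH          = 32 + 4*32 + 2*8  # pub key + 4 keys + 2 haracs = 176

def key_db_size(max_number_of_contacts: int) -> int:
    """Ciphertext size of the padded key database (+1 for the local keyset)."""
    return (XCHACHA20_NONCE_LENGTH
            + (max_number_of_contacts + 1) * KEYSET_LENGTH
            + POLY1305_TAG_LENGTH)
```

With these assumed constants, key_db_size(50) works out to 9016 bytes and key_db_size(100) to 17816 bytes, matching the assertions.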
def test_get_keyset(self):
keyset = self.keylist.get_keyset(nick_to_pub_key('Alice'))
self.assertIsInstance(keyset, KeySet)
def test_get_list_of_pub_keys(self):
self.assertEqual(self.keylist.get_list_of_pub_keys(),
[nick_to_pub_key("Alice"),
nick_to_pub_key("Bob"),
nick_to_pub_key("Charlie")])
def test_has_keyset(self):
self.keylist.keysets = []
self.assertFalse(self.keylist.has_keyset(nick_to_pub_key("Alice")))
self.keylist.keysets = [create_keyset('Alice')]
self.assertTrue(self.keylist.has_keyset(nick_to_pub_key("Alice")))
def test_has_rx_mk(self):
self.assertTrue(self.keylist.has_rx_mk(nick_to_pub_key('Bob')))
self.keylist.get_keyset(nick_to_pub_key('Bob')).rx_mk = bytes(SYMMETRIC_KEY_LENGTH)
self.keylist.get_keyset(nick_to_pub_key('Bob')).rx_hk = bytes(SYMMETRIC_KEY_LENGTH)
self.assertFalse(self.keylist.has_rx_mk(nick_to_pub_key('Bob')))
def test_has_local_keyset(self):
self.keylist.keysets = []
self.assertFalse(self.keylist.has_local_keyset())
self.assertIsNone(self.keylist.add_keyset(LOCAL_PUBKEY,
bytes(SYMMETRIC_KEY_LENGTH), bytes(SYMMETRIC_KEY_LENGTH),
bytes(SYMMETRIC_KEY_LENGTH), bytes(SYMMETRIC_KEY_LENGTH)))
self.assertTrue(self.keylist.has_local_keyset())
def test_manage(self):
# Test that KeySet for David does not exist
self.assertFalse(self.keylist.has_keyset(nick_to_pub_key('David')))
# Test adding KeySet
self.assertIsNone(self.keylist.manage(KDB_ADD_ENTRY_HEADER, nick_to_pub_key('David'),
bytes(SYMMETRIC_KEY_LENGTH), bytes(SYMMETRIC_KEY_LENGTH),
bytes(SYMMETRIC_KEY_LENGTH), bytes(SYMMETRIC_KEY_LENGTH)))
self.assertTrue(self.keylist.has_keyset(nick_to_pub_key('David')))
# Test removing KeySet
self.assertIsNone(self.keylist.manage(KDB_REMOVE_ENTRY_HEADER, nick_to_pub_key('David')))
self.assertFalse(self.keylist.has_keyset(nick_to_pub_key('David')))
# Test changing master key
new_key = SYMMETRIC_KEY_LENGTH * b'\x01'
self.assertNotEqual(self.master_key.master_key, new_key)
self.assertIsNone(self.keylist.manage(KDB_CHANGE_MASTER_KEY_HEADER, MasterKey(master_key=new_key)))
self.assertEqual(self.keylist.master_key.master_key, new_key)
# Test updating key_database with new settings changes database size.
self.assertEqual(os.path.getsize(self.file_name), 9016)
self.assertIsNone(self.keylist.manage(KDB_UPDATE_SIZE_HEADER, Settings(max_number_of_contacts=100)))
self.assertEqual(os.path.getsize(self.file_name), 17816)
# Test invalid KeyList management command raises CriticalError
with self.assertRaises(SystemExit):
self.keylist.manage('invalid_key', None)


@@ -2,7 +2,8 @@
# -*- coding: utf-8 -*-
"""
TFC - Onion-routed, endpoint secure messaging system
Copyright (C) 2013-2019 Markus Ottela
This file is part of TFC.
@@ -15,138 +16,219 @@ without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with TFC. If not, see <https://www.gnu.org/licenses/>.
"""
import os
import os.path
import struct
import threading
import time
import unittest
from datetime import datetime
from unittest import mock
from src.common.db_contacts import ContactList
from src.common.db_logs import access_logs, change_log_db_key, log_writer_loop, remove_logs, write_log_entry
from src.common.encoding import bytes_to_timestamp
from src.common.statics import *
from tests.mock_classes import create_contact, GroupList, MasterKey, RxWindow, Settings
from tests.utils import assembly_packet_creator, cd_unittest, cleanup, group_name_to_group_id, nick_to_pub_key
from tests.utils import nick_to_short_address, tear_queues, TFCTestCase, gen_queue_dict
TIMESTAMP_BYTES = bytes.fromhex('08ceae02')
STATIC_TIMESTAMP = bytes_to_timestamp(TIMESTAMP_BYTES).strftime('%H:%M:%S.%f')[:-TIMESTAMP_LENGTH]
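TIMESTAMP_BYTES replaces the earlier struct.pack monkey-patching: the mocked struct.pack returns four fixed bytes that, per the original test code, encode a little-endian unsigned 32-bit Unix time. A sketch of the decode direction (the exact behavior of bytes_to_timestamp is assumed here):

```python
import struct
from datetime import datetime

TIMESTAMP_BYTES = bytes.fromhex('08ceae02')

# Assumption: the timestamp is serialized as a little-endian unsigned
# 32-bit Unix time, as struct.pack('<L', ...) in the pre-change code implies.
unix_time = struct.unpack('<L', TIMESTAMP_BYTES)[0]
stamp = datetime.fromtimestamp(unix_time)  # local-time datetime object
```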
class TestLogWriterLoop(unittest.TestCase):
def setUp(self):
self.unittest_dir = cd_unittest()
def tearDown(self):
cleanup(self.unittest_dir)
def test_function_logs_normal_data(self):
# Setup
settings = Settings()
master_key = MasterKey()
queues = gen_queue_dict()
def queue_delayer():
"""Place messages to queue one at a time."""
for p in [(nick_to_pub_key('Alice'), M_S_HEADER + bytes(PADDING_LENGTH), False, False, master_key),
(None, C_S_HEADER + bytes(PADDING_LENGTH), True, False, master_key),
(nick_to_pub_key('Alice'), P_N_HEADER + bytes(PADDING_LENGTH), True, True, master_key),
(nick_to_pub_key('Alice'), F_S_HEADER + bytes(PADDING_LENGTH), True, True, master_key),
(nick_to_pub_key('Alice'), M_S_HEADER + bytes(PADDING_LENGTH), True, False, master_key)]:
queues[LOG_PACKET_QUEUE].put(p)
time.sleep(0.02)
queues[UNITTEST_QUEUE].put(EXIT)
time.sleep(0.02)
queues[LOG_PACKET_QUEUE].put((
nick_to_pub_key('Alice'), M_S_HEADER + bytes(PADDING_LENGTH), True, False, master_key))
time.sleep(0.02)
# Test
threading.Thread(target=queue_delayer).start()
log_writer_loop(queues, settings, unittest=True)
self.assertEqual(os.path.getsize(f'{DIR_USER_DATA}{settings.software_operation}_logs'), 2*LOG_ENTRY_LENGTH)
# Teardown
tear_queues(queues)
def test_function_logs_traffic_masking_data(self):
# Setup
settings = Settings(log_file_masking=True,
traffic_masking=False)
master_key = MasterKey()
queues = gen_queue_dict()
queues[TRAFFIC_MASKING_QUEUE].put(True)
def queue_delayer():
"""Place messages to queue one at a time."""
for p in [(nick_to_pub_key('Alice'), M_S_HEADER + bytes(PADDING_LENGTH), False, False, master_key),
(None, C_S_HEADER + bytes(PADDING_LENGTH), True, False, master_key),
(nick_to_pub_key('Alice'), F_S_HEADER + bytes(PADDING_LENGTH), True, True, master_key),
(nick_to_pub_key('Alice'), M_S_HEADER + bytes(PADDING_LENGTH), True, False, master_key)]:
queues[LOG_PACKET_QUEUE].put(p)
time.sleep(0.02)
queues[UNITTEST_QUEUE].put(EXIT)
time.sleep(0.02)
queues[LOG_PACKET_QUEUE].put(
(nick_to_pub_key('Alice'), P_N_HEADER + bytes(PADDING_LENGTH), True, True, master_key))
time.sleep(0.02)
# Test
threading.Thread(target=queue_delayer).start()
log_writer_loop(queues, settings, unittest=True)
self.assertEqual(os.path.getsize(f'{DIR_USER_DATA}{settings.software_operation}_logs'), 3*LOG_ENTRY_LENGTH)
# Teardown
tear_queues(queues)
def test_function_log_file_masking_queue_controls_log_file_masking(self):
# Setup
settings = Settings(log_file_masking=False,
traffic_masking=True)
master_key = MasterKey()
queues = gen_queue_dict()
def queue_delayer():
"""Place messages to queue one at a time."""
for p in [(None, C_S_HEADER + bytes(PADDING_LENGTH), True, False, master_key),
(nick_to_pub_key('Alice'), M_S_HEADER + bytes(PADDING_LENGTH), False, False, master_key),
(nick_to_pub_key('Alice'), F_S_HEADER + bytes(PADDING_LENGTH), True, True, master_key)]:
queues[LOG_PACKET_QUEUE].put(p)
time.sleep(0.02)
queues[LOGFILE_MASKING_QUEUE].put(True) # Start logging noise packets
time.sleep(0.02)
for _ in range(2):
queues[LOG_PACKET_QUEUE].put(
(nick_to_pub_key('Alice'), F_S_HEADER + bytes(PADDING_LENGTH), True, True, master_key))
time.sleep(0.02)
queues[UNITTEST_QUEUE].put(EXIT)
time.sleep(0.02)
queues[LOG_PACKET_QUEUE].put(
(nick_to_pub_key('Alice'), M_S_HEADER + bytes(PADDING_LENGTH), True, False, master_key))
time.sleep(0.02)
# Test
threading.Thread(target=queue_delayer).start()
log_writer_loop(queues, settings, unittest=True)
self.assertEqual(os.path.getsize(f'{DIR_USER_DATA}{settings.software_operation}_logs'), 3*LOG_ENTRY_LENGTH)
# Teardown
tear_queues(queues)
def test_function_allows_control_of_noise_packets_based_on_log_setting_queue(self):
# Setup
settings = Settings(log_file_masking=True,
traffic_masking=True)
master_key = MasterKey()
queues = gen_queue_dict()
noise_tuple = (nick_to_pub_key('Alice'), P_N_HEADER + bytes(PADDING_LENGTH), True, True, master_key)
def queue_delayer():
"""Place packets to log into queue after delay."""
for _ in range(5):
queues[LOG_PACKET_QUEUE].put(noise_tuple) # Not logged because logging_state is False by default
time.sleep(0.02)
queues[LOG_SETTING_QUEUE].put(True)
for _ in range(2):
queues[LOG_PACKET_QUEUE].put(noise_tuple) # Log two packets
time.sleep(0.02)
queues[LOG_SETTING_QUEUE].put(False)
for _ in range(3):
queues[LOG_PACKET_QUEUE].put(noise_tuple) # Not logged because logging_state is False
time.sleep(0.02)
queues[UNITTEST_QUEUE].put(EXIT)
queues[LOG_SETTING_QUEUE].put(True)
queues[LOG_PACKET_QUEUE].put(noise_tuple) # Log third packet
# Test
threading.Thread(target=queue_delayer).start()
log_writer_loop(queues, settings, unittest=True)
self.assertEqual(os.path.getsize(f'{DIR_USER_DATA}{settings.software_operation}_logs'), 3*LOG_ENTRY_LENGTH)
# Teardown
tear_queues(queues)
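The three tests above drive log_writer_loop, a consumer that drains the packet queue and adjusts its behavior from side-channel queues (logging setting, log file masking, exit marker). A minimal sketch of that consumer pattern; the function name, queue layout and exit protocol here are all hypothetical:

```python
import queue

def log_writer_sketch(packet_q: "queue.Queue", exit_q: "queue.Queue") -> list:
    """Drain (packet, log_this) tuples until an exit marker has arrived
    and the packet queue is empty; return the packets that would be logged."""
    logged = []
    while True:
        try:
            pkt, log_this = packet_q.get(timeout=0.01)
        except queue.Empty:
            # Only check for the exit marker once pending packets are drained.
            if not exit_q.empty():
                break
            continue
        if log_this:
            logged.append(pkt)
    return logged

pq, eq = queue.Queue(), queue.Queue()
for item in [(b'a', True), (b'b', False), (b'c', True)]:
    pq.put(item)
eq.put('EXIT')
result = log_writer_sketch(pq, eq)
```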
class TestWriteLogEntry(unittest.TestCase):
def setUp(self):
self.unittest_dir = cd_unittest()
self.master_key = MasterKey()
self.settings = Settings()
self.log_file = f'{DIR_USER_DATA}{self.settings.software_operation}_logs'
def tearDown(self):
cleanup(self.unittest_dir)
def test_log_entry_is_concatenated(self):
for i in range(5):
assembly_p = F_S_HEADER + bytes(PADDING_LENGTH)
self.assertIsNone(write_log_entry(assembly_p, nick_to_pub_key('Alice'), self.settings, self.master_key))
self.assertTrue(os.path.getsize(self.log_file), (i+1)*LOG_ENTRY_LENGTH)
class TestAccessHistoryAndPrintLogs(TFCTestCase):
def setUp(self):
self.unittest_dir = cd_unittest()
self.master_key = MasterKey()
self.settings = Settings()
self.window = RxWindow(type=WIN_TYPE_CONTACT,
uid=nick_to_pub_key('Alice'),
name='Alice',
type_print='contact')
self.contact_list = ContactList(self.master_key, self.settings)
self.contact_list.contacts = list(map(create_contact, ['Alice', 'Charlie']))
self.time = STATIC_TIMESTAMP
self.group_list = GroupList(groups=['test_group'])
self.group = self.group_list.get_group('test_group')
self.group.members = self.contact_list.contacts
self.args = self.window, self.contact_list, self.group_list, self.settings, self.master_key
self.msg = ("Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean condimentum consectetur purus quis"
" dapibus. Fusce venenatis lacus ut rhoncus faucibus. Cras sollicitudin commodo sapien, sed bibendu"
@@ -157,305 +239,338 @@ class TestAccessHistoryAndPrintLogs(TFCTestCase):
"utrum, vel malesuada lorem rhoncus. Cras finibus in neque eu euismod. Nulla facilisi. Nunc nec ali"
"quam quam, quis ullamcorper leo. Nunc egestas lectus eget est porttitor, in iaculis felis sceleris"
"que. In sem elit, fringilla id viverra commodo, sagittis varius purus. Pellentesque rutrum loborti"
"s neque a facilisis. Mauris id tortor placerat, aliquam dolor ac, venenatis arcu.")
def tearDown(self):
cleanup(self.unittest_dir)
def test_missing_log_file_raises_fr(self):
self.assert_fr("No log database available.", access_logs, *self.args)
def test_empty_log_file(self):
# Setup
open(f'{DIR_USER_DATA}{self.settings.software_operation}_logs', 'wb+').close()
# Test
self.assert_fr(f"No logged messages for contact '{self.window.name}'.", access_logs, *self.args)
@mock.patch('struct.pack', return_value=TIMESTAMP_BYTES)
def test_display_short_private_message(self, _):
# Setup
# Add a message from user (Bob) to different contact (Charlie). access_logs should not display this message.
for p in assembly_packet_creator(MESSAGE, 'Hi Charlie'):
write_log_entry(p, nick_to_pub_key('Charlie'), self.settings, self.master_key)
# Add a message from contact Alice to user (Bob).
for p in assembly_packet_creator(MESSAGE, 'Hi Bob'):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key, origin=ORIGIN_CONTACT_HEADER)
# Add a message from user (Bob) to Alice.
for p in assembly_packet_creator(MESSAGE, 'Hi Alice'):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
# Test
self.assert_prints((CLEAR_ENTIRE_SCREEN + CURSOR_LEFT_UP_CORNER + f"""\
Log file of message(s) sent to contact Alice
{self.time} Alice: Hi Bob
{self.time} Me: Hi Alice
<End of log file>
"""), access_logs, *self.args)
@mock.patch('struct.pack', return_value=TIMESTAMP_BYTES)
def test_export_short_private_message(self, _):
# Setup
# Test title displayed by the Receiver program.
self.settings.software_operation = RX
# Add a message from contact Alice to user (Bob).
for p in assembly_packet_creator(MESSAGE, 'Hi Bob'):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key, origin=ORIGIN_CONTACT_HEADER)
# Add a message from user (Bob) to Alice.
for p in assembly_packet_creator(MESSAGE, 'Hi Alice'):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
# Test
self.assertIsNone(access_logs(*self.args, export=True))
with open("Receiver - Plaintext log (Alice)") as f:
self.assertEqual(f.read(), f"""\
Log file of message(s) to/from contact Alice
{self.time} Alice: Hi Bob
{self.time} Me: Hi Alice
<End of log file>
""")
@mock.patch('struct.pack', return_value=TIMESTAMP_BYTES)
def test_long_private_message(self, _):
# Setup
# Add an assembly packet sequence sent to contact Alice containing cancel packet. access_logs should skip this.
packets = assembly_packet_creator(MESSAGE, self.msg)
packets = packets[2:] + [M_C_HEADER + bytes(PADDING_LENGTH)]
for p in packets:
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
# Add an orphaned 'append' assembly packet the function should skip.
write_log_entry(M_A_HEADER + bytes(PADDING_LENGTH), nick_to_pub_key('Alice'), self.settings, self.master_key)
# Add a group message for a different group the function should skip.
for p in assembly_packet_creator(MESSAGE, 'This is a short message', group_id=GROUP_ID_LENGTH * b'1'):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
# Add a message from contact Alice to user (Bob).
for p in assembly_packet_creator(MESSAGE, self.msg):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key, origin=ORIGIN_CONTACT_HEADER)
# Add a message from user (Bob) to Alice.
for p in assembly_packet_creator(MESSAGE, self.msg):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
# Test
self.assert_prints((CLEAR_ENTIRE_SCREEN + CURSOR_LEFT_UP_CORNER + f"""\
Log file of message(s) sent to contact Alice
{self.time} Alice: Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Aenean condimentum consectetur purus quis dapibus. Fusce
venenatis lacus ut rhoncus faucibus. Cras sollicitudin
commodo sapien, sed bibendum velit maximus in. Aliquam ac
metus risus. Sed cursus ornare luctus. Integer aliquet lectus
id massa blandit imperdiet. Ut sed massa eget quam facilisis
rutrum. Mauris eget luctus nisl. Sed ut elit iaculis,
faucibus lacus eget, sodales magna. Nunc sed commodo arcu. In
hac habitasse platea dictumst. Integer luctus aliquam justo,
at vestibulum dolor iaculis ac. Etiam laoreet est eget odio
rutrum, vel malesuada lorem rhoncus. Cras finibus in neque eu
euismod. Nulla facilisi. Nunc nec aliquam quam, quis
ullamcorper leo. Nunc egestas lectus eget est porttitor, in
iaculis felis scelerisque. In sem elit, fringilla id viverra
commodo, sagittis varius purus. Pellentesque rutrum lobortis
neque a facilisis. Mauris id tortor placerat, aliquam dolor
ac, venenatis arcu.
{self.time} Me: Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Aenean condimentum consectetur purus quis dapibus. Fusce
venenatis lacus ut rhoncus faucibus. Cras sollicitudin
commodo sapien, sed bibendum velit maximus in. Aliquam ac
metus risus. Sed cursus ornare luctus. Integer aliquet lectus
id massa blandit imperdiet. Ut sed massa eget quam facilisis
rutrum. Mauris eget luctus nisl. Sed ut elit iaculis,
faucibus lacus eget, sodales magna. Nunc sed commodo arcu. In
hac habitasse platea dictumst. Integer luctus aliquam justo,
at vestibulum dolor iaculis ac. Etiam laoreet est eget odio
rutrum, vel malesuada lorem rhoncus. Cras finibus in neque eu
euismod. Nulla facilisi. Nunc nec aliquam quam, quis
ullamcorper leo. Nunc egestas lectus eget est porttitor, in
iaculis felis scelerisque. In sem elit, fringilla id viverra
commodo, sagittis varius purus. Pellentesque rutrum lobortis
neque a facilisis. Mauris id tortor placerat, aliquam dolor
ac, venenatis arcu.
<End of log file>
"""), access_logs, *self.args)
@mock.patch('struct.pack', return_value=TIMESTAMP_BYTES)
def test_short_group_message(self, _):
# Setup
self.window = RxWindow(type=WIN_TYPE_GROUP,
uid=group_name_to_group_id('test_group'),
name='test_group',
group=self.group,
type_print='group',
group_list=self.group_list)
# Add messages to Alice and Charlie. Add duplicate of outgoing message that should be skipped by access_logs.
for p in assembly_packet_creator(MESSAGE, 'This is a short message', group_id=self.window.uid):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key, origin=ORIGIN_CONTACT_HEADER)
write_log_entry(p, nick_to_pub_key('Charlie'), self.settings, self.master_key)
write_log_entry(p, nick_to_pub_key('Charlie'), self.settings, self.master_key, origin=ORIGIN_CONTACT_HEADER)
# Test
self.assert_prints((CLEAR_ENTIRE_SCREEN + CURSOR_LEFT_UP_CORNER + f"""\
Log file of message(s) sent to group test_group
{self.time} Me: This is a short message
{self.time} Alice: This is a short message
{self.time} Charlie: This is a short message
<End of log file>
"""), access_logs, self.window, self.contact_list, self.group_list, self.settings, self.master_key)
@mock.patch('struct.pack', return_value=TIMESTAMP_BYTES)
def test_long_group_message(self, _):
# Setup
# Test title displayed by the Receiver program.
self.settings.software_operation = RX
self.window = RxWindow(type=WIN_TYPE_GROUP,
uid=group_name_to_group_id('test_group'),
name='test_group',
group=self.group,
type_print='group')
# Add an assembly packet sequence sent to contact Alice in group containing cancel packet.
# access_logs should skip this.
packets = assembly_packet_creator(MESSAGE, self.msg, group_id=group_name_to_group_id('test_group'))
packets = packets[2:] + [M_C_HEADER + bytes(PADDING_LENGTH)]
for p in packets:
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
# Add an orphaned 'append' assembly packet. access_logs should skip this.
write_log_entry(M_A_HEADER + bytes(PADDING_LENGTH), nick_to_pub_key('Alice'), self.settings, self.master_key)
# Add a private message. access_logs should skip this.
for p in assembly_packet_creator(MESSAGE, 'This is a short private message'):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
# Add a group message for a different group. access_logs should skip this.
for p in assembly_packet_creator(MESSAGE, 'This is a short group message', group_id=GROUP_ID_LENGTH * b'1'):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
# Add messages to Alice and Charlie in group.
# Add duplicate of outgoing message that should be skipped by access_logs.
for p in assembly_packet_creator(MESSAGE, self.msg, group_id=group_name_to_group_id('test_group')):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key, origin=ORIGIN_CONTACT_HEADER)
write_log_entry(p, nick_to_pub_key('Charlie'), self.settings, self.master_key)
write_log_entry(p, nick_to_pub_key('Charlie'), self.settings, self.master_key, origin=ORIGIN_CONTACT_HEADER)
# Test
self.assert_prints((CLEAR_ENTIRE_SCREEN + CURSOR_LEFT_UP_CORNER + f"""\
Log file of message(s) to/from group test_group
{self.time} Me: Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Aenean condimentum consectetur purus quis dapibus. Fusce
venenatis lacus ut rhoncus faucibus. Cras sollicitudin
commodo sapien, sed bibendum velit maximus in. Aliquam ac
metus risus. Sed cursus ornare luctus. Integer aliquet
lectus id massa blandit imperdiet. Ut sed massa eget quam
facilisis rutrum. Mauris eget luctus nisl. Sed ut elit
iaculis, faucibus lacus eget, sodales magna. Nunc sed
commodo arcu. In hac habitasse platea dictumst. Integer
luctus aliquam justo, at vestibulum dolor iaculis ac. Etiam
laoreet est eget odio rutrum, vel malesuada lorem rhoncus.
Cras finibus in neque eu euismod. Nulla facilisi. Nunc nec
aliquam quam, quis ullamcorper leo. Nunc egestas lectus
eget est porttitor, in iaculis felis scelerisque. In sem
elit, fringilla id viverra commodo, sagittis varius purus.
Pellentesque rutrum lobortis neque a facilisis. Mauris id
tortor placerat, aliquam dolor ac, venenatis arcu.
{self.time} Alice: Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Aenean condimentum consectetur purus quis dapibus. Fusce
venenatis lacus ut rhoncus faucibus. Cras sollicitudin
commodo sapien, sed bibendum velit maximus in. Aliquam ac
metus risus. Sed cursus ornare luctus. Integer aliquet
lectus id massa blandit imperdiet. Ut sed massa eget quam
facilisis rutrum. Mauris eget luctus nisl. Sed ut elit
iaculis, faucibus lacus eget, sodales magna. Nunc sed
commodo arcu. In hac habitasse platea dictumst. Integer
luctus aliquam justo, at vestibulum dolor iaculis ac. Etiam
laoreet est eget odio rutrum, vel malesuada lorem rhoncus.
Cras finibus in neque eu euismod. Nulla facilisi. Nunc nec
aliquam quam, quis ullamcorper leo. Nunc egestas lectus
eget est porttitor, in iaculis felis scelerisque. In sem
elit, fringilla id viverra commodo, sagittis varius purus.
Pellentesque rutrum lobortis neque a facilisis. Mauris id
tortor placerat, aliquam dolor ac, venenatis arcu.
{self.time} Charlie: Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Aenean condimentum consectetur purus quis dapibus. Fusce
venenatis lacus ut rhoncus faucibus. Cras sollicitudin
commodo sapien, sed bibendum velit maximus in. Aliquam ac
metus risus. Sed cursus ornare luctus. Integer aliquet
lectus id massa blandit imperdiet. Ut sed massa eget quam
facilisis rutrum. Mauris eget luctus nisl. Sed ut elit
iaculis, faucibus lacus eget, sodales magna. Nunc sed
commodo arcu. In hac habitasse platea dictumst. Integer
luctus aliquam justo, at vestibulum dolor iaculis ac. Etiam
laoreet est eget odio rutrum, vel malesuada lorem rhoncus.
Cras finibus in neque eu euismod. Nulla facilisi. Nunc nec
aliquam quam, quis ullamcorper leo. Nunc egestas lectus
eget est porttitor, in iaculis felis scelerisque. In sem
elit, fringilla id viverra commodo, sagittis varius purus.
Pellentesque rutrum lobortis neque a facilisis. Mauris id
tortor placerat, aliquam dolor ac, venenatis arcu.
<End of log file>
"""), access_logs, self.window, self.contact_list, self.group_list, self.settings, self.master_key)
class TestReEncrypt(TFCTestCase):
def setUp(self):
self.unittest_dir = cd_unittest()
self.old_key = MasterKey()
self.new_key = MasterKey(master_key=os.urandom(SYMMETRIC_KEY_LENGTH))
self.settings = Settings()
self.tmp_file_name = f"{DIR_USER_DATA}{self.settings.software_operation}_logs_temp"
self.time = STATIC_TIMESTAMP
def tearDown(self):
cleanup(self.unittest_dir)
def test_missing_log_database_raises_fr(self):
self.assert_fr(f"Error: Could not find log database.",
change_log_db_key, self.old_key.master_key, self.new_key.master_key, self.settings)
@mock.patch('struct.pack', return_value=TIMESTAMP_BYTES)
def test_database_encryption_with_another_key(self, _):
# Setup
window = RxWindow(type=WIN_TYPE_CONTACT,
uid=nick_to_pub_key('Alice'),
name='Alice',
type_print='contact')
contact_list = ContactList(self.old_key, self.settings)
contact_list.contacts = [create_contact('Alice')]
group_list = GroupList()
# Create temp file that must be removed.
with open(self.tmp_file_name, 'wb+') as f:
f.write(os.urandom(LOG_ENTRY_LENGTH))
# Add a message from contact Alice to user (Bob).
for p in assembly_packet_creator(MESSAGE, 'This is a short message'):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.old_key, origin=ORIGIN_CONTACT_HEADER)
# Add a message from user (Bob) to Alice.
for p in assembly_packet_creator(MESSAGE, 'This is a short message'):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.old_key)
# Check logfile content.
message = (CLEAR_ENTIRE_SCREEN + CURSOR_LEFT_UP_CORNER + f"""\
Log file of message(s) sent to contact Alice
{self.time} Alice: This is a short message
{self.time} Me: This is a short message
<End of log file>
""")
self.assert_prints(message, access_logs, window, contact_list, group_list, self.settings, self.old_key)
self.assertIsNone(change_log_db_key(self.old_key.master_key, self.new_key.master_key, self.settings))
# Test that decryption with new key is identical.
self.assert_prints(message, access_logs, window, contact_list, group_list, self.settings, self.new_key)
# Test that temp file is removed.
self.assertFalse(os.path.isfile(self.tmp_file_name))
class TestRemoveLog(TFCTestCase):
def setUp(self):
self.unittest_dir = cd_unittest()
self.master_key = MasterKey()
self.settings = Settings()
self.time = STATIC_TIMESTAMP
self.contact_list = ContactList(self.master_key, self.settings)
self.group_list = GroupList(groups=['test_group'])
self.file_name = f'{DIR_USER_DATA}{self.settings.software_operation}_logs'
self.tmp_file_name = self.file_name + "_temp"
self.args = self.contact_list, self.group_list, self.settings, self.master_key
self.msg = ("Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean condimentum consectetur purus quis"
" dapibus. Fusce venenatis lacus ut rhoncus faucibus. Cras sollicitudin commodo sapien, sed bibendu"
"m velit maximus in. Aliquam ac metus risus. Sed cursus ornare luctus. Integer aliquet lectus id ma"
"utrum, vel malesuada lorem rhoncus. Cras finibus in neque eu euismod. Nulla facilisi. Nunc nec ali"
"quam quam, quis ullamcorper leo. Nunc egestas lectus eget est porttitor, in iaculis felis sceleris"
"que. In sem elit, fringilla id viverra commodo, sagittis varius purus. Pellentesque rutrum loborti"
"s neque a facilisis. Mauris id tortor placerat, aliquam dolor ac, venenatis arcu.")
def tearDown(self):
cleanup(self.unittest_dir)
def test_missing_log_file_raises_fr(self):
self.assert_fr("No log database available.", remove_logs, *self.args, nick_to_pub_key('Alice'))
def test_removal_of_group_logs(self):
# Setup
short_msg = "Lorem ipsum dolor sit amet, consectetur adipiscing elit."
# Add long message from user (Bob) to Alice and Charlie. These should be removed.
for p in assembly_packet_creator(MESSAGE, self.msg, group_id=group_name_to_group_id('test_group')):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
write_log_entry(p, nick_to_pub_key('Charlie'), self.settings, self.master_key)
# Add short message from user (Bob) to Alice and Charlie. These should be removed.
for p in assembly_packet_creator(MESSAGE, short_msg, group_id=group_name_to_group_id('test_group')):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
write_log_entry(p, nick_to_pub_key('Charlie'), self.settings, self.master_key)
# Add short message from user (Bob) to David. This should be kept.
for p in assembly_packet_creator(MESSAGE, short_msg):
write_log_entry(p, nick_to_pub_key('David'), self.settings, self.master_key)
# Add long message from user (Bob) to David. These should be kept.
for p in assembly_packet_creator(MESSAGE, self.msg):
write_log_entry(p, nick_to_pub_key('David'), self.settings, self.master_key)
# Add short message from user (Bob) to David in a group. This should be kept as group is different.
for p in assembly_packet_creator(MESSAGE, short_msg, group_id=group_name_to_group_id('different_group')):
write_log_entry(p, nick_to_pub_key('David'), self.settings, self.master_key)
# Add an orphaned 'append' assembly packet. This should be removed as it's corrupted.
write_log_entry(M_A_HEADER + bytes(PADDING_LENGTH), nick_to_pub_key('Alice'), self.settings, self.master_key)
# Add long message to group member David, canceled half-way. This should be removed as unviewable.
packets = assembly_packet_creator(MESSAGE, self.msg, group_id=group_name_to_group_id('test_group'))
packets = packets[2:] + [M_C_HEADER + bytes(PADDING_LENGTH)]
for p in packets:
write_log_entry(p, nick_to_pub_key('David'), self.settings, self.master_key)
# Add long message to group member David, remove_logs should keep these as group is different.
for p in assembly_packet_creator(MESSAGE, self.msg, group_id=group_name_to_group_id('different_group')):
write_log_entry(p, nick_to_pub_key('David'), self.settings, self.master_key)
# Test
self.assertEqual(os.path.getsize(self.file_name), 19 * LOG_ENTRY_LENGTH)
# Test log entries were found.
self.assert_fr("Removed log entries for group 'test_group'.",
remove_logs, *self.args, selector=group_name_to_group_id('test_group'))
self.assertEqual(os.path.getsize(self.file_name), 8 * LOG_ENTRY_LENGTH)
# Test log entries were not found when removing group again.
self.assert_fr("Found no log entries for group 'test_group'.",
remove_logs, *self.args, selector=group_name_to_group_id('test_group'))
self.assertEqual(os.path.getsize(self.file_name), 8 * LOG_ENTRY_LENGTH)
def test_removal_of_contact_logs(self):
# Setup
short_msg = "Lorem ipsum dolor sit amet, consectetur adipiscing elit."
# Create temp file that must be removed.
with open(self.tmp_file_name, 'wb+') as f:
f.write(os.urandom(LOG_ENTRY_LENGTH))
# Add a long message sent to both Alice and Bob.
for p in assembly_packet_creator(MESSAGE, self.msg):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
write_log_entry(p, nick_to_pub_key('Charlie'), self.settings, self.master_key)
# Add a short message sent to both Alice and Bob.
for p in assembly_packet_creator(MESSAGE, short_msg):
write_log_entry(p, nick_to_pub_key('Alice'), self.settings, self.master_key)
write_log_entry(p, nick_to_pub_key('Charlie'), self.settings, self.master_key)
# Test
self.assertEqual(os.path.getsize(self.file_name), 8 * LOG_ENTRY_LENGTH)
self.assert_fr(f"Removed log entries for contact '{nick_to_short_address('Alice')}'.",
remove_logs, *self.args, selector=nick_to_pub_key('Alice'))
self.assertEqual(os.path.getsize(self.file_name), 4 * LOG_ENTRY_LENGTH)
self.assert_fr(f"Removed log entries for contact '{nick_to_short_address('Charlie')}'.",
remove_logs, *self.args, selector=nick_to_pub_key('Charlie'))
self.assertEqual(os.path.getsize(self.file_name), 0)
self.assert_fr(f"Found no log entries for contact '{nick_to_short_address('Alice')}'.",
remove_logs, *self.args, selector=nick_to_pub_key('Alice'))
self.contact_list.contacts = [create_contact('Alice')]
self.assert_fr(f"Found no log entries for contact 'Alice'.",
remove_logs, *self.args, selector=nick_to_pub_key('Alice'))
self.assert_fr(f"Found no log entries for group '2e8b2Wns7dWjB'.",
remove_logs, *self.args, selector=group_name_to_group_id('searched_group'))
if __name__ == '__main__':
    unittest.main()