Compare commits
44 Commits
@@ -1,21 +1,674 @@
-MIT License
-
-Copyright (c) 2018 Thomas Wilson
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
+                    GNU GENERAL PUBLIC LICENSE
+                       Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+  The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users. We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors. You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+  To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights. Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received. You must make sure that they, too, receive
+or can get the source code. And you must show them these terms so they
+know their rights.
+
+  Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+  For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software. For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+  Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so. This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software. The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable. Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products. If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+  Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary. To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+                       TERMS AND CONDITIONS
+
+  0. Definitions.
+
+  "This License" refers to version 3 of the GNU General Public License.
+
+  "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+  "The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+
+  To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy. The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+  A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+  To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+  To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+  An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+  1. Source Code.
+
+  The "source code" for a work means the preferred form of the work
+for making modifications to it. "Object code" means any non-source
+form of a work.
+
+  A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+  The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+  The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+  The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+  The Corresponding Source for a work in source code form is that
+same work.
+
+  2. Basic Permissions.
+
+  All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+  You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force. You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright. Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+  Conveying under any other circumstances is permitted solely under
+the conditions stated below. Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+  3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+  No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+  When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+  4. Conveying Verbatim Copies.
+
+  You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+  You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+  5. Conveying Modified Source Versions.
+
+  You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+    a) The work must carry prominent notices stating that you modified
+    it, and giving a relevant date.
+
+    b) The work must carry prominent notices stating that it is
+    released under this License and any conditions added under section
+    7. This requirement modifies the requirement in section 4 to
+    "keep intact all notices".
+
+    c) You must license the entire work, as a whole, under this
+    License to anyone who comes into possession of a copy. This
+    License will therefore apply, along with any applicable section 7
+    additional terms, to the whole of the work, and all its parts,
+    regardless of how they are packaged. This License gives no
+    permission to license the work in any other way, but it does not
+    invalidate such permission if you have separately received it.
+
+    d) If the work has interactive user interfaces, each must display
+    Appropriate Legal Notices; however, if the Program has interactive
+    interfaces that do not display Appropriate Legal Notices, your
+    work need not make them do so.
+
+  A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+  6. Conveying Non-Source Forms.
+
+  You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+    a) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by the
+    Corresponding Source fixed on a durable physical medium
+    customarily used for software interchange.
+
+    b) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by a
+    written offer, valid for at least three years and valid for as
+    long as you offer spare parts or customer support for that product
+    model, to give anyone who possesses the object code either (1) a
+    copy of the Corresponding Source for all the software in the
+    product that is covered by this License, on a durable physical
+    medium customarily used for software interchange, for a price no
+    more than your reasonable cost of physically performing this
+    conveying of source, or (2) access to copy the
+    Corresponding Source from a network server at no charge.
+
+    c) Convey individual copies of the object code with a copy of the
+    written offer to provide the Corresponding Source. This
+    alternative is allowed only occasionally and noncommercially, and
+    only if you received the object code with such an offer, in accord
+    with subsection 6b.
+
+    d) Convey the object code by offering access from a designated
+    place (gratis or for a charge), and offer equivalent access to the
+    Corresponding Source in the same way through the same place at no
+    further charge. You need not require recipients to copy the
+    Corresponding Source along with the object code. If the place to
+    copy the object code is a network server, the Corresponding Source
+    may be on a different server (operated by you or a third party)
+    that supports equivalent copying facilities, provided you maintain
+    clear directions next to the object code saying where to find the
+    Corresponding Source. Regardless of what server hosts the
+    Corresponding Source, you remain obligated to ensure that it is
+    available for as long as needed to satisfy these requirements.
+
+    e) Convey the object code using peer-to-peer transmission, provided
+    you inform other peers where the object code and Corresponding
+    Source of the work are being offered to the general public at no
+    charge under subsection 6d.
+
+  A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+  A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling. In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage. For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product. A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+  "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source. The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+  If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information. But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+  The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed. Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+  Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+  7. Additional Terms.
+
+  "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law. If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+  When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it. (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.) You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+  Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+    a) Disclaiming warranty or limiting liability differently from the
+    terms of sections 15 and 16 of this License; or
+
+    b) Requiring preservation of specified reasonable legal notices or
+    author attributions in that material or in the Appropriate Legal
+    Notices displayed by works containing it; or
+
+    c) Prohibiting misrepresentation of the origin of that material, or
+    requiring that modified versions of such material be marked in
+    reasonable ways as different from the original version; or
+
+    d) Limiting the use for publicity purposes of names of licensors or
+    authors of the material; or
+
+    e) Declining to grant rights under trademark law for use of some
+    trade names, trademarks, or service marks; or
+
+    f) Requiring indemnification of licensors and authors of that
+    material by anyone who conveys the material (or modified versions of
+    it) with contractual assumptions of liability to the recipient, for
+    any liability that these contractual assumptions directly impose on
+    those licensors and authors.
+
+  All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10. If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term. If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+  If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+  Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+  8. Termination.
+
+  You may not propagate or modify a covered work except as expressly
+provided under this License. Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+  However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+  Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+  Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+  9. Acceptance Not Required for Having Copies.
+
+  You are not required to accept this License in order to receive or
+run a copy of the Program. Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance. However,
+nothing other than this License grants you permission to propagate or
+modify any covered work. These actions infringe copyright if you do
+not accept this License. Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+  10. Automatic Licensing of Downstream Recipients.
+
+  Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License. You are not responsible
+for enforcing compliance by third parties with this License.
+
+  An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations. If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+  You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License. For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+  11. Patents.
+
+  A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based. The
+work thus licensed is called the contributor's "contributor version".
+
+  A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version. For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+  Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+  In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement). To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+  If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients. "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+  If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+  A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License. You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+  Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+  12. No Surrender of Others' Freedom.
+
+  If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all. For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+  13. Use with the GNU Affero General Public License.
+
+  Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work. The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+  14. Revised Versions of this License.
+
+  The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+  Each version is given a distinguishing version number. If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation. If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+  If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+  Later license versions may give you additional or different
+permissions. However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+  15. Disclaimer of Warranty.
+
+  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+  16. Limitation of Liability.
+
+  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+  17. Interpretation of Sections 15 and 16.
+
+  If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+  If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+  You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<https://www.gnu.org/licenses/>.
+
+  The GNU General Public License does not permit incorporating your program
+into proprietary programs. If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License. But first, please read
+<https://www.gnu.org/licenses/why-not-lgpl.html>.
@@ -1,10 +0,0 @@
-[shepherd]
-plugin_path = "~/shepherd/"
-plugins = ["picam","test"]
-root_dir = "~/shepherd/"
-conf_edit_path = "~/shepherd.toml"
-test =1
-[picam]
-[[picam.trigger]]
-hour = "00-23"
-minute = "*"
@@ -0,0 +1,7 @@
from .agent.plugin import PluginInterface  # noqa
from .agent.plugin import plugin_class  # noqa
from .agent.plugin import plugin_function  # noqa
from .agent.plugin import plugin_hook  # noqa
from .agent.plugin import plugin_attachment  # noqa
from .agent.plugin import plugin_init  # noqa
from .agent.plugin import plugin_run  # noqa
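
These re-exports make the decorator-based plugin API importable straight from the top-level package. As a rough sketch only -- the decorator semantics below are assumed from the names re-exported above, not confirmed by this diff -- a minimal custom plugin module might look like:

    import shepherd

    @shepherd.plugin_init
    def init(config):
        # Assumed: called once with this plugin's config section at startup.
        pass

    @shepherd.plugin_function
    def snapshot():
        # Assumed: exposed as an interface function, callable via
        # something like `shepherd test <plugin> snapshot`.
        return "ok"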
@@ -0,0 +1,3 @@
if __name__ == '__main__':
    from .agent.cli import cli
    cli()
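
This entry point makes the agent runnable as a module. A quick Python equivalent of running `python -m shepherd` from a shell, assuming the package is installed:

    import runpy

    # Executes shepherd/__main__.py, which hands control to the click CLI
    # defined in shepherd.agent.cli.
    runpy.run_module("shepherd", run_name="__main__")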
@@ -0,0 +1,378 @@
import logging
# import os
import sys
from pathlib import Path
import glob
from types import SimpleNamespace
from datetime import datetime
import inspect
# from pprint import pprint
import pkg_resources

# import chromalog
import click
import toml

from . import core, plugin

# chromalog.basicConfig(level=os.environ.get("LOGLEVEL", "INFO"))

log = logging.getLogger("shepherd.cli")


def echo_heading(title, on_nl=True):
    if on_nl:
        click.echo("")
    click.echo(click.style(".: ", fg='blue', bold=True) +
               click.style(title, fg='white', bold=True) +
               click.style(" :.", fg='blue', bold=True))


def echo_section(title, input_text=None, on_nl=True):
    if on_nl:
        click.echo("")
    click.secho(":: ", bold=True, fg='blue', nl=False)
    click.secho(title, bold=True, nl=False)
    if input_text:
        click.secho(F" {input_text}", fg='green', bold=True, nl=False)
    click.echo("")


@click.group(invoke_without_command=True)
@click.option('-c', '--config', 'default_config_path', type=click.Path(),
              help="Shepherd config TOML file to be used as default config layer."
                   " Overrides default './shepherd*.toml' search")
@click.option('-l', '--local', 'local_operation', is_flag=True,
              help="Only use the local config layers (default and custom), and disable all"
                   " Shepherd Control remote features")
@click.option('-d', '--default-config-only', 'only_default_layer', is_flag=True,
              help="Ignore the custom config layer (still uses the Control config above that)")
@click.option('-n', '--new-device-mode', 'new_device_mode', is_flag=True,
              help="Clear existing device identity and cached Shepherd Control config layer."
                   " Also triggered by the presence of a shepherd.new file in the"
                   " same directory as the custom config layer file.")
@click.pass_context
def cli(ctx, default_config_path, local_operation, only_default_layer, new_device_mode):
    """
    Core service. If a default config file is not provided with the '-c' option, the
    first filename in the current working directory beginning with "shepherd" and
    ending with ".toml" will be used.
    """

    version_text = pkg_resources.get_distribution("shepherd")
    log.info(F"Shepherd Agent [{version_text}]")

    # Drop down to subcommand if it doesn't need default config file processing
    if ctx.invoked_subcommand in ["template", "info"]:
        return

    # Get a default config path to use
    if not default_config_path:
        default_config_path = sorted(glob.glob("./shepherd*.toml"))[:1]
        if default_config_path:
            default_config_path = default_config_path[0]
            log.info(F"No default config file provided, found {default_config_path}")
            with open(default_config_path, 'r+') as f:
                content = f.read()
            if "Compiled Shepherd config" in content:
                log.warning("Default config file looks like it is a full compiled config"
                            " file generated by Shepherd and picked up due to accidental"
                            " name match")
        else:
            log.error("No default config file provided, and no 'shepherd*.toml' could be"
                      " found in the current directory")
            sys.exit(1)

    # Establish what config layers we're going to try and use
    control_enabled = True
    use_custom_config = True

    if local_operation or (ctx.invoked_subcommand == "test"):
        control_enabled = False
        log.info("Running in local only mode")

    if only_default_layer:
        use_custom_config = False

    agent = core.Agent(default_config_path, use_custom_config, control_enabled, new_device_mode)
    ctx.ensure_object(SimpleNamespace)
    ctx.obj.agent = agent

    # Drop down to subcommands that needed a config compiled
    if ctx.invoked_subcommand == "test":
        return

    print(str(datetime.now()))

    agent.start()


@cli.command()
@click.argument('plugin_name', required=False)
@click.argument('interface_function', required=False)
@click.pass_context
def test(ctx, plugin_name, interface_function):
    agent = ctx.obj.agent
    plugin_configs = agent.applied_config.copy()
    del plugin_configs['shepherd']

    echo_heading("Shepherd - Test")

    if not plugin_name:
        log.info("Test initialisation of all plugins in config...")

        echo_section("Plugins loaded:")
        if len(plugin_configs) == 0:
            click.echo("---none---")
        for name in plugin_configs:
            click.secho(F" {name}", fg='green')

        echo_section("Core config:")
        print(toml.dumps(agent.core_config))
        # pprint(agent.core_config)

        echo_section("Plugin configs:")
        if len(plugin_configs) == 0:
            click.echo("---none---")
        for name, config in plugin_configs.items():
            click.secho(F" {name}", fg='green')
            print(toml.dumps(config))
            # pprint(config)

        click.echo("")

        log.info("Initialising plugins...")
        plugin.init_plugins(agent.applied_config)
        log.info("Plugin initialisation done")

        return

    echo_section("Target plugin:", input_text=plugin_name, on_nl=False)
    # TODO find plugin dependencies

    if plugin_name not in plugin_configs:
        log.error(F"Supplied plugin name '{plugin_name}' is not loaded"
                  " (not present in config)")
        sys.exit(1)

    echo_section(F"Config [{plugin_name}]:")
    print(toml.dumps(plugin_configs[plugin_name]))
    # pprint(plugin_configs[plugin_name])

    interface = plugin.load_plugin(plugin_name)

    if not interface_function:
        echo_section(F"Interface functions [{plugin_name}]:", on_nl=False)

        for name in interface._functions:
            click.echo(F" {name}")
        return

    echo_section("Target interface function:", input_text=interface_function, on_nl=False)
    if interface_function not in interface._functions:
        log.error(F"Supplied interface function name '{interface_function}' is not present in"
                  F" plugin {plugin_name}")
        sys.exit(1)

    log.info("Initialising plugins...")
    # TODO Going to need to add 'shepherd' to this, so that its public plugin interface also
    # gets init - functions and hooks
    plugin.init_plugins({plugin_name: plugin_configs[plugin_name]})
    log.info("Plugin initialisation done")

    # TODO look for a spec on the interface function, and parse cmdline values if it's there

    print(interface._functions[interface_function]())


class BlankEncoder(toml.TomlEncoder):
    """
    A TOML encoder that emits empty keys (values of None). This isn't valid TOML,
    but is useful for generating templates.
    """
    class BlankValue:
        pass

    def __init__(self, _dict=dict, preserve=False):
        super().__init__(_dict, preserve)
        self.dump_funcs[self.BlankValue] = lambda v: ''

    def dump_sections(self, o, sup):
        for section in o:
            if o[section] is None:
                o[section] = self.BlankValue()
        return super().dump_sections(o, sup)

    def dump_value(self, v):
        if v is None:
            v = self.BlankValue()
        return super().dump_value(v)


@cli.command()
@click.argument('plugin_name', required=False)
@click.option('-a', '--include-all', is_flag=True,
              help="Include all optional fields in the template")
@click.option('-c', '--config', 'config_path', type=click.Path(),
              help="Path to append or create config template")
@click.option('-d', '--plugin-dir', type=click.Path(),
              help="Directory to search for plugin modules, in addition to built-in Shepherd"
                   " plugins and the global import path. Defaults to current directory.")
@click.pass_context
def template(ctx, plugin_name, include_all, config_path, plugin_dir):
    """
    Generate a template config TOML file for PLUGIN_NAME, or for the Shepherd core if
    PLUGIN_NAME is not provided.

    If a config path is provided ("-c"), append to that file (if it exists) or write to
    a new file (if it doesn't yet exist).
    """

    echo_heading("Shepherd - Template")

    if not plugin_dir:
        plugin_dir = Path.cwd()

    if not plugin_name:
        plugin_name = "shepherd"

    try:
        plugin_interface = plugin.load_plugin(plugin_name, plugin_dir)
    except plugin.PluginLoadError as e:
        log.error(e.args[0])
        sys.exit(1)
    confspec = plugin_interface.confspec

    template_dict = confspec.get_template(include_all)
    template_toml = toml.dumps({plugin_name: template_dict}, encoder=BlankEncoder())

    if include_all:
        log.info("Including all optional fields")
    else:
        log.info("Including required fields only")

    echo_section("Config template for", input_text=F"[{plugin_name}]")
    click.echo("")
    click.echo(template_toml)

    if not config_path:
        # reuse parent "-c" for convenience
        config_path = ctx.parent.params["default_config_path"]

    if not config_path:
        return

    if Path(config_path).is_file():
        try:
            existing_config = toml.load(config_path)
        except Exception:
            click.confirm(
                F"File {config_path} already exists and is not a valid TOML file. Overwrite?",
                default=True, abort=True)

            click.echo(F"Writing [{plugin_name}] template to {config_path}")
            with open(config_path, 'w+') as f:
                f.write(template_toml)
        else:
            if plugin_name in existing_config:
                click.confirm(F"Overwrite [{plugin_name}] section in {config_path}?",
                              default=True, abort=True)
                click.echo(F"Overwriting [{plugin_name}] section in {config_path}")
            else:
                click.confirm(F"Add [{plugin_name}] section to {config_path}?",
                              default=True, abort=True)
                click.echo(F"Adding [{plugin_name}] section to {config_path}")
            existing_config[plugin_name] = template_dict
            with open(config_path, 'w+') as f:
                f.write(toml.dumps(existing_config))
    else:
        click.echo(F"Writing [{plugin_name}] template to {config_path}")
        with open(config_path, 'w+') as f:
            f.write(template_toml)


@cli.command()
@click.argument('plugin_name', required=False)
@click.option('-d', '--plugin-dir', type=click.Path(),
              help="Directory to search for plugin modules, in addition to built-in Shepherd"
                   " plugins and the global import path. Defaults to current directory.")
@click.pass_context
def info(ctx, plugin_name, plugin_dir):
    """
    Show plugin information.

    If PLUGIN_NAME is not provided, shows a list of all discovered plugins and their sources.
    Note that this will detect _all_ valid Python modules in the plugin dir as custom plugins,
    as these are not validated as proper Shepherd plugins until they are loaded.

    If PLUGIN_NAME is provided, attempts to load (but not initialise) the desired plugin and
    show all registered plugin features (interface functions, hooks, attachments, and
    config specification).
    """

    echo_heading("Shepherd - Info")

    if not plugin_dir:
        plugin_dir = Path.cwd()

    if not plugin_name:
        log.info("Running plugin discovery...")
        base_plugins = plugin.discover_base_plugins()
        custom_plugins = plugin.discover_custom_plugins(plugin_dir)
        installed_plugins = plugin.discover_installed_plugins()

        echo_section("Discovered base plugins:")
        if len(base_plugins) == 0:
            click.echo("---none---")
        for name in base_plugins:
            click.secho(F" {name}", fg='green')

        echo_section("Discovered custom plugins:")
        if len(custom_plugins) == 0:
            click.echo("---none---")
        for name in custom_plugins:
            click.secho(F" {name}", fg='green')

        echo_section("Discovered installed plugins:")
        if len(installed_plugins) == 0:
            click.echo("---none---")
        for name in installed_plugins:
            click.secho(F" {name}", fg='green')

        return

    # Plugin name supplied, so load it
    log.info(F"Attempting to load plugin {plugin_name}...")

    try:
        plugin_interface = plugin.load_plugin(plugin_name, plugin_dir)
    except plugin.PluginLoadError as e:
        log.error(e.args[0])
        sys.exit(1)

    echo_section("Plugin info for", input_text=plugin_name)

    # template_dict = confspec.get_template(include_all)
    # template_toml = toml.dumps({plugin_name: template_dict}, encoder=BlankEncoder())

    echo_section("Interface functions:")
    for ifunc_name, ifunc in plugin_interface._functions.items():
        if ifunc.remote:
            args = F"{ifunc.spec}"
        else:
            args = F"{inspect.signature(ifunc.func)}"

        click.echo(F"{ifunc_name} {args}")

    echo_section("Hooks:")
    for hook in plugin_interface._hooks:
        click.echo(hook)

    echo_section("Config:")
    click.echo(plugin_interface._confspec)

@@ -0,0 +1,353 @@
import threading
import secrets
from types import SimpleNamespace
from pathlib import Path
from urllib.parse import urlparse, urlunparse, urljoin
from hashlib import blake2b
import time
import logging

import toml
import requests
from configspec import *
import statesman


# Namespace of types intended for server-side use.
def get_export():
    from . import plugin
    export = SimpleNamespace()
    export.InterfaceCall = plugin.InterfaceCall
    return export


log = logging.getLogger("shepherd.agent.control")

_control_update_required = threading.Condition()


def _update_required_callback():
    with _control_update_required:
        _control_update_required.notify()


def register_on(core_interface):
    """
    Register the control confspec on the core interface.
    """
    confspec = ConfigSpecification()
    confspec.add_spec("server", StringSpec())
    confspec.add_spec("intro_key", StringSpec())

    core_interface.confspec.add_spec("control", confspec, optional=True)
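
# The confspec above corresponds to a TOML section like the following
# (values are illustrative only; both fields are required once the optional
# section is present):
#
#   [shepherd.control]
#   server = "control.example.com"
#   intro_key = "an-introduction-key"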


class CoreUpdateState():
    """
    A container for all state that might need communicating remotely to Control. Abstracts the
    Statesman topics away from other parts of the Agent.
    """

    def __init__(self, cmd_reader, cmd_result_writer):
        """
        Control update handler for the `/update` core endpoint. Needs a reference to the
        CommandRunner.
        """
        self.topic_bundle = statesman.TopicBundle({
            'status': statesman.StateWriter(),
            'config-spec': statesman.StateWriter(),
            'device-config': statesman.StateWriter(),
            'applied-config': statesman.StateWriter(),
            'control-commands': cmd_reader,
            'command-results': cmd_result_writer})

        self.topic_bundle.set_update_required_callback(_update_required_callback)

    def set_static_state(self, local_config, applied_config, confspec):
        # These should all effectively be static
        self.topic_bundle['device-config'].set_state(local_config)
        self.topic_bundle['applied-config'].set_state(applied_config)
        self.topic_bundle['config-spec'].set_state(confspec)

    def set_status(self, status_dict):
        self.topic_bundle['status'].set_state(status_dict)


class CommandRunner():
    def __init__(self, interface_functions):
        self.cmd_reader = statesman.SequenceReader(
            new_message_callback=self.on_new_command_message)
        self.cmd_result_writer = statesman.SequenceWriter()
        self._functions = interface_functions
        self.current_commands = {}

    def on_new_command_message(self, message):
        # This should be a single list, where the first value is the command ID and the
        # second value is a plugin.InterfaceCall
        commandID = message[0]
        command_call = message[1]

        command_thread = threading.Thread(target=self._process_command,
                                          args=(commandID, command_call))
        command_thread.start()

    def _process_command(self, commandID, command_call):
        if commandID in self.current_commands:
            raise ValueError(F"Already running a command with ID {commandID}")
        self.current_commands[commandID] = threading.current_thread()

        try:
            command_call.resolve(self._functions)
            result = command_call.call()

            self.cmd_result_writer.add_message([commandID, result])
        finally:
            self.current_commands.pop(commandID)
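
# A rough sketch of the round trip (names hypothetical): Control publishes
# ["cmd-42", InterfaceCall("camera", "capture", {"exposure": 2})] on the
# 'control-commands' topic; the runner resolves and calls the function in a
# worker thread, then writes ["cmd-42", result] back on 'command-results'.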


class PluginUpdateState():
    def __init__(self):
        self.topic_bundle = statesman.TopicBundle()

        # config-spec should be static, but isn't known yet when this is created
        self.topic_bundle.add('status', statesman.StateWriter())
        self.topic_bundle.add('config-spec', statesman.StateWriter())
        self.topic_bundle.add('command-spec', statesman.StateWriter())
        # Why is config split out into plugins? Just like the device config and applied config,
        # it's only loaded once at the start. Is this purely because it's easy to get at from
        # the PluginInterface where this object is created?

        self.topic_bundle.set_update_required_callback(_update_required_callback)

    def set_status(self, status_dict):
        self.topic_bundle['status'].set_state(status_dict)

    def set_confspec(self, config_spec):
        self.topic_bundle['config-spec'].set_state(config_spec)

    def set_commandspec(self, command_spec):
        self.topic_bundle['command-spec'].set_state(command_spec)


def clean_https_url(dirty_url):
    """
    Take a URL with or without a leading scheme and convert it to one with the "https://"
    scheme. Changes HTTP to HTTPS if present.
    """
    # Some weirdness with URL parsing means that by default most urls (like www.google.com)
    # get treated as relative:
    # https://stackoverflow.com/questions/53816559/python-3-netloc-value-in-urllib-parse-is-empty-if-url-doesnt-have

    if "//" not in dirty_url:
        dirty_url = "//"+dirty_url
    return urlunparse(urlparse(dirty_url)._replace(scheme="https"))
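
# Examples:
#   clean_https_url("www.example.com")      -> "https://www.example.com"
#   clean_https_url("http://example.com/x") -> "https://example.com/x"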


def load_device_identity(root_dir):
    """
    Attempt to load the device identity from the shepherd.identity file. Will throw exceptions
    if this fails. Returns a tuple of (device_secret, device_id).
    """
    identity_filepath = Path(root_dir, 'shepherd.identity')
    if not identity_filepath.exists():
        log.warning(F"{identity_filepath} file does not exist")
        raise FileNotFoundError()

    with identity_filepath.open() as identity_file:
        identity_dict = toml.load(identity_file)

    dev_secret = identity_dict["device_secret"]

    dev_secret_bytes = bytes.fromhex(dev_secret)

    if len(dev_secret_bytes) != 16:
        log.error(F"Device secret loaded from file {identity_filepath} does not contain the "
                  "required 16 bytes")
        raise ValueError()

    secret_hash = blake2b(dev_secret_bytes, digest_size=16).hexdigest()
    dev_id = secret_hash[:8]
    log.info(F"Loaded device identity. ID: {dev_id}")
    return (dev_secret, dev_id)


def generate_device_identity(root_dir):
    """
    Generate a new device identity and save it to the shepherd.identity file.
    Returns a tuple of (device_secret, device_id).
    """

    dev_secret = secrets.token_hex(16)

    identity_dict = {}
    identity_dict['device_secret'] = dev_secret

    identity_filepath = Path(root_dir, 'shepherd.identity')
    with identity_filepath.open('w+') as identity_file:
        toml.dump(identity_dict, identity_file)

    dev_secret_bytes = bytes.fromhex(dev_secret)
    secret_hash = blake2b(dev_secret_bytes, digest_size=16).hexdigest()
    dev_id = secret_hash[:8]

    log.info(F"Generated new device identity. ID: {dev_id}")
    return (dev_secret, dev_id)
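
# The identity file is plain TOML holding a single 16-byte hex secret, e.g.
# (illustrative value only):
#
#   device_secret = "9f3c1a2b4d5e6f708192a3b4c5d6e7f8"
#
# The public device ID is the first 8 hex characters of the BLAKE2b-128 digest
# of those secret bytes.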


_update_thread_init_done = threading.Event()

_stop_event = threading.Event()


def stop():
    _stop_event.set()
    _update_required_callback()
    log.info("Control thread stop requested.")


def start_control(config, root_dir, core_update_state, plugin_update_states):
    """
    Start the Control update thread and initialise the Shepherd Control systems.
    """
    _stop_event.clear()
    _update_thread_init_done.clear()
    control_thread = threading.Thread(target=_control_update_loop, args=(
        config, root_dir, core_update_state, plugin_update_states))
    control_thread.start()

    # Wait for init so our log makes sense
    _update_thread_init_done.wait()

    return control_thread


def _control_update_loop(config, root_dir, core_update_state, plugin_update_states):
    control_api_url = urljoin(clean_https_url(config["server"]), "/agent")
    log.info(F"Control server API endpoint is {control_api_url}")
    intro_key = config["intro_key"]
    log.info(F"Using intro key: {intro_key}")

    try:
        device_secret, device_id = load_device_identity(root_dir)
    except Exception:
        log.warning("Could not load device identity from shepherd.identity file")
        device_secret, device_id = generate_device_identity(root_dir)

    _update_thread_init_done.set()

    update_rate_limiter = SmoothTokenBucketLimit(10, 10*60, 3, time.monotonic())

    session = requests.Session()
    while True:
        # Spin here until something needs updating
        with _control_update_required:

            new_endpoint_updates = {}  # a dict of url:topic_bundle pairs
            while True:
                if core_update_state.topic_bundle.is_update_required():
                    new_endpoint_updates['/update'] = core_update_state.topic_bundle

                for plugin_name, state in plugin_update_states.items():
                    if state.topic_bundle.is_update_required():
                        new_endpoint_updates[f"/pluginupdate/{plugin_name}"] = state.topic_bundle

                if (len(new_endpoint_updates) > 0) or _stop_event.is_set():
                    break

                _control_update_required.wait()

        for endpoint, topic_bundle in new_endpoint_updates.items():
            try:
                r = session.post(control_api_url+endpoint,
                                 json=topic_bundle.get_payload(),
                                 auth=(device_secret, intro_key))

                if r.status_code == requests.codes['conflict']:
                    # Server replies with this when trying to add our device ID and failing
                    # due to it already existing (device secret hash is a mismatch). We need
                    # to regenerate our ID.
                    log.info(F"Control server has indicated that device ID {device_id} already"
                             " exists. Generating new one...")
                    device_secret, device_id = generate_device_identity(root_dir)
                elif r.status_code == requests.codes['ok']:
                    topic_bundle.process_message(r.json())
            except requests.exceptions.RequestException:
                log.exception("Failed to make Shepherd Control request")

        if _stop_event.is_set():
            # Breaking here is a clean way of killing any delay and allowing a final update
            # before the thread ends.
            log.warning("Control thread stopping...")
            _stop_event.clear()
            break

        delay = update_rate_limiter.new_event(time.monotonic())
        _stop_event.wait(delay)

    _update_thread_init_done.clear()


def get_cached_config(config_dir):
    # Placeholder: caching of the Control config layer is not implemented yet.
    return {}


def clear_cached_config(config_dir):
    # Placeholder: caching of the Control config layer is not implemented yet.
    pass


class SmoothTokenBucketLimit():
    """
    Event rate limiter implementing a modified Token Bucket algorithm. The delay returned
    ramps up as the bucket empties.
    """

    def __init__(self, allowed_events, period, allowed_burst, initial_time):
        self.allowed_events = allowed_events
        self.period = period
        self.allowed_burst = allowed_burst
        self.last_token_timestamp = initial_time
        self.tokens = allowed_events
        self._is_saturated = False

    def new_event(self, time_now):
        """
        Register a new event for the rate limiter. Returns the delay in seconds to ignore
        future events for. Conceptually, the "token" we're grabbing here is for the _next_
        event.
        """
        if self.tokens < self.allowed_events:
            time_since_last_token = time_now - self.last_token_timestamp
            tokens_added = int(time_since_last_token/(self.period/self.allowed_events))

            self.tokens = self.tokens + tokens_added
            self.last_token_timestamp = self.last_token_timestamp + \
                (self.period/self.allowed_events)*tokens_added
            if self.tokens >= self.allowed_events:
                self.last_token_timestamp = time_now

        if self.tokens > 0:
            self.tokens = self.tokens - 1
            # Add a delay that ramps from 0 when the bucket is allowed_burst from full, up to
            # period/allowed_events when it is empty
            ramp_token_count = self.allowed_events-self.allowed_burst
            if self.tokens > ramp_token_count:
                delay = 0
            else:
                delay = ((self.period/self.allowed_events)/ramp_token_count) * \
                    (ramp_token_count-self.tokens)

            self._is_saturated = False
        else:
            delay = (self.period/self.allowed_events) - (time_now-self.last_token_timestamp)
            self.last_token_timestamp = time_now+delay
            # The delay is set to account for the next token that would otherwise be added,
            # without relying on the returned delay _actually_ occurring exactly.
            self._is_saturated = True

        return delay

    def is_saturated(self):
        """
        Returns True if the rate limiter delay is at its maximum value.
        """
        return self._is_saturated
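
# Usage sketch, mirroring the Control update loop above: allow 10 events per
# 10 minutes with a burst of 3 before delays start ramping up.
#
#   limiter = SmoothTokenBucketLimit(10, 10*60, 3, time.monotonic())
#   delay = limiter.new_event(time.monotonic())
#   time.sleep(delay)  # or wait on an event, as the update loop does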

@@ -0,0 +1,363 @@
"""
|
||||||
|
Core shepherd module, tying together main service functionality. Provides main CLI.
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
from pathlib import Path
|
||||||
|
from copy import deepcopy
|
||||||
|
from datetime import datetime
|
||||||
|
import logging
|
||||||
|
import os
|
||||||
|
|
||||||
|
from configspec import *
|
||||||
|
|
||||||
|
from . import plugin
|
||||||
|
from . import control
|
||||||
|
from . import tasks
|
||||||
|
|
||||||
|
|
||||||
|
log = logging.getLogger("shepherd.agent")
|
||||||
|
|
||||||
|
|
||||||
|
core_interface = plugin.PluginInterface()
|
||||||
|
|
||||||
|
confspec = ConfigSpecification()
|
||||||
|
# Relative pathnames here are all relative to "root_dir". `root_dir` itself is relative to
|
||||||
|
# the directory the default config is loaded from
|
||||||
|
confspec.add_specs({
|
||||||
|
"name": StringSpec(helptext="Identifying name for this device"),
|
||||||
|
})
|
||||||
|
|
||||||
|
confspec.add_specs(optional=True, spec_dict={
|
||||||
|
"root_dir":
|
||||||
|
(StringSpec(helptext="Operating directory for shepherd to place working files."
|
||||||
|
" Relative to the directory containing the default config file."),
|
||||||
|
"./"),
|
||||||
|
"custom_config_path":
|
||||||
|
StringSpec(helptext="Path to custom config layer TOML file."),
|
||||||
|
"compiled_config_path":
|
||||||
|
(StringSpec(helptext="Path to custom file Shepherd will generate to show compiled"
|
||||||
|
" config that was used and any errors in validation."),
|
||||||
|
"compiled-config.toml"),
|
||||||
|
"plugin_dir":
|
||||||
|
(StringSpec(helptext="Optional directory for Shepherd to look for plugins in."),
|
||||||
|
"./shepherd-plugins")
|
||||||
|
})
|
||||||
|
|
||||||
|
|
||||||
|
core_interface.register_confspec(confspec)
|
||||||
|
|
||||||
|
# Allows plugins to add delay for system time to stabilise
|
||||||
|
core_interface.register_hook("wait_for_stable_time")
|
||||||
|
|
||||||
|
# Allow other modules to add to the core interface (confspec, hooks, interface functions)
|
||||||
|
# Having modules modify a confspec after it's registered here is a bit of a hack.
|
||||||
|
tasks.register_on(core_interface)
|
||||||
|
control.register_on(core_interface)
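
# A minimal default config satisfying the core spec above might look like this
# (illustrative values; the optional fields are shown with their defaults):
#
#   [shepherd]
#   name = "my-device"
#   root_dir = "./"
#   plugin_dir = "./shepherd-plugins"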


@plugin.plugin_class
class Agent():
    """
    Holds the main state required to run Shepherd Agent.
    """

    def __init__(self, default_config_path, use_custom_config=True, control_enabled=False,
                 new_device_mode=False):
        """
        Load in the Shepherd Agent config and associated plugins.
        Args:
            default_config_path: The path to the default config file
            use_custom_config: Set False to disable the local custom config layer
            control_enabled: Set False to disable Shepherd Control remote management
                (including any cached Control config layer)
            new_device_mode: Set True to clear out any cached state and trigger new generation
                of ID, as if it were being run on a fresh system.
        """
        # Make sure the plugin system uses this instance rather than making its own
        core_interface._plugin_obj = self

        # The config defined by the device (everything before the Control layer)
        self.local_config = None
        # The config actually being used
        self.applied_config = None
        # Split the applied_config up into core and plugins
        self.core_config = None

        self.interface_functions = None
        self.plugin_interfaces = None

        self.restart_args = [default_config_path,
                             use_custom_config, control_enabled, new_device_mode]
        self.control_enabled = control_enabled

        # Compile the config layers
        confman = ConfigManager()
        # Pre-seed confman with core confspec to bootstrap 'plugin_dir'.
        # The plugin load system will get it from the 'shepherd' plugin interface later, but we
        # need the 'plugin_dir' before that.
        confman.add_confspec("shepherd", core_interface.confspec)

        compile_local_config(confman, default_config_path, use_custom_config)
        self.local_config = deepcopy(confman.get_config_bundles())

        local_core_conf = confman.get_config_bundle('shepherd')

        # Check for new device mode
        if new_device_mode or check_new_device_file(local_core_conf["custom_config_path"]):
            log.info("'new device' mode enabled, clearing old state...")
            control.generate_device_identity(local_core_conf["root_dir"])
            control.clear_cached_config(local_core_conf["root_dir"])

        if local_core_conf["control"] is None:
            self.control_enabled = False
            log.warning("Shepherd control config section not present. Will not attempt to"
                        " connect to Shepherd Control server.")

        if self.control_enabled:
            compile_remote_config(confman)
        else:
            log.info("Shepherd Control config layer disabled")

        self.applied_config = confman.get_config_bundles()
        self.core_config = confman.get_config_bundle('shepherd')

        loaded_plugin_names = list(self.applied_config.keys())
        loaded_plugin_names.remove('shepherd')
        if len(loaded_plugin_names) == 0:
            loaded_plugin_names.append("--none--")
        log.info(F"Loaded plugins: {', '.join(loaded_plugin_names)}")

        log.debug("Compiled config: %s", confman.root_config)
        if self.core_config["compiled_config_path"]:
            message = F"Compiled Shepherd config at {datetime.now()}"
            confman.dump_to_file(self.core_config["compiled_config_path"], message=message)
            log.info(F"Saved compiled config to {self.core_config['compiled_config_path']}")

    @plugin.plugin_function
    def root_dir(self):
        return self.core_config["root_dir"]

    @plugin.plugin_function
    def device_name(self):
        return self.core_config["name"]

    def restart(self):
        pass

    def start(self):
        # We don't worry about the plugin dir here, or 'shepherd' being included, as they
        # should already all be loaded and cached.
        self.plugin_interfaces = plugin.init_plugins(self.applied_config)
        # After this point, plugins may already have their own threads running if they created
        # them during init
        self.interface_functions = core_interface.plugins

        cmd_runner = control.CommandRunner(self.interface_functions)
        core_update_state = control.CoreUpdateState(cmd_runner.cmd_reader,
                                                    cmd_runner.cmd_result_writer)
        core_update_state.set_static_state(
            self.local_config, self.applied_config, core_interface.confspec)

        plugin_update_states = {name: iface._update_state
                                for name, iface in self.plugin_interfaces.items()}

        if self.control_enabled:
            control.start_control(self.core_config["control"], self.root_dir(),
                                  core_update_state, plugin_update_states)

        # Need somewhere to eventually pass in the hooks Tasks will need for the lowpower
        # stuff, probably just another init_plugins arg.

        # TODO Collect plugin tasks

        task_session = tasks.init_tasks(self.core_config['session'], self.root_dir(),
                                        [], self.applied_config, self.interface_functions)

        # TODO Any time stabilisation or waiting for Control

        tasks.start_tasks(core_interface, task_session)

        # tasks.init_tasks(self.core_config)  # separate tasks.start?

        # plugin.start()  # Run the plugin `.run` hooks in separate threads

        # scheduler.restore_jobs()


def check_new_device_file(custom_config_path):
    if not custom_config_path:
        return False

    trigger_path = Path(Path(custom_config_path).parent, 'shepherd.new')
    if trigger_path.exists():
        trigger_path.unlink()
        log.info("'shepherd.new' file detected, removing file and"
                 " triggering 'new device' mode")
        return True

    return False
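
# In other words, dropping an empty file named 'shepherd.new' next to the custom
# config file forces a one-shot 'new device' reset on the next startup, e.g.:
#
#   touch /path/to/config-dir/shepherd.new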


def compile_local_config(confman, default_config_path, use_custom_config):
    """
    Load the default config and optionally try to overlay the custom config layer.
    As part of this, load the required plugins into cache (required to validate their config).
    """

    # ====Default Local Config Layer====
    # This must validate to continue.
    default_config_path = Path(default_config_path).expanduser().resolve()
    try:
        load_config_layer_and_plugins(confman, default_config_path)
        log.info(F"Loaded default config layer from {default_config_path}")
    except Exception as e:
        if isinstance(e, InvalidConfigError):
            log.error(F"Failed to load default config from {default_config_path}."
                      F" {chr(10).join(e.args)}")
        else:
            log.error(F"Failed to load default config from {default_config_path}",
                      exc_info=True)
        raise

    # Resolve and freeze local install paths that shouldn't be changed from default config
    core_conf = confman.get_config_bundle("shepherd")
    resolve_core_conf_paths(core_conf, default_config_path.parent)
    confman.freeze_value("shepherd", "root_dir")
    confman.freeze_value("shepherd", "plugin_dir")
    confman.freeze_value("shepherd", "custom_config_path")
    confman.freeze_value("shepherd", "compiled_config_path")

    # Pull out custom config path and save current good config
    custom_config_path = core_conf["custom_config_path"]
    confman.save_fallback()

    if not core_conf["plugin_dir"]:
        log.warning("Custom plugin path is empty, won't load custom plugins")

    # ====Custom Local Config Layer====
    # If this fails, fall back to the default config
    if not use_custom_config:
        log.info("Custom config layer disabled")
        return
    if not custom_config_path:
        log.warning("Custom config path is empty, skipping custom config layer")
        return

    try:
        load_config_layer_and_plugins(confman, custom_config_path)
        log.info(F"Loaded custom config layer from {custom_config_path}")
    except Exception as e:
        if isinstance(e, InvalidConfigError):
            log.error(F"Failed to load custom config layer from {custom_config_path}."
                      F" {e.args[0]}")
        else:
            log.error(F"Failed to load custom config layer from {custom_config_path}.",
                      exc_info=True)
        log.warning("Falling back to default config.")
        confman.fallback()


def compile_remote_config(confman):
    """
    Attempt to load and apply the Shepherd Control config layer (cached from prior
    communication, as Control hasn't actually started up yet). Falls back to the previous
    config if it fails.
    As part of this, load the required plugins into cache (required to validate their config).
    """
    # ====Control Remote Config Layer====

    # Freeze Shepherd Control related config.
    confman.freeze_value("shepherd", "control", "server")
    confman.freeze_value("shepherd", "control", "intro_key")

    # Save current good local config
    confman.save_fallback()

    core_conf = confman.get_config_bundle("shepherd")
    try:
        control_config = control.get_cached_config(core_conf["root_dir"])
        try:
            load_config_layer_and_plugins(confman, control_config)
            log.info("Loaded cached Shepherd Control config layer")
        except Exception as e:
            if isinstance(e, InvalidConfigError):
                log.error("Failed to load cached Shepherd Control config layer."
                          F" {e.args[0]}")
            else:
                log.error("Failed to load cached Shepherd Control config layer.",
                          exc_info=True)
            log.warning("Falling back to local config.")
            confman.fallback()
    except Exception:
        log.warning("No cached Shepherd Control config layer available.")


def resolve_core_conf_paths(core_conf, relative_dir):
    """
    Set the cwd to ``root_dir`` and resolve the other core config paths relative to that.
    ``root_dir`` itself will resolve relative to ``relative_dir``, intended to be the config
    file directory.
    Also expands out any "~" user characters. If paths are empty, leaves them as is, rather
    than the default pathlib behaviour of resolving to the current directory.
    """
    os.chdir(relative_dir)
    core_conf["root_dir"] = str(Path(core_conf["root_dir"]).expanduser().resolve())
    try:
        os.chdir(core_conf["root_dir"])
    except FileNotFoundError:
        raise FileNotFoundError(F"Shepherd root operating directory '{core_conf['root_dir']}'"
                                F" does not exist")

    if core_conf["plugin_dir"]:
        core_conf["plugin_dir"] = str(Path(core_conf["plugin_dir"]).expanduser().resolve())
    if core_conf["custom_config_path"]:
        core_conf["custom_config_path"] = str(
            Path(core_conf["custom_config_path"]).expanduser().resolve())
    if core_conf["compiled_config_path"]:
        core_conf["compiled_config_path"] = str(
            Path(core_conf["compiled_config_path"]).expanduser().resolve())


def load_config_layer_and_plugins(confman: ConfigManager, config_source):
    """
    Load a config layer, find the necessary plugin interfaces, then validate it.
    If this succeeds, the returned dict of plugin interfaces will directly match
    the bundle names in the config manager.
    """
    # Load in config layer
    confman.load_overlay(config_source)

    # Get the core config so we can find the plugin directory
    core_config = confman.get_config_bundle("shepherd")
    plugin_dir = core_config["plugin_dir"]

    # List the bundle names to get the plugins we need to load
    plugin_names = confman.get_bundle_names()

    # Load plugins to get their config specifications
    plugin_interfaces = {name: plugin.load_plugin(name, plugin_dir) for name in plugin_names}
    for plugin_name, plugin_interface in plugin_interfaces.items():
        confman.add_confspec(plugin_name, plugin_interface.confspec)

    # Validate all plugin configs
    confman.validate_bundles()

    return plugin_interfaces


"""
Shim to allow the Agent to restart itself without involving the actual CLI
"""
if __name__ == '__main__':
    import argparse

    def _str_to_bool(value):
        # argparse's type=bool treats any non-empty string as True, so parse explicitly
        return value.lower() in ('1', 'true', 'yes')

    parser = argparse.ArgumentParser(description="Shepherd core restart shim. For general use,"
                                     " use the main Shepherd CLI instead.")

    parser.add_argument('default_config_path', type=str)
    parser.add_argument('use_custom_config', type=_str_to_bool)
    parser.add_argument('control_enabled', type=_str_to_bool)
    parser.add_argument('new_device_mode', type=_str_to_bool)

    args = parser.parse_args()

    agent = Agent(args.default_config_path, args.use_custom_config,
                  args.control_enabled, args.new_device_mode)
    agent.start()

@@ -0,0 +1,917 @@
import importlib
from pathlib import Path
import inspect
import logging
import sys
import pkgutil
from collections import namedtuple
from collections.abc import Sequence
from functools import partial
from types import MappingProxyType

import pkg_resources
from configspec import ConfigSpecification
from configspec.specification import _ValueSpecification
import configspec
import preserve

from .util import NamespaceProxy
from . import control
from . import tasks
from .. import base_plugins

# Note that while module attributes intended for external use are mixed in here, all the
# external ones are pulled into the root package scope and are intended to be accessed
# that way.


log = logging.getLogger(__name__)

# Cache of loaded plugin interfaces so far.
_loaded_plugins = {}


def unload_plugins():
    """
    Clear the list of loaded plugins. If the same module is later loaded as a plugin, it will
    be reloaded.
    """
    for plugin_name in _loaded_plugins.copy().keys():
        unload_plugin(plugin_name)


def unload_plugin(plugin_name):
    """
    Remove the named plugin from the list of loaded plugins. If the same module is later
    loaded as a plugin, it will be reloaded. Returns False if the plugin was not already
    loaded.

    Unloading plugins _should not be relied upon_ to completely reset their state. It is
    intended primarily for use in testing.
    Critically, loading a plugin again after unloading it will cause `importlib.reload()` to
    be called on the primary module or package, _but not its own submodules or other
    imports_. There is no easy solution to this problem, which is why Shepherd restarts the
    whole interpreter process rather than reloading in place.
    """
    if plugin_name in _loaded_plugins:
        del _loaded_plugins[plugin_name]
        return True

    return False


class UnboundMethod():
    """
    Simple wrapper to mark that this is a reference to a method that hasn't been bound to an
    instance yet (or had a decorator like ``staticmethod`` or ``classmethod`` unwrapped).
    Sets its signature to the result of binding to an anonymous object.
    """

    def __init__(self, func):
        self._func = func
        self._bound_func = None
        sigobj = self._func.__get__(object())
        self.__signature__ = inspect.signature(sigobj)
        self.__doc__ = inspect.getdoc(self._func)
        self.__name__ = sigobj.__name__

    def bind(self, obj):
        """
        Bind the wrapped method to an object, and return the result. Once bound, calling
        the UnboundMethod will actually call the bound result, and the ``func`` property will
        return it.
        """
        self._bound_func = self._func.__get__(obj)
        return self._bound_func

    @property
    def func(self):
        if self._bound_func is None:
            raise Exception("Cannot get func from UnboundMethod until it has been bound.")
        return self._bound_func

    def __call__(self, *args, **kwargs):
        if self._bound_func is None:
            raise Exception("Cannot call UnboundMethod until it has been bound.")
        return self._bound_func(*args, **kwargs)


class PluginLoadError(Exception):
    pass


def is_instance_check(classtype):
    def instance_checker(obj, classtype=classtype):
        return isinstance(obj, classtype)
    return instance_checker


ClassMarker = namedtuple("ClassMarker", [])


def plugin_class(cls):
    """
    Class decorator, used to indicate that a class is to be used as the Plugin Class for this
    plugin. Note that only one plugin class is allowed per plugin. Only works when placed in
    the root of the plugin module or package (same as the interface).
    Use on the class definition:

        @plugin_class
        class MyPluginClass:

    This is equivalent to registering the class directly with the plugin interface later:

        interface.register_plugin_class(MyPluginClass)
    """
    if not inspect.isclass(cls):
        raise PluginLoadError("@plugin_class can only be used to decorate a class")
    cls._shepherd_load_marker = ClassMarker()
    return cls


FunctionMarker = namedtuple("FunctionMarker", ["name"])


def plugin_function(func=None, *, name=None):
    """
    Method decorator to register a method as a plugin interface function.
    If `name` is not supplied, the name of the decorated function is used.

    Either used directly:

        @plugin_function
        def my_method(self):

    or with optional keyword arguments:

        @plugin_function(name="someOtherName")
        def my_badly_named_method(self):

    Can either be used on functions in the root level of the plugin module, or
    on methods within the registered Plugin Class (either with @plugin_class or
    interface.register_plugin_class() )
    """
    if func is None:
        return partial(plugin_function, name=name)

    func._shepherd_load_marker = FunctionMarker(name)
    return func


HookMarker = namedtuple("HookMarker", ["name", "signature"])


def plugin_hook(func=None, *, name=None):
    """
    Method decorator to register a hook for the plugin. Will use the decorated function
    signature for the hook, and replace the decorated function with the new hook on plugin
    init, so it can be called directly. If `name` is not supplied, the name of the decorated
    function is used.

    Like `plugin_function`, it can either be used directly:

        @plugin_hook
        def my_method(self):

    or with the optional keyword argument:

        @plugin_hook(name="someOtherName")
        def my_badly_named_method(self):

    As the decorated function is only being used as a convenient way to declare the hook
    signature and to call the hook later, the usual Python `self` method binding system isn't
    appropriate. Methods in the plugin class _can_ be registered as hooks (as can module root
    level functions), but the signature will be used directly (don't add the `self` argument).
    For technical clarity, methods in the class should be decorated with `@staticmethod` below
    the `@plugin_hook` decorator.
    """
    if func is None:
        return partial(plugin_hook, name=name)

    if isinstance(func, staticmethod):
        # Pull the underlying function out of a staticmethod. It's static, so the bound
        # object is irrelevant
        func = func.__get__(object())

    if not name:
        name = func.__name__

    func._shepherd_load_marker = HookMarker(name, inspect.signature(func))
    return func


AttachmentMarker = namedtuple("AttachmentMarker", ["hook_identifier"])


def plugin_attachment(hook_identifier):
    """
    Function decorator to register a function or method as an attachment to a plugin hook.

    The `hook_identifier` is a string indicating what hook to attach to. It can either refer
    to a hook in _another_ plugin with the form "my_plugin.my_hook", or to a local hook in the
    same plugin with just the hook name: "my_hook".

    Can either be used on functions in the root level of the plugin module, or
    on methods within the registered Plugin Class (either with @plugin_class or
    interface.register_plugin_class() )
    """
    def attachment_decorator(func):
        func._shepherd_load_marker = AttachmentMarker(hook_identifier)
        return func
    return attachment_decorator


InitMarker = namedtuple("InitMarker", [])


def plugin_init(func=None):
    """
    Method decorator to register a method as a plugin init function, similar
    to passing it to `interface.register_init()`.

    Can either be used on functions in the root level of the plugin module, or
    on methods within the registered Plugin Class (either with @plugin_class or
    interface.register_plugin_class() )
    """
    if func is None:
        return plugin_init

    func._shepherd_load_marker = InitMarker()
    return func


RunMarker = namedtuple("RunMarker", [])


def plugin_run(func=None):
    """
    Method decorator to register a method as a plugin run function, similar
    to passing it to `interface.register_run()`.

    Can either be used on functions in the root level of the plugin module, or
    on methods within the registered Plugin Class (either with @plugin_class or
    interface.register_plugin_class() )
    """
    if func is None:
        return plugin_run

    func._shepherd_load_marker = RunMarker()
    return func
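
# A minimal sketch of how these decorators combine in a plugin module (names are
# illustrative, and assume the root shepherd package re-exports the decorators as
# noted at the top of this module):
#
#   from shepherd import plugin_class, plugin_function, plugin_hook
#
#   @plugin_class
#   class Camera:
#       def __init__(self):
#           self.count = 0
#
#       @plugin_function
#       def capture(self):
#           self.count += 1
#           return self.count
#
#       @plugin_hook
#       @staticmethod
#       def image_captured(filename):
#           pass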


@preserve.preservable(exclude_attrs=('function',))
class InterfaceCall():
    def __init__(self, plugin_name, function_name, kwargs=None):
        """
        Record an interface function call for future use.
        """
        self.plugin_name = plugin_name
        self.function_name = function_name
        self.function = None
        self.kwargs = kwargs

        if self.kwargs is None:
            self.kwargs = {}

    def __restore_init__(self):
        self.function = None

    def resolve(self, interface_functions):
        """
        Resolve the InterfaceFunction this call refers to. Requires a dict of plugin
        functions, where the keys are plugin names and the values are NamedTuples containing
        the interface functions for that plugin.
        """
        if self.plugin_name not in interface_functions:
            raise ValueError(F"Plugin '{self.plugin_name}' could not be found to resolve"
                             " function")

        if not hasattr(interface_functions[self.plugin_name], self.function_name):
            raise ValueError(F"Interface function '{self.function_name}' could not be found"
                             F" in plugin '{self.plugin_name}'")

        self.function = getattr(interface_functions[self.plugin_name], self.function_name)

    def call(self):
        """
        Make the interface function call this object refers to (using the stored kwargs).
        Must make sure `resolve()` is called first to actually find the function.
        """
        return self.function(**self.kwargs)

    def __repr__(self):
        return F"{self.plugin_name}.{self.function_name}({self.kwargs})"


class InterfaceFunction():
    def __init__(self, func, name=None, remote_command=False):
        """
        Wrapper around a callable to define a plugin interface function.
        """
        self.func = func
        self.remote = remote_command
        self.spec = None

        if not callable(self.func):
            raise TypeError("InterfaceFunction can only be created around a callable or"
                            " method.")

        sig = inspect.signature(self.func)

        if self.remote:
            self.spec = ConfigSpecification()
            for param in sig.parameters.values():
                if param.kind not in (inspect.Parameter.POSITIONAL_OR_KEYWORD,
                                      inspect.Parameter.KEYWORD_ONLY):
                    raise ValueError("Interface functions must be callable with keyword"
                                     " arguments")

                arg_spec = param.annotation
                if arg_spec in ("str", str):
                    arg_spec = configspec.StringSpec()
                if arg_spec in ("int", int):
                    arg_spec = configspec.IntSpec()

                if not isinstance(arg_spec, _ValueSpecification):
                    raise ValueError("Function annotations for a Shepherd Interface function"
                                     " must be a type of ConfigSpecification, or one of the"
                                     " valid type shortcuts")

                self.spec.add_spec(param.name, arg_spec)

            if sig.return_annotation is not inspect.Signature.empty:
                ret_spec = sig.return_annotation
                if ret_spec in ("str", str):
                    ret_spec = configspec.StringSpec()
                if ret_spec in ("int", int):
                    ret_spec = configspec.IntSpec()

                if not isinstance(ret_spec, _ValueSpecification):
                    raise ValueError("Function annotations for a Shepherd Interface function"
                                     " must be a type of ConfigSpecification, or one of the"
                                     " valid type shortcuts")

                self.spec.add_spec("return", ret_spec)

            func_doc = inspect.getdoc(self.func)
            if func_doc:
                self.spec.helptext = func_doc

        if name:
            self.name = name
        else:
            self.name = self.func.__name__

        log.debug(F"Loaded interface function {self.name} with parameters: {sig.parameters}")

    def get_spec(self):
        """
        Get the function spec used for Shepherd Control to know how to call it as a command.
        Will return None unless `remote_command` was marked True on creation.
        Returns a ConfigSpecification. If a return value spec is present, it uses the
        reserved spec name "return". Any docstring on the function is placed in the root
        ConfigSpecification helptext.
        """
        return self.spec

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)
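
# Sketch of the annotation-driven spec (hypothetical function): marking this as
# a remote command builds a ConfigSpecification with an IntSpec for "seconds"
# and a StringSpec under the reserved "return" name.
#
#   def set_exposure(seconds: int) -> str:
#       """Set the camera exposure and report what was applied."""
#
#   ifunc = InterfaceFunction(set_exposure, remote_command=True)
#   ifunc.get_spec()  # ConfigSpecification: {"seconds": IntSpec, "return": StringSpec}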


class HookAttachment():
    """
    Simple record to store the details of a deferred hook attachment. Only a class to allow
    the func attribute to be changed.
    """

    def __init__(self, func, plugin_name, hook_name):
        self.func = func
        self.plugin_name = plugin_name
        self.hook_name = hook_name


class PluginHook():
    """
    A hook to call a set of attachments provided by plugins. Calling the hook directly
    will call all attached functions, returning the results in a dict where the keys are
    the name of the plugin the attachment came from (if there are no attachments, the
    result will be an empty dict).
    """

    def __init__(self, name, signature):
        self.name = name
        self.signature = signature
        # dict of callables, plugin names as keys
        self._attached_functions = {}
        self.attachments = MappingProxyType(self._attached_functions)

    def _attach(self, new_func, plugin_name):
        if not callable(new_func):
            raise TypeError("Hook attachment must be callable")
        if plugin_name in self._attached_functions:
            raise Exception(F"Hook already has attachment from plugin '{plugin_name}'")

        new_sig = inspect.signature(new_func)
        if str(new_sig) != str(self.signature):
            raise Exception(F"Hook attachment signature '{new_sig}' must match the signature "
                            F"'{str(self.signature)}' for target hook {self.name}")
        self._attached_functions[plugin_name] = new_func

    def __call__(self, *args, **kwargs):
        results = {}
        for plugin_name, func in self._attached_functions.items():
            results[plugin_name] = func(*args, **kwargs)
        return results


class PluginInterface():

    def __init__(self):
        self._confspec = None
        self._loaded = False
        self._initialised = False
        self._functions = {}
        self._hooks = {}
        self._attachments = []
        self._tasks = []
        self._plugin_class = None
        self._plugin_obj = None
        self._init_func = None
        self._run_func = None
        self.config = None
        self.plugins = None
        self.hooks = None
        self._plugin_name = "<not yet loaded>"
        self._update_state = control.PluginUpdateState()

    def _load_pluginclass(self, module):
        pass

    def _load_guard(self):
        if self._loaded:
            raise PluginLoadError("Cannot call interface register functions once"
                                  " plugin is loaded")

    def register_confspec(self, confspec):
        self._load_guard()
        if self._confspec is not None:
            raise PluginLoadError("Plugin can only register one ConfigSpecification")
        if not isinstance(confspec, ConfigSpecification):
            raise PluginLoadError("confspec must be an instance of ConfigSpecification")
        self._confspec = confspec

    def register_class(self, cls):
        self._load_guard()
        if self._plugin_class is not None:
            raise PluginLoadError("Plugin can only register one plugin class")
        if not inspect.isclass(cls):
            raise PluginLoadError("plugin_class must be a class")
        self._plugin_class = cls

    def register_function(self, func, name=None, remote_command=False):
        """
        Register a function or method as an interface function for the plugin. If name is not
        provided, the name of the callable will be used.
        """
        self._load_guard()

        ifunc = InterfaceFunction(func, name, remote_command)

        if ifunc.name in self._functions:
            raise PluginLoadError(F"Interface function with name '{ifunc.name}' already"
                                  " exists")

        self._functions[ifunc.name] = ifunc

    def register_attachment(self, func, hook_identifier):
        """
        Register a function or method as an attachment to a plugin hook.

        The `hook_identifier` is a string indicating what hook to attach to. It can either
        refer to a hook in _another_ plugin with the form "my_plugin.my_hook", or to a local
        hook in the same plugin with just the hook name: "my_hook".
        """
        self._load_guard()

        if not callable(func):
            raise TypeError("Hook attachment can only be created around a callable or"
                            " method.")

        # Left-pad the split identifier so a bare "my_hook" unpacks with plugin_name=None,
        # while anything with more than one '.' leaves a value in `extra`.
        extra, plugin_name, hook_name = ([None, None]+hook_identifier.split('.'))[-3:]

        if extra is not None:
            raise ValueError("Hook identifier can contain at most 2 parts around a '.'"
                             " character")

        self._attachments.append(HookAttachment(func, plugin_name, hook_name))

    def register_hook(self, name, signature=None):
        """
        Register a plugin hook for other plugins to attach to. Can only be registered during
        plugin load. The hook will be accessible during init from
        `PluginInterface.hooks.<hook_name>`. Optionally, the same hook object is also
        returned from `register_hook` to allow it to be stored and called elsewhere.

        In most cases, the decorator form (`@plugin_hook`) is more convenient to use, as it
        will directly replace the decorated function or method (usually with just `pass` as
        the content) with the hook - allowing it to be called as normal.

        If the hook requires arguments, either use the decorator form or pass in the
        `signature` argument. For basic keyword args, this can just be a sequence of string
        argument names, for example:

            register_hook("my_hook", ["arg_a", "arg_b"])

        If a more complex signature is required, an `inspect.Signature` object can be passed
        in. This is most easily expressed inline with a lambda (used only to borrow the
        standard Python function argument definitions - the lambda doesn't get called).
        For example:

            register_hook("my_hook", inspect.signature(lambda arg_a, arg_b_with_default=5: None))

        Args:
            name: A string (and valid Python identifier) naming the hook. Must be unique
                within the plugin.
            signature: If None, registers the hook as requiring no arguments. Can either be
                a list of argument names, or an `inspect.Signature` object.
        """
        self._load_guard()
        if not isinstance(name, str):
            raise PluginLoadError("Hook name must be a string")
        if name in self._hooks:
            raise PluginLoadError(F"Hook with name '{name}' already exists")

        if signature is None:
            signature = inspect.Signature([])
        elif isinstance(signature, Sequence):
            params = []
            for param_name in signature:
                params.append(inspect.Parameter(
                    param_name, inspect.Parameter.POSITIONAL_OR_KEYWORD))
            signature = inspect.Signature(params)

        if not isinstance(signature, inspect.Signature):
            raise PluginLoadError("Hook signature must either be a sequence of parameter"
                                  " names or an instance of inspect.Signature")

        self._hooks[name] = PluginHook(name, signature)
        return self._hooks[name]

    def register_init(self, func):
        """
        Register a function or method as the init function for the plugin. This will be called
        when the plugin is initialised, after load. This is where the plugin can do any setup
        required before hooks and interface functions may be called by other plugins. Plugin
        config is available during this call.

        Plugin init is also where any tasks may be added.

        The plugin init function cannot take any arguments.

        The plugin init function is analogous to the `__init__` method when a plugin class is
        registered. If both an init function _and_ a plugin class are registered, both the init
        function and the `__init__` method will be called.
        """
        self._load_guard()

        if self._init_func is not None:
            raise PluginLoadError("Plugin can only register one init function")
        if not callable(func):
            raise TypeError("Plugin init function must be a callable.")
        if len(inspect.signature(func).parameters) > 0:
            raise TypeError("Plugin init function cannot take any arguments")
        self._init_func = func

    def register_run(self, func):
        """
        Register a function or method to be called in a separate thread once all plugins are
        initialised. This function is intended to be used for any continuous loop needed by the
        plugin, to avoid blocking other plugins or Shepherd itself. When the "run" function is
        called, all other plugin hooks and interface functions are available to be called.

        The plugin "run" function cannot take any arguments.

        If trying to register a method on a plugin class, it is better to use the decorator form
        "@plugin_run", as this will then bind to the actual instance of the class once it is
        instantiated.
        """
        self._load_guard()

        if self._run_func is not None:
            raise PluginLoadError("Plugin can only register one run function")
        if not callable(func):
            raise TypeError("Plugin run function must be a callable.")
        if len(inspect.signature(func).parameters) > 0:
            raise TypeError("Plugin run function cannot take any arguments")
        self._run_func = func
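
    # Illustrative sketch (not part of the original module): registering a run loop on a
    # plugin class via the decorator form mentioned above, so it binds to the instance
    # once the class is instantiated during init.
    #
    #     class MyPlugin:
    #         @plugin_run
    #         def run(self):
    #             while True:
    #                 ...  # continuous work; all plugins are initialised by this point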

    def add_task(self, trigger, interface_function, kwargs=None):
        """
        Add a task when creating a new session. Can only be called during init (object or hook).
        Will be ignored if Shepherd is resuming an old session.

        Args:
            trigger: The trigger object (either a CronTrigger or the result of a
                ConfigTriggerSpec)
            interface_function: The interface function on this plugin to call when triggered. Can
                either be a callable that was registered as a plugin function or a string matching
                the function name.
            kwargs: Any keyword arguments to be passed to the function when the task is triggered.
                Must be Preservable.
        """
        if not self._loaded:
            raise Exception("Cannot add plugin tasks until plugin is loaded")
        if self._initialised:
            raise Exception("Cannot add plugin tasks after plugin has been initialised")

        if isinstance(interface_function, str):
            if interface_function not in self._functions:
                raise Exception("Plugin does not have interface function"
                                F" named {interface_function}")
            task_call = InterfaceCall(self._plugin_name, interface_function, kwargs)
        else:
            # Find the callable in our interface functions
            func_name = None
            for name, ifunc in self._functions.items():
                if ifunc.func == interface_function:
                    func_name = name
                    break

            if func_name is None:
                raise Exception(F"Function {interface_function} has not been registered"
                                " with the plugin")
            task_call = InterfaceCall(self._plugin_name, func_name, kwargs)

        self._tasks.append(tasks.Task(trigger, task_call))
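
    # Illustrative sketch (not part of the original module): adding a task during plugin
    # init, assuming `iface` is this plugin's PluginInterface and "capture" was registered
    # as one of its interface functions.
    #
    #     iface.add_task(tasks.CronTrigger(hour=3), "capture", kwargs={"quality": 90})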

    def set_status(self, status):
        """
        Set the plugin status, to be sent to Shepherd Control if configured.

        Args:
            status: A flat dictionary of fields with string keys.
        """
        self._update_state.set_status(status)

    @property
    def confspec(self):
        return self._confspec


def discover_base_plugins():
    """
    Returns a list of base plugin names available to load. These are plugins included with
    shepherd-agent, in 'base_plugins'.
    """
    return [name for _, name, _ in pkgutil.iter_modules(base_plugins.__path__)]


def discover_custom_plugins(plugin_dir=None):
    """
    Returns a list of custom plugin names available to load. This includes all modules or
    packages within the supplied custom plugin directory.
    """
    if plugin_dir:
        if Path(plugin_dir).is_dir():
            return [name for _, name, _ in pkgutil.iter_modules([plugin_dir])]
        else:
            log.warning(F"Custom plugin directory {plugin_dir} does not exist")
    return []


def discover_installed_plugins():
    """
    Returns a list of installed plugin names available to load. These are packages that have used
    the 'shepherd.plugins' entrypoint in their setup.py
    """
    return [entrypoint.name for entrypoint in pkg_resources.iter_entry_points('shepherd.plugins')]
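
# Illustrative sketch (not part of the original module): how an installed package would
# expose itself through the 'shepherd.plugins' entry point group. Package and module
# names here are hypothetical.
#
#     setup(
#         name="shepherd-myplugin",
#         entry_points={"shepherd.plugins": ["myplugin = shepherd_myplugin"]},
#     )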


def load_plugin(plugin_name, plugin_dir=None):
    """
    Finds a Shepherd plugin, loads it, and returns the resulting PluginInterface object.

    Will check 3 sources, in order:
    1. Built-in plugin modules/subpackages within `shepherd.base_plugins`. Plugin
       module/package names match the plugin name.
    2. Modules/packages within the supplied `plugin_dir` path. Plugin module/package
       names match the plugin name.
    3. Any installed packages supplying the `shepherd.plugins` entrypoint.

    Once a module is found, loading it involves scanning the root of the module for a
    PluginInterface instance. If a confspec isn't registered, a ConfigSpecification instance
    at the module root will also be implicitly registered to the interface.

    Lastly, any plugin decorators are scanned for and registered (including a plugin class if
    present).

    Args:
        plugin_name: Name used to try and locate the plugin
        plugin_dir: Optional directory path to be used for custom plugins

    Returns: The PluginInterface for the loaded plugin
    """

    if plugin_name in _loaded_plugins:
        return _loaded_plugins[plugin_name]

    # Each of the 3 plugin sources has a different import mechanism. Discovery is broken out to
    # allow them to be listed. Using a try/except block wouldn't be able to tell the difference
    # between a plugin not being found and its imports not loading correctly.
    module = None
    existing_modules = sys.modules.copy().values()

    if plugin_name == 'shepherd':
        module = importlib.import_module("..core", __name__)
        log.info("Loading core plugin interface")

    elif plugin_name in discover_base_plugins():
        module = importlib.import_module(base_plugins.__name__+'.'+plugin_name)

        if module in existing_modules:
            log.info(F"Module for {plugin_name} was already imported, reloading")
            importlib.reload(module)
        log.info(F"Loading base plugin {plugin_name}")

    elif plugin_name in discover_custom_plugins(plugin_dir):
        saved_syspath = sys.path
        try:
            sys.path = [str(plugin_dir)]
            module = importlib.import_module(plugin_name)

            if module in existing_modules:
                log.info(F"Module for {plugin_name} was already imported, reloading")
                importlib.reload(module)
        finally:
            sys.path = saved_syspath
        modulepath = getattr(module, "__path__", [module.__file__])[0]
        log.info(F"Loading custom plugin {plugin_name} from {modulepath}")

    elif plugin_name in discover_installed_plugins():
        # iter_entry_points returns a generator, so pull the first matching entry point
        module = next(pkg_resources.iter_entry_points('shepherd.plugins', plugin_name)).load()

        if module in existing_modules:
            log.info(F"Module for {plugin_name} was already imported, reloading")
            importlib.reload(module)
        log.info(F"Loading installed plugin {plugin_name} from {module.__name__}")

    if not module:
        raise PluginLoadError("Could not find plugin "+plugin_name)

    # Now we have the module, scan it for the two implicit objects we look for - the interface
    # and the confspec

    interface_list = inspect.getmembers(module, is_instance_check(PluginInterface))
    if not interface_list:
        raise PluginLoadError("Imported shepherd plugins must contain an instance"
                              " of PluginInterface")

    if len(interface_list) > 1:
        log.warning(F"Plugin module {module.__name__} has more"
                    F" than one PluginInterface instance.")

    _, interface = interface_list[0]

    load_plugin_interface(plugin_name, interface, module)
    return interface
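
# Illustrative sketch (not part of the original module): loading a custom plugin by name,
# with a hypothetical plugin directory.
#
#     iface = load_plugin("my_plugin", plugin_dir="/etc/shepherd/plugins")
#     iface.confspec  # the plugin's ConfigSpecification, implicit or registered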


def load_plugin_interface(plugin_name, interface, module=None):
    """
    Load the plugin interface provided and add it to the plugin cache. If a module is provided or
    the interface has a plugin class registered to it, scan them for plugin load markers and
    perform the appropriate registrations on the interface.

    Usually called by `load_plugin()`, but allows a PluginInterface to be loaded directly, rather
    than searching for it.
    """

    interface._plugin_name = plugin_name

    # Only looks for an implicit confspec if one isn't registered. Uses a blank one if none are
    # supplied.

    if interface._confspec is None and module is not None:
        confspec_list = inspect.getmembers(module, is_instance_check(ConfigSpecification))
        if confspec_list:
            if len(confspec_list) > 1:
                log.warning(F"Plugin {interface._plugin_name} has more"
                            F" than one root ConfigSpecification instance.")
            interface.register_confspec(confspec_list[0][1])

    if interface._confspec is None:
        interface._confspec = ConfigSpecification()

    interface._update_state.set_confspec(interface.confspec)

    if module is not None:
        # Scan module for load markers left by decorators and pass them over to register methods
        for key, attr in module.__dict__.items():
            if hasattr(attr, "_shepherd_load_marker"):
                if isinstance(attr._shepherd_load_marker, FunctionMarker):
                    interface.register_function(attr, **attr._shepherd_load_marker._asdict())
                elif isinstance(attr._shepherd_load_marker, AttachmentMarker):
                    interface.register_attachment(attr, **attr._shepherd_load_marker._asdict())
                elif isinstance(attr._shepherd_load_marker, HookMarker):
                    # Hooks are a little different in that we replace the attr with the hook
                    newhook = interface.register_hook(**attr._shepherd_load_marker._asdict())
                    setattr(module, key, newhook)
                elif isinstance(attr._shepherd_load_marker, ClassMarker):
                    interface.register_class(attr)
                elif isinstance(attr._shepherd_load_marker, InitMarker):
                    interface.register_init(attr)
                elif isinstance(attr._shepherd_load_marker, RunMarker):
                    interface.register_run(attr)

    if interface._plugin_class is not None:
        # Scan plugin class for marked methods
        for key, attr in interface._plugin_class.__dict__.items():
            if hasattr(attr, "_shepherd_load_marker"):
                if isinstance(attr._shepherd_load_marker, FunctionMarker):
                    # Instance doesn't exist yet, so need to save unbound methods for binding
                    # later
                    interface.register_function(UnboundMethod(attr),
                                                **attr._shepherd_load_marker._asdict())
                elif isinstance(attr._shepherd_load_marker, AttachmentMarker):
                    interface.register_attachment(UnboundMethod(attr),
                                                  **attr._shepherd_load_marker._asdict())
                elif isinstance(attr._shepherd_load_marker, HookMarker):
                    # Hooks are a little different in that we replace the attr with the hook
                    newhook = interface.register_hook(**attr._shepherd_load_marker._asdict())
                    setattr(interface._plugin_class, key, newhook)
                elif isinstance(attr._shepherd_load_marker, InitMarker):
                    interface.register_init(UnboundMethod(attr))
                elif isinstance(attr._shepherd_load_marker, RunMarker):
                    interface.register_run(UnboundMethod(attr))

    # Assemble remote interface function specs

    command_spec = {}
    for function in interface._functions.values():
        if function.remote:
            command_spec[function.name] = function.get_spec()

    interface._update_state.set_commandspec(command_spec)

    interface._loaded = True

    # Add plugin interface to the cache
    _loaded_plugins[plugin_name] = interface
    return interface


def init_plugins(plugin_configs, plugin_dir=None):
    """
    Load and initialise the plugins named as keys in plugin_configs.
    Returns a dict of initialised plugin interfaces, and a dict of interface function namedtuples
    (one for each plugin)
    """

    # Pick out plugins to load (should already be loaded in cache)
    plugin_interfaces = {}
    for plugin_name in plugin_configs.keys():
        plugin_interfaces[plugin_name] = load_plugin(plugin_name, plugin_dir)

    interface_functions = {}

    # Run plugin init and init hooks
    for plugin_name, interface in plugin_interfaces.items():
        # Collect interface functions from this plugin
        interface_functions[plugin_name] = NamespaceProxy(interface._functions)

        # Provide config for plugin init
        interface.config = plugin_configs[plugin_name]

        # If it has one, instantiate the plugin object and bind methods to it.
        if interface._plugin_class is not None:
            # Special case: the 'shepherd' plugin already populates `_plugin_obj`
            if interface._plugin_obj is None:
                interface._plugin_obj = interface._plugin_class()

            for ifunc in interface._functions.values():
                if isinstance(ifunc.func, UnboundMethod):
                    ifunc.func = ifunc.func.bind(interface._plugin_obj)

            for attachment in interface._attachments:
                if isinstance(attachment.func, UnboundMethod):
                    attachment.func = attachment.func.bind(interface._plugin_obj)

            if isinstance(interface._init_func, UnboundMethod):
                interface._init_func = interface._init_func.bind(interface._plugin_obj)

            if isinstance(interface._run_func, UnboundMethod):
                interface._run_func = interface._run_func.bind(interface._plugin_obj)

        # Call the plugin init func (we've already done any plugin class instance __init__ above)
        if interface._init_func is not None:
            interface._init_func()

        # Find the hooks attachments are referring to and attach them
        for attachment in interface._attachments:
            hook_plugin_name = attachment.plugin_name
            if hook_plugin_name is None:
                hook_plugin_name = plugin_name
            hook_name = attachment.hook_name
            if hook_plugin_name not in plugin_interfaces:
                raise ValueError(F"{plugin_name} attachment target plugin "
                                 F"'{hook_plugin_name}' does not exist")
            if hook_name not in plugin_interfaces[hook_plugin_name]._hooks:
                raise ValueError(F"{plugin_name} attachment target hook "
                                 F"'{hook_plugin_name}:{hook_name}' does not exist")

            plugin_interfaces[hook_plugin_name]._hooks[hook_name]._attach(attachment.func,
                                                                          plugin_name)

        interface._initialised = True

    # Wait until all plugins have run their init before filling in and giving access
    # to all the interface functions and hooks
    interface_functions_proxy = MappingProxyType(interface_functions)
    for plugin_name, interface in plugin_interfaces.items():
        # Each plugin has a NamespaceProxy of its interface functions for read-only attr access
        interface.plugins = interface_functions_proxy
        interface.hooks = NamespaceProxy(interface._hooks)

    return plugin_interfaces
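
# Illustrative sketch (not part of the original module): loading and initialising two
# plugins, with hypothetical config dicts keyed by plugin name.
#
#     interfaces = init_plugins({"camera": {"quality": 90}, "uploader": {}},
#                               plugin_dir="/etc/shepherd/plugins")
#     interfaces["camera"].hooks  # NamespaceProxy of the camera plugin's hooks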


@@ -0,0 +1,332 @@
"""
Implements both the main task scheduler for Shepherd and the Session system for restoring
state between power cycles.
"""
from abc import ABC, abstractmethod
import json
from pathlib import Path
import logging
from collections import namedtuple
import threading

from datetime import datetime, timedelta
from dateutil import tz
import pytz

from apscheduler.triggers.cron import CronTrigger as APCronTrigger

from configspec import *
import preserve

# from .util import HoldLock

log = logging.getLogger("shepherd.agent.tasks")


def register_on(core_interface):
    """
    Register the session confspec and hooks on the core interface passed in - `start_tasks` later
    assumes that these hooks are present.
    """
    confspec = ConfigSpecification()
    confspec.add_spec("resume_delay", IntSpec(helptext="Initial estimate of the time taken to"
                                              " resume a session, in seconds"), default=180)
    confspec.add_spec("enable_suspend", BoolSpec(helptext="Enables suspension of the agent"
                                                 " session in between tasks"), default=True)
    confspec.add_spec("min_suspend_time", IntSpec(helptext="Smallest wait period before the next"
                                                  " scheduled task that the agent will decide to"
                                                  " suspend, in seconds"), default=300)

    core_interface.confspec.add_spec("session", confspec, optional=True, default={})

    # `resume_time` is a DateTime indicating when the session should resume - it already has the
    # resume delay applied. Hook should return True on success
    core_interface.register_hook("session_suspend", ["resume_time"])


class TaskTrigger(ABC):
    """Abstract trigger class"""
    @abstractmethod
    def next_time(self, base_time):
        """
        Return a time indicating the next trigger time after base_time. Return None if no more
        trigger events. Should be a DateTime object with the `tz.tzutc()` timezone.
        """


@preserve.preservable(exclude_attrs=['ap_trigger'])
class CronTrigger(TaskTrigger):
    """
    Interprets Cron strings using a wrapper around the APScheduler CronTrigger (and so functions
    similarly). Values left as default or supplied as None are set to a wildcard, unless the
    field is a smaller unit than those supplied - where it instead gets set to its minimum (so
    setting `hour` to 3 will set `minute` and `second` to 0).

    The trigger format is matched against the APScheduler CronTrigger fields.
    The timezone used is always the local system timezone.

    Details available at https://apscheduler.readthedocs.io/en/latest/modules/triggers/cron.html
    """

    def __init__(self, month=None, day=None, day_of_week=None, hour=None,
                 minute=None, second=None):
        self.month = month
        self.day = day
        self.day_of_week = day_of_week
        self.hour = hour
        self.minute = minute
        self.second = second
        self.__restore_init__()

    def __restore_init__(self):
        # Default timezone is the one from tzlocal
        self.ap_trigger = APCronTrigger(month=self.month, day=self.day,
                                        day_of_week=self.day_of_week,
                                        hour=self.hour, minute=self.minute,
                                        second=self.second)

    def next_time(self, base_time):
        """
        Return a time indicating the next trigger time after base_time. Return None if no more
        trigger events.
        """
        # Convert base_time to UTC with dateutil, then to pytz which APScheduler requires.
        utc_base_time = base_time.astimezone(tz.tzutc()).astimezone(pytz.utc)
        fire_time = self.ap_trigger.get_next_fire_time(None, utc_base_time)
        # Convert back to UTC, as ap_trigger returns a value with local timezone
        # Use dateutil, as it doesn't add other crap to tzinfo
        return fire_time.astimezone(pytz.utc).astimezone(tz.tzutc())
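
    # Illustrative sketch (not part of the original module): a trigger that fires daily
    # at 03:00 local time, queried from the current time.
    #
    #     trig = CronTrigger(hour=3)
    #     nxt = trig.next_time(datetime.now(tz.tzutc()))  # tz-aware UTC datetime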


CronTriggerSpec = ConfigSpecification()
CronTriggerSpec.add_specs({
    'month': StringSpec(helptext="Month in year, 1-12"),
    'day': StringSpec(helptext="Day of month, 1-31"),
    'day_of_week': StringSpec(helptext="Day of week, 0-6 or mon,tue,wed,thu,fri,sat,sun"),
    'hour': StringSpec(helptext="Hour in day, 0-23"),
    'minute': StringSpec(helptext="Minute in hour, 0-59"),
    'second': StringSpec(helptext="Second in minute, 0-59"),
}, optional=True)
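
# Illustrative sketch (not part of the original module): a config mapping that satisfies
# CronTriggerSpec - every field is an optional string, e.g. fire daily at 03:30.
#
#     {"hour": "3", "minute": "30"}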


# class IntervalTrigger(TaskTrigger):
#     """
#     Triggers every x period starting from when it was first created (carries over lowpower)
#     """
#     pass


# class SingleTrigger(TaskTrigger):
#     """
#     Either pass a whole datetime instance, or a delta like a period that gets added to current.
#     """
#     pass


@preserve.preservable
class Task():
    def __init__(self, trigger, interface_call, use_session=True):
        """
        Define a new task. If `use_session` is true, the task will only be added when a new
        session is created, otherwise it will be restored from the old session. Suspended
        sessions will also be resumed in order to perform tasks where `use_session` is true.

        If `use_session` is false, the task will be added on every init, and will not be saved
        when a session is suspended.
        """
        self.trigger = trigger
        self.interface_call = interface_call
        self.use_session = use_session
        # InterfaceCall already handles the callables and args for us, we just need to preserve
        # them. Trigger is going to be multiple formats, but the most common will be Cron style.


@preserve.preservable
class Session():
    """
    Container class to hold session details
    """

    def __init__(self, config, tasks, resume_delay, resume_time=None):
        """
        Create a session instance.
        `config` is the applied config for the current session, used to detect when it changes
        `tasks` is the list of tasks saved from the old session
        `resume_delay` is the estimated time taken from `resume_time` when deciding to resume the
        session, to compensate for the time taken to resume
        `resume_time` is the intended resume time (slightly before the next scheduled task)
        """
        self.config = config
        self.tasks = tasks
        self.resume_time = resume_time
        self.resume_delay = resume_delay

    @classmethod
    def load(cls, root_dir, applied_config):
        """
        Load a Session instance from the shepherd.session file. Returns None
        if the config has changed or no file is found.
        """
        session_file = Path(root_dir, 'shepherd.session')
        if session_file.exists():
            session = preserve.restore(json.loads(session_file.read_text()))

            if session.config == applied_config:
                return session

            log.info("Config has changed since last session")
        else:
            log.info("No existing session file found")
        return None

    def save(self, root_dir):
        """
        Peel out non-session tasks, then save this Session instance to the
        shepherd.session file.
        """
        all_tasks = self.tasks
        self.tasks = [task for task in all_tasks if task.use_session]

        session_file = Path(root_dir, 'shepherd.session')
        session_file.write_text(json.dumps(preserve.preserve(self)))

        self.tasks = all_tasks


def init_tasks(config, root_dir, init_tasklist, applied_config, interface_functions):
    """
    Generate the list of tasks to be run. Attempt to restore the existing session, if it is
    present. Use the supplied list of init tasks, ignoring tasks marked with 'use_session'
    unless we're starting a new session.

    Resolves task interface calls, and returns the task session.
    """
    session = Session.load(root_dir, applied_config)

    if session is None:
        log.info("Starting new session")
        session = Session(applied_config, init_tasklist, config['resume_delay'])
    else:
        # Add non-session tasks
        session.tasks.extend([task for task in init_tasklist if not task.use_session])

    # Resolve task interface calls
    for task in session.tasks:
        task.interface_call.resolve(interface_functions)

    return session


ScheduledTask = namedtuple("ScheduledTask", ["scheduled_for", "task"])

_update_thread_init_done = threading.Event()

_stop_event = threading.Event()


def stop():
    _stop_event.set()
    log.info("Tasks thread stop requested.")


def start_tasks(core_interface, session):
    """
    Initialise the Tasks session and start the Tasks update thread.
    """
    # Clear for easier testing
    _stop_event.clear()
    _update_thread_init_done.clear()

    config = core_interface.config['session']
    suspend_hook = core_interface.hooks.session_suspend

    tasks_thread = threading.Thread(target=_tasks_update_loop,
                                    args=(config, suspend_hook, session))
    tasks_thread.start()

    # Wait for init so our log makes sense
    _update_thread_init_done.wait()

    return tasks_thread


MIN_DELAY = 0.01  # Minimum time (in seconds) the task loop will sleep for.


def _tasks_update_loop(config, suspend_hook, session):

    sched_tasks = []
    # When resuming, schedule tasks from the desired resume time, even if it's in the past
    base_time = session.resume_time
    now = datetime.now(tz.tzutc())
    # If it's a new session, only schedule tasks from now.
    if base_time is None:
        base_time = now

    # Maximum permitted snooze is currently hardcoded to 5 minutes. This means that if we
    # resume the session more than 5 minutes later than we'd intended (`session.resume_time`),
    # only run tasks that would have been scheduled for the last 5 minutes.
    max_snooze_time = timedelta(minutes=5)
    if base_time < now-max_snooze_time:
        log.warning(F"Session resumed more than maximum snooze time ({max_snooze_time}) after"
                    F" intended session resume time ({base_time}). Only scheduling tasks after"
                    F" {now-max_snooze_time}, so may have missed some scheduled tasks.")
        base_time = now-max_snooze_time

    if len(session.tasks) == 0:
        log.info("No tasks scheduled. Stopping Tasks thread.")
        _update_thread_init_done.set()
        return

    for task in session.tasks:
        scheduled_time = task.trigger.next_time(base_time)
        sched_tasks.append(ScheduledTask(scheduled_time, task))

    suspend_available = False
    if config['enable_suspend']:
        if suspend_hook.attachments:
            suspend_available = True
            log.info("Session suspension enabled.")
        else:
            log.warning("'enable_suspend' set to true, but no suspend hooks are attached. Add"
                        " a plugin that provides a suspend hook.")

    # Let our `start_tasks` call continue
    _update_thread_init_done.set()

    # Order by next first
    sched_tasks.sort(key=lambda schedtask: schedtask.scheduled_for)

    while True:
        now = datetime.now(tz.tzutc())
        if sched_tasks[0].scheduled_for <= now:
            # Scheduled time has passed, run the task
            log.info(F"Running task {sched_tasks[0].task.interface_call}...")

            # Should we be catching exceptions for this?
            sched_tasks[0].task.interface_call.call()

            # Reschedule and sort (ScheduledTask is a namedtuple, so build a replacement
            # rather than mutating the field)
            sched_tasks[0] = sched_tasks[0]._replace(
                scheduled_for=sched_tasks[0].task.trigger.next_time(now))
            log.info(F"Done. Rescheduling task for {sched_tasks[0].scheduled_for}.")

            sched_tasks.sort(key=lambda schedtask: schedtask.scheduled_for)

        else:
            time_to_wait = sched_tasks[0].scheduled_for - now
            if suspend_available:
                # Suspension between tasks is not implemented here yet (see TODOs below)
                pass
            else:
                _stop_event.wait(max(time_to_wait.total_seconds(), MIN_DELAY))

        if _stop_event.is_set():
            log.warning("Tasks thread stopping...")
            _stop_event.clear()
            break

    _update_thread_init_done.clear()

# TODO Handle case when tasks return None as next trigger time, and when no triggers are left
# TODO Add maximum suspend period
# TODO Add "snooze" task checking even on new session, to catch tasks we miss if we restart
# due to new config


@@ -0,0 +1,344 @@
from types import MappingProxyType
import time
import itertools
import contextlib
import threading

# Vendored from python-snippets


class NamespaceProxy():
    """
    Read-only proxy of a mapping (like a dict) allowing item access via attributes. Mapping keys
    that are not strings will be ignored, and attribute access to any names starting with "__"
    will still be passed to the actual object attributes.

    Being a proxy, the attributes available and their values will change as the underlying
    backing dict is changed.

    Intended for situations where a dynamic mapping needs to be passed out to client code but
    you'd like to heavily suggest that it not be modified.

    Note that only the top-level mapping is read-only - if the attribute values themselves are
    mutable, they may still be modified via the NamespaceProxy.
    """

    def __init__(self, backing_dict):
        """
        Create a new NamespaceProxy, with attribute access to the underlying backing dict passed
        in.
        """
        object.__setattr__(self, "_dict_proxy", MappingProxyType(backing_dict))

    def __getattribute__(self, name):
        if name.startswith("__"):
            return object.__getattribute__(self, name)
        return object.__getattribute__(self, "_dict_proxy")[name]

    def __setattr__(self, *args):
        raise TypeError("NamespaceProxy does not allow attributes to be modified")

    def __delattr__(self, *args):
        raise TypeError("NamespaceProxy does not allow attributes to be modified")

    def __repr__(self):
        keys = sorted(object.__getattribute__(self, "_dict_proxy"))
        items = ("{}={!r}".format(k, object.__getattribute__(
            self, "_dict_proxy")[k]) for k in keys)
        return "{}({})".format(type(self).__name__, ", ".join(items))

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return (object.__getattribute__(self, "_dict_proxy") ==
                    object.__getattribute__(other, "_dict_proxy"))
        return False
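
    # Illustrative sketch (not part of the original module): the proxy is a live,
    # read-only view of the backing dict.
    #
    #     backing = {"greet": lambda: "hi"}
    #     ns = NamespaceProxy(backing)
    #     ns.greet()          # -> "hi"
    #     backing["n"] = 1
    #     ns.n                # -> 1, the proxy reflects changes
    #     ns.n = 2            # raises TypeError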


class HoldLock(contextlib.AbstractContextManager):
    """
    A sort-of thread lock, intended to allow one thread to wait until all others are finished
    using a multi-user resource.

    Once created, threads may call `hold()` on the HoldLock to acquire a hold. If a thread then
    calls `wait()` or iterates `waiting_for()`, those calls will block until all holds are
    released with `release()`.

    In this simple use case, the HoldLock almost behaves like a reverse semaphore - `hold()`
    increases a counter by 1, `release()` reduces it by 1, and calling `wait()` blocks until the
    counter comes back down to 0. The closest example of a similar thing I've found is Golang
    WaitGroups, which work like this.

    Additionally, the HoldLock allows an identifier to be passed to `hold()`. This same
    identifier must be referred to with `release()`, but can be any object - rather than a
    simple counter, the HoldLock maintains a list of these identifiers. These only really become
    useful when the main waiting thread calls `holders()` or iterates `waiting_for()` - as then
    it gets access to these identifiers. The common use case here is to use a string explaining
    the reason for the `hold()` as the identifier, which then allows the main thread to print a
    list of things it's waiting for by iterating `waiting_for()`. By default, the
    `HoldLock.AnonHolder` identifier is used in all calls, allowing the identifier to be
    completely ignored if it's not useful.

    The HoldLock object itself can be used as a context manager in `with` statements, and
    functions the same as calling `hold()` with defaults.
    """

    class AnonHolder():
        pass

    class Holder(contextlib.AbstractContextManager):
        """
        An object representing something that has a hold on a HoldLock. Can be used as a context
        manager. Only intended to be used once.
        """

        def __init__(self, hold_lock, identifier, expiry):
            self.hold_lock = hold_lock
            self.identifier = identifier
            self.expiry = expiry

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            self.hold_lock._release(self)

        def release(self):
            self.hold_lock._release(self)

        def expired(self):
            if self.expiry is not None:
                if self.expiry <= self.hold_lock.time_func():
                    return True
            return False

    def __enter__(self):
        self.hold()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.release()

    def __init__(self, time_func=time.monotonic):
        """
        Create a HoldLock instance. By default, time.monotonic is used for all timeouts, but
        this can be supplied as any function that returns a current absolute time in seconds as
        a float.
        """
        self._holders = []
        self._expired_holders = []
        self._cv = threading.Condition()
        self.time_func = time_func
        self._closed = False

    def hold(self, identifier=AnonHolder, timeout=None):
        """
        Acquire a hold on this HoldLock, blocking any `wait()` call until all holds are
        released. Multiple threads may acquire a hold simultaneously, and an identifier may be
        used more than once. A hold must later be released with `release()`, providing the same
        identifier.

        The default `AnonHolder` identifier works like any other, but will result in calls to
        `holders` or `waiting_for()` returning a tuple containing AnonHolder values.

        Can either be called directly or used as a context manager - `with holdlock.hold():`.
        The returned Holder object also provides a way to see if the hold has expired
        (`holder.expired()`) and also provides an alternate way to release it without having to
        pass the identifier again (`holder.release()`).

            holder1 = holdlock.hold("annoying to reference identifier")
            holder1.release()

            with holdlock.hold(timeout=5) as holder2:
                while True:
                    time.sleep(1)
                    if holder2.expired():
                        print("Timeout has expired")

        """
        with self._cv:
            if self._closed:
                raise Exception("Cannot get new hold on closed HoldLock instance")
            new_holder = self.Holder(self, identifier,
                                     (self.time_func() + timeout) if timeout else None)
            self._hold(new_holder)
            return new_holder

    def _hold(self, holder):
        with self._cv:
            self._holders.append(holder)
            # Sort to make sure earliest expiry is first, with None at the end
            self._holders.sort(key=lambda holder: (holder.expiry is None, holder.expiry))
            self._cv.notify_all()

    def release(self, identifier=AnonHolder):
        """
        Release a hold on this HoldLock. If there are multiple holders with the supplied
        identifier, the one with the earliest timeout will be released.

        Returns False if the hold had expired (technically holds only expire _if_ someone was
        waiting for it when the timeout was hit), otherwise returns True.
        """
        with self._cv:
            # _holders is already sorted for us
            for holder in itertools.chain(self._expired_holders, self._holders):
                if holder.identifier == identifier:
                    matching_holder = holder
                    break
            else:
                raise Exception(F"Release identifier '{identifier}' is not currently held")

            return self._release(matching_holder)

    def _release(self, holder):
        with self._cv:
            # Report whether the hold was still live, matching the documented return value
            # of `release()`
            if holder in self._expired_holders:
                self._expired_holders.remove(holder)
                was_live = False
            else:
                self._holders.remove(holder)
                was_live = True
            self._cv.notify_all()
            return was_live

    def close(self):
        """
        Stop any threads from acquiring a new hold on this HoldLock (they will raise an
        exception)
        """
        with self._cv:
            self._closed = True

    def reopen(self):
        """
        Start allowing threads to get a hold on this HoldLock again (after having called
        `close()`)
        """
        with self._cv:
            self._closed = False

    @property
    def holders(self):
        """
        Return a tuple of current holder identities. The tuple itself is a copy, but the values
        in it are the same objects that `hold()` calls have passed in as identifiers.
        """
        with self._cv:
            return tuple(holder.identifier for holder in self._holders)

    @property
    def hold_count(self):
        """
        Return the current number of holds on this HoldLock
        """
        with self._cv:
            return len(self._holders)

    def wait(self, timeout=None):
        """
        Wait for all threads currently holding this HoldLock to release it, returning True
        unless the timeout is hit, where it will return False.

        Note that unless `close()` is called first, _more threads may get a hold_ while waiting.

        If `timeout` is specified, this must be a relative float value in seconds. If
        `timeout` is None, `wait()` will block indefinitely for all holds to be released.
        """
        expiry = None
        if timeout is not None:
            expiry = self.time_func()+timeout

        with self._cv:
            while len(self._holders) > 0:
                cv_timeout = None
                now = self.time_func()

                # Pull out any holders that have expired
                while (self._holders[0].expiry is not None):
                    if self._holders[0].expiry <= now:
                        self._expired_holders.append(self._holders.pop(0))
                        if len(self._holders) == 0:
                            return True
                    else:
                        cv_timeout = self._holders[0].expiry - now
                        break

                if expiry is not None:
                    if expiry <= now:
                        return False
                    cv_timeout = min(cv_timeout, expiry - now) if cv_timeout else expiry - now

                self._cv.wait(cv_timeout)
        return True
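
    # Illustrative sketch (not part of the original module): a worker takes a hold while
    # using a shared resource; the main thread closes the lock and waits for all holds to
    # clear. `do_work` is hypothetical.
    #
    #     lock = HoldLock()
    #
    #     def worker():
    #         with lock.hold("writing results"):
    #             do_work()
    #
    #     threading.Thread(target=worker).start()
    #     lock.close()            # no new holds can be taken
    #     lock.wait(timeout=10)   # True once released, False if 10s passes first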

    def waiting_for(self, timeout=None, update_period=None):
        """
        Behaves the same as `wait()`, but is a generator that will return sequences of remaining
        holder identifiers while waiting for all holds to be released. By default, returns a new
        sequence of remaining holders whenever it changes, but can also be supplied with
        `update_period` to add more intermediate updates.

        When all holds are released, the last sequence returned by the generator will be empty
        (no longer waiting on any holds).
        If `timeout` is not None and the timeout expires instead, the last sequence returned
        will _not_ be empty (was still waiting on holds when the timeout expired).
        """
        expiry = None
        if timeout is not None:
            expiry = self.time_func()+timeout

        with self._cv:

            # We effectively have 2 sections where holders can be released/timed out, and time
            # can pass - the wait, and the yield, so things that check for changes
            # in those need to be done after both.

            while len(self._holders) > 0:
                now = self.time_func()

                # Check main timeout
                if expiry is not None:
                    if expiry <= now:
                        return

                # Expire any holders
                while (self._holders[0].expiry is not None):
                    if self._holders[0].expiry <= now:
                        self._holders.pop(0)
                        if len(self._holders) == 0:
                            # Generate empty holder tuple and finish
                            self._cv.release()
                            yield tuple()
                            self._cv.acquire()
                            return
                    else:
                        break

                # Yield holders
                yielded_holders = self.holders
                self._cv.release()
                yield yielded_holders
                self._cv.acquire()

                # If holders has changed since before the yield, continue (no need to wait
                # for a change)
                if self.holders != yielded_holders:
                    continue

                # Holders haven't changed, so we have at least 1
                cv_timeout = update_period
                now = self.time_func()

                # Check main timeout again
                if expiry is not None:
                    if expiry <= now:
                        return
                    cv_timeout = min(cv_timeout, expiry - now) if cv_timeout else expiry - now

                # Check holder expiry again
                if self._holders[0].expiry is not None:
                    if self._holders[0].expiry <= now:
                        # Next holder has expired, continue and let the original check deal
                        # with it
                        continue
                    else:
                        holder_timeout = self._holders[0].expiry - now
                        cv_timeout = min(
                            holder_timeout, cv_timeout) if cv_timeout else holder_timeout

                self._cv.wait(cv_timeout)

            # Generate empty holder tuple and finish
            self._cv.release()
            yield tuple()
            self._cv.acquire()
            return


@@ -1,80 +0,0 @@
#!/usr/bin/env python3

import cv2
from PIL import Image, ImageDraw, ImageFont

print(cv2.__version__)

gst_str = ('v4l2src device=/dev/video0 ! '
           'videoconvert ! appsink drop=true max-buffers=1 sync=false')
print(gst_str)

logo_im = Image.open('smallshepherd.png')

overlayfont = "DejaVuSansMono.ttf"

try:
    fnt = ImageFont.truetype(overlayfont, 50)
except IOError:
    fnt = ImageFont.load_default()

loaded_fonts = {}
loaded_logos = {}

vidcap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
while True:
    breakpoint()
    vidcap.grab()
    read_flag, frame = vidcap.read()
    print(read_flag)
    # overlay = frame.copy()
    # You may need to convert the color.

    # Convert over to PIL. Mostly so we can use our own font.
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    im_pil = Image.fromarray(img)

    font_size = int(im_pil.height/40)
    margin_size = int(font_size/5)

    if font_size not in loaded_fonts:
        loaded_fonts[font_size] = ImageFont.truetype(overlayfont, int(font_size*0.9))

    thisfont = loaded_fonts[font_size]

    if font_size not in loaded_logos:
        newsize = (int(logo_im.width*(font_size/logo_im.height)), font_size)
        loaded_logos[font_size] = logo_im.resize(newsize, Image.BILINEAR)

    thislogo = loaded_logos[font_size]

    overlay = Image.new('RGBA', (im_pil.width, font_size+(2*margin_size)), (0, 0, 0))

    overlay.paste(thislogo, (int((overlay.width-thislogo.width)/2), margin_size))

    draw = ImageDraw.Draw(overlay)
    draw.text((margin_size*2, margin_size), "SARDIcam-1", font=thisfont,
              fill=(255, 255, 255, 255))

    datetext = "2019-07-24 20:22:31"
    datewidth, _ = draw.textsize(datetext, thisfont)
    draw.text((overlay.width-(margin_size*2)-datewidth, margin_size), datetext, font=thisfont,
              fill=(255, 255, 255, 255))

    overlay.putalpha(128)

    im_pil.paste(overlay, (0, im_pil.height-overlay.height), overlay)
    im_pil.save("test.jpg", "JPEG", quality=90)


# For reversing the operation:
# im_np = np.asarray(im_pil)

# cv2.rectangle(overlay,(200,200),(500,500),(255,0,0),-1)
# cv2.addWeighted(overlay, 0.3, frame, 0.7, 0, frame)
# cv2.imwrite("frame.jpg", frame)

# print out build properties:
# print(cv2.getBuildInformation())


@@ -1,420 +0,0 @@
"""
|
|
||||||
Configuration management module. Enables configuration to be validated against
|
|
||||||
requirement definitions before being loaded and used.
|
|
||||||
|
|
||||||
Compatible with both raw config data structures and TOML files, config data must
|
|
||||||
start with a root dict containing named "config bundles". These are intended to
|
|
||||||
refer to different modular parts of the application needing configuration, and the
|
|
||||||
config data structure must contain at least one.
|
|
||||||
|
|
||||||
Each config bundle itself needs to have a dict at the root, and so in practice a minimal
|
|
||||||
TOML config file would look like::
|
|
||||||
|
|
||||||
[myapp]
|
|
||||||
config_thingy_a = "foooooo!"
|
|
||||||
important_number = 8237
|
|
||||||
|
|
||||||
This would resolve to a config bundle named "myapp" that results in the dict::
|
|
||||||
|
|
||||||
{"config_thingy_a": "foooooo!", "important_number": 8237}
|
|
||||||
|
|
||||||
Root items that are not dicts are not supported, for instance both the following TOML files would fail::
|
|
||||||
|
|
||||||
[[myapp]]
|
|
||||||
important_number = 8237
|
|
||||||
[[myapp]]
|
|
||||||
another_important_number = 2963
|
|
||||||
|
|
||||||
(root object in bundle is a list)
|
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
root_thingy = 46
|
|
||||||
|
|
||||||
(root object in config is a single value)
|
|
||||||
"""


import re
import toml
from abc import ABC, abstractmethod
from copy import deepcopy

from .freezedry import freezedryable, rehydrate

class InvalidConfigError(Exception):
    pass

# The Table and Array terms from the TOML convention essentially
# map directly to Dictionaries (Tables), and Lists (Arrays)


class _ConfigDefinition(ABC):
    def __init__(self, default=None, optional=False, helptext=""):
        self.default = default
        self.optional = optional
        self.helptext = helptext

    @abstractmethod
    def validate(self, value):
        """
        Checks the supplied value to confirm that it complies with this ConfigDefinition.
        Raises InvalidConfigError on failure.
        """
        pass


@freezedryable
class BoolDef(_ConfigDefinition):
    def __init__(self, default=None, optional=False, helptext=""):
        super().__init__(default, optional, helptext)

    def validate(self, value):
        if not isinstance(value, bool):
            raise InvalidConfigError("Config value must be a boolean")


@freezedryable
class IntDef(_ConfigDefinition):
    def __init__(self, default=None, minval=None, maxval=None,
                 optional=False, helptext=""):
        super().__init__(default, optional, helptext)
        self.minval = minval
        self.maxval = maxval

    def validate(self, value):
        if not isinstance(value, int):
            raise InvalidConfigError("Config value must be an integer")
        if self.minval is not None and value < self.minval:
            raise InvalidConfigError("Config value must be >= " +
                                     str(self.minval))
        if self.maxval is not None and value > self.maxval:
            raise InvalidConfigError("Config value must be <= " +
                                     str(self.maxval))


@freezedryable
class StringDef(_ConfigDefinition):
    def __init__(self, default="", minlength=None, maxlength=None,
                 optional=False, helptext=""):
        super().__init__(default, optional, helptext)
        self.minlength = minlength
        self.maxlength = maxlength

    def validate(self, value):
        if not isinstance(value, str):
            raise InvalidConfigError(F"Config value must be a string and is {value}")
        if self.minlength is not None and len(value) < self.minlength:
            raise InvalidConfigError("Config string length must be >= " +
                                     str(self.minlength))
        if self.maxlength is not None and len(value) > self.maxlength:
            raise InvalidConfigError("Config string length must be <= " +
                                     str(self.maxlength))


@freezedryable
class DictDef(_ConfigDefinition):
    def __init__(self, default=None, optional=False, helptext=""):
        super().__init__(default, optional, helptext)
        self.def_dict = {}

    def add_def(self, name, newdef):
        if not isinstance(newdef, _ConfigDefinition):
            raise TypeError("Config definition must be an instance of a "
                            "ConfigDefinition subclass")
        if not isinstance(name, str):
            raise TypeError("Config definition name must be a string")
        self.def_dict[name] = newdef
        return newdef

    def validate(self, value_dict):
        """
        Checks the supplied value to confirm that it complies with this ConfigDefinition.
        Raises InvalidConfigError on failure.

        This *can* modify the supplied value dict, inserting defaults for any child
        ConfigDefinitions that are marked as optional.
        """
        def_set = set(self.def_dict.keys())
        value_set = set(value_dict.keys())

        for missing_key in def_set - value_set:
            if not self.def_dict[missing_key].optional:
                raise InvalidConfigError("Dict must contain key: " +
                                         missing_key)
            else:
                value_dict[missing_key] = self.def_dict[missing_key].default

        for extra_key in value_set - def_set:
            raise InvalidConfigError("Dict contains unknown key: " +
                                     extra_key)

        for key, value in value_dict.items():
            try:
                self.def_dict[key].validate(value)
            except InvalidConfigError as e:
                e.args = ("Key: " + key,) + e.args
                raise

    def get_template(self, include_optional=False):
        """
        Return a config dict with the minimum structure required for this ConfigDefinition.
        Default values will be included, though not all required fields will necessarily have
        defaults that successfully validate.

        Args:
            include_optional: If set true, will include *all* config fields, not just the
                required ones
        Returns:
            Dict containing the structure that should be passed back in (with values) to comply
            with this ConfigDefinition.
        """
        template = {}
        for key, confdef in self.def_dict.items():
            if confdef.optional and (not include_optional):
                continue

            if hasattr(confdef, "get_template"):
                template[key] = confdef.get_template(include_optional)
            else:
                template[key] = confdef.default
        return template
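
    # Illustrative sketch (not part of the original module): composing and validating a
    # definition.
    #
    #     confdef = DictDef()
    #     confdef.add_def("port", IntDef(default=8080, minval=1, maxval=65535))
    #     confdef.validate({"port": 8237})    # passes
    #     confdef.validate({"port": "oops"})  # raises InvalidConfigError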


class _ListDefMixin():
    def validate(self, value_list):
        if not isinstance(value_list, list):
            raise InvalidConfigError("Config item must be a list")
        for index, value in enumerate(value_list):
            try:
                super().validate(value)
            except InvalidConfigError as e:
                e.args = ("List index: " + str(index),) + e.args
                raise

    def get_template(self, include_optional=False):
        if hasattr(super(), "get_template"):
            return [super().get_template(include_optional)]
        else:
            return [self.default]


@freezedryable
class BoolListDef(_ListDefMixin, BoolDef):
    pass


@freezedryable
class IntListDef(_ListDefMixin, IntDef):
    pass


@freezedryable
class StringListDef(_ListDefMixin, StringDef):
    pass


@freezedryable
class DictListDef(_ListDefMixin, DictDef):
    pass


@freezedryable
class ConfDefinition(DictDef):
    pass
|
|
||||||
|
|
||||||
|
|
||||||
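
# Sketch of the mixin trick: _ListDefMixin sits before the scalar def in the
# MRO, so each list element is validated by the scalar class's validate().
#
#     lengths = IntListDef()
#     lengths.validate([1, 2, 3])     # fine
#     lengths.validate([1, "x"])      # raises InvalidConfigError, with
#                                     # "List index: 1" prefixed to the args
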
class ConfigManager():
    def __init__(self):
        self.root_config = {}
        self.confdefs = {}
        self.frozen_config = {}

    @staticmethod
    def _load_source(source):
        """
        Accept a filepath or opened file representing a TOML file, or a direct dict,
        and return a plain parsed dict.
        """
        if isinstance(source, dict):  # load from dict
            return source
        elif isinstance(source, str):  # load from pathname
            with open(source, 'r') as conf_file:
                return toml.load(conf_file)
        else:  # load from file
            return toml.load(source)

    def load(self, source):
        """
        Load a config source into the ConfigManager, replacing any existing config.

        Args:
            source: Either a dict config to load directly, a filepath to a TOML file,
                or an open file.
        """
        self.root_config = self._load_source(source)
        self._overlay(self.frozen_config, self.root_config)

    def load_overlay(self, source):
        """
        Load a config source into the ConfigManager, merging it over the top of any existing
        config. Dicts will be recursively processed, with keys being merged and existing
        values being replaced by the new source. This includes lists, which will be treated
        as any other value and completely replaced.

        Args:
            source: Either the root dict of a data structure to load directly, a filepath
                to a TOML file, or an open TOML file.
        """
        self._overlay(self._load_source(source), self.root_config)
        self._overlay(self.frozen_config, self.root_config)

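
    # Overlay semantics sketch: nested dicts merge key-by-key, everything
    # else (including lists) is replaced wholesale.
    #
    #     confman = ConfigManager()
    #     confman.load({"cam": {"quality": 80, "sizes": [640, 480]}})
    #     confman.load_overlay({"cam": {"sizes": [1280, 720]}})
    #     confman.root_config
    #     # -> {'cam': {'quality': 80, 'sizes': [1280, 720]}}
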
    def freeze_value(self, bundle_name, *field_names):
        """
        Freeze the given config field so that subsequent calls to ``load`` and ``load_overlay``
        cannot change it. Can only be used for dict values or dict values nested in parent dicts.

        Args:
            bundle_name: The name of the bundle to look for the field in.
            *field_names: a series of strings that locate the config field, either a single
                key or a series of nested keys.
        """

        # Bundle names are really no different from any other nested dict
        names = (bundle_name,) + field_names

        target_field = self.root_config
        frozen_value = self.frozen_config

        # Cycle through nested names, creating frozen_config nested dicts as necessary
        for name in names[:-1]:
            target_field = target_field[name]
            if name not in frozen_value:
                frozen_value[name] = {}
            frozen_value = frozen_value[name]

        frozen_value[names[-1]] = target_field[names[-1]]

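
    # Freezing sketch: the frozen value is re-applied after every load, so
    # later config layers can't move it.
    #
    #     confman.load({"shepherd": {"root_dir": "/opt/shepherd"}})
    #     confman.freeze_value("shepherd", "root_dir")
    #     confman.load_overlay({"shepherd": {"root_dir": "/tmp/other"}})
    #     confman.root_config["shepherd"]["root_dir"]   # still "/opt/shepherd"
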
    def add_confdef(self, bundle_name, confdef):
        """
        Stores a ConfigDefinition for future use when validating the corresponding
        config bundle.

        Args:
            bundle_name (str): The name to store the config definition under.
            confdef (ConfigDefinition): The populated ConfigDefinition to store.
        """
        self.confdefs[bundle_name] = confdef

    def add_confdefs(self, confdefs):
        """
        Stores multiple ConfigDefinitions at once for future use when validating the
        corresponding config bundles.

        Args:
            confdefs: A dict of populated ConfigDefinitions to store, using their keys as names.
        """
        self.confdefs.update(confdefs)

    def list_missing_confdefs(self):
        """
        Returns a list of config bundle names that do not have a corresponding ConfigDefinition
        stored in the ConfigManager.
        """
        return list(self.root_config.keys() - self.confdefs.keys())

    def _overlay(self, src, dest):
        for key in src:
            # If the key is also in the dest and both are dicts, merge them.
            if key in dest and isinstance(src[key], dict) and isinstance(dest[key], dict):
                self._overlay(src[key], dest[key])
            else:
                # Otherwise it's either an existing value to be replaced or needs to be added.
                dest[key] = src[key]

    def get_config_bundle(self, bundle_name, conf_def=None):
        """
        Get a config bundle called ``bundle_name`` and validate
        it against the corresponding config definition stored in the ConfigManager.
        If ``conf_def`` is supplied, it gets used instead. Returns a validated
        config bundle dict.

        Note that as part of validation, optional keys that are missing will be
        filled in with their default values (see ``DictDef``). This function will copy
        the config bundle *after* validation, and so config loaded in the ConfigManager
        will be modified, but future ConfigManager manipulations won't change the
        returned config bundle.

        Args:
            bundle_name: (str) Name of the config dict to find.
            conf_def: (ConfDefinition) Optional config definition to validate against.
        """
        if not isinstance(conf_def, ConfDefinition):
            conf_def = self.confdefs[bundle_name]

        if bundle_name not in self.root_config:
            raise InvalidConfigError(
                "Config must contain dict: " + bundle_name)
        try:
            conf_def.validate(self.root_config[bundle_name])
        except InvalidConfigError as e:
            e.args = ("Bundle: " + bundle_name,) + e.args
            raise
        return deepcopy(self.root_config[bundle_name])

    def get_config_bundles(self, bundle_names):
        """
        Get multiple config bundles from the root dict at once, validating each one with the
        corresponding confdef stored in the ConfigManager. See ``get_config_bundle``.

        Args:
            bundle_names: A list of config bundle names to get. If a dictionary is supplied,
                its values are used as ConfigDefinitions rather than looking up stored ones
                in the ConfigManager.

        Returns:
            A dict of config dicts, with keys matching those passed in ``bundle_names``.
        """
        config_values = {}
        if isinstance(bundle_names, dict):
            for name, conf_def in bundle_names.items():
                config_values[name] = self.get_config_bundle(name, conf_def)
        else:
            for name in bundle_names:
                config_values[name] = self.get_config_bundle(name)
        return config_values

    def get_bundle_names(self):
        """
        Returns a list of names of top-level config bundles.
        """
        return list(self.root_config.keys())

    def dump_toml(self):
        return toml.dumps(self.root_config)

    def dump_to_file(self, filepath, message=None):
        with open(filepath, 'w+') as f:
            content = self.dump_toml()
            if message is not None:
                content = content.rstrip() + gen_comment(message)
            f.write(content)

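
    # Round-trip sketch ("trap" and its key are illustrative names):
    #
    #     confman = ConfigManager()
    #     trap_def = ConfDefinition()
    #     trap_def.add_def("open_time", IntDef(default=5, optional=True))
    #     confman.add_confdef("trap", trap_def)
    #     confman.load({"trap": {}})
    #     confman.get_config_bundle("trap")   # -> {'open_time': 5}, a copy
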
def strip_toml_message(string):
    return re.sub("(?m)^#\\ shepherd_message:[^\\n]*$\\n?(?:^#[^\\n]+$\\n?)*",
                  '', string)


def update_toml_message(filepath, message):
    with open(filepath, 'r+') as f:
        content = f.read()
        content = strip_toml_message(content).rstrip()
        content += gen_comment(message)
        f.seek(0)
        f.write(content)
        f.truncate()


def gen_comment(string):
    return '\n# shepherd_message: ' + '\n# '.join(string.replace('#', '').splitlines()) + '\n'

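
# Message round-trip sketch: gen_comment() output is exactly what
# strip_toml_message() removes again.
#
#     text = 'name = "trap-01"\n'
#     text += gen_comment("validation failed\ncheck the [trap] table")
#     strip_toml_message(text)   # -> 'name = "trap-01"\n\n'
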
@ -1,139 +0,0 @@
import os
import uuid
import subprocess
import requests
import threading
import json
from urllib.parse import urlparse, urlunparse, urljoin
from collections import namedtuple

import shepherd.plugin

# Check for shepherd.new file in edit conf dir. If there,
# or if no shepherd.id file can be found, generate a new one.
# For now, also attempt to delete /var/lib/zerotier-one/identity.public and identity.secret
# Once generated, if it was due to shepherd.new file, delete it.


# Start a new thread, and push ID and core config to api.shepherd.distreon.net/client/update

class UpdateManager():
    def __init__(self):
        pass


class SequenceUpdate():
    Item = namedtuple('Item', ['sequence_number', 'content'])

    def __init__(self):
        self.items = []
        self._sequence_count = 0
        self._dirty = False

    def _next_sequence_number(self):
        # TODO: need to establish a max sequence number, so that it can be compared to half
        # that range and wrap around.
        self._sequence_count += 1
        return self._sequence_count

    def mark_as_dirty(self):
        self._dirty = True

    def add_item(self, item):
        self.items.append(self.Item(self._next_sequence_number(), item))
        self.mark_as_dirty()

    def get_payload(self):
        pass

    def process_ack(self):
        pass


client_id = None
control_url = None
api_key = None


def _update_job(core_config, plugin_config):
    payload = {"client_id": client_id, "core_config": core_config,
               "plugin_config": plugin_config}
    # json_string = json.dumps(payload)
    try:
        # Using the json arg rather than json.dumps ourselves automatically sets the Content-Type
        # header to application/json, which Flask expects to work correctly
        r = requests.post(control_url, json=payload, auth=(client_id, api_key))
    except requests.exceptions.ConnectionError:
        raise
def generate_new_zerotier_id():
    print("Removing old Zerotier id files")
    try:
        os.remove("/var/lib/zerotier-one/identity.public")
        os.remove("/var/lib/zerotier-one/identity.secret")
    except OSError:
        pass
    print("Restarting Zerotier systemd service to regenerate ID")
    subprocess.run(["systemctl", "restart", "zerotier-one.service"])


def generate_new_id(root_dir):
    global client_id
    with open(os.path.join(root_dir, "shepherd.id"), 'w+') as f:
        new_id = uuid.uuid1()
        client_id = str(new_id)
        f.write(client_id)
    generate_new_zerotier_id()
def init_control(core_config, plugin_config):
    global client_id
    global control_url
    global api_key

    # On init, need to be able to quickly return the cached shepherd control layer if necessary.

    # Create the /update endpoint structure

    root_dir = os.path.expanduser(core_config["root_dir"])
    editconf_dir = os.path.dirname(os.path.expanduser(core_config["conf_edit_path"]))

    # Some weirdness with URL parsing means that by default most urls (like www.google.com)
    # get treated as relative:
    # https://stackoverflow.com/questions/53816559/python-3-netloc-value-in-urllib-parse-is-empty-if-url-doesnt-have

    control_url = core_config["control_server"]
    if "//" not in control_url:
        control_url = "//" + control_url
    control_url = urlunparse(urlparse(control_url)._replace(scheme="https"))
    control_url = urljoin(control_url, "/client/update")
    print(F"Control url is: {control_url}")

    api_key = core_config["api_key"]

    if os.path.isfile(os.path.join(editconf_dir, "shepherd.new")):
        generate_new_id(root_dir)
        os.remove(os.path.join(editconf_dir, "shepherd.new"))
        print(F"Config hostname: {core_config['hostname']}")
        if not (core_config["hostname"] == ""):
            print("Attempting to change hostname")
            subprocess.run(["raspi-config", "nonint", "do_hostname", core_config["hostname"]])
    elif not os.path.isfile(os.path.join(root_dir, "shepherd.id")):
        generate_new_id(root_dir)
    else:
        with open(os.path.join(root_dir, "shepherd.id"), 'r') as id_file:
            client_id = id_file.readline().strip()

    print(F"Client ID is: {client_id}")

    control_thread = threading.Thread(target=_update_job, args=(core_config, plugin_config))
    control_thread.start()


def _post_logs_job():
    logs = shepherd.plugin.plugin_functions["scout"].get_logs()
    measurements = shepherd.plugin.plugin_functions["scout"].get_measurements()

    payload = {"client_id": client_id, "logs": logs, "measurements": measurements}

    try:
        r = requests.post(control_url, json=payload, auth=(client_id, api_key))
    except requests.exceptions.ConnectionError:
        pass


def post_logs():
    post_logs_thread = threading.Thread(target=_post_logs_job, args=())
    post_logs_thread.start()

@ -1,219 +0,0 @@
"""
Core shepherd module, tying together main service functionality.
"""


import os
from pathlib import Path
from datetime import datetime
import toml
import logging
import click
from copy import deepcopy

from . import scheduler
from . import config
from . import plugin
from . import control


# Future implementations of checking config differences should be done on
# the hash of the nested conf dict, so comments shouldn't affect this.

# save old config to somewhere in the shepherd root dir - probably need to
# implement a TOML writer in the config module.

# later on, there's going to be an issue with a new config being applied
# remotely, then the system restarting, and an old edit in /boot being
# applied over the top...
# Fix this by saving the working config to /boot when new config is applied
# remotely.


# Relative pathnames here are all relative to "root_dir"
def define_core_config(confdef):
    """
    Defines the core config definition by populating the ConfigDefinition passed in
    ``confdef`` - the same pattern plugins use.
    """
    confdef.add_def("name", config.StringDef(
        helptext="Identifying name for this device"))

    confdef.add_def("hostname",
                    config.StringDef(default="", optional=True,
                                     helptext="If set, changes the system hostname"))
    confdef.add_def("plugin_dir",
                    config.StringDef(default="~/shepherd-plugins", optional=True,
                                     helptext="Optional directory for Shepherd to look "
                                              "for plugins in."))
    confdef.add_def("root_dir",
                    config.StringDef(default="~/shepherd", optional=True,
                                     helptext="Operating directory for shepherd to place "
                                              "working files."))
    confdef.add_def("custom_config_path",
                    config.StringDef(optional=True,
                                     helptext="Path to custom config layer TOML file."))
    confdef.add_def("generated_config_path",
                    config.StringDef(default="shepherd-generated.toml", optional=True,
                                     helptext="Path to the file Shepherd will generate to "
                                              "show the compiled config that was used and "
                                              "any errors in validation."))

    confdef.add_def("control_server", config.StringDef())
    confdef.add_def("control_api_key", config.StringDef())

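
# Sketch of the resulting minimum config, via the definition's own template
# (optional fields are skipped, required ones come back as None placeholders):
#
#     core_confdef = config.ConfDefinition()
#     define_core_config(core_confdef)
#     core_confdef.get_template()
#     # -> {'name': None, 'control_server': None, 'control_api_key': None}
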
def resolve_core_conf_paths(core_conf):
    """
    Set the cwd to ``root_dir`` and resolve the other core config paths relative to that.
    Also expands out any "~" user characters.
    """
    core_conf["root_dir"] = str(Path(core_conf["root_dir"]).expanduser().resolve())
    os.chdir(core_conf["root_dir"])
    core_conf["plugin_dir"] = str(Path(core_conf["plugin_dir"]).expanduser().resolve())
    core_conf["custom_config_path"] = str(
        Path(core_conf["custom_config_path"]).expanduser().resolve())
    core_conf["generated_config_path"] = str(
        Path(core_conf["generated_config_path"]).expanduser().resolve())

def load_config_layer(confman, config_source):
    """
    Load a config layer, find the necessary plugin classes, then validate it.
    """
    # Load in config layer
    confman.load_overlay(config_source)

    # Get the core config so we can find the plugin directory
    core_config = confman.get_config_bundle("shepherd")
    plugin_dir = core_config["plugin_dir"]

    # List other table names to get plugins we need to load
    plugin_names = confman.get_bundle_names()
    plugin_names.remove("shepherd")

    # Load plugins to get their conf defs
    plugin_classes = plugin.find_plugins(plugin_names, plugin_dir)
    for plugin_name, plugin_class in plugin_classes.items():
        new_conf_def = config.ConfDefinition()
        plugin_class.define_config(new_conf_def)
        confman.add_confdef(plugin_name, new_conf_def)

    # Get plugin configs
    plugin_configs = confman.get_config_bundles(plugin_classes.keys())
    return (core_config, plugin_classes, plugin_configs)

def compile_config(default_config_path):
    """
    Run through the process of assembling the various config layers, falling back to working
    ones where necessary. Also gathers the needed plugin classes in the process.
    """

    # Create core confdef and populate it
    core_confdef = config.ConfDefinition()
    define_core_config(core_confdef)

    confman = config.ConfigManager()
    confman.add_confdef("shepherd", core_confdef)

    # Default config. This must validate to continue.
    try:
        # str() so ConfigManager._load_source treats it as a pathname
        core_conf, plugin_classes, plugin_configs = load_config_layer(
            confman, str(Path(default_config_path).expanduser()))
        logging.info(F"Loaded default config from {default_config_path}")
    except Exception:
        logging.error(F"Failed to load default config from {default_config_path}")
        raise

    # Resolve and freeze local install paths that shouldn't be changed or affect loading
    # custom config
    confman.freeze_value("shepherd", "root_dir")
    confman.freeze_value("shepherd", "plugin_dir")
    confman.freeze_value("shepherd", "custom_config_path")
    confman.freeze_value("shepherd", "generated_config_path")
    resolve_core_conf_paths(core_conf)

    # Pull out custom config path and save the current good root_config
    custom_config_path = core_conf["custom_config_path"]
    saved_root_config = deepcopy(confman.root_config)

    # Custom config layer. If this fails, maintain default config but continue on to the
    # Control layer.
    try:
        core_conf, plugin_classes, plugin_configs = load_config_layer(
            confman, custom_config_path)
        logging.info(F"Loaded custom config from {custom_config_path}")
    except Exception as e:
        logging.error(
            F"Failed to load custom config from {custom_config_path}. "
            "Falling back to default config.", exc_info=e)
        confman.root_config = saved_root_config

    # Freeze Shepherd Control related config.
    confman.freeze_value("shepherd", "control_server")
    confman.freeze_value("shepherd", "control_api_key")
    resolve_core_conf_paths(core_conf)

    # Save the current good root_config
    saved_root_config = deepcopy(confman.root_config)

    # Shepherd Control config layer. If this fails, maintain the current local config.
    try:
        control_config = control.get_config(core_conf["root_dir"])
        try:
            core_conf, plugin_classes, plugin_configs = load_config_layer(
                confman, control_config)
            logging.info("Loaded cached Shepherd Control config")
        except Exception as e:
            logging.error(
                "Failed to load cached Shepherd Control config. "
                "Falling back to local config.", exc_info=e)
            confman.root_config = saved_root_config
    except Exception:
        logging.warning("No cached Shepherd Control config available.")

    confman.dump_to_file(core_conf["generated_config_path"])

    return core_conf, plugin_classes, plugin_configs

@click.group(invoke_without_command=True)
@click.argument('default_config', default="shepherd-default.toml", type=click.Path())
@click.pass_context
def cli(ctx, default_config):
    """
    Core service. Expects the path to the default config TOML file as an argument.
    """
    core_conf, plugin_classes, plugin_configs = compile_config(default_config)

    # Only start the Control link when running as the main service; subcommands
    # like ``test`` skip it (this replaces the old argparse ``--test`` check).
    if ctx.invoked_subcommand is None:
        control.init_control(core_conf, plugin_configs)

    scheduler.init_scheduler(core_conf)
    plugin.init_plugins(plugin_classes, plugin_configs, core_conf)
    scheduler.restore_jobs()

    print(str(datetime.now()))

    if ctx.invoked_subcommand is not None:
        return

    print('Press Ctrl+{0} to exit'.format('Break' if os.name == 'nt' else 'C'))
    try:
        scheduler.start()
    except (KeyboardInterrupt, SystemExit):
        pass


@cli.command()
@click.argument('plugin_function')
def test(plugin_function):
    """
    Call a registered plugin interface function of the form 'plugin:function'.
    """
    (test_plugin, test_func) = plugin_function.split(':')
    func = getattr(plugin.plugin_functions[test_plugin], test_func)
    print(func())
@ -1,114 +0,0 @@
from enum import Enum, auto
import inspect


class RehydrateMethod(Enum):
    DIRECT = auto()
    INIT = auto()
    CLASS_METHOD = auto()

# freezedry, for when pickling is just a bit too intense

# The class key is a reserved dict key used to flag that the dict should be unpacked back
# out to a class instance
class_key = "<freezedried>"
# The module stores some state at import time to keep a list of valid freezedryable classes
freezedryables = {}

# Decorator to mark a class as freezedryable and keep track of associated names and classes.
# When freezedried, the special key string "<freezedried>" indicates what class the current
# dict should be rehydrated to.

# The name argument is the string that will identify this class in a freezedried dict
def freezedryable(cls, rehydrate_method=RehydrateMethod.DIRECT, name=None):
    if name is None:
        cls._freezedry_name = cls.__name__
    else:
        if not isinstance(name, str):
            raise Exception("freezedryable name must be a string")
        cls._freezedry_name = name
    cls._rehydrate_method = rehydrate_method

    if cls._freezedry_name in freezedryables:
        raise Exception("Duplicate freezedryable class name " + cls._freezedry_name)
    freezedryables[cls._freezedry_name] = cls

    def _freezedry(self):
        dried_dict = _freezedry_dict(vars(self))
        dried_dict[class_key] = self._freezedry_name
        return dried_dict

    cls.freezedry = _freezedry
    return cls

def freezedry(hydrated_obj):
    # If it's a primitive, store it. If it's a dict or list, recursively freezedry that.
    # If it's an instance of another freezedryable class, call its .freezedry() method.
    if isinstance(hydrated_obj, (str, int, float, bool, type(None))):
        return hydrated_obj
    elif isinstance(hydrated_obj, dict):
        return _freezedry_dict(hydrated_obj)
    elif isinstance(hydrated_obj, list):
        dried_list = []
        for val in hydrated_obj:
            dried_list.append(freezedry(val))
        return dried_list
    elif hasattr(hydrated_obj, "_freezedry_name"):
        return hydrated_obj.freezedry()
    else:
        raise Exception("Object " + str(hydrated_obj) + " is not freezedryable")


def _freezedry_dict(hydrated_dict):
    dried_dict = {}
    for k, val in hydrated_dict.items():
        if not isinstance(k, str):
            raise Exception("Non-string dictionary keys are not freezedryable")
        if k == class_key:
            raise Exception("Key " + class_key + " is reserved for internal freezedry use")
        dried_dict[k] = freezedry(val)
    return dried_dict

def rehydrate(dried_obj):
    if isinstance(dried_obj, (str, int, float, bool, type(None))):
        return dried_obj
    elif isinstance(dried_obj, dict):
        return _rehydrate_dict(dried_obj)
    elif isinstance(dried_obj, list):
        hydrated_list = []
        for val in dried_obj:
            hydrated_list.append(rehydrate(val))
        return hydrated_list
    else:
        raise Exception("Object " + str(dried_obj) + " is not rehydrateable")


def _rehydrate_dict(dried_dict):
    hydrated_dict = {}
    for k, val in dried_dict.items():
        if not isinstance(k, str):
            raise Exception("Non-string dictionary keys are not rehydrateable")
        if k != class_key:
            hydrated_dict[k] = rehydrate(val)

    # Check if this is an object that needs to be unpacked back to an instance
    if class_key in dried_dict:
        if dried_dict[class_key] not in freezedryables:
            raise Exception("Class " + dried_dict[class_key] +
                            " has not been decorated as freezedryable")
        f_class = freezedryables[dried_dict[class_key]]
        # If DIRECT, skip __init__ and set attributes back directly
        if f_class._rehydrate_method == RehydrateMethod.DIRECT:
            hydrated_instance = f_class.__new__(f_class)
            hydrated_instance.__dict__.update(hydrated_dict)
        # If INIT, pass all attributes as keywords to the __init__ method
        elif f_class._rehydrate_method == RehydrateMethod.INIT:
            hydrated_instance = f_class(**hydrated_dict)
        # If CLASS_METHOD, pass all attributes as keyword arguments to classmethod 'rehydrate()'
        elif f_class._rehydrate_method == RehydrateMethod.CLASS_METHOD:
            if inspect.ismethod(getattr(f_class, "rehydrate", None)):
                hydrated_instance = f_class.rehydrate(**hydrated_dict)
            else:
                raise Exception("Class " + str(f_class) +
                                " does not have classmethod 'rehydrate()'")
        else:
            raise Exception("Class _rehydrate_method " + str(f_class._rehydrate_method) +
                            " is not supported")

        return hydrated_instance
    else:
        return hydrated_dict
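
# Round-trip sketch with the default DIRECT rehydrate method:
#
#     @freezedryable
#     class Point():
#         def __init__(self, x, y):
#             self.x = x
#             self.y = y
#
#     dried = Point(1, 2).freezedry()
#     # -> {'x': 1, 'y': 2, '<freezedried>': 'Point'}
#     restored = rehydrate(dried)   # a Point instance again, __init__ skipped
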
@ -1,263 +0,0 @@
#!/usr/bin/env python3

from contextlib import suppress
from abc import ABC, abstractmethod
import importlib

from types import SimpleNamespace
from collections import namedtuple
import sys
import os

import shepherd.scheduler


class Hook():
    def __init__(self):
        self.attached_functions = []

    def attach(self, new_func):
        if not callable(new_func):
            raise TypeError("Argument to Hook.attach must be callable")
        self.attached_functions.append(new_func)

    def __call__(self, *args, **kwargs):
        for func in self.attached_functions:
            func(*args, **kwargs)

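
# Sketch: a Hook is a callable fan-out; invoking it runs every attached
# function with the same arguments.
#
#     on_capture = Hook()
#     on_capture.attach(lambda path: print("saved", path))
#     on_capture.attach(lambda path: print("uploading", path))
#     on_capture("/tmp/img.jpg")   # both attached functions run
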
class InterfaceFunction():
    def __init__(self, func):
        if not callable(func):
            raise TypeError("Argument to InterfaceFunction must be callable")
        self.func = func

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)


class Plugin(ABC):
    @staticmethod
    @abstractmethod
    def define_config(confdef):
        pass

    @abstractmethod
    def __init__(self, plugininterface, config):
        pass

    def run(self, hooks, plugins):
        pass


class SimplePlugin(Plugin):
    @staticmethod
    def define_config(confdef):
        # No config by default; subclasses override this to add their own defs.
        pass

    def __init__(self, plugininterface, config):
        super().__init__(plugininterface, config)
        self.config = config
        self.interface = plugininterface
        self.plugins = plugininterface.other_plugins
        self.hooks = plugininterface.hooks

plugin_interfaces = {}  # dict of plugin interfaces

# convenience dicts bundling together lists from interfaces
plugin_functions = {}  # dict of plugins containing callable interface functions
plugin_hooks = {}  # dict of plugins containing hook namespaces


_defer = True
_deferred_attachments = []
_deferred_jobs = []


def init_plugins(plugin_classes, plugin_configs, core_config):
    # Start up plugin interfaces
    global plugin_interfaces
    global plugin_functions
    global plugin_hooks

    global _defer
    global _deferred_attachments
    global _deferred_jobs

    for name, plugin_class in plugin_classes.items():
        # Instantiate the plugin interface - this also instantiates
        # the plugin
        plugin_interfaces[name] = PluginInterface(
            name, plugin_class, plugin_configs[name], core_config)
        plugin_functions[name] = plugin_interfaces[name].functions
        plugin_hooks[name] = plugin_interfaces[name].hooks

    # Interfaces and hooks should now be populated; attach hooks, schedule jobs
    _defer = False
    for attachment in _deferred_attachments:
        _attach_hook(attachment)

    for job_desc in _deferred_jobs:
        _add_job(job_desc)

    # Hand shared interface callables back out
    for plugininterface in plugin_interfaces.values():
        plugininterface.functions = plugin_functions


def _add_job(job_desc):
    global _deferred_jobs
    global _defer

    if not _defer:
        shepherd.scheduler.schedule_job(job_desc)
    else:
        _deferred_jobs.append(job_desc)


def _attach_hook(attachment):
    global plugin_hooks
    global _deferred_attachments
    global _defer

    if not _defer:
        targetplugin_hooks = plugin_hooks.get(attachment.pluginname)
        if targetplugin_hooks is not None:
            targethook = getattr(targetplugin_hooks, attachment.hookname, None)
            if targethook is not None:
                targethook.attach(attachment.func)
            else:
                raise Exception("Could not find hook '" +
                                attachment.hookname + "' in module '" +
                                attachment.pluginname + "'")
        else:
            raise Exception(
                "Cannot attach hook to non-existing module '" + attachment.pluginname + "'")
    else:
        _deferred_attachments.append(attachment)

# Eventually, would like to be able to have a client plugin simply do:
# self.shepherd.add_job(trigger, self.interface.myfunc)
# self.shepherd.attach_hook(pluginname, hookname, callable)
# self.shepherd.addinterface
# self.shepherd.hooks.myhook()
# self.shepherd.plugins.otherplugin.otherinterface()

# self.shepherd.add_job()

# Would be good to be able to use abstract methods to enable simpler plugin config
# defs. A way to avoid instantiating the class would be to run it all as class methods,
# enabling


HookAttachment = namedtuple(
    'HookAttachment', ['pluginname', 'hookname', 'func'])

class PluginInterface():
    '''
    Class to handle the management of a single plugin.
    All interaction to or from the plugin to other Shepherd components or
    plugins should go through here.
    '''

    def __init__(self, pluginname, pluginclass, pluginconfig, coreconfig):
        if not issubclass(pluginclass, Plugin):
            raise TypeError(
                "Argument must be subclass of shepherd.plugin.Plugin")

        self.coreconfig = coreconfig

        self.hooks = SimpleNamespace()  # My hooks
        self.functions = SimpleNamespace()  # My callable interface functions

        self._name = pluginname
        self._plugin = pluginclass(self, pluginconfig)

    def register_hook(self, name):
        setattr(self.hooks, name, Hook())

    def register_function(self, func):
        setattr(self.functions, func.__name__, InterfaceFunction(func))

    @property
    def other_plugins(self):
        global plugin_functions
        return plugin_functions

    def attach_hook(self, pluginname, hookname, func):
        _attach_hook(HookAttachment(pluginname, hookname, func))

    # Add a job to the scheduler. By default each will be identified by the interface
    # callable name, so adding another job with the same callable will override the first.
    # Use the optional job_name to differentiate jobs with an extra string.

    def add_job(self, func, hour, minute, second=0, job_name=""):
        for function_name, function in self.functions.__dict__.items():
            if func == function.func:
                # jobstring should canonically describe this job, to be retrieved
                # after reboot later. Of the form:
                # shepherd:pluginname:functionname:jobname
                jobstring = "shepherd:" + self._name + ":" + function_name + ":" + job_name
                _add_job(shepherd.scheduler.JobDescription(jobstring, hour, minute, second))
                break
        else:
            raise Exception(
                "Could not add job. Callable must first be registered as "
                "a plugin interface with PluginInterface.register_function()")

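
# Sketch of the calls a plugin's __init__ typically drives through its
# interface (mirroring the picam plugin further below; 'picam'/'pre_cam' are
# that plugin's names):
#
#     self.interface.register_hook("pre_cam")           # others can attach here
#     self.interface.register_function(self.camera_job)
#     self.interface.add_job(self.camera_job, "12", "0", "0", job_name="noon")
#     self.interface.attach_hook("picam", "pre_cam", self.led_on)
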
"""
|
|
||||||
An interface to a Shepherd module, accessible by other modules.
|
|
||||||
All public methods in a module interface need to be threadsafe, as they will
|
|
||||||
be called by other modules (which generally run in a seperate thread)
|
|
||||||
"""
|
|
||||||
|
|
||||||
|
|
||||||
def find_plugins(plugin_names, plugin_dir=None):
    """
    Looks for the list of plugin names supplied and returns their classes.
    Will first try for plugin modules and packages locally located in ``shepherd.plugins``,
    then for modules and packages prefixed ``shepherd_`` located in the supplied
    ``plugin_dir``, and lastly in the global import path.

    Args:
        plugin_names: List of plugin names to try and load
        plugin_dir: optional search path

    Returns:
        Dict of plugin classes, with their names as keys
    """
    plugin_classes = {}
    for plugin_name in plugin_names:
        # First look for core plugins, then the plugin_dir, then in the general import path
        # for custom ones prefixed with "shepherd_"
        try:
            mod = importlib.import_module('.' + plugin_name, "shepherd.plugins")
        # TODO - ModuleNotFoundError is also triggered here if the plugin has a dependency
        # that can't be found
        except ModuleNotFoundError:
            try:
                if (plugin_dir is not None) and (plugin_dir != ""):
                    if os.path.isdir(plugin_dir):
                        sys.path.append(plugin_dir)
                        mod = importlib.import_module("shepherd_" + plugin_name)
                        sys.path.remove(plugin_dir)
                    else:
                        raise Exception("plugin_dir is not a valid directory")
                else:
                    mod = importlib.import_module("shepherd_" + plugin_name)
            except ModuleNotFoundError:
                raise Exception("Could not find plugin " + plugin_name)

        # Scan the imported module for a Plugin subclass
        attrs = [getattr(mod, name) for name in dir(mod)]
        for attr in attrs:
            with suppress(TypeError):
                if issubclass(attr, Plugin):
                    plugin_classes[plugin_name] = attr
                    break
        else:
            raise Exception("Imported shepherd plugin modules must contain a "
                            "subclass of shepherd.plugin.Plugin, such as "
                            "shepherd.plugin.SimplePlugin")

    return plugin_classes
Binary file not shown.
@ -1,77 +0,0 @@
#!/usr/bin/env python3

import shepherd.config
import shepherd.module

import sys
import os
import time
import argparse

from gpiozero import OutputDevice, Device
from gpiozero.pins.pigpio import PiGPIOFactory

from shepherd.modules.betterservo import BetterServo

Device.pin_factory = PiGPIOFactory()


APHIDTRAP_LED_PIN = 5  # Out2


class AphidtrapConfDef(shepherd.config.ConfDefinition):
    def __init__(self):
        super().__init__()


class AphidtrapModule(shepherd.module.SimpleModule):
    conf_def = AphidtrapConfDef()

    def setup(self):
        print("Aphidtrap config:")
        print(self.config)

        self.led_power = OutputDevice(APHIDTRAP_LED_PIN,
                                      active_high=True,
                                      initial_value=False)

    def setup_other_modules(self):
        self.modules.picam.hook_pre_cam.attach(self.led_on)
        self.modules.picam.hook_post_cam.attach(self.led_off)

    def led_on(self):
        self.led_power.on()

    def led_off(self):
        self.led_power.off()


def main(argv):
    argparser = argparse.ArgumentParser(
        description='Module for aphidtrap control functions. Run for testing')
    argparser.add_argument("configfile", nargs='?', metavar="configfile",
                           help="Path to configfile", default="conf.toml")

    args = argparser.parse_args()
    confman = shepherd.config.ConfigManager()

    srcdict = {"aphidtrap": {}}

    if os.path.isfile(args.configfile):
        confman.load(args.configfile)
    else:
        confman.load(srcdict)

    aphidtrap_mod = AphidtrapModule(confman.get_config("aphidtrap", AphidtrapConfDef()),
                                    shepherd.module.Interface(None))

    aphidtrap_mod.led_on()
    time.sleep(2)
    aphidtrap_mod.led_off()


if __name__ == "__main__":
    main(sys.argv[1:])
@ -1,147 +0,0 @@
from gpiozero import PWMOutputDevice, SourceMixin, CompositeDevice, OutputDeviceBadValue


class BetterServo(SourceMixin, CompositeDevice):
    """
    Copy of GPIOZero Servo, but with control over pulse width and active_high
    """
    def __init__(
            self, pin=None, initial_value=0.0,
            min_pulse_width=1/1000, max_pulse_width=2/1000,
            frame_width=20/1000, pin_factory=None, active_high=True):
        if min_pulse_width >= max_pulse_width:
            raise ValueError('min_pulse_width must be less than max_pulse_width')
        if max_pulse_width >= frame_width:
            raise ValueError('max_pulse_width must be less than frame_width')
        self._frame_width = frame_width
        self._min_dc = min_pulse_width / frame_width
        self._dc_range = (max_pulse_width - min_pulse_width) / frame_width
        self._min_value = -1
        self._value_range = 2
        super(BetterServo, self).__init__(
            pwm_device=PWMOutputDevice(
                pin, frequency=int(1 / frame_width), pin_factory=pin_factory,
                active_high=False
            ),
            pin_factory=pin_factory
        )
        self.pwm_device.active_high = active_high
        try:
            self.value = initial_value
        except:
            self.close()
            raise

    @property
    def frame_width(self):
        """
        The time between control pulses, measured in seconds.
        """
        return self._frame_width

    @property
    def min_pulse_width(self):
        """
        The control pulse width corresponding to the servo's minimum position,
        measured in seconds.
        """
        return self._min_dc * self.frame_width

    @property
    def max_pulse_width(self):
        """
        The control pulse width corresponding to the servo's maximum position,
        measured in seconds.
        """
        return (self._dc_range * self.frame_width) + self.min_pulse_width

    @property
    def pulse_width(self):
        """
        Returns the current pulse width controlling the servo.
        """
        if self.pwm_device.frequency is None:
            return None
        else:
            return self.pwm_device.state * self.frame_width

    @pulse_width.setter
    def pulse_width(self, value):
        if value is None:
            self.pwm_device.frequency = None
        elif self.min_pulse_width <= value <= self.max_pulse_width:
            self.pwm_device.frequency = int(1 / self.frame_width)
            self.pwm_device.value = (value / self.frame_width)
        else:
            raise OutputDeviceBadValue(
                "Servo pulse_width must be between min and max supplied during "
                "construction, or None")

    def min(self):
        """
        Set the servo to its minimum position.
        """
        self.value = -1

    def mid(self):
        """
        Set the servo to its mid-point position.
        """
        self.value = 0

    def max(self):
        """
        Set the servo to its maximum position.
        """
        self.value = 1

    def detach(self):
        """
        Temporarily disable control of the servo. This is equivalent to
        setting :attr:`value` to ``None``.
        """
        self.value = None

    def _get_value(self):
        if self.pwm_device.frequency is None:
            return None
        else:
            return (
                ((self.pwm_device.state - self._min_dc) / self._dc_range) *
                self._value_range + self._min_value)

    @property
    def value(self):
        """
        Represents the position of the servo as a value between -1 (the minimum
        position) and +1 (the maximum position). This can also be the special
        value ``None`` indicating that the servo is currently "uncontrolled",
        i.e. that no control signal is being sent. Typically this means the
        servo's position remains unchanged, but that it can be moved by hand.
        """
        result = self._get_value()
        if result is None:
            return result
        else:
            # NOTE: This round() only exists to ensure we don't confuse people
            # by returning 2.220446049250313e-16 as the default initial value
            # instead of 0. The reason _get_value and _set_value are split
            # out is for descendents that require the un-rounded values for
            # accuracy
            return round(result, 14)

    @value.setter
    def value(self, value):
        if value is None:
            self.pwm_device.frequency = None
        elif -1 <= value <= 1:
            self.pwm_device.frequency = int(1 / self.frame_width)
            self.pwm_device.value = (
                self._min_dc + self._dc_range *
                ((value - self._min_value) / self._value_range)
            )
        else:
            raise OutputDeviceBadValue(
                "Servo value must be between -1 and 1, or None")

    @property
    def is_active(self):
        return self.value is not None
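
# Usage sketch (assumes a Raspberry Pi with a pigpio pin factory configured,
# as the aphidtrap module above does; GPIO 18 is a hypothetical servo pin):
#
#     servo = BetterServo(18, min_pulse_width=0.8/1000, max_pulse_width=2.2/1000)
#     servo.pulse_width = 1.5/1000   # drive directly by pulse width
#     servo.detach()                 # stop pulses; the servo can be moved by hand
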
@ -1,78 +0,0 @@
#!/usr/bin/env python3

import shepherd.config as shconf
import shepherd.plugin

import sys
import os
import time
import argparse


class FlytrapPlugin(shepherd.plugin.Plugin):
    @staticmethod
    def define_config(confdef):
        confdef.add_def('servo_open_pulse', shconf.IntDef(default=1200, minval=800, maxval=2200))
        confdef.add_def('servo_closed_pulse', shconf.IntDef(default=1800, minval=800, maxval=2200))
        confdef.add_def('servo_open_time', shconf.IntDef(default=5))

    def __init__(self, pluginInterface, config):
        super().__init__(pluginInterface, config)
        self.config = config
        self.interface = pluginInterface
        self.plugins = pluginInterface.other_plugins
        self.hooks = pluginInterface.hooks

        self.root_dir = os.path.expanduser(pluginInterface.coreconfig["root_dir"])
        self.id = pluginInterface.coreconfig["id"]

        print("Flytrap config:")
        print(self.config)

        self.interface.attach_hook("usbcam", "pre_cam", self.led_on)
        self.interface.attach_hook("usbcam", "post_cam", self.uv_camera)

        self.interface.register_function(self.test)

    def uv_camera(self):
        self.led_off()
        self.led_uv_on()
        self.plugins["usbcam"].run_cameras(" UV")
        self.led_uv_off()
        self.run_servo()

    def led_on(self):
        self.plugins["scout"].set_out1(True)

    def led_off(self):
        self.plugins["scout"].set_out1(False)

    def led_uv_on(self):
        self.plugins["scout"].set_out2(True)

    def led_uv_off(self):
        self.plugins["scout"].set_out2(False)

    def run_servo(self):
        self.plugins["scout"].set_aux5v(True)
        # self.door_servo_power.on()
        time.sleep(0.5)

        self.plugins["scout"].set_pwm1(True, self.config["servo_open_pulse"])
        # self.door_servo.pulse_width = self.config["servo_open_pulse"] / 1000000
        time.sleep(self.config["servo_open_time"])

        self.plugins["scout"].set_pwm1(True, self.config["servo_closed_pulse"])
        # self.door_servo.pulse_width = self.config["servo_closed_pulse"] / 1000000
        time.sleep(self.config["servo_open_time"])
        self.plugins["scout"].set_pwm1(False, self.config["servo_closed_pulse"])
        # self.door_servo.detach()
        self.plugins["scout"].set_aux5v(False)
        # self.door_servo_power.off()

    def test(self):
        self.led_on()
        time.sleep(1)
        self.led_off()
        self.run_servo()
@ -1,80 +0,0 @@
#!/usr/bin/env python3

import shepherd.config as shconf
import shepherd.plugin

import sys
import os
import time
import argparse


class MothtrapPlugin(shepherd.plugin.Plugin):
    @staticmethod
    def define_config(confdef):
        confdef.add_def('servo_open_pulse', shconf.IntDef(default=1200, minval=800, maxval=2200))
        confdef.add_def('servo_closed_pulse', shconf.IntDef(default=1800, minval=800, maxval=2200))
        confdef.add_def('servo_open_time', shconf.IntDef(default=5))

    def __init__(self, pluginInterface, config):
        super().__init__(pluginInterface, config)
        self.config = config
        self.interface = pluginInterface
        self.plugins = pluginInterface.other_plugins
        self.hooks = pluginInterface.hooks

        self.root_dir = os.path.expanduser(pluginInterface.coreconfig["root_dir"])
        self.id = pluginInterface.coreconfig["id"]

        print("Mothtrap config:")
        print(self.config)

        # servo_max = self.config["servo_open_pulse"] / 1000000
        # servo_min = self.config["servo_closed_pulse"] / 1000000

        # if servo_min > servo_max:
        #     servo_min, servo_max = servo_max, servo_min

        # print(F"Supplied min: {servo_min}, max: {servo_max}")

        self.interface.attach_hook("usbcam", "pre_cam", self.led_on)
        self.interface.attach_hook("usbcam", "post_cam", self.led_off)
        self.interface.attach_hook("usbcam", "post_cam", self.run_servo)

        self.interface.register_function(self.test)

    def led_on(self):
        self.plugins["scout"].set_out1(True)
        # self.led_power.on()

    def led_off(self):
        self.plugins["scout"].set_out1(False)
        # self.led_power.off()

    def run_servo(self):
        self.plugins["scout"].set_aux5v(True)
        # self.door_servo_power.on()
        time.sleep(0.5)

        self.plugins["scout"].set_pwm1(True, self.config["servo_open_pulse"])
        # self.door_servo.pulse_width = self.config["servo_open_pulse"] / 1000000
        time.sleep(self.config["servo_open_time"])

        self.plugins["scout"].set_pwm1(True, self.config["servo_closed_pulse"])
        # self.door_servo.pulse_width = self.config["servo_closed_pulse"] / 1000000
        time.sleep(self.config["servo_open_time"])
        self.plugins["scout"].set_pwm1(False, self.config["servo_closed_pulse"])
        # self.door_servo.detach()
        self.plugins["scout"].set_aux5v(False)
        # self.door_servo_power.off()

    def test(self):
        self.led_on()
        time.sleep(1)
        self.led_off()
        self.run_servo()
@ -1,174 +0,0 @@
import io
import os
from datetime import datetime
import time

import shepherd.config as shconf
import shepherd.plugin


from picamera import PiCamera
from PIL import Image, ImageDraw, ImageFont


asset_dir = os.path.dirname(os.path.realpath(__file__))

overlayfont_filename = os.path.join(asset_dir, "DejaVuSansMono.ttf")
logo_filename = os.path.join(asset_dir, "smallshepherd.png")

# on server side, we want to be able to list commands that a module responds to
# without actually instantiating the module class. Add command templates into
# the conf_def, then attach to them in the interface? Was worried about having
# "two sources of truth", but you already need to match the conf_def to the
# name where you access the value in the module. Could have add_command, which
# you then add standard conf_def subclasses to, to reuse validation and server
# form generation logic...


class PiCamPlugin(shepherd.plugin.Plugin):
    @staticmethod
    def define_config(confdef):
        confdef.add_def('upload_images', shconf.BoolDef(default=False, optional=True,
                        helptext="If true, move to an Uploader bucket. Requires Uploader plugin"))
        confdef.add_def('upload_bucket', shconf.StringDef(default="", optional=True,
                        helptext="Name of uploader bucket to shift images to."))
        confdef.add_def('save_directory', shconf.StringDef(default="", optional=True,
                        helptext="Name of directory path to save images. If empty, a 'picamera' directory under the Shepherd root dir will be used"))
        confdef.add_def('append_id', shconf.BoolDef(default=True, optional=True,
                        helptext="If true, add the system ID to the end of image filenames"))
        confdef.add_def('show_overlay', shconf.BoolDef(default=True, optional=True,
                        helptext="If true, add an overlay on each image with the system ID and date."))
        confdef.add_def('overlay_desc', shconf.StringDef(default="", optional=True,
                        helptext="Text to add to the overlay after the system ID and camera name"))
        confdef.add_def('jpeg_quality', shconf.IntDef(default=80, minval=60, maxval=95, optional=True,
                        helptext="JPEG quality to save with. Max of 95, passed directly to Pillow"))

        array = confdef.add_def('trigger', shconf.TableArrayDef(
            helptext="Array of triggers that will use all cameras"))
        array.add_def('hour', shconf.StringDef())
        array.add_def('minute', shconf.StringDef())
        array.add_def('second', shconf.StringDef(default="0", optional=True))
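
    # An illustrative config fragment matching the definitions above (a hedged
    # sketch: the section name and cron-style field values are assumptions,
    # not taken from a real deployment):
    #
    #   [picamera]
    #   upload_images = true
    #   upload_bucket = "images"
    #
    #   [[picamera.trigger]]
    #   hour = "*"
    #   minute = "0,30"
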
    def __init__(self, pluginInterface, config):
        super().__init__(pluginInterface, config)
        self.config = config
        self.interface = pluginInterface
        self.plugins = pluginInterface.other_plugins
        self.hooks = pluginInterface.hooks

        self.root_dir = os.path.expanduser(pluginInterface.coreconfig["root_dir"])
        self.id = pluginInterface.coreconfig["id"]

        self.interface.register_hook("pre_cam")
        self.interface.register_hook("post_cam")
        self.interface.register_function(self.camera_job)
        # do some camera init stuff

        print("PiCamera config:")
        print(self.config)

        # Seconds to wait for exposure and white balance auto-adjust to stabilise
        self.stabilise_delay = 3

        if self.config["save_directory"] == "":
            self.save_directory = os.path.join(self.root_dir, "picamera")
        else:
            self.save_directory = self.config["save_directory"]

        if not os.path.exists(self.save_directory):
            os.makedirs(self.save_directory)

        if self.config["show_overlay"]:
            # Load assets
            self.logo_im = Image.open(logo_filename)

            self.font_size_cache = {}
            self.logo_size_cache = {}

        #global cam_led
        #cam_led = LED(CAMERA_LED_PIN, active_high=False, initial_value=False)

        for trigger in self.config["trigger"]:
            trigger_id = trigger["hour"]+'-'+trigger["minute"]+'-'+trigger["second"]
            self.interface.add_job(
                self.camera_job, trigger["hour"], trigger["minute"], trigger["second"], job_name=trigger_id)

    def _generate_overlay(self, width, height, image_time):
        font_size = int(height/40)
        margin_size = int(font_size/5)

        if font_size not in self.font_size_cache:
            self.font_size_cache[font_size] = ImageFont.truetype(
                overlayfont_filename, int(font_size*0.9))
        thisfont = self.font_size_cache[font_size]

        if font_size not in self.logo_size_cache:
            newsize = (int(self.logo_im.width*(
                font_size/self.logo_im.height)), font_size)
            self.logo_size_cache[font_size] = self.logo_im.resize(
                newsize, Image.BILINEAR)
        thislogo = self.logo_size_cache[font_size]

        desc_text = self.config["overlay_desc"]
        if self.config["append_id"]:
            desc_text = self.id + " " + desc_text

        time_text = image_time.strftime("%Y-%m-%d %H:%M:%S")

        overlay = Image.new('RGBA', (width, font_size+(2*margin_size)), (0, 0, 0))
        overlay.paste(thislogo, (int((overlay.width-thislogo.width)/2), margin_size))

        draw = ImageDraw.Draw(overlay)
        draw.text((margin_size*2, margin_size), desc_text,
                  font=thisfont, fill=(255, 255, 255, 255))

        datewidth, _ = draw.textsize(time_text, thisfont)
        draw.text((overlay.width-(margin_size*2)-datewidth, margin_size), time_text, font=thisfont,
                  fill=(255, 255, 255, 255))

        # make whole overlay half transparent
        overlay.putalpha(128)
        return overlay

    def camera_job(self):
        self.hooks.pre_cam()

        #Capture image
        print("Running camera...")
        stream = io.BytesIO()
        with PiCamera() as picam:
            picam.resolution = (3280, 2464)
            picam.start_preview()
            time.sleep(self.stabilise_delay)
            picam.capture(stream, format='jpeg')
        # "Rewind" the stream to the beginning so we can read its content
        stream.seek(0)
        img = Image.open(stream)

        #Process image
        image_time = datetime.now()

        if self.config["show_overlay"]:
            overlay = self._generate_overlay(img.width, img.height, image_time)
            img.paste(overlay, (0, img.height-overlay.height), overlay)

        image_filename = image_time.strftime("%Y-%m-%d %H-%M-%S")
        if self.config["append_id"]:
            image_filename = image_filename + " " + self.id

        image_filename = image_filename + ".jpg"
        image_filename = os.path.join(self.save_directory, image_filename)
        img.save(image_filename+".writing", "JPEG", quality=self.config["jpeg_quality"])
        os.rename(image_filename+".writing", image_filename)

        if self.config["upload_images"]:
            self.plugins["uploader"].move_to_bucket(image_filename, self.config["upload_bucket"])

        self.hooks.post_cam()


if __name__ == "__main__":
    pass
    # print("main")
    # main(sys.argv[1:])
@ -1 +0,0 @@
from .scout import ScoutPlugin
@ -1,208 +0,0 @@
#!/usr/bin/env python3

"""
Plugin to interface with the shepherd scout pcb modules. Also semi-compatible
with the older TrapCtrl style boards and Pi HATs based on the SleepyPi2, provided
they are running the Shepherd Scout firmware.

The TDW serial message format is used to pull data from the companion board and
interact with its RTC and set alarms. This library uses a separate thread
to handle the comms with the supervising microcontroller. Interface functions
add a request to the queue, and some may wait for a state to be updated before
returning (with a timeout).
"""
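
# A minimal usage sketch from another plugin (hedged: assumes this plugin's
# config section is named "scout", as the other plugins in this repo do):
#
#   batv = self.plugins["scout"].get_batv()      # returns None on timeout
#   self.plugins["scout"].set_out1(True)         # drive switched output 1
#   self.plugins["scout"].set_pwm1(True, 1500)   # 1500us pulse on PWM1
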
import shepherd.config as shconf
import shepherd.plugin
from . import tdw

import queue
import threading
import re
import serial
import time
from datetime import datetime

from enum import Enum, auto

from collections import namedtuple


class MsgName(Enum):
    BATV = "batv"
    BATI = "bati"
    TIME = "time"
    ALARM = "alarm"
    AUX5V = "aux5v"
    PWM1 = "pwm1"
    PWM2 = "pwm2"
    OUT1 = "out1"
    OUT2 = "out2"
    VERSION = "version"
    LOG = "log"
    MEASUREMENT = "meas"

    def __str__(self):
        return str(self.value)


logmsgs = ["LE_NONE - Empty log",
           "LE_POWERUP - Scout startup",
           "LE_PI_BOOT_TIMEOUT - Tried to turn on Pi but did not receive successful boot signal",
           "LE_PI_ON - Pi has booted",
           "LE_LOW_VOLT_START_SHUTDOWN - Initiated Pi shutdown due to low supply voltage",
           "LE_PI_SIGNAL_START_SHUTDOWN - Pi started to shut itself down",
           "LE_PI_SHUTDOWN_TIMEOUT - Pi did not signal successful shutdown, so killed power",
           "LE_MAIN5V_DISABLE - Main Pi power turned off",
           "LE_VOLT_GOOD_MAIN5V_ENABLE - Turned Pi power on after voltage raised enough",
           "LE_ALARM_MAIN5V_ENABLE - Turned Pi power on after wakeup alarm was hit"]


class ScoutPlugin(shepherd.plugin.Plugin):
    @staticmethod
    def define_config(confdef):
        confdef.add_def('boardver', shconf.StringDef())
        confdef.add_def('serialport', shconf.StringDef())

    def __init__(self, pluginInterface, config):
        super().__init__(pluginInterface, config)
        self.config = config
        self.interface = pluginInterface
        self.plugins = pluginInterface.other_plugins
        self.hooks = pluginInterface.hooks

        self.msg_handler = tdw.MessageHandler(config["serialport"], 57600)

        self.interface.register_function(self.get_batv)
        self.interface.register_function(self.get_bati)
        self.interface.register_function(self.get_time)
        self.interface.register_function(self.set_alarm)
        self.interface.register_function(self.set_aux5v)
        self.interface.register_function(self.set_pwm1)
        self.interface.register_function(self.set_pwm2)
        self.interface.register_function(self.set_out1)
        self.interface.register_function(self.set_out2)
        self.interface.register_function(self.test_logs)
        self.interface.register_function(self.get_logs)
        self.interface.register_function(self.get_measurements)

        self.interface.register_function(self.test)

    def get_version(self):
        rqst = self.msg_handler.send_request(MsgName.VERSION.value)
        if rqst.wait_for_response():
            return rqst.response.arguments[0:2]
        return None

    def get_batv(self):
        rqst = self.msg_handler.send_request(MsgName.BATV.value)
        if rqst.wait_for_response():
            return rqst.response.arguments[0]
        return None

    def get_bati(self):
        rqst = self.msg_handler.send_request(MsgName.BATI)
        if rqst.wait_for_response():
            return rqst.response.arguments[0]
        return None

    def set_aux5v(self, enabled):
        cmd = self.msg_handler.send_command(MsgName.AUX5V, [str(enabled).lower()])
        if cmd.wait_for_response():
            return cmd.response.arguments[0]
        return None

    def set_pwm1(self, enabled, pulse_length):
        cmd = self.msg_handler.send_command(
            MsgName.PWM1, [str(enabled).lower(), str(pulse_length)])
        if cmd.wait_for_response():
            return cmd.response.arguments[0]
        return None

    def set_pwm2(self, enabled, pulse_length):
        cmd = self.msg_handler.send_command(
            MsgName.PWM2, [str(enabled).lower(), str(pulse_length)])
        if cmd.wait_for_response():
            return cmd.response.arguments[0]
        return None

    def set_out1(self, enabled):
        cmd = self.msg_handler.send_command(MsgName.OUT1, [str(enabled).lower()])
        if cmd.wait_for_response():
            return cmd.response.arguments[0]
        return None

    def set_out2(self, enabled):
        cmd = self.msg_handler.send_command(MsgName.OUT2, [str(enabled).lower()])
        if cmd.wait_for_response():
            return cmd.response.arguments[0]
        return None

    def get_time(self):
        rqst = self.msg_handler.send_request(MsgName.TIME)
        if rqst.wait_for_response():
            return rqst.response.arguments[0]
        return None

    def set_alarm(self, unix_time):
        cmd = self.msg_handler.send_command(MsgName.ALARM, [unix_time])
        if cmd.wait_for_response():
            return cmd.response.arguments[0]
        return None

    def get_logs(self):
        rqst = self.msg_handler.send_request(MsgName.LOG)
        if rqst.wait_for_response():
            return rqst.response.multipart_args
        return None

    def get_measurements(self):
        rqst = self.msg_handler.send_request(MsgName.MEASUREMENT)
        if rqst.wait_for_response():
            return rqst.response.multipart_args
        return None

    def test_logs(self):
        rqst = self.msg_handler.send_request(MsgName.LOG)
        if rqst.wait_for_response():
            for logline in reversed(rqst.response.multipart_args):
                logdate = datetime.fromtimestamp(int(logline[0]))
                batv = float(logline[1])/100.0
                logmessage = logmsgs[int(logline[2])]
                print(F"::{logdate:%Y-%m-%d %H:%M:%S}:: {batv:.2f}V :: {logmessage}")
        return None
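
    # Hedged reading of the record layout consumed by test_logs() above: each
    # multipart log part appears to be a tuple of strings of the form
    # (unix_timestamp, battery_voltage_in_hundredths_of_a_volt, logmsgs_index).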

    def test(self):
        print("Testing companion board...")
        print(F"Current RTC time is {self.get_time()}")
        print(F"Current BatV is {self.get_batv()}")
        print(F"Current BatI is {self.get_bati()}")
        print("Turning on Out1 for 1 second")
        self.set_out1(True)
        time.sleep(1)
        self.set_out1(False)
        print("Turning on Out2 for 1 second")
        self.set_out2(True)
        time.sleep(1)
        self.set_out2(False)

        print("Enabling auxiliary 5V")
        self.set_aux5v(True)

        print("Sweeping PWM1 from 1000us to 2000us")
        self.set_pwm1(True, 1000)
        time.sleep(1)
        self.set_pwm1(True, 2000)
        time.sleep(1)
        self.set_pwm1(False, 1000)

        print("Sweeping PWM2 from 1000us to 2000us")
        self.set_pwm2(True, 1000)
        time.sleep(1)
        self.set_pwm2(True, 2000)
        time.sleep(1)
        self.set_pwm2(False, 1000)
        self.set_aux5v(False)
        print("Test finished")

        return None
@ -1,471 +0,0 @@
"""
Python module implementing the TDW (This Do What #!?) serial message format.

Currently only supports responses to sent messages, rather than generic RX.

Intended use:

>>> msg_handler = tdw.MessageHandler("/dev/ttyAMA0", 57600)
>>> rqst = msg_handler.send_request("BATV")
>>> if rqst.wait_for_response():
...     print(rqst.response)

"""

import sys
import queue
import threading
import serial
import re
import time
import logging
from enum import Enum

log = logging.getLogger(__name__)

DEFAULT = object()

MAX_MSG_LEN = 128
MAX_MSG_ARGS = 8


# Return a Message object, allowing the caller to check a "sent" flag to see if it's gone through
# yet. If needs_response was true, enable a "wait_for_response()" function that internally blocks until
# a response is received or times out. Returns either None or a Response object.
# Note that unless a class overrides __bool__(), it will always evaluate to True, allowing "if wait_for_response():"
# to check if there's a returned message. A ".response" property gets filled after returning too, allowing
# use of that rather than the caller having to get a reference.
# Once "wait_for_response()" returns, the message handler removes its reference to the message and
# the response, so the caller can be sure that it won't be changed underneath it.

# Currently this is all designed only for comms initiated by Python, and doesn't handle responding
# to communication initiated by the device yet. For that, perhaps supply callback attachments
# for specific message names as well as a generic one, but by default queue them up and wait
# for a call to a "process_messages()" function or something - to allow callbacks to be handled
# in the same main thread. Optionally have a flag when creating the MessageHandler for it to
# dispatch callbacks in new threads asynchronously (or have individual flags when attaching callbacks
# perhaps?)
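
# A short sketch of the behaviour described above (hedged; the message name is
# just an example, matching the style used by the scout plugin):
#
#   handler = MessageHandler("/dev/ttyAMA0", 57600)
#   cmd = handler.send_command("out1", ["true"])  # queued, returns immediately
#   result = cmd.wait_for_response()              # blocks up to the timeout
#   if result:                                    # False if it timed out
#       print(cmd.response.arguments)             # .response is now set too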


class TDWException(Exception):
    pass


class ResponseNotReceivedError(TDWException):
    pass


class MessageType(Enum):
    COMMENT = "#"
    COMMAND = "!"
    REQUEST = "?"

    def __str__(self):
        return str(self.value)


class Message():
    """
    Representation of a message to be sent from this device.
    Also used to track the corresponding response from the other device.
    """

    def __init__(self, msg_type, msg_name, arguments, multipart_args=None):
        if not isinstance(msg_type, MessageType):
            raise TypeError("Argument msg_type must be a MessageType")
        self._msg_type = msg_type
        self._msg_name = str(msg_name)
        self._arguments = arguments
        self._multipart_args = multipart_args

    @property
    def msg_type(self):
        return self._msg_type

    @property
    def msg_name(self):
        return self._msg_name

    @property
    def arguments(self):
        return self._arguments

    @property
    def multipart_args(self):
        return self._multipart_args

    def __str__(self):
        return F"<{self.__class__.__name__}: {self.__dict__}>"


class TXMessage(Message):
    """
    Locks down and stringifies the arguments and multipart arg list into tuples on creation, to avoid
    them being changed underneath it once the message is added to the send queue.
    """

    def __init__(self, msg_type, msg_name, arguments, needs_response, response_timeout, multipart_args=None):

        # Allow single value or string as arg, convert to list
        if isinstance(arguments, str) or (not hasattr(arguments, "__iter__")):
            arguments = [arguments]

        # Stringify args and put in immutable tuples
        immutable_args = tuple([str(arg) for arg in arguments])

        immutable_multipart_args = None
        if multipart_args is not None:
            # Allow single value or string as arg, convert to list
            if isinstance(multipart_args, str) or (not hasattr(multipart_args, "__iter__")):
                multipart_args = [multipart_args]

            immutable_multipart_args = []
            for arglist in multipart_args:
                immutable_multipart_args.append(tuple([str(arg) for arg in arglist]))
            immutable_multipart_args = tuple(immutable_multipart_args)

        super().__init__(msg_type, msg_name, immutable_args, immutable_multipart_args)

        self._needs_response = needs_response
        self._response_timeout = response_timeout
        self._response = None

        # Event that is triggered when a matching response has been
        # received and parsed into self._response, which should only
        # be read after this is set.
        self._responded = threading.Event()

    @property
    def needs_response(self):
        return self._needs_response

    @property
    def response(self):
        return self._response

    def wait_for_response(self):
        if not self._needs_response:
            raise Exception(
                "Can't wait for response on a message that has not set 'needs_response=True'")
        if self._responded.wait(self._response_timeout):
            # Serial thread should have populated self._response by now
            return self.response
        else:
            return False


class RXMessage(Message):
    def __init__(self, msg_type, msg_name, arguments, is_multipart=False, multipart_count=0):

        super().__init__(msg_type, msg_name, arguments)

        self._multipart_count = multipart_count
        self._is_multipart = is_multipart
        if is_multipart:
            self._multipart_args = []

    @property
    def multipart_count(self):
        """
        The declared number of message parts.
        *Not* necessarily the number of parts that were actually received.
        For multipart messages that don't declare a count, this is None.
        """
        return self._multipart_count

    @property
    def is_multipart(self):
        """
        Returns true if the message was a multi-part message
        """
        return self._is_multipart


class MessageHandler():
    def __init__(self, serial_port, baud_rate, response_timeout=0.5, multipart_timeout=0.2, loop_delay=0.01):
        self._tx_message_queue = queue.Queue()

        self._tx_message = None
        self._tx_sent_time = None

        self._rx_message = None
        self._last_rx_time = None

        self._rx_multipart_timeout = multipart_timeout

        # Default, can be overridden by individual messages
        self.response_timeout = response_timeout

        # Delay used in the serial processing thread between iterations.
        # If zero, the handler will just spin constantly asking if more serial bytes are available
        # (a fcntl.ioctl call). If too large, you lose responsiveness when a new message comes in
        # (when not waiting for a response, the loop delay is spent waiting on the TX queue, so new
        # outgoing messages are sent immediately).
        # Very large values risk filling the serial buffer before data can be processed.

        # pyserial buffer is apparently 1024 or 4096 bytes, and at 57600 baud a 10ms delay would
        # only be 72 bytes.
        self.loop_delay = loop_delay

        self._rx_string = ""

        self.port = serial.Serial()
        self.port.baudrate = baud_rate  # 57600
        # Set port to be non-blocking
        self.port.timeout = 0
        self.port.port = serial_port

        # self._re_message_frame = re.compile(r"([#!?])(.+?)[\r\n]")
        self._re_msg_start = re.compile(r"[#!?]")
        self._frame_end_chars = "\r\n"
        self._re_msg_bounds = re.compile(r"[#!?\r\n]")

        # start thread
        self.thread = threading.Thread(target=self._serial_comm_thread, daemon=True)
        self.thread.start()

    def send_message(self, message):
        """
        Add a message to the queue to be sent
        """
        if not isinstance(message, TXMessage):
            raise TypeError("'message' argument must be of type TXMessage")
        self._tx_message_queue.put(message)

    def send_comment(self, message_name, arguments=[], needs_response=False, response_timeout=DEFAULT):
        if response_timeout is DEFAULT:
            response_timeout = self.response_timeout
        msg = TXMessage(MessageType.COMMENT, message_name,
                        arguments, needs_response, response_timeout)
        self.send_message(msg)
        return msg

    def send_command(self, message_name, arguments=[], needs_response=True, response_timeout=DEFAULT):
        if response_timeout is DEFAULT:
            response_timeout = self.response_timeout
        msg = TXMessage(MessageType.COMMAND, message_name,
                        arguments, needs_response, response_timeout)
        self.send_message(msg)
        return msg

    def send_request(self, message_name, arguments=[], needs_response=True, response_timeout=DEFAULT):
        if response_timeout is DEFAULT:
            response_timeout = self.response_timeout
        msg = TXMessage(MessageType.REQUEST, message_name,
                        arguments, needs_response, response_timeout)
        self.send_message(msg)
        return msg

    def response_from_request(self, message_name, arguments=[], response_timeout=DEFAULT):
        '''
        Sends request and returns the response. Blocks while waiting. Throws ResponseNotReceivedError
        if the response times out.
        '''
        rqst = self.send_request(message_name, arguments, True, response_timeout)
        if rqst.wait_for_response():
            return rqst.response.arguments
        raise ResponseNotReceivedError(rqst)

    def response_from_command(self, message_name, arguments=[], response_timeout=DEFAULT):
        '''
        Sends command and returns the response. Blocks while waiting. Throws ResponseNotReceivedError
        if the response times out.
        '''
        cmd = self.send_command(message_name, arguments, True, response_timeout)
        if cmd.wait_for_response():
            return cmd.response.arguments
        raise ResponseNotReceivedError(cmd)

    def _send_message(self):
        """
        Actually send a message pulled from the queue.
        Only called from the serial_comm thread.
        """

        argstr = ""
        if len(self._tx_message.arguments) > 0:
            argstr = ':'+','.join(self._tx_message.arguments)

        send_str = F"{self._tx_message.msg_type}{self._tx_message.msg_name}{argstr}\n"
        self.port.write(send_str.encode('utf-8'))
        self._tx_sent_time = time.time()
        log.debug(F"Msg TX: {send_str}")
        # Only keep the current message around if we need to track a response
        if not self._tx_message.needs_response:
            self._tx_message = None

    def _find_msg_frame(self):
        """
        Finds the next frame in _rx_string, trimming any excess.
        Returns true if it found something and needs to be called again;
        intended to be called in a loop.
        """
        match = self._re_msg_start.search(self._rx_string)
        if match is None:
            # No command start characters anywhere in the string, so ditch it
            self._rx_string = ""
            return False

        # trim anything before start
        if match.start() > 0:
            self._rx_string = self._rx_string[match.start():]

        # Search for next start or end character
        match = self._re_msg_bounds.search(self._rx_string, 1)
        if match is None:
            # No end of frame found
            if len(self._rx_string) > MAX_MSG_LEN:
                # We have a message start, but too many characters after it without an end
                # of frame, so ditch it.
                self._rx_string = ""
            return False

        if match[0] in self._frame_end_chars:
            # Found our frame end
            self._parse_message_text(self._rx_string[:match.start()])
            # Trim out the message we just found and start again
            self._rx_string = self._rx_string[match.end():]
            return True
        else:
            # Found another start character, trim and start again
            self._rx_string = self._rx_string[match.start():]
            return True

    def _parse_message_text(self, msg_str):
        """
        Parse a received message string.
        'msg_str' contains the start character and everything after that,
        not including the terminating linefeed.
        Only called from the serial_comm thread.
        """

        log.debug(F"Msg RX: {msg_str}")
        self._last_rx_time = time.time()

        msg_name, _, msg_args_text = msg_str.strip(' \t').partition(':')

        # Convert the character to our enum for later comparison
        msg_type = MessageType(msg_name[0])

        msg_name = msg_name[1:]
        msg_args = msg_args_text.split(',')

        # Trim whitespace around name and args
        msg_name = msg_name.strip(' \t')
        msg_args = [arg.strip(' \t') for arg in msg_args]

        multipart_count = None
        multipart_start = False
        multipart_end = False
        if msg_args[-1].startswith('<'):
            multipart_start = True
            # Message is indicating a multipart start
            count_txt = msg_args[-1][1:].strip(" \t")
            try:
                multipart_count = int(count_txt)
            except ValueError:
                # If no count supplied or can't understand it, treat it as open
                multipart_count = None

            # Remove the multipart start indicator from the args
            msg_args = msg_args[:-1]
        elif msg_args[-1].startswith('>'):
            multipart_end = True
            # Remove the multipart end indicator from the args
            msg_args = msg_args[:-1]

        if self._rx_message is not None:
            # We have a multipart message in progress
            if (self._rx_message.msg_name == msg_name) and (self._rx_message.msg_type == msg_type) and (not multipart_start):
                # Only skip adding args to list if it's a blank multipart end message
                if not (multipart_end and (len(msg_args) == 0)):
                    self._rx_message._multipart_args.append(msg_args)

                if multipart_end or ((self._rx_message.multipart_count is not None) and
                                     (len(self._rx_message.multipart_args) >= self._rx_message.multipart_count)):
                    self._process_rx_message()
                # We're done here
                return
            else:
                # We've got a new message interrupting the in-progress one
                # Close off existing rx message and continue
                self._process_rx_message()

        self._rx_message = RXMessage(msg_type, msg_name, msg_args,
                                     is_multipart=multipart_start, multipart_count=multipart_count)
        if not self._rx_message.is_multipart:
            self._process_rx_message()
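
    # Hedged reconstruction of the multipart framing handled above (message
    # name and values are illustrative only):
    #
    #   TX: "?log\n"                     request
    #   RX: "#log:<3\n"                  response declaring 3 parts ('<count')
    #   RX: "#log:1541230000,385,3\n"    each part is appended to multipart_args
    #   RX: "#log:>\n"                   a '>' arg closes the multipart response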

    def _process_rx_message(self):
        """
        Process a received and parsed message in self._rx_message,
        and dispatch any necessary actions.
        Clears self._rx_message after it's done.
        """
        #print(F"Process rx_msg: {self._rx_message.msg_name}")
        #print(F"tx message during processing is {self._tx_message.msg_name}")

        if (self._tx_message is not None) and (self._tx_message.msg_name == self._rx_message.msg_name) and (self._rx_message.msg_type == MessageType.COMMENT):
            # _tx_message only hangs around if it's waiting for a response.
            # Can't just assume the next received message is a response, as the other
            # device might be sending something else in the meantime, so check for name.
            # Responses are always comments.
            self._tx_message._response = self._rx_message
            self._tx_message._responded.set()
            self._tx_message = None
        else:
            pass
            # TODO - Handle other types of received message and dispatch actions here - callbacks?
        self._rx_message = None

    def _handle_serial_port(self):
        # If there are bytes, read them and deal with them. The underlying port read locks
        # the GIL, so use non-blocking mode.
        if self.port.in_waiting > 0:
            new_bytes = self.port.read(self.port.in_waiting)
            self._rx_string = self._rx_string + new_bytes.decode('utf-8')
            # Find and process message frames
            while self._find_msg_frame():
                pass
            # Loop back to check for more RX characters (the priority)
            return

        if self._rx_message is not None:
            if (time.time()-self._last_rx_time) >= self._rx_multipart_timeout:
                # Been too long waiting for a multipart message part, so time out and process
                # so the main program can use the message
                self._process_rx_message()

        if self._tx_message is not None:
            # Wait for current tx_message to time out
            if (time.time()-self._tx_sent_time) >= self._tx_message._response_timeout:
                # Timeout the request
                self._tx_message = None
            else:
                time.sleep(self.loop_delay)
        else:
            # Try and get a new message to send
            try:
                self._tx_message = self._tx_message_queue.get(
                    block=True, timeout=self.loop_delay)
                # Only gets here if self._tx_message is actually set
                self._send_message()
            except queue.Empty:
                pass

    def _serial_comm_thread(self):
        while True:
            # Actual wait is on either the non-empty queue or a serial character to parse.
            # Serial comms is not synchronous, so we need to be available to receive characters
            # at any point.
            try:
                log.info(F"Connecting to serial port {self.port.port}, with baud {self.port.baudrate}...")
                self.port.open()
                while True:
                    self._handle_serial_port()
            except serial.SerialException:
                log.error("Could not open serial port")
                time.sleep(1)
                # If there's a SerialException, try to reopen the port
@ -1,234 +0,0 @@
#!/usr/bin/env python3

import shutil
import os
import threading
import paramiko
import shepherd.config as shconf
import shepherd.plugin
# configdef = shepherd.config.definition()

# Can either import shepherd.config here, and call a function to build a config_def
# or can leave a config_def entry point.
# probably go with entry point, to stay consistent with the module


# on server side, we want to be able to list commands that a module responds to
# without actually instantiating the module class. Add command templates into
# the conf_def, then attach to them in the interface? Was worried about having
# "two sources of truth", but you already need to match the conf_def to the
# name where you access the value in the module. Could have add_command, which
# you then add standard conf_def subclasses to, to reuse validation and server
# form generation logic...


# The uploader plugin allows the definition of upload "buckets" - essentially
# a way of collecting together upload, timing, and retention settings.
# Buckets will ignore any filenames ending with ".writing", ".uploading" or ".uploaded".
# The "move_to_bucket" interface function is provided, but bucket directories will
# also work with any files moved into them externally.
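
# An illustrative config fragment for this plugin (hedged sketch matching the
# confdef in UploaderPlugin.define_config below; all values are hypothetical):
#
#   [[uploader.destination]]
#   name = "fieldserver"
#   protocol = "sftp"
#   address = "example.org"
#   port = 22
#   path = "uploads"
#   username = "shepherd"
#   password = "secret"
#
#   [[uploader.bucket]]
#   name = "images"
#   open_link_on_new = false
#   keep_copy = false
#   destination = "fieldserver"
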

class Destination():
    def __init__(self, config, node_id, root_dir):
        self.config = config
        self.node_id = node_id
        self.root_dir = root_dir
        self.sendlist_condition = threading.Condition()
        self.send_list = []

        self.thread = threading.Thread(target=self._send_files)
        self.thread.start()

    # Override this in subclasses, implementing the actual upload process.
    # Should raise an exception on failure - the send thread will requeue the file.
    def upload(self, filepath, suffix):
        print("Dummy uploading "+filepath)
        return True

    def add_files_to_send(self, file_path_list):
        self.sendlist_condition.acquire()
        for file_path in file_path_list:
            if file_path not in self.send_list:
                self.send_list.append(file_path)
                self.sendlist_condition.notify()

        self.sendlist_condition.release()

    def _file_available(self):
        return len(self.send_list) > 0

    def _send_files(self):
        while True:
            self.sendlist_condition.acquire()
            # this drops through immediately if there is something to send, otherwise waits
            self.sendlist_condition.wait_for(self._file_available)
            file_to_send = self.send_list.pop(0)
            os.rename(file_to_send, file_to_send+".uploading")
            self.sendlist_condition.release()

            # Rename the uploaded file to end with ".uploaded" on success, or back
            # to its original path on failure.
            try:
                self.upload(file_to_send, ".uploading")
                os.rename(file_to_send+".uploading", file_to_send+".uploaded")
            except Exception as e:
                print(F"Upload failed with exception {e}")
                os.rename(file_to_send+".uploading", file_to_send)
                self.send_list.append(file_to_send)


class SFTPDestination(Destination):
    def upload(self, filepath, suffix):
        print("Starting upload...")
        with paramiko.Transport((self.config["address"],
                                 self.config["port"])) as transport:
            transport.connect(username=self.config["username"],
                              password=self.config["password"])
            with paramiko.SFTPClient.from_transport(transport) as sftp:
                print("Uploading "+filepath+" to " +
                      self.config["address"]+" via SFTP")
                if self.config["add_id_to_path"]:
                    destdir = os.path.join(self.config["path"],
                                           self.node_id)
                else:
                    destdir = self.config["path"]

                try:
                    sftp.listdir(destdir)
                except IOError:
                    print("Creating remote dir: " + destdir)
                    sftp.mkdir(destdir)

                print("Target dir: "+destdir)
                sftp.put(filepath+suffix,
                         os.path.join(destdir, os.path.basename(filepath)))


class Bucket():
    def __init__(self, name, open_link_on_new, opportunistic, keep_copy,
                 destination, node_id, root_dir, path=None, old_path=None):
        self.newfile_event = threading.Event()
        self.newfile_event.set()

        self.node_id = node_id
        self.root_dir = root_dir

        self.destination = destination

        self.path = path
        if self.path is None:
            self.path = os.path.join(self.root_dir, name)
        if not os.path.exists(self.path):
            os.makedirs(self.path)

        if keep_copy:
            self.old_path = old_path
            if self.old_path is None:
                self.old_path = os.path.join(
                    self.root_dir, name + "_old")
            if not os.path.exists(self.old_path):
                os.makedirs(self.old_path)

        self.thread = threading.Thread(target=self._check_files)
        self.thread.start()

    def _check_files(self):

        while True:
            # NOTE: The reason we use an event here, rather than a lock or condition,
            # is that we're not sharing any internal state between the threads - just
            # the filesystem itself, relying on the atomicity of file operations. While
            # less clean in a pure python sense, this allows more flexibility in
            # allowing other sources of files.
            self.newfile_event.wait(timeout=10)
            self.newfile_event.clear()
            bucket_files = []
            for item in os.listdir(self.path):
                item_path = os.path.join(self.path, item)
                if (os.path.isfile(item_path) and
                        (not item.endswith(".writing")) and
                        (not item.endswith(".uploading")) and
                        (not item.endswith(".uploaded"))):
                    bucket_files.append(item_path)
            #TODO check for .uploaded files and either delete or,
            # if keep_copy, move to self.old_path

            if bucket_files:
                self.destination.add_files_to_send(bucket_files)


class UploaderPlugin(shepherd.plugin.Plugin):
    @staticmethod
    def define_config(confdef):
        dests = confdef.add_def('destination', shconf.TableArrayDef())
        dests.add_def('name', shconf.StringDef())
        dests.add_def('protocol', shconf.StringDef())
        dests.add_def('address', shconf.StringDef(optional=True))
        dests.add_def('port', shconf.IntDef(optional=True))
        dests.add_def('path', shconf.StringDef(optional=True))
        dests.add_def('username', shconf.StringDef(optional=True))
        dests.add_def('password', shconf.StringDef(optional=True))
        dests.add_def('keyfile', shconf.StringDef(
            default="", optional=True))
        dests.add_def('add_id_to_path', shconf.BoolDef(
            default=True, optional=True))

        buckets = confdef.add_def('bucket', shconf.TableArrayDef())
        buckets.add_def('name', shconf.StringDef())
        buckets.add_def('open_link_on_new', shconf.BoolDef())
        buckets.add_def('opportunistic', shconf.BoolDef(
            default=True, optional=True))
        buckets.add_def('keep_copy', shconf.BoolDef())
        buckets.add_def('destination', shconf.StringDef())

    def __init__(self, pluginInterface, config):
        super().__init__(pluginInterface, config)
        self.config = config
        self.interface = pluginInterface
        self.plugins = pluginInterface.other_plugins
        self.hooks = pluginInterface.hooks

        self.root_dir = os.path.expanduser(pluginInterface.coreconfig["root_dir"])
        self.id = pluginInterface.coreconfig["id"]

        print("Uploader config:")
        print(self.config)

        self.interface.register_function(self.move_to_bucket)

        self.destinations = {}
        self.buckets = {}

        for dest_conf in self.config["destination"]:
            if dest_conf["protocol"] == "sftp":
                self.destinations[dest_conf["name"]] = SFTPDestination(
                    dest_conf, self.id, self.root_dir)
            else:
                self.destinations[dest_conf["name"]] = Destination(
                    dest_conf, self.id, self.root_dir)

        for bucketconf in self.config["bucket"]:
            bucketconf["destination"] = self.destinations[bucketconf["destination"]]
            self.buckets[bucketconf["name"]] = Bucket(
                **bucketconf, node_id=self.id, root_dir=self.root_dir)

    def move_to_bucket(self, filepath, bucket_name):
        # Use an intermediary step with ".writing" on the filename
        # in case the source isn't in the same filesystem, where the
        # move operation might not be atomic. Once it's there, the rename
        # _is_ atomic.
        dest_path = os.path.join(self.buckets[bucket_name].path,
                                 os.path.basename(filepath))
        temp_dest_path = dest_path + ".writing"
        shutil.move(filepath, temp_dest_path)
        os.rename(temp_dest_path, dest_path)

        # notify bucket to check for new files
        self.buckets[bucket_name].newfile_event.set()


if __name__ == "__main__":
    pass
    # print("main")
    # main(sys.argv[1:])
@ -1,327 +0,0 @@
import io
import os
from datetime import datetime
import time
import re

import shepherd.config as shconf
import shepherd.plugin

import threading

import subprocess

from collections import namedtuple, OrderedDict
from operator import itemgetter


import cv2
from PIL import Image, ImageDraw, ImageFont

asset_dir = os.path.dirname(os.path.realpath(__file__))

overlayfont_filename = os.path.join(asset_dir, "DejaVuSansMono.ttf")
logo_filename = os.path.join(asset_dir, "smallshepherd.png")

# Note: Add a lock to the gstreamer function, to avoid multiple triggers colliding

CameraPort = namedtuple(
    'CameraPort', ['usbPath', 'devicePath'])


# Short wrapper to allow use in a ``with`` context
class VideoCaptureCtx():
    def __init__(self, *args, **kwargs):
        self.capture_dev = cv2.VideoCapture(*args, **kwargs)

    def __enter__(self):
        return self.capture_dev

    def __exit__(self, *args):
        self.capture_dev.release()


def get_connected_cameras():
    # This will return devices ordered by the USB path, regardless of the order they're connected in
    device_list_str = subprocess.run(
        ['v4l2-ctl', '--list-devices'], text=True, stdout=subprocess.PIPE).stdout
    # in each match, the first group is the USB path, the second group is the device path
    portlist = re.findall(r"-([\d.]+?)\):\n\s*?(\/dev\S+?)\n", device_list_str)
    return [CameraPort(*port) for port in portlist]


def get_capture_formats(video_device):
    """
    Call ``v4l2-ctl --device {video_device} --list-formats-ext`` and parse the output into a format dict

    Returns a dict with 4CC format codes as keys, and lists of (width, height) tuples as values
    """
    device_fmt_str = subprocess.run(
        ['v4l2-ctl', '--device', F'{video_device}', '--list-formats-ext'], text=True, stdout=subprocess.PIPE).stdout

    split_fmts = re.split(r"\[\d\]: '(\w{4}).*", device_fmt_str)
    if len(split_fmts) < 3:
        raise Exception("Did not get valid device format list output")

    # Iterate through successive pairs in the split, where the first is the format mode and the
    # second is the text containing all the resolution options. Skip the first bit, which is rubbish
    format_dict = {}
    for fourcc, size_text in zip(split_fmts[1::2], split_fmts[2::2]):
        resolutions = re.findall(r"(\d+?)x(\d+?)\D", size_text)
        format_dict[fourcc] = resolutions
    return format_dict


def get_largest_resolution(size_list):
    """
    Accepts a list of tuples where the first element is a width and the second is a height.

    Returns a single resolution tuple representing the largest area from the list
    """
    return max(size_list, key=lambda size: int(size[0])*int(size[1]))
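

# Illustrative use of the three helpers above (hedged: the device path and the
# formats it reports are hypothetical, and v4l2-ctl must be installed):
#
#   fmts = get_capture_formats("/dev/video0")
#   # e.g. {'MJPG': [('1920', '1080'), ('1280', '720')], 'YUYV': [('640', '480')]}
#   if "MJPG" in fmts:
#       width, height = get_largest_resolution(fmts["MJPG"])
#       set_camera_format_v4l2("/dev/video0", "MJPG", width, height)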
def set_camera_format_v4l2(video_device, fourcc, width, height):
|
|
||||||
"""
|
|
||||||
Set the camera device capture format using the external v4l2-ctl tool
|
|
||||||
"""
|
|
||||||
subprocess.run(['v4l2-ctl', '--device', F'{video_device}',
|
|
||||||
F'--set-fmt-video width={width},height={height},pixelformat={fourcc}'], text=True)
|
|
||||||
|
|
||||||
|
|
||||||
def set_camera_format_opencv(capture_device, fourcc, width, height):
|
|
||||||
"""
|
|
||||||
Set the camera device capture format using internal OpenCV set methods
|
|
||||||
"""
|
|
||||||
# VideoWriter_fourcc expects a list of characters, so need to unpack the string
|
|
||||||
capture_device.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*fourcc))
|
|
||||||
capture_device.set(cv2.CAP_PROP_FRAME_WIDTH, int(width))
|
|
||||||
capture_device.set(cv2.CAP_PROP_FRAME_HEIGHT, int(height))
|
|
||||||
|
|
||||||
|
|
||||||
class USBCamPlugin(shepherd.plugin.Plugin):
|
|
||||||
@staticmethod
|
|
||||||
def define_config(confdef):
|
|
||||||
confdef.add_def('upload_images', shconf.BoolDef(default=False, optional=True,
|
|
||||||
helptext="If true, move to an Uploader bucket. Requires Uploader plugin"))
|
|
||||||
confdef.add_def('upload_bucket', shconf.StringDef(default="", optional=True,
|
|
||||||
helptext="Name of uploader bucket to shift images to."))
|
|
||||||
confdef.add_def('save_directory', shconf.StringDef(default="", optional=True,
|
|
||||||
helptext="Name of directory path to save images. If empty, a 'usbcamera' directory under the Shepherd root dir will be used"))
|
|
||||||
confdef.add_def('append_id', shconf.BoolDef(default=True, optional=True,
|
|
||||||
helptext="If true, add the system ID to the end of image filenames"))
|
|
||||||
confdef.add_def('show_overlay', shconf.BoolDef(default=True, optional=True,
|
|
||||||
helptext="If true, add an overlay on each image with the system ID and date."))
|
|
||||||
confdef.add_def('overlay_desc', shconf.StringDef(default="", optional=True,
|
|
||||||
helptext="Text to add to the overlay after the system ID and camera name"))
|
|
||||||
confdef.add_def('jpeg_quality', shconf.IntDef(default=85, minval=60, maxval=95, optional=True,
|
|
||||||
helptext="JPEG quality to save with. Max of 95, passed directly to Pillow"))
|
|
||||||
confdef.add_def('stabilise_delay', shconf.IntDef(default=5, minval=1, maxval=30, optional=True,
|
|
||||||
helptext="Number of seconds to wait after starting each camera for exposure and white balance to settle"))
|
|
||||||
|
|
||||||
array = confdef.add_def('trigger', shconf.TableArrayDef(
|
|
||||||
helptext="Array of triggers that will use all cameras"))
|
|
||||||
array.add_def('hour', shconf.StringDef())
|
|
||||||
array.add_def('minute', shconf.StringDef())
|
|
||||||
array.add_def('second', shconf.StringDef(default="0", optional=True))
|
|
||||||
|
|
||||||
camarray = confdef.add_def('camera', shconf.TableArrayDef(
|
|
||||||
helptext="List of cameras to try and connect to. Multiple ports may be listed, and any not connected will be skipped on each trigger."))
|
|
||||||
camarray.add_def('name', shconf.StringDef(default="", optional=False,
|
|
||||||
helptext="Name of camera, appended to filename and added to overlay"))
|
|
||||||
camarray.add_def('usb_port', shconf.StringDef(default="*", optional=False,
|
|
||||||
helptext="USB port descriptor of the from '3.4.1' (which would indicate port1 on a hub plugged into port4 on a hub plugged into port 3 of the system). This can be found by running 'v4l2-ctl --list-devices'. A single camera with a wildcard '*' port is also allowed, and will match any remaining available camera."))
|
|
||||||
|
|
||||||
def __init__(self, pluginInterface, config):
|
|
||||||
super().__init__(pluginInterface, config)
|
|
||||||
self.config = config
|
|
||||||
self.interface = pluginInterface
|
|
||||||
self.plugins = pluginInterface.other_plugins
|
|
||||||
self.hooks = pluginInterface.hooks
|
|
||||||
|
|
||||||
self.root_dir = os.path.expanduser(pluginInterface.coreconfig["root_dir"])
|
|
||||||
self.id = pluginInterface.coreconfig["id"]
|
|
||||||
|
|
||||||
self.interface.register_hook("pre_cam")
|
|
||||||
self.interface.register_hook("post_cam")
|
|
||||||
self.interface.register_function(self.camera_job)
|
|
||||||
self.interface.register_function(self.run_cameras)
|
|
||||||
# do some camera init stuff
|
|
||||||
|
|
||||||
print("USBCamera config:")
|
|
||||||
print(self.config)
|
|
||||||
|
|
||||||
self.gstlock = threading.Lock()
|
|
||||||
|
|
||||||
if self.config["save_directory"] is "":
|
|
||||||
self.save_directory = os.path.join(self.root_dir, "usbcamera")
|
|
||||||
else:
|
|
||||||
self.save_directory = self.config["save_directory"]
|
|
||||||
|
|
||||||
if not os.path.exists(self.save_directory):
|
|
||||||
os.makedirs(self.save_directory)
|
|
||||||
|
|
||||||
if self.config["show_overlay"]:
|
|
||||||
# Load assets
|
|
||||||
self.logo_im = Image.open(logo_filename)
|
|
||||||
|
|
||||||
self.font_size_cache = {}
|
|
||||||
self.logo_size_cache = {}
|
|
||||||
|
|
||||||
# Dict of camera names storing the USB path as the value
|
|
||||||
self.defined_cams = OrderedDict()
|
|
||||||
# List of wildcard camera names
|
|
||||||
self.wildcard_cams = []
|
|
||||||
|
|
||||||
# Go through camera configs sorted by name
|
|
||||||
for camera in sorted(self.config["camera"], key=itemgetter("name")):
|
|
||||||
if camera["name"] in self.defined_cams:
|
|
||||||
raise shconf.InvalidConfigError(
|
|
||||||
"Can't have more than one usb camera defined with the same config name")
|
|
||||||
if camera["usb_port"] == '*':
|
|
||||||
self.wildcard_cams.append(camera["name"])
|
|
||||||
else:
|
|
||||||
            self.defined_cams[camera["name"]] = camera["usb_port"]

        for trigger in self.config["trigger"]:
            trigger_id = trigger["hour"] + '-' + trigger["minute"] + '-' + trigger["second"]
            self.interface.add_job(
                self.camera_job, trigger["hour"], trigger["minute"], trigger["second"], job_name=trigger_id)

    def _generate_overlay(self, width, height, image_time, camera_name):
        font_size = int(height / 40)
        margin_size = int(font_size / 5)

        # Cache fonts and resized logos per font size, as overlays are regenerated every capture
        if font_size not in self.font_size_cache:
            self.font_size_cache[font_size] = ImageFont.truetype(
                overlayfont_filename, int(font_size * 0.9))
        thisfont = self.font_size_cache[font_size]

        if font_size not in self.logo_size_cache:
            newsize = (int(self.logo_im.width * (font_size / self.logo_im.height)), font_size)
            self.logo_size_cache[font_size] = self.logo_im.resize(newsize, Image.BILINEAR)
        thislogo = self.logo_size_cache[font_size]

        desc_text = camera_name + " " + self.config["overlay_desc"]
        if self.config["append_id"]:
            desc_text = self.id + " " + desc_text

        time_text = image_time.strftime("%Y-%m-%d %H:%M:%S")

        overlay = Image.new('RGBA', (width, font_size + (2 * margin_size)), (0, 0, 0))
        overlay.paste(thislogo, (int((overlay.width - thislogo.width) / 2), margin_size))

        draw = ImageDraw.Draw(overlay)
        draw.text((margin_size * 2, margin_size), desc_text,
                  font=thisfont, fill=(255, 255, 255, 255))

        # Right-align the timestamp against the overlay edge
        datewidth, _ = draw.textsize(time_text, thisfont)
        draw.text((overlay.width - (margin_size * 2) - datewidth, margin_size), time_text,
                  font=thisfont, fill=(255, 255, 255, 255))

        # Make the whole overlay half transparent
        overlay.putalpha(128)
        return overlay

    def _process_image(self, cv_frame, camera_name):
        image_time = datetime.now()

        # Convert over to PIL, mostly so we can use our own font.
        img = Image.fromarray(cv2.cvtColor(cv_frame, cv2.COLOR_BGR2RGB))

        if self.config["show_overlay"]:
            overlay = self._generate_overlay(img.width, img.height, image_time, camera_name)
            img.paste(overlay, (0, img.height - overlay.height), overlay)

        image_filename = image_time.strftime("%Y-%m-%d %H-%M-%S")
        if self.config["append_id"]:
            image_filename = image_filename + " " + self.id

        if camera_name != "":
            image_filename = image_filename + " " + camera_name
        image_filename = image_filename + ".jpg"
        image_filename = os.path.join(self.save_directory, image_filename)

        # Write under a temporary name, then rename, so consumers never see a half-written JPEG
        img.save(image_filename + ".writing", "JPEG", quality=self.config["jpeg_quality"])
        os.rename(image_filename + ".writing", image_filename)

        if self.config["upload_images"]:
            self.plugins["uploader"].move_to_bucket(image_filename, self.config["upload_bucket"])

    def _capture_image(self, device_path, camera_name):
        print("Running camera " + camera_name)

        with self.gstlock:
            # GStreamer alternative, kept for reference:
            # gst_str = ('v4l2src device=' + device_path + ' ! '
            #            'videoconvert ! appsink drop=true max-buffers=1 sync=false')
            # vidcap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)

            fmts = get_capture_formats(device_path)

            with VideoCaptureCtx(device_path, cv2.CAP_V4L2) as vidcap:
                if "MJPG" in fmts:
                    size = get_largest_resolution(fmts["MJPG"])
                    set_camera_format_opencv(vidcap, "MJPG", size[0], size[1])

                # The stream only starts after the first grab
                print("Starting cam")
                read_flag, frame = vidcap.read()

                # Keep grabbing frames while the sensor stabilises, so the buffer stays fresh
                delay_start = time.time()
                while (time.time() - delay_start) < self.config["stabilise_delay"]:
                    vidcap.grab()

                print("Reading")
                read_flag, frame = vidcap.read()

                # A second read in YUYV format was also trialled here:
                # if "YUYV" in fmts:
                #     size = get_largest_resolution(fmts["YUYV"])
                #     set_camera_format_opencv(vidcap, "YUYV", size[0], size[1])
                #     read_flag, frame2 = vidcap.read()

            if read_flag:
                self._process_image(frame, camera_name)
            else:
                print("Could not read camera " + camera_name +
                      " on USB port " + device_path)

    def run_cameras(self, name_suffix=""):
        connected_cams = OrderedDict(get_connected_cameras())

        # Cameras pinned to a specific USB port are matched (and consumed) first
        for defined_name, defined_usb_path in self.defined_cams.items():
            if defined_usb_path in connected_cams:
                self._capture_image(connected_cams.pop(defined_usb_path),
                                    defined_name + name_suffix)
            else:
                print("USB Camera " + defined_name + " on port " +
                      defined_usb_path + " is not currently connected")

        # Wildcard cameras then claim whatever connected cameras remain, in order
        for cam_name in self.wildcard_cams:
            if len(connected_cams) > 0:
                self._capture_image(connected_cams.popitem(last=False)[1],
                                    cam_name + name_suffix)
            else:
                print("No connected USB cameras are currently left to match to " + cam_name)
                break

    def camera_job(self):
        self.hooks.pre_cam()
        self.run_cameras()
        self.hooks.post_cam()


if __name__ == "__main__":
    pass
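The save path in _process_image above is the classic atomic-publish pattern: the JPEG is written under a temporary ".writing" name and only renamed once complete, so the uploader (or anything else watching save_directory) never picks up a partial file. A minimal standalone sketch of the same pattern; save_atomically and its parameters are illustrative, not part of the plugin:

    import os
    from PIL import Image

    def save_atomically(img: Image.Image, final_path: str, quality: int = 90) -> None:
        tmp_path = final_path + ".writing"
        img.save(tmp_path, "JPEG", quality=quality)
        # os.rename() is atomic within a filesystem on POSIX, so final_path
        # either doesn't exist yet or names a complete file.
        os.rename(tmp_path, final_path)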
(deleted binary image, 12 KiB)
@ -1,13 +0,0 @@
[shepherd]
name = "test-node"
plugin_dir = "./"
root_dir = "~/shepherd/"
hostname = "shepherd-test"
control_server = "api.shepherd.distreon.net"
#control_server = "127.0.0.1:5000"
control_api_key = "v2EgvYzx79c8fCP4P7jlWxTZ3pc"

[scout]
boardver = "3"
serialport = "/dev/ttyUSB0"
@ -0,0 +1,94 @@
# pylint: disable=no-self-argument
from configspec import *
from shepherd import PluginInterface, plugin_class, plugin_function, plugin_hook
from shepherd import plugin_attachment, plugin_run, plugin_init

"""
Plugin to test the plugin class systems and the various decorator markers
"""

interface = PluginInterface()

confspec = ConfigSpecification()
confspec.add_spec("spec1", StringSpec(helptext="helping!"))


@plugin_function
def module_function(a):
    return F"module func {a}"


@plugin_hook
def module_hook(a, b):
    pass


@plugin_attachment("module_hook")
def module_attachment(a, b):
    return F"module attachment {a} {b}"


@plugin_class
class ClassPlugin():
    def __init__(self):
        self.config = interface.config
        self.interface = interface
        self.plugins = interface.plugins
        self.hooks = interface.hooks
        self.interface.init_method_called = True

    # Interface functions
    @plugin_function
    def instance_method(self, a):
        return F"instance method {a}"

    @plugin_function
    @classmethod
    def class_method(cls, a):
        return F"class method {a}"

    @plugin_function
    @staticmethod
    def static_method(a):
        return F"static method {a}"

    # Hooks
    @plugin_hook(name="instance_hook")
    def instance_hook_name(a, b):
        pass

    @plugin_hook
    @staticmethod
    def static_hook(a, b):
        pass

    @plugin_hook
    @staticmethod
    def static_hook2(a, b):
        pass

    # Attachments (these are bound before attachment, so self and cls work as normal, and are
    # not included in the signature)
    @plugin_attachment("instance_hook")
    def instance_attach(self, a, b):
        return F"instance attachment {a} {b}"

    @plugin_attachment("static_hook2")
    @classmethod
    def class_attach(cls, a, b):
        return F"class attachment {a} {b}"

    @plugin_attachment("classtestplugin.static_hook")
    @staticmethod
    def static_attach(a, b):
        return F"static attachment {a} {b}"

    @plugin_init
    def plugin_init2_method(self):
        self.interface.init2_method_called = True

    @plugin_run
    def plugin_run_method(self):
        self.interface.run_method_called = True
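The binding behaviour noted in the attachment comments of this test plugin is exercised by test_plugin.py at the end of this diff. As a rough usage sketch (the loader calls and the expected result are copied from those tests, not a separate public API):

    from pathlib import Path
    from shepherd.agent import plugin

    interface = plugin.load_plugin("classtestplugin", Path("./assets"))
    plugin.init_plugins({"classtestplugin": interface.confspec.get_template()})

    # Calling a hook fans out to each attachment and collects results keyed by plugin name
    assert interface.hooks.instance_hook(3, 4) == {"classtestplugin": "instance attachment 3 4"}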
@ -0,0 +1,12 @@
from configspec import *
from shepherd import PluginInterface

interface = PluginInterface()

confspec = ConfigSpecification()
confspec.add_spec("spec1", StringSpec())

confspec2 = ConfigSpecification()
confspec2.add_spec("spec2", StringSpec())

interface.register_confspec(confspec2)
@ -0,0 +1,4 @@
[shepherd]
name = "shepherd-test"
root_dir = "./"
compiled_config_path = ""
@ -0,0 +1,7 @@
[shepherd]
name = "shepherd-test"
root_dir = "./"
compiled_config_path = ""
plugin_dir = "./"
[classtestplugin]
spec1 = "a"
@ -0,0 +1,55 @@
from inspect import signature
from configspec import *
from shepherd import PluginInterface

"""
Plugin to test basic registration calls.
"""

interface = PluginInterface()

confspec = ConfigSpecification()
confspec.add_spec("spec1", StringSpec())

interface.register_confspec(confspec)


def my_interface_function():
    return 42


interface.register_function(my_interface_function)


def basic_attachment():
    return "basic attachment"


def attachment_with_args(arg_a, arg_b):
    return F"attachment with args: {arg_a}, {arg_b}"


def attachment_with_fancy_args(arg_a, arg_b, arg_c=True):
    return F"attachment with fancy args: {arg_a}, {arg_b}, {arg_c}"


# Hooks can be declared bare, with a tuple of argument names, or with a full
# inspect.Signature
interface.register_hook("basic_hook")
interface.register_hook("hook_with_args", ("arg_a", "arg_b"))
interface.register_hook("hook_with_fancy_args", signature(lambda arg_a, arg_b, arg_c=True: None))

interface.register_attachment(basic_attachment, "basic_hook")
interface.register_attachment(attachment_with_args, "simpletestplugin.hook_with_args")
interface.register_attachment(attachment_with_fancy_args, "hook_with_fancy_args")


def my_init_func():
    # Create a dummy variable so tests can check that init ran
    interface.init_func_called = True


def my_run_func():
    interface.run_func_called = True


interface.register_init(my_init_func)
interface.register_run(my_run_func)
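Read together with test_plugin.py below, the registration calls above give the following behaviour (a sketch; the return shapes are taken directly from those tests):

    # After plugin.load_plugin(...) and plugin.init_plugins(...) as in the test fixtures:
    interface.plugins["simpletestplugin"].my_interface_function()
    # -> 42
    interface.hooks.hook_with_args(3, 7)
    # -> {'simpletestplugin': 'attachment with args: 3, 7'}
    interface.hooks.hook_with_args(3, 7, 5)
    # -> TypeError: the tuple or Signature given to register_hook() is enforced on call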
@ -0,0 +1,8 @@
from configspec import *
from shepherd import PluginInterface, plugin, plugin_function, plugin_hook, plugin_attachment

interface = PluginInterface()

confspec = ConfigSpecification()
confspec.add_spec("spec2", StringSpec(helptext="helping!"))
@ -0,0 +1,32 @@

@plugin
class SystemPlugin():
    def __init__(self, pluginInterface, config):
        self.config = config
        self.interface = pluginInterface
        self.plugins = pluginInterface.other_plugins
        self.hooks = pluginInterface.hooks

        self.interface.register_function(self.echo)
        self.interface.register_function(self.exec)

    @plugin_function()
    def echo(self, string: str):
        pass

    def exec(self):
        pass

    @plugin_hook
    def callback(self):
        pass

    @plugin_attachment("pluginname.hookname")
    def caller(self):
        pass


# interface.register_plugin(SystemPlugin)
@ -0,0 +1,59 @@
# pylint: disable=redefined-outer-name
from pathlib import Path
import logging

from click.testing import CliRunner
import pytest

from shepherd.agent.cli import cli


def test_shepherd_template():
    # Note that the CliRunner doesn't catch log output
    runner = CliRunner()
    result = runner.invoke(cli, ['template'])
    assert """
.: Shepherd - Template :.

:: Config template for [shepherd]

[shepherd]
name =""" in result.output


def test_shepherd_optional_template():
    runner = CliRunner()
    result = runner.invoke(cli, ['template', '-a'])
    assert """
.: Shepherd - Template :.

:: Config template for [shepherd]

[shepherd]
name =
root_dir = "./"
custom_config_path =
compiled_config_path = "compiled-config.toml"
plugin_dir = "./shepherd-plugins"

[shepherd.session]
resume_delay = 180
enable_suspend = true
min_suspend_time = 300

[shepherd.control]
server =
intro_key =""" in result.output


def test_plugin_template(request):
    plugindir = Path(request.fspath.dirname)/'assets'
    runner = CliRunner()
    result = runner.invoke(cli, ['template', '-d', str(plugindir), 'simpletestplugin'])
    assert """
.: Shepherd - Template :.

:: Config template for [simpletestplugin]

[simpletestplugin]
spec1 =""" in result.output
@ -1,18 +0,0 @@
import shepherd.config as config


def test_freeze():
    confdef = config.ConfDefinition()
    conf_def_dict = confdef.add_def('dictval', config.DictDef())
    conf_def_dict.add_def('intval', config.IntDef())
    conf_def_dict.add_def('strtval', config.StringDef())

    confman = config.ConfigManager()
    confman.add_confdef("test_bundle", confdef)
    confman.load({"test_bundle": {'dictval': {'intval': 34, 'strval': "a"}}})

    confman.freeze_value("test_bundle", "dictval", "intval")

    confman.load({"test_bundle": {'dictval': {'intval': 34, 'strval': "b"}}})
    breakpoint()
    print(confman.root_config)
@ -0,0 +1,201 @@
# pylint: disable=redefined-outer-name
import secrets
from base64 import b64encode
import json
import logging
import time
import pytest
import responses
import statesman
from collections import namedtuple

from configspec import ConfigSpecification

from shepherd.agent import control
from shepherd.agent import plugin


def test_device_id(monkeypatch, tmpdir):
    with pytest.raises(FileNotFoundError):
        control.load_device_identity(tmpdir)

    def fixed_token_hex(_):
        return '0123456789abcdef0123456789abcdef'
    monkeypatch.setattr(secrets, "token_hex", fixed_token_hex)

    dev_secret, dev_id = control.generate_device_identity(tmpdir)
    assert dev_secret == '0123456789abcdef0123456789abcdef'
    assert dev_id == '3dead5e4'

    dev_secret, dev_id = control.load_device_identity(tmpdir)
    assert dev_secret == '0123456789abcdef0123456789abcdef'
    assert dev_id == '3dead5e4'


@pytest.fixture
def control_config():
    return {'server': 'api.shepherd.test', 'intro_key': 'abcdefabcdefabcdef'}


@pytest.fixture
def registered_interface():
    interface = plugin.PluginInterface()
    interface.register_confspec(ConfigSpecification())
    control.register_on(interface)
    return interface


def test_config(control_config, registered_interface):
    registered_interface.confspec.validate({'control': control_config})


def test_url():
    assert control.clean_https_url('api.shepherd.test') == 'https://api.shepherd.test'
    assert control.clean_https_url('api.shepherd.test/foo') == 'https://api.shepherd.test/foo'
    assert control.clean_https_url('http://api.shepherd.test') == 'https://api.shepherd.test'


@responses.activate
def test_control_thread(control_config, tmpdir, caplog):
    # Testing threads is a pain, as exceptions (including assertions) thrown in the thread don't
    # cause the test to fail. We can cheat a little here, as the 'responses' mock framework will
    # throw a requests.exceptions.ConnectionError if the request isn't recognised, and we're
    # already logging those in Control.

    responses.add(responses.POST, 'https://api.shepherd.test/agent/update', json={})
    responses.add(responses.POST, 'https://api.shepherd.test/agent/pluginupdate/plugin_A', json={})
    responses.add(responses.POST, 'https://api.shepherd.test/agent/pluginupdate/plugin_B', json={})

    core_update_state = control.CoreUpdateState(
        statesman.SequenceReader(), statesman.SequenceWriter())
    core_update_state.set_static_state({'the_local_config': 'val'}, {
        'the_applied_config': 'val'}, {})
    plugin_update_states = {'plugin_A': control.PluginUpdateState(),
                            'plugin_B': control.PluginUpdateState()}

    control_thread = control.start_control(
        control_config, tmpdir, core_update_state, plugin_update_states)
    control.stop()
    control_thread.join()

    # Check there were no connection exceptions
    for record in caplog.records:
        assert record.levelno <= logging.WARNING

    # There is a log line present if the thread stopped properly
    assert ("shepherd.agent.control", logging.WARNING,
            "Control thread stopping...") in caplog.record_tuples


@responses.activate
def test_control(control_config, tmpdir, caplog, monkeypatch):
    # Here we skip control_init and just run the update loop directly, to keep things in the same
    # thread
    def fixed_token_hex(_):
        return '0123456789abcdef0123456789abcdef'
    monkeypatch.setattr(secrets, "token_hex", fixed_token_hex)

    core_topic_bundle = statesman.TopicBundle()

    core_topic_bundle.add('status', statesman.StateReader())
    core_topic_bundle.add('config-spec', statesman.StateReader())
    core_topic_bundle.add('device-config', statesman.StateReader())
    core_topic_bundle.add('applied-config', statesman.StateReader())
    core_topic_bundle.add('control-commands', statesman.SequenceWriter())
    core_topic_bundle.add('command-results', statesman.SequenceReader())

    core_callback_count = 0

    def core_update_callback(request):
        nonlocal core_callback_count
        core_callback_count += 1
        payload = json.loads(request.body)
        assert 'applied-config' in payload
        assert 'device-config' in payload

        core_topic_bundle.process_message(payload)
        resp_body = core_topic_bundle.get_payload()

        basic_auth = b64encode(
            b"0123456789abcdef0123456789abcdef:abcdefabcdefabcdef").decode("ascii")
        assert request.headers['authorization'] == F"Basic {basic_auth}"

        return (200, {}, json.dumps(resp_body))

    responses.add_callback(
        responses.POST, 'https://api.shepherd.test/agent/update',
        callback=core_update_callback,
        content_type='application/json')

    responses.add(responses.POST, 'https://api.shepherd.test/agent/pluginupdate/plugin_A', json={})
    responses.add(responses.POST, 'https://api.shepherd.test/agent/pluginupdate/plugin_B', json={})

    core_update_state = control.CoreUpdateState(
        statesman.SequenceReader(), statesman.SequenceWriter())
    core_update_state.set_static_state({'the_local_config': 'val'}, {
        'the_applied_config': 'val'}, {})
    plugin_update_states = {'plugin_A': control.PluginUpdateState(),
                            'plugin_B': control.PluginUpdateState()}
    plugin_update_states['plugin_A'].set_status({"status1": '1'})

    # control._stop_event.clear()
    control._stop_event.set()
    # With the stop event set, the loop should run through and update everything once before
    # breaking
    control._control_update_loop(control_config, tmpdir, core_update_state, plugin_update_states)

    assert core_callback_count == 1

    assert not core_update_state.topic_bundle.is_update_required()

    # Check there were no connection exceptions
    for record in caplog.records:
        assert record.levelno <= logging.WARNING


def test_command_runner():
    func_a_was_called = False

    def func_a():
        nonlocal func_a_was_called
        func_a_was_called = True
    test_function_a = plugin.InterfaceFunction(func_a, 'function_a')

    func_b_was_called = False

    def func_b(arg1):
        nonlocal func_b_was_called
        func_b_was_called = True
        return arg1 + 1
    test_function_b = plugin.InterfaceFunction(func_b, 'function_b')

    func_tuple = namedtuple('test_functions', ('function_a', 'function_b')
                            )(test_function_a, test_function_b)
    if_functions = {'test_plugin': func_tuple}
    cmd_runner = control.CommandRunner(if_functions)

    assert not func_a_was_called
    cmd_runner._process_command(10, plugin.InterfaceCall('test_plugin', 'function_a', None))
    assert func_a_was_called

    assert not func_b_was_called
    cmd_runner._process_command(12, plugin.InterfaceCall('test_plugin', 'function_b', {'arg1': 5}))
    assert func_b_was_called
    # Get most recent writer message
    wr_msg = list(cmd_runner.cmd_result_writer._messages.values())[-1]
    assert wr_msg == [12, 6]

    func_b_was_called = False
    cmd_runner.on_new_command_message(
        [15, plugin.InterfaceCall('test_plugin', 'function_b', {'arg1': 8})])
    while 15 in cmd_runner.current_commands:
        time.sleep(0.01)

    assert func_b_was_called
    wr_msg = list(cmd_runner.cmd_result_writer._messages.values())[-1]
    assert wr_msg == [15, 9]

# Control/Plugin integration tests

# Test command_runner with actual plugin
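The workaround described at the top of test_control_thread generalises to any threaded HTTP client under test: a request to an unregistered URL makes the responses mock raise requests.exceptions.ConnectionError inside the thread, and because the control thread already logs those errors, caplog turns them into assertable failures. A generic sketch of the trick, with an illustrative function name:

    import requests
    import responses

    @responses.activate
    def probe(url):
        try:
            requests.post(url, json={})
        except requests.exceptions.ConnectionError:
            # Raised by the responses mock for any URL without a registered response
            return False
        return True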
@ -0,0 +1,102 @@
# pylint: disable=redefined-outer-name
from pathlib import Path
import logging
import importlib

import pytest

from shepherd.agent import core
from shepherd.agent import plugin


@pytest.fixture(autouse=True)
def fresh_agent_state():
    plugin.unload_plugins()
    importlib.reload(core)


@pytest.fixture
def basic_config(tmp_path):
    def_conf_file = tmp_path / "shepherd_default.toml"
    def_conf_file.write_text("""
[shepherd]
name = "shepherd-test"
""")
    return def_conf_file


@pytest.fixture
def custom_config(tmp_path):
    def_conf_file = tmp_path / "shepherd_default.toml"
    def_conf_file.write_text("""
[shepherd]
name = "shepherd-test"
custom_config_path = "shepherd_custom.toml"
""")
    custom_conf_file = tmp_path / "shepherd_custom.toml"
    custom_conf_file.write_text("""
[shepherd]
name = "shepherd-custom"
""")
    return def_conf_file


@pytest.fixture
def plugin_config(tmp_path, request):
    plugin_dir = Path(request.fspath.dirname)/'assets'
    def_conf_file = tmp_path / "shepherd_default.toml"
    def_conf_file.write_text(F"""
[shepherd]
name = "shepherd-test"
plugin_dir = "{plugin_dir}"
[classtestplugin]
spec1 = "asdf"
""")
    return def_conf_file


def test_local_agent(basic_config):
    core.Agent(basic_config)


def test_local_compiled_conf(basic_config):
    core.Agent(basic_config)
    compiled_conf = (basic_config.parent / "compiled-config.toml").read_text()
    assert 'name = "shepherd-test"' in compiled_conf
    # Paths should be resolved to absolute
    assert 'plugin_dir = "/' in compiled_conf
    assert 'Compiled Shepherd config' in compiled_conf


def test_custom_conf_load(custom_config):
    agent = core.Agent(custom_config)
    assert agent.core_config["name"] == "shepherd-custom"


def test_new_device_trigger(custom_config, caplog):
    caplog.set_level(logging.INFO)
    (custom_config.parent / "shepherd.new").touch()
    core.Agent(custom_config)
    assert "'new device' mode enabled" in caplog.text
    assert (custom_config.parent / "shepherd.identity").exists()


def test_local_agent_start(basic_config):
    agent = core.Agent(basic_config)
    agent.start()


def test_local_agent_plugin_start(plugin_config):
    agent = core.Agent(plugin_config)
    agent.start()
    assert agent.plugin_interfaces["classtestplugin"].run_method_called is True
    assert agent.interface_functions["classtestplugin"].instance_method(
        3) == "instance method 3"


def test_core_interface(plugin_config):
    agent = core.Agent(plugin_config)
    agent.start()
    plugin_interface = agent.plugin_interfaces["classtestplugin"]
    assert plugin_interface.plugins["shepherd"].device_name() == "shepherd-test"
    assert plugin_interface.plugins["shepherd"].root_dir() == str(plugin_config.parent)
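The fixtures above compress the whole agent lifecycle into two calls. Condensed, with conf_file standing in for any of the fixture paths:

    from shepherd.agent import core

    agent = core.Agent(conf_file)  # parses config, writes compiled-config.toml beside it
    agent.start()                  # loads plugins and runs their registered run methods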
@ -0,0 +1,119 @@
# pylint: disable=redefined-outer-name
from pathlib import Path

import pytest

from shepherd.agent import plugin


@pytest.fixture
def simple_plugin(request):
    # Load a simple plugin as a custom plugin, using `./assets` as the plugin dir
    interface = plugin.load_plugin("simpletestplugin", Path(request.fspath.dirname)/'assets')
    return interface


def test_simple_plugin_load(simple_plugin: plugin.PluginInterface):
    assert simple_plugin._plugin_name == "simpletestplugin"


def test_simple_interface_function_load(simple_plugin: plugin.PluginInterface):
    # Check register_function()
    assert "my_interface_function" in simple_plugin._functions


@pytest.fixture
def simple_initialised_plugin(request):
    plugin.unload_plugin("simpletestplugin")
    interface = plugin.load_plugin("simpletestplugin", Path(request.fspath.dirname)/'assets')
    # The plugin system is _not_ responsible for making sure the config passed to it is valid. It
    # stores and provides the conf-spec, but it's up to Core.Agent to actually validate the
    # config - otherwise we wouldn't be able to do the whole multiple-layer-fallback thing before
    # actually initialising the plugin.
    # Therefore, part of the promise we make with the plugin interface is that the config we pass
    # in _will_ fit the config-spec
    template_config = interface.confspec.get_template()
    plugin.init_plugins({"simpletestplugin": template_config})
    return interface


def test_simple_plugin_init(simple_initialised_plugin):
    assert simple_initialised_plugin._plugin_name == "simpletestplugin"
    # Check registered init function has run
    assert simple_initialised_plugin.init_func_called is True


def test_simple_interface_functions(simple_initialised_plugin):
    # Check module level function dict
    assert simple_initialised_plugin._functions["my_interface_function"]() == 42

    # Check functions handed back to plugin
    assert simple_initialised_plugin.plugins["simpletestplugin"].my_interface_function() == 42


def test_simple_hook_attachments(simple_initialised_plugin):
    assert "basic_hook" in simple_initialised_plugin._hooks
    assert simple_initialised_plugin._hooks['basic_hook'](
    ) == {'simpletestplugin': "basic attachment"}
    assert simple_initialised_plugin.hooks.hook_with_args(
        3, 7) == {'simpletestplugin': "attachment with args: 3, 7"}
    assert simple_initialised_plugin.hooks.hook_with_fancy_args(
        2, 4) == {'simpletestplugin': "attachment with fancy args: 2, 4, True"}

    with pytest.raises(TypeError, match="takes 2 positional arguments but 3 were"):
        simple_initialised_plugin.hooks.hook_with_args(3, 7, 5)


def test_dirty_plugin_load(request):
    """
    Corner cases in plugin load
    """
    interface = plugin.load_plugin("dirtytestplugin", Path(request.fspath.dirname)/'assets')

    # Should prefer the confspec actually registered, even if declared after
    assert "spec2" in interface.confspec.spec_dict


@pytest.fixture
def running_class_plugin(request):
    plugin.unload_plugin("classtestplugin")
    interface = plugin.load_plugin("classtestplugin", Path(request.fspath.dirname)/'assets')
    template_config = interface.confspec.get_template()
    plugin.init_plugins({"classtestplugin": template_config})
    return interface


def test_class_plugin_init(running_class_plugin):
    assert running_class_plugin._plugin_name == "classtestplugin"
    # Check plugin object init method has run
    assert running_class_plugin.init_method_called is True
    # Check registered init method has run
    assert running_class_plugin.init2_method_called is True


def test_class_interface_functions(running_class_plugin):
    ifuncs = running_class_plugin.plugins["classtestplugin"]
    assert ifuncs.module_function(1) == "module func 1"
    assert ifuncs.instance_method(2) == "instance method 2"
    assert ifuncs.class_method(3) == "class method 3"
    assert ifuncs.static_method(4) == "static method 4"


def test_class_hook_attachments(running_class_plugin):
    assert running_class_plugin._hooks.keys() == {"module_hook", "instance_hook",
                                                  "static_hook", "static_hook2"}
    # Internal hooks dict
    assert running_class_plugin._hooks['module_hook'](1, 2) == {'classtestplugin':
                                                                "module attachment 1 2"}
    # Interface hooks namespace
    assert running_class_plugin.hooks.instance_hook(3, 4) == {'classtestplugin':
                                                              "instance attachment 3 4"}
    # Replaced attr in plugin object
    assert running_class_plugin._plugin_obj.static_hook(5, 6) == {'classtestplugin':
                                                                  "static attachment 5 6"}
    assert running_class_plugin.hooks.static_hook2(7, 8) == {'classtestplugin':
                                                             "class attachment 7 8"}
    with pytest.raises(TypeError, match="takes 2 positional arguments but 3 were"):
        running_class_plugin.hooks.static_hook(3, 7, 5)
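The contract spelled out in the simple_initialised_plugin fixture above (the loader stores the conf-spec but never validates config against it) implies that an embedding agent initialises plugins roughly like this; the plugin name and directory here are placeholders:

    interface = plugin.load_plugin("someplugin", plugin_dir)
    # Caller's promise: whatever dict is passed here fits interface.confspec.
    # The spec template trivially satisfies it, which is why the fixtures use it.
    plugin.init_plugins({"someplugin": interface.confspec.get_template()})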