docstring: fix all warnings
Fixes:

Engine/Engine.py:docstring of ClusterShell.Engine.Engine.EngineBaseTimer.set_nextfire:3: WARNING: Inline interpreted text or phrase reference start-string without end-string.
MsgTree.py:docstring of ClusterShell.MsgTree.MsgTree.__init__:3: WARNING: Inline interpreted text or phrase reference start-string without end-string.
NodeSet.py:docstring of ClusterShell.NodeSet.NodeSetBase.__ior__:1: WARNING: Inline substitution_reference start-string without end-string.
NodeUtils.py:docstring of ClusterShell.NodeUtils.GroupSource.__init__:5: ERROR: Unknown target name: "groups".
Task.py:docstring of ClusterShell.Task.Task:45: WARNING: Block quote ends without a blank line; unexpected unindent.
Task.py:docstring of ClusterShell.Task.Task.shell:12: ERROR: Unexpected indentation.
Task.py:docstring of ClusterShell.Task.Task.timer:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
Task.py:docstring of ClusterShell.Task.Task.timer:7: WARNING: Inline interpreted text or phrase reference start-string without end-string.
Task.py:docstring of ClusterShell.Task.Task.timer:9: WARNING: Inline interpreted text or phrase reference start-string without end-string.
Task.py:docstring of ClusterShell.Task.Task.timer:16: WARNING: Inline interpreted text or phrase reference start-string without end-string.
Worker/Exec.py:docstring of ClusterShell.Worker.Exec.ExecClient.__init__:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
Worker/Worker.py:docstring of ClusterShell.Worker.Worker.StreamWorker:8: ERROR: Unexpected indentation.
Worker/Worker.py:docstring of ClusterShell.Worker.Worker.StreamWorker.read:8: WARNING: Definition list ends without a blank line; unexpected unindent.
Worker/Worker.py:docstring of ClusterShell.Worker.Worker.StreamWorker.set_reader:7: ERROR: Unexpected indentation.
Worker/Worker.py:docstring of ClusterShell.Worker.Worker.StreamWorker.set_reader:8: WARNING: Block quote ends without a blank line; unexpected unindent.
Worker/Worker.py:docstring of ClusterShell.Worker.Worker.StreamWorker.set_writer:7: ERROR: Unexpected indentation.
Worker/Worker.py:docstring of ClusterShell.Worker.Worker.StreamWorker.set_writer:8: WARNING: Block quote ends without a blank line; unexpected unindent.
Worker/Worker.py:docstring of ClusterShell.Worker.Worker.Worker.read:8: WARNING: Definition list ends without a blank line; unexpected unindent.
Worker/Pdsh.py:docstring of ClusterShell.Worker.Pdsh.PdshClient.__init__:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
Worker/Popen.py:docstring of ClusterShell.Worker.Popen.PopenClient.__init__:11: ERROR: Unexpected indentation.
Worker/Rsh.py:docstring of ClusterShell.Worker.Rsh.RshClient.__init__:1: WARNING: Inline interpreted text or phrase reference start-string without end-string.
thiell committed Sep 19, 2023
1 parent 8a3b999 commit 617831c
Showing 8 changed files with 57 additions and 54 deletions.
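
Most of the warnings fixed here come from one recurring pattern: parameter names quoted TeX-style as `name', which docutils parses as the start of an interpreted-text reference that is never closed. The fix replaces them with reST emphasis (*name*). The remaining "Unexpected indentation" and "ends without a blank line" messages are fixed by adding the blank lines reST expects around indented blocks. A minimal before/after sketch, not part of the commit (function names invented for illustration), using the set_nextfire wording from the first hunk below:

# Before: Sphinx emits "Inline interpreted text or phrase reference
# start-string without end-string" because `interval' opens a role with no
# closing backquote.
def set_nextfire_before(fire_delay, interval=-1):
    """The optional parameter `interval' sets the firing interval."""

# After: plain reST emphasis, no warning.
def set_nextfire_after(fire_delay, interval=-1):
    """The optional parameter *interval* sets the firing interval."""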
2 changes: 1 addition & 1 deletion lib/ClusterShell/Engine/Engine.py
@@ -140,7 +140,7 @@ def set_nextfire(self, fire_delay, interval=-1):
"""
Set the next firing delay in seconds for an EngineTimer object.
The optional parameter `interval' sets the firing interval
The optional parameter *interval* sets the firing interval
of the timer. If not specified, the timer fires once and then
is automatically invalidated.
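For context, a short sketch (not from this commit) of adjusting a repeating timer from its event handler with set_nextfire(); the 30/10 second values are arbitrary:

from ClusterShell.Event import EventHandler

class RearmHandler(EventHandler):
    def ev_timer(self, timer):
        # 'timer' is the EngineTimer that just fired: push its next firing
        # 30 s out and keep a 10 s repeat interval. Leaving interval at its
        # default (-1) would let it fire once more and then be invalidated,
        # as the docstring above describes.
        timer.set_nextfire(30.0, interval=10.0)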
2 changes: 1 addition & 1 deletion lib/ClusterShell/MsgTree.py
@@ -182,7 +182,7 @@ class MsgTree(object):
def __init__(self, mode=MODE_DEFER):
"""MsgTree initializer
The `mode' parameter should be set to one of the following constant:
The *mode* parameter should be set to one of the following constant:
MODE_DEFER: all messages are processed immediately, saving memory from
duplicate message lines, but keys are associated to tree elements
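A small usage sketch for MsgTree (not from this commit); only the *mode* keyword comes from the hunk above, while the add()/walk() calls are the usual MsgTree API and messages are assumed to be bytes:

from ClusterShell.MsgTree import MsgTree, MODE_DEFER

tree = MsgTree(mode=MODE_DEFER)
tree.add("node1", b"hello")
tree.add("node2", b"hello")   # identical lines share storage across keys
tree.add("node1", b"world")

# walk() yields (message, keys) pairs, grouping keys with the same content
for msg, keys in tree.walk():
    print(keys, msg)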
4 changes: 2 additions & 2 deletions lib/ClusterShell/NodeSet.py
@@ -557,8 +557,8 @@ def clear(self):

def __ior__(self, other):
"""
Implements the |= operator. So ``s |= t`` returns nodeset s with
elements added from t. (Python version 2.5+ required)
Implements the ``|=`` operator. So ``s |= t`` returns nodeset s
with elements added from t. (Python version 2.5+ required)
"""
self._binary_sanity_check(other)
self.update(other)
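A quick sketch of the in-place union described above (not part of the commit):

from ClusterShell.NodeSet import NodeSet

s = NodeSet("node[1-5]")
t = NodeSet("node[4-8]")
s |= t          # in-place union, equivalent to s.update(t)
print(s)        # node[1-8]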
2 changes: 1 addition & 1 deletion lib/ClusterShell/NodeUtils.py
@@ -96,7 +96,7 @@ def __init__(self, name, groups=None, allgroups=None):
:param name: group source name
:param groups: group to nodes dict
:param allgroups: optional _all groups_ result (string)
:param allgroups: optional "all groups" result (string)
"""
self.name = name
self.groups = groups or {} # we avoid the use of {} as default argument
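A minimal construction sketch for GroupSource (not from this commit); it uses only the parameters documented above, and the group names and node ranges are invented:

from ClusterShell.NodeUtils import GroupSource

source = GroupSource("local",
                     groups={"compute": "node[1-64]", "io": "node[65-66]"},
                     allgroups="node[1-66]")
print(source.name)               # "local"
print(source.groups["compute"])  # "node[1-64]"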
61 changes: 31 additions & 30 deletions lib/ClusterShell/Task.py
@@ -144,6 +144,7 @@ class Task(object):
the task associated thread):
>>> task.resume()
or:
>>> task.run()
@@ -463,6 +464,7 @@ def set_default(self, default_key, value):
using this method and retrieve them with default().
Task default_keys are:
- "stderr": Boolean value indicating whether to enable
stdout/stderr separation when using task.shell(), if not
specified explicitly (default: False).
@@ -478,8 +480,8 @@ def set_default(self, default_key, value):
- "worker": Worker-based class used when spawning workers through
shell()/run().
Threading considerations
========================
Threading considerations:
Unlike set_info(), when called from the task's thread or
not, set_default() immediately updates the underlying
dictionary in a thread-safe manner. This method doesn't
@@ -517,6 +519,7 @@ def set_info(self, info_key, value):
>>> task.set_info('debug', True)
Task info_keys are:
- "debug": Boolean value indicating whether to enable library
debugging messages (default: False).
- "print_debug": Debug messages processing function. This
@@ -535,8 +538,8 @@ def set_info(self, info_key, value):
- "tree_default:<key>": In tree mode, overrides the key <key>
in Defaults (settings normally set in defaults.conf)
Threading considerations
========================
Threading considerations:
Unlike set_default(), the underlying info dictionary is only
modified from the task's thread. So calling set_info() from
another thread leads to queueing the request for late apply
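
A short sketch combining the two setters (not from this commit); the "stderr" and "debug" keys are the ones documented above:

from ClusterShell.Task import task_self

task = task_self()

# set_default() updates the defaults dictionary immediately and is
# thread-safe, per the docstring above.
task.set_default("stderr", True)    # separate stdout/stderr for new workers

# set_info() is only applied from the task's own thread; called from another
# thread, the request is queued and applied later.
task.set_info("debug", True)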
@@ -559,6 +562,7 @@ def shell(self, command, **kwargs):
The following optional parameters are passed to the underlying local
or remote Worker constructor:
- handler: EventHandler instance to notify (on event) -- default is
no handler (None)
- timeout: command timeout delay expressed in second using a floating
@@ -570,16 +574,16 @@ def shell(self, command, **kwargs):
- stdin: enable stdin if set to True or prevent its use otherwise --
default is True.
Local usage::
Local usage:
task.shell(command [, key=key] [, handler=handler]
[, timeout=secs] [, autoclose=enable_autoclose]
[, stderr=enable_stderr][, stdin=enable_stdin]))
[, timeout=secs] [, autoclose=enable_autoclose]
[, stderr=enable_stderr][, stdin=enable_stdin]))
Distant usage::
Distant usage:
task.shell(command, nodes=nodeset [, handler=handler]
[, timeout=secs], [, autoclose=enable_autoclose]
[, tree=None|False|True] [, remote=False|True]
[, stderr=enable_stderr][, stdin=enable_stdin]))
[, timeout=secs], [, autoclose=enable_autoclose]
[, tree=None|False|True] [, remote=False|True]
[, stderr=enable_stderr][, stdin=enable_stdin]))
Example:
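
For illustration (not part of the commit), a runnable sketch of the local and distant forms documented above; node names are placeholders and the distant call assumes working ssh to those nodes:

from ClusterShell.Task import task_self

task = task_self()

# Local usage: run the command on the local host.
local = task.shell("uname -r", key="local", timeout=10)

# Distant usage: run the same command on a node set.
remote = task.shell("uname -r", nodes="node[1-4]", timeout=10)

task.run()    # or task.resume() if the task thread is already running

print(local.read())                        # local worker output
for buf, nodes in remote.iter_buffers():   # gathered distant output
    print(nodes, buf)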
@@ -713,21 +717,21 @@ def port(self, handler=None, autoclose=False):
def timer(self, fire, handler, interval=-1.0, autoclose=False):
"""
Create a timer bound to this task that fires at a preset time
in the future by invoking the ev_timer() method of `handler'
in the future by invoking the ev_timer() method of *handler*
(provided EventHandler object). Timers can fire either only
once or repeatedly at fixed time intervals. Repeating timers
can also have their next firing time manually adjusted.
The mandatory parameter `fire' sets the firing delay in seconds.
The mandatory parameter *fire* sets the firing delay in seconds.
The optional parameter `interval' sets the firing interval of
The optional parameter *interval* sets the firing interval of
the timer. If not specified, the timer fires once and then is
automatically invalidated.
Time values are expressed in second using floating point
values. Precision is implementation (and system) dependent.
The optional parameter `autoclose', if set to True, creates
The optional parameter *autoclose*, if set to True, creates
an "autoclosing" timer: it will be automatically invalidated
as soon as all other non-autoclosing task's objects (workers,
ports, timers) have finished. Default value is False, which
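
A sketch of one-shot and repeating timers built from the parameters above (not part of the commit; the handler class is invented):

from ClusterShell.Event import EventHandler
from ClusterShell.Task import task_self

class Tick(EventHandler):
    def ev_timer(self, timer):
        print("tick")

task = task_self()

# One-shot: fires once after 2 s, then is invalidated (no interval given).
task.timer(2.0, handler=Tick())

# Repeating and autoclosing: fires after 1 s then every 5 s, and is
# invalidated once all non-autoclosing workers/ports/timers are done.
task.timer(1.0, handler=Tick(), interval=5.0, autoclose=True)

task.run()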
@@ -1208,16 +1212,15 @@ def key_retcode(self, key):

def max_retcode(self):
"""
Get max return code encountered during last run
or None in the following cases:
- all commands timed out,
- no command-based worker was executed.
Get max return code encountered during last run or None in the
following cases:
How retcodes work
=================
If the process exits normally, the return code is its exit
status. If the process is terminated by a signal, the return
code is 128 + signal number.
- all commands timed out
- no command-based worker was executed
How do retcodes work? If the process exits normally, the return
code is its exit status. If the process is terminated by a
signal, the return code is 128 + signal number.
"""
return self._max_rc

@@ -1261,13 +1264,11 @@ def iter_retcodes(self, match_keys=None):
Iterate over return codes of command-based workers, returns a
tuple (rc, keys).
Optional parameter match_keys add filtering on these keys.
Optional parameter *match_keys* add filtering on these keys.
How retcodes work
=================
If the process exits normally, the return code is its exit
status. If the process is terminated by a signal, the return
code is 128 + signal number.
How do retcodes work? If the process exits normally, the return
code is its exit status. If the process is terminated by a
signal, the return code is 128 + signal number.
"""
if match_keys:
# Use the items iterator for the underlying dict.
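A sketch of reading return codes after a run, following the 128 + signal convention described above (not from this commit; node names are placeholders):

from ClusterShell.Task import task_self

task = task_self()
task.shell("grep -q nfs /etc/fstab", nodes="node[1-4]")
task.run()

print(task.max_retcode())             # None if nothing ran or all timed out

for rc, keys in task.iter_retcodes():
    # rc > 128 means the command was killed by signal rc - 128
    print(rc, keys)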
2 changes: 1 addition & 1 deletion lib/ClusterShell/Worker/Exec.py
@@ -72,7 +72,7 @@ class ExecClient(EngineClient):
def __init__(self, node, command, worker, stderr, timeout, autoclose=False,
rank=None):
"""
Create an EngineClient-type instance to locally run `command'.
Create an EngineClient-type instance to locally run *command*.
:param node: will be used as key.
"""
1 change: 1 addition & 0 deletions lib/ClusterShell/Worker/Popen.py
@@ -41,6 +41,7 @@
class PopenClient(StreamClient):

def __init__(self, worker, key, stderr, timeout, autoclose):
"""PopenClient initializer"""
StreamClient.__init__(self, worker, key, stderr, timeout, autoclose)
self.popen = None
self.rc = None
37 changes: 19 additions & 18 deletions lib/ClusterShell/Worker/Worker.py
@@ -206,9 +206,9 @@ def read(self, node=None, sname='stdout'):
Return stream read buffer of current worker.
Arguments:
node -- node name; can also be set to None for simple worker
having worker.key defined (default is None)
sname -- stream name (default is 'stdout')
:param node: node name, can also be set to None for simple worker
having worker.key defined (default is None)
:param sname: stream name (default is 'stdout')
"""
self._task_bound_check()
return self.task._msg_by_source(self, node, sname)
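
A hedged sketch of read() on a node-based worker (not from this commit); the 'stdout' stream name is the default documented above:

from ClusterShell.Task import task_self

task = task_self()
worker = task.shell("hostname", nodes="node[1-2]")
task.run()

print(worker.read(node="node1"))                   # per-node read buffer
print(worker.read(node="node2", sname="stdout"))   # explicit stream name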
@@ -463,6 +463,7 @@ class StreamWorker(Worker):
it does not execute any external commands by itself. Rather, it
should be pre-bound to "streams", ie. file(s) or file descriptor(s),
using the two following methods:
>>> worker.set_reader('stream1', fd1)
>>> worker.set_writer('stream2', fd2)
@@ -492,12 +493,12 @@ def set_reader(self, sname, sfile, retain=True, closefd=True):
"""Add a readable stream to StreamWorker.
Arguments:
sname -- the name of the stream (string)
sfile -- the stream file or file descriptor
retain -- whether the stream retains engine client
(default is True)
closefd -- whether to close fd when the stream is closed
(default is True)
:param sname: the name of the stream (string)
:param sfile: the stream file or file descriptor
:param retain: whether the stream retains engine client
(default is True)
:param closefd: whether to close fd when the stream is closed
(default is True)
"""
if not self.clients[0].registered:
self.clients[0].streams.set_reader(sname, sfile, retain, closefd)
@@ -508,12 +509,12 @@ def set_writer(self, sname, sfile, retain=True, closefd=True):
"""Set a writable stream to StreamWorker.
Arguments:
sname -- the name of the stream (string)
sfile -- the stream file or file descriptor
retain -- whether the stream retains engine client
(default is True)
closefd -- whether to close fd when the stream is closed
(default is True)
:param sname: the name of the stream (string)
:param sfile: the stream file or file descriptor
:param retain: whether the stream retains engine client
(default is True)
:param closefd: whether to close fd when the stream is closed
(default is True)
"""
if not self.clients[0].registered:
self.clients[0].streams.set_writer(sname, sfile, retain, closefd)
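
A sketch of a StreamWorker bound to a pipe with set_reader() (not from this commit); it assumes the usual task_self()/Task.schedule() entry points and the newer ev_read(worker, node, sname, msg) event signature:

import os

from ClusterShell.Event import EventHandler
from ClusterShell.Task import task_self
from ClusterShell.Worker.Worker import StreamWorker

class Echo(EventHandler):
    def ev_read(self, worker, node, sname, msg):
        print(sname, msg)        # e.g. stream1 b'hello'

rfd, wfd = os.pipe()

worker = StreamWorker(handler=Echo())
worker.set_reader("stream1", rfd)     # readable stream bound to the pipe
task = task_self()
task.schedule(worker)

os.write(wfd, b"hello\n")
os.close(wfd)                         # EOF lets the reader stream close
task.run()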
@@ -585,9 +586,9 @@ def read(self, node=None, sname='stdout'):
Return stream read buffer of current worker.
Arguments:
node -- node name; can also be set to None for simple worker
having worker.key defined (default is None)
sname -- stream name (default is 'stdout')
:param node: node name, can also be set to None for simple worker
having worker.key defined (default is None)
:param sname: stream name (default is 'stdout')
"""
return Worker.read(self, node or self.clients[0].key, sname)

