diff --git a/README.md b/README.md index b582f60b85..11f424fe80 100644 --- a/README.md +++ b/README.md @@ -5,112 +5,99 @@ Indy. As such, it provides features somewhat similar in scope to those found in Fabric. However, it is special-purposed for use in an identity system, whereas Fabric is general purpose. -You can log bugs against Plenum in [Hyperledger's Jira](https://jira.hyperledger.org); use -project "INDY". - -Plenum makes extensive use of coroutines and the async/await keywords in -Python, and as such, requires Python version 3.5.0 or later. Plenum also -depends on libsodium, an awesome crypto library. These need to be installed -separately. Read below to see how. - -Plenum has other dependencies, including the impressive -[RAET](https://github.com/saltstack/raet) for secure reliable communication -over UDP, but this and other dependencies are installed automatically with -Plenum. - -### Installing Plenum - -``` -pip install indy-plenum -``` - -From here, you can play with the command-line interface (see the [tutorial](https://github.com/hyperledger/indy-plenum/wiki))... - -Note: For Windows, we recommended using either [cmder](http://cmder.net/) or [conemu](https://conemu.github.io/). - -``` -plenum -``` - -...or run the tests. - -``` -git clone https://github.com/hyperledger/indy-plenum.git -cd indy-plenum -python -m plenum.test -``` - -**Details about the protocol, including a great tutorial, can be found on the [wiki](https://github.com/hyperledger/indy-plenum/wiki).** - -### Installing python 3.5 and libsodium: - -**Ubuntu:** - -1. Run ```sudo add-apt-repository ppa:fkrull/deadsnakes``` - -2. Run ```sudo apt-get update``` - -3. On Ubuntu 14, run ```sudo apt-get install python3.5``` (python3.5 is pre-installed on most Ubuntu 16 systems; if not, do it there as well.) - -4. We need to install libsodium with the package manager. This typically requires a package repo that's not active by default. 
Inspect ```/etc/apt/sources.list``` file with your favorite editor (using sudo). On ubuntu 16, you are looking for a line that says ```deb http://us.archive.ubuntu.com/ubuntu xenial main universe```. On ubuntu 14, look for or add: ```deb http://ppa.launchpad.net/chris-lea/libsodium/ubuntu trusty main``` and ```deb-src http://ppa.launchpad.net/chris-lea/libsodium/ubuntu trusty main```. - -5. Run ```sudo apt-get update```. On ubuntu 14, if you get a GPG error about public key not available, run this command and then, after, retry apt-get update: ```sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys B9316A7BC7917B12``` - -6. Install libsodium; the version depends on your distro version. On Ubuntu 14, run ```sudo apt-get install libsodium13```; on Ubuntu 16, run ```sudo apt-get install libsodium18``` - -8. If you still get the error ```E: Unable to locate package libsodium13``` then add ```deb http://ppa.launchpad.net/chris-lea/libsodium/ubuntu trusty main``` and ```deb-src http://ppa.launchpad.net/chris-lea/libsodium/ubuntu trusty main``` to your ```/etc/apt/sources.list```. -Now run ```sudo apt-get update``` and then ```sudo apt-get install libsodium13``` - -**CentOS/Redhat:** +## Other Documentation + +- Details about the protocol, including a great tutorial, can be found on the [wiki](https://github.com/hyperledger/indy-plenum/wiki). +- Please have a look at aggregated documentation at [indy-node-documentation](https://github.com/hyperledger/indy-node/blob/master/README.md) which describes workflows and setup scripts common for both projects. 
+ +## Indy Plenum Repository Structure + +- plenum: + - the main codebase for plenum including Byzantine Fault Tolerant Protocol based on [RBFT](https://pakupaku.me/plaublin/rbft/5000a297.pdf) +- common: + - common and utility code +- crypto: + - basic crypto-related code (in particular, [indy-crypto](https://github.com/hyperledger/indy-crypto) wrappers) +- ledger: + - Provides a simple, python-based, immutable, ordered log of transactions +backed by a merkle tree. + - This is an efficient way to generate verifiable proofs of presence +and data consistency. + - The scope of concerns here is fairly narrow; it is not a full-blown +distributed ledger technology like Fabric, but simply the persistence +mechanism that Plenum needs. +- state: + - state storage using python 3 version of Ethereum's Patricia Trie +- stp: + - secure transport abstraction + - it has two implementations: RAET and ZeroMQ + - Although RAET implementation is there, it's not supported anymore, and [ZeroMQ](http://zeromq.org/) is the default secure transport in plenum. +- storage: + - key-value storage abstractions + - contains [leveldb](http://leveldb.org/) implementation as the main key-valued storage used in Plenum (for ledger, state, etc.) -1. Run ```sudo yum install python3.5``` +## Dependencies -2. Run ```sudo yum install libsodium-devel``` +- Plenum makes extensive use of coroutines and the async/await keywords in +Python, and as such, requires Python version 3.5.0 or later. +- Plenum also depends on [libsodium](https://download.libsodium.org/doc/), an awesome crypto library. These need to be installed +separately. +- Plenum uses [ZeroMQ](http://zeromq.org/) as a secure transport +- [indy-crypto](https://github.com/hyperledger/indy-crypto) + - A shared crypto library + - It's based on [AMCL](https://github.com/milagro-crypto/amcl) + - In particular, it contains BLS multi-signature crypto needed for state proofs support in Indy. -**Mac:** +## Contact us -1. 
Go to [python.org](https://www.python.org) and from the "Downloads" menu, download the Python 3.5.0 package (python-3.5.0-macosx10.6.pkg) or later. +- Bugs, stories, and backlog for this codebase are managed in [Hyperledger's Jira](https://jira.hyperledger.org). +Use project name `INDY`. +- Join us on [Jira's Rocket.Chat](https://chat.hyperledger.org/channel/indy) at `#indy` and/or `#indy-node` channels to discuss. -2. Open the downloaded file to install it. +## How to Contribute -3. If you are a homebrew fan, you can install it using this brew command: ```brew install python3``` +- We'd love your help; see these [instructions on how to contribute](http://bit.ly/2ugd0bq). +- You may also want to read this info about [maintainers](https://github.com/hyperledger/indy-node/blob/stable/MAINTAINERS.md). -4. To install homebrew package manager, see: [brew.sh](http://brew.sh/) -5. Once you have homebrew installed, run ```brew install libsodium``` to install libsodium. +## How to Start Working with the Code +Please have a look at [Dev Setup](https://github.com/hyperledger/indy-node/blob/master/docs/setup-dev.md) in indy-node repo. +It contains common setup for both indy-plenum and indy-node. -**Windows:** -1. Go to https://download.libsodium.org/libsodium/releases/ and download the latest libsodium package (libsodium-1.0.8-mingw.tar.gz is the latest version as of this writing) +## Installing Plenum -2. When you extract the contents of the downloaded tar file, you will see 2 folders with the names libsodium-win32 and libsodium-win64. +#### Install from pypi -3. As the name suggests, use the libsodium-win32 if you are using 32-bit machine or libsodium-win64 if you are using a 64-bit operating system. -4. Copy the libsodium-x.dll from libsodium-win32\bin or libsodium-win64\bin to C:\Windows\System or System32 and rename it to libsodium.dll. +``` +pip install indy-plenum +``` -5. 
Download the latest build (pywin32-220.win-amd64-py3.5.exe is the latest build as of this writing) from [here](https://sourceforge.net/projects/pywin32/files/pywin32/Build%20220/) and run the downloaded executable. +From here, you can play with the command-line interface (see the [tutorial](https://github.com/hyperledger/indy-plenum/wiki)). +Note: For Windows, we recommended using either [cmder](http://cmder.net/) or [conemu](https://conemu.github.io/). -### Using a virtual environment (recommended) -We recommend creating a new Python virtual environment for trying out Plenum. -a virtual environment is a Python environment which is isolated from the -system's default Python environment (you can change that) and any other -virtual environment you create. You can create a new virtual environment by: ``` -virtualenv -p python3.5 +plenum ``` -And activate it by: +...or run the tests. ``` -source /bin/activate +git clone https://github.com/hyperledger/indy-plenum.git +cd indy-plenum +python -m plenum.test ``` -### Initializing Keep +#### Initializing Keys +Each Node needs to have keys initialized + - ed25519 transport keys (used by ZMQ for Node-to-Node and Node-to-Client communication) + - BLS keys for BLS multi-signature and state proofs support + ``` init_plenum_keys --name Alpha --seeds 000000000000000000000000000Alpha Alpha000000000000000000000000000 --force ``` @@ -129,21 +116,21 @@ init_plenum_keys --name Delta --seeds 000000000000000000000000000Delta Delta0000 Note: Seed can be any randomly chosen 32 byte value. It does not have to be in the format `00..`. -### Seeds used for generating clients +#### Seeds used for generating clients 1. Seed used for steward Bob's signing key pair ```11111111111111111111111111111111``` 2. Seed used for steward Bob's public private key pair ```33333333333333333333333333333333``` 3. Seed used for client Alice's signing key pair ```22222222222222222222222222222222``` 4. 
Seed used for client Alice's public private key pair ```44444444444444444444444444444444``` -### Running Node +#### Running Node ``` start_plenum_node Alpha ``` -### Updating configuration +#### Updating configuration To update any configuration parameters, you need to update the `plenum_config.py` in `.plenum/YOUR_NETWORK_NAME` directory inside your home directory. eg. To update the node registry to use `127.0.0.1` as host put these in your `plenum_config.py`. @@ -164,26 +151,3 @@ cliNodeReg = OrderedDict([ ('DeltaC', (('127.0.0.1', 9708), '3af81a541097e3e042cacbe8761c0f9e54326049e1ceda38017c95c432312f6f', '8b112025d525c47e9df81a6de2966e1b4ee1ac239766e769f19d831175a04264')) ]) ``` - -# Immutable Ledger used in Plenum. - -This codebase provides a simple, python-based, immutable, ordered log of transactions -backed by a merkle tree. This is an efficient way to generate verifiable proofs of presence -and data consistency. - -The scope of concerns here is fairly narrow; it is not a full-blown -distributed ledger technology like Fabric, but simply the persistence -mechanism that Plenum needs. The repo is intended to be collapsed into the indy-node codebase -over time; hence there is no wiki, no documentation, and no intention to -use github issues to track bugs. - -You can log issues against this codebase in [Hyperledger's Jira](https://jira.hyperledger.org). - -Join us on [Hyperledger's Rocket.Chat](http://chat.hyperledger.org), on the #indy -channel, to discuss. 
- -# state -Plenum's state storage using python 3 version of Ethereum's Patricia Trie - -# stp -Secure Transport Protocol diff --git a/common/serializers/serialization.py b/common/serializers/serialization.py index 16a0caf600..1956ec1dc8 100644 --- a/common/serializers/serialization.py +++ b/common/serializers/serialization.py @@ -15,6 +15,7 @@ multi_sig_store_serializer = JsonSerializer() state_roots_serializer = Base58Serializer() proof_nodes_serializer = Base64Serializer() +multi_signature_value_serializer = MsgPackSerializer() # TODO: separate data, metadata and signature, so that we don't need to have topLevelKeysToIgnore diff --git a/crypto/bls/bls_crypto.py b/crypto/bls/bls_crypto.py index 70a8d2cdec..e111e9c960 100644 --- a/crypto/bls/bls_crypto.py +++ b/crypto/bls/bls_crypto.py @@ -26,7 +26,7 @@ def generate_keys(params: GroupParams, seed=None) -> (str, str): pass @abstractmethod - def sign(self, message: str) -> str: + def sign(self, message: bytes) -> str: pass @@ -36,9 +36,9 @@ def create_multi_sig(self, signatures: Sequence[str]) -> str: pass @abstractmethod - def verify_sig(self, signature: str, message: str, pk: str) -> bool: + def verify_sig(self, signature: str, message: bytes, pk: str) -> bool: pass @abstractmethod - def verify_multi_sig(self, signature: str, message: str, pks: Sequence[str]) -> bool: + def verify_multi_sig(self, signature: str, message: bytes, pks: Sequence[str]) -> bool: pass diff --git a/crypto/bls/bls_multi_signature.py b/crypto/bls/bls_multi_signature.py index fa3e7669b8..5b444f66db 100644 --- a/crypto/bls/bls_multi_signature.py +++ b/crypto/bls/bls_multi_signature.py @@ -1,5 +1,7 @@ from collections import OrderedDict +from common.serializers.serialization import multi_signature_value_serializer + class MultiSignatureValue: """ @@ -37,8 +39,7 @@ def __init__(self, self.timestamp = timestamp def as_single_value(self): - values = [str(v) for v in self.as_dict().values()] - return "".join(values) + return 
multi_signature_value_serializer.serialize(self.as_dict()) def as_dict(self): return OrderedDict(sorted(self.__dict__.items())) @@ -55,6 +56,9 @@ def as_list(self): def __eq__(self, other): return isinstance(other, MultiSignatureValue) and self.as_dict() == other.as_dict() + def __str__(self) -> str: + return str(self.as_dict()) + class MultiSignature: """ @@ -104,3 +108,6 @@ def as_list(self): def __eq__(self, other): return isinstance(other, MultiSignature) and self.as_dict() == other.as_dict() + + def __str__(self) -> str: + return str(self.as_dict()) diff --git a/crypto/bls/indy_crypto/bls_crypto_indy_crypto.py b/crypto/bls/indy_crypto/bls_crypto_indy_crypto.py index 733cc092cd..a51bd6168c 100644 --- a/crypto/bls/indy_crypto/bls_crypto_indy_crypto.py +++ b/crypto/bls/indy_crypto/bls_crypto_indy_crypto.py @@ -39,10 +39,6 @@ def bls_from_str(v: str, cls) -> Optional[BlsEntity]: return None return cls.from_bytes(bts) - @staticmethod - def msg_to_bls_bytes(msg: str) -> bytes: - return msg.encode() - @staticmethod def prepare_seed(seed): seed_bytes = None @@ -65,7 +61,7 @@ def __init__(self, params: GroupParams): self._generator = \ IndyCryptoBlsUtils.bls_from_str(params.g, Generator) # type: Generator - def verify_sig(self, signature: str, message: str, pk: str) -> bool: + def verify_sig(self, signature: str, message: bytes, pk: str) -> bool: bls_signature = IndyCryptoBlsUtils.bls_from_str(signature, Signature) if bls_signature is None: return False @@ -73,11 +69,11 @@ def verify_sig(self, signature: str, message: str, pk: str) -> bool: if bls_pk is None: return False return Bls.verify(bls_signature, - IndyCryptoBlsUtils.msg_to_bls_bytes(message), + message, bls_pk, self._generator) - def verify_multi_sig(self, signature: str, message: str, pks: Sequence[str]) -> bool: + def verify_multi_sig(self, signature: str, message: bytes, pks: Sequence[str]) -> bool: epks = [IndyCryptoBlsUtils.bls_from_str(p, VerKey) for p in pks] if None in epks: return False @@ -87,9 +83,8 
@@ def verify_multi_sig(self, signature: str, message: str, pks: Sequence[str]) -> if multi_signature is None: return False - message_bytes = IndyCryptoBlsUtils.msg_to_bls_bytes(message) return Bls.verify_multi_sig(multi_sig=multi_signature, - message=message_bytes, + message=message, ver_keys=epks, gen=self._generator) @@ -117,7 +112,6 @@ def generate_keys(params: GroupParams, seed=None) -> (str, str): vk_str = IndyCryptoBlsUtils.bls_to_str(vk) return sk_str, vk_str - def sign(self, message: str) -> str: - bts = IndyCryptoBlsUtils.msg_to_bls_bytes(message) - sign = Bls.sign(bts, self._sk_bls) + def sign(self, message: bytes) -> str: + sign = Bls.sign(message, self._sk_bls) return IndyCryptoBlsUtils.bls_to_str(sign) diff --git a/crypto/test/bls/indy_crypto/test_bls_crypto_indy_crypto.py b/crypto/test/bls/indy_crypto/test_bls_crypto_indy_crypto.py index cb0ec7bd76..ccdcc5a9b4 100644 --- a/crypto/test/bls/indy_crypto/test_bls_crypto_indy_crypto.py +++ b/crypto/test/bls/indy_crypto/test_bls_crypto_indy_crypto.py @@ -36,6 +36,11 @@ def bls_verifier(default_params): return BlsCryptoVerifierIndyCrypto(default_params) +@pytest.fixture() +def message(): + return 'Hello!'.encode() + + def test_default_params(default_params): group_name, g = default_params assert group_name == 'generator' @@ -122,35 +127,35 @@ def test_generate_different_keys(default_params): def test_sign(bls_signer1): - sig = bls_signer1.sign('Hello!') + sig = bls_signer1.sign('Hello!'.encode()) assert sig def test_multi_sign(bls_signer1, bls_verifier): sigs = [] - sigs.append(bls_signer1.sign('Hello!')) - sigs.append(bls_signer1.sign('Hello!')) - sigs.append(bls_signer1.sign('Hello!!!!!')) + sigs.append(bls_signer1.sign('Hello!'.encode())) + sigs.append(bls_signer1.sign('Hello!'.encode())) + sigs.append(bls_signer1.sign('Hello!!!!!'.encode())) sig = bls_verifier.create_multi_sig(sigs) assert sig -def test_verify_one_signature(bls_signer1, bls_signer2, bls_verifier): +def 
test_verify_one_signature(bls_signer1, bls_signer2, bls_verifier, message): pk1 = bls_signer1.pk pk2 = bls_signer2.pk - sig1 = bls_signer1.sign('Hello!') - sig2 = bls_signer2.sign('Hello!') + sig1 = bls_signer1.sign(message) + sig2 = bls_signer2.sign(message) - assert bls_verifier.verify_sig(sig1, 'Hello!', pk1) - assert bls_verifier.verify_sig(sig2, 'Hello!', pk2) + assert bls_verifier.verify_sig(sig1, message, pk1) + assert bls_verifier.verify_sig(sig2, message, pk2) def test_verify_one_signature_long_message(bls_signer1, bls_signer2, bls_verifier): pk1 = bls_signer1.pk pk2 = bls_signer2.pk - msg = 'Hello!' * 1000000 + msg = ('Hello!' * 1000000).encode() sig1 = bls_signer1.sign(msg) sig2 = bls_signer2.sign(msg) @@ -158,132 +163,137 @@ def test_verify_one_signature_long_message(bls_signer1, bls_signer2, bls_verifie assert bls_verifier.verify_sig(sig2, msg, pk2) -def test_verify_non_base58_signature(bls_signer1, bls_verifier): +def test_verify_non_base58_signature(bls_signer1, bls_verifier, message): pk = bls_signer1.pk - assert not bls_verifier.verify_sig('Incorrect Signature 1', 'Hello!', pk) + assert not bls_verifier.verify_sig('Incorrect Signature 1', + message, + pk) -def test_verify_non_base58_pk(bls_signer1, bls_verifier): +def test_verify_non_base58_pk(bls_signer1, bls_verifier, message): sig = bls_signer1.sign('Hello!') - assert not bls_verifier.verify_sig(sig, 'Hello!', 'Incorrect pk 1') + assert not bls_verifier.verify_sig(sig, + message, + 'Incorrect pk 1') -def test_verify_non_base58_sig_and_pk(bls_verifier): - assert not bls_verifier.verify_sig('Incorrect Signature 1', 'Hello!', 'Incorrect pk 1') +def test_verify_non_base58_sig_and_pk(bls_verifier, message): + assert not bls_verifier.verify_sig('Incorrect Signature 1', + message, + 'Incorrect pk 1') -def test_verify_invalid_signature(bls_signer1, bls_verifier): +def test_verify_invalid_signature(bls_signer1, bls_verifier, message): pk = bls_signer1.pk - sig = bls_signer1.sign('Hello!') + sig = 
bls_signer1.sign(message) assert not bls_verifier.verify_sig(sig[:-2], - 'Hello!', pk) + message, pk) assert not bls_verifier.verify_sig(sig[:-5], - 'Hello!', pk) + message, pk) assert not bls_verifier.verify_sig(sig + '0', - 'Hello!', pk) + message, pk) assert not bls_verifier.verify_sig(sig + base58.b58encode(b'0'), - 'Hello!', pk) + message, pk) assert not bls_verifier.verify_sig(sig + base58.b58encode(b'somefake'), - 'Hello!', pk) + message, pk) assert not bls_verifier.verify_sig(base58.b58encode(b'somefakesignaturesomefakesignature'), - 'Hello!', pk) + message, pk) -def test_verify_invalid_pk(bls_signer1, bls_verifier): +def test_verify_invalid_pk(bls_signer1, bls_verifier, message): pk = bls_signer1.pk - sig = bls_signer1.sign('Hello!') + sig = bls_signer1.sign(message) assert not bls_verifier.verify_sig(sig, - 'Hello!', pk[:-2]) + message, pk[:-2]) assert not bls_verifier.verify_sig(sig, - 'Helo!', pk[:-5]) + message, pk[:-5]) assert not bls_verifier.verify_sig(sig, - 'Hello!', pk + '0') + message, pk + '0') assert not bls_verifier.verify_sig(sig, - 'Hello!', pk + base58.b58encode(b'0')) + message, pk + base58.b58encode(b'0')) assert not bls_verifier.verify_sig(sig, - 'Hello!', pk + base58.b58encode(b'somefake')) + message, pk + base58.b58encode(b'somefake')) assert not bls_verifier.verify_sig(sig, - 'Hello!', base58.b58encode(b'somefakepksomefakepk')) + message, base58.b58encode(b'somefakepksomefakepk')) -def test_verify_invalid_short_signature(bls_signer1, bls_verifier): +def test_verify_invalid_short_signature(bls_signer1, bls_verifier, message): pk = bls_signer1.pk - sig = bls_signer1.sign('Hello!') + sig = bls_signer1.sign(message) assert not bls_verifier.verify_sig(sig[:1], - 'Hello!', pk) + message, pk) assert not bls_verifier.verify_sig(sig[:2], - 'Hello!', pk) + message, pk) assert not bls_verifier.verify_sig(sig[:5], - 'Hello!', pk) + message, pk) assert not bls_verifier.verify_sig('', - 'Hello!', pk) + message, pk) assert not 
bls_verifier.verify_sig(base58.b58encode(b'1' * 10), - 'Hello!', pk) + message, pk) assert not bls_verifier.verify_sig(base58.b58encode(b'1' * 2), - 'Hello!', pk) + message, pk) -def test_verify_invalid_short_pk(bls_signer1, bls_verifier): +def test_verify_invalid_short_pk(bls_signer1, bls_verifier, message): pk = bls_signer1.pk - sig = bls_signer1.sign('Hello!') + sig = bls_signer1.sign(message) assert not bls_verifier.verify_sig(sig, - 'Hello!', pk[:1]) + message, pk[:1]) assert not bls_verifier.verify_sig(sig, - 'Hello!', pk[:2]) + message, pk[:2]) assert not bls_verifier.verify_sig(sig, - 'Hello!', pk[:5]) + message, pk[:5]) assert not bls_verifier.verify_sig(sig, - 'Hello!', '') + message, '') assert not bls_verifier.verify_sig(sig, - 'Hello!', base58.b58encode(b'1' * 10)) + message, base58.b58encode(b'1' * 10)) assert not bls_verifier.verify_sig(sig, - 'Hello!', base58.b58encode(b'1' * 2)) + message, base58.b58encode(b'1' * 2)) -def test_verify_invalid_long_signature(bls_signer1, bls_verifier): +def test_verify_invalid_long_signature(bls_signer1, bls_verifier, message): pk = bls_signer1.pk assert not bls_verifier.verify_sig(base58.b58encode(b'1' * 500), - 'Hello!', pk) + message, pk) assert not bls_verifier.verify_sig(base58.b58encode(b'1' * 1000), - 'Hello!', pk) + message, pk) assert not bls_verifier.verify_sig(base58.b58encode(b'1' * 10000), - 'Hello!', pk) + message, pk) -def test_verify_invalid_long_pk(bls_signer1, bls_verifier): - sig = bls_signer1.sign('Hello!') +def test_verify_invalid_long_pk(bls_signer1, bls_verifier, message): + sig = bls_signer1.sign(message) assert not bls_verifier.verify_sig(sig, - 'Hello!', base58.b58encode(b'1' * 500)) + message, base58.b58encode(b'1' * 500)) assert not bls_verifier.verify_sig(sig, - 'Hello!', base58.b58encode(b'1' * 1000)) + message, base58.b58encode(b'1' * 1000)) assert not bls_verifier.verify_sig(sig, - 'Hello!', base58.b58encode(b'1' * 10000)) + message, base58.b58encode(b'1' * 10000)) -def 
test_verify_multi_signature(bls_signer1, bls_signer2, bls_verifier): +def test_verify_multi_signature(bls_signer1, bls_signer2, bls_verifier, message): pk1 = bls_signer1.pk pk2 = bls_signer2.pk - msg = 'Hello!' pks = [pk1, pk2] sigs = [] - sigs.append(bls_signer1.sign(msg)) - sigs.append(bls_signer2.sign(msg)) + sigs.append(bls_signer1.sign(message)) + sigs.append(bls_signer2.sign(message)) multi_sig1 = bls_verifier.create_multi_sig(sigs) - assert bls_verifier.verify_multi_sig(multi_sig1, msg, pks) + assert bls_verifier.verify_multi_sig(multi_sig1, message, pks) def test_verify_multi_signature_long_message(bls_signer1, bls_signer2, bls_verifier): pk1 = bls_signer1.pk pk2 = bls_signer2.pk - msg = 'Hello!' * 1000000 + msg = ('Hello!' * 1000000).encode() pks = [pk1, pk2] sigs = [] @@ -294,199 +304,189 @@ def test_verify_multi_signature_long_message(bls_signer1, bls_signer2, bls_verif assert bls_verifier.verify_multi_sig(multi_sig, msg, pks) -def test_verify_non_base_58_multi_signature(bls_signer1, bls_signer2, bls_verifier): +def test_verify_non_base_58_multi_signature(bls_signer1, bls_signer2, bls_verifier, message): pk1 = bls_signer1.pk pk2 = bls_signer2.pk - msg = 'Hello!' pks = [pk1, pk2] multi_sig = 'Incorrect multi signature 1' - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) -def test_verify_non_base_58_pk_multi_signature(bls_signer1, bls_signer2, bls_verifier): +def test_verify_non_base_58_pk_multi_signature(bls_signer1, bls_signer2, bls_verifier, message): pk1 = bls_signer1.pk pk2 = bls_signer2.pk - msg = 'Hello!' 
- sigs = [] - sigs.append(bls_signer1.sign(msg)) - sigs.append(bls_signer2.sign(msg)) + sigs.append(bls_signer1.sign(message)) + sigs.append(bls_signer2.sign(message)) multi_sig = bls_verifier.create_multi_sig(sigs) pks = [pk1, 'Incorrect multi signature 1'] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = ['Incorrect multi signature 1', pk2] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = ['Incorrect multi signature 1', 'Incorrect multi signature 2'] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) -def test_verify_invalid_multi_signature(bls_signer1, bls_signer2, bls_verifier): +def test_verify_invalid_multi_signature(bls_signer1, bls_signer2, bls_verifier, message): pk1 = bls_signer1.pk pk2 = bls_signer2.pk - msg = 'Hello!' pks = [pk1, pk2] sigs = [] - sigs.append(bls_signer1.sign(msg)) - sigs.append(bls_signer2.sign(msg)) + sigs.append(bls_signer1.sign(message)) + sigs.append(bls_signer2.sign(message)) multi_sig = bls_verifier.create_multi_sig(sigs) assert not bls_verifier.verify_multi_sig(multi_sig[:-2], - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig(multi_sig[:-5], - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig(multi_sig + '0', - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig(multi_sig + base58.b58encode(b'0'), - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig(multi_sig + base58.b58encode(b'somefake'), - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig(base58.b58encode(b'somefakesignaturesomefakesignature'), - msg, pks) + message, pks) -def test_verify_invalid_multi_signature_short(bls_signer1, bls_signer2, bls_verifier): +def test_verify_invalid_multi_signature_short(bls_signer1, bls_signer2, 
bls_verifier, message): pk1 = bls_signer1.pk pk2 = bls_signer2.pk - msg = 'Hello!' pks = [pk1, pk2] sigs = [] - sigs.append(bls_signer1.sign(msg)) - sigs.append(bls_signer2.sign(msg)) + sigs.append(bls_signer1.sign(message)) + sigs.append(bls_signer2.sign(message)) multi_sig = bls_verifier.create_multi_sig(sigs) assert not bls_verifier.verify_multi_sig(multi_sig[:1], - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig(multi_sig[:2], - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig(multi_sig[:5], - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig('', - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig(base58.b58encode(b'1' * 10), - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig(base58.b58encode(b'1' * 2), - msg, pks) + message, pks) -def test_verify_invalid_multi_signature_long(bls_signer1, bls_signer2, bls_verifier): +def test_verify_invalid_multi_signature_long(bls_signer1, bls_signer2, bls_verifier, message): pk1 = bls_signer1.pk pk2 = bls_signer2.pk - msg = 'Hello!' pks = [pk1, pk2] assert not bls_verifier.verify_multi_sig(base58.b58encode(b'1' * 500), - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig(base58.b58encode(b'1' * 1000), - msg, pks) + message, pks) assert not bls_verifier.verify_multi_sig(base58.b58encode(b'1' * 10000), - msg, pks) + message, pks) -def test_verify_multi_signature_invalid_pk(bls_signer1, bls_signer2, bls_verifier): +def test_verify_multi_signature_invalid_pk(bls_signer1, bls_signer2, bls_verifier, message): pk1 = bls_signer1.pk pk2 = bls_signer2.pk - msg = 'Hello!' 
- sigs = [] - sigs.append(bls_signer1.sign(msg)) - sigs.append(bls_signer2.sign(msg)) + sigs.append(bls_signer1.sign(message)) + sigs.append(bls_signer2.sign(message)) multi_sig = bls_verifier.create_multi_sig(sigs) pks = [pk1, pk2[:-2]] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = [pk1[:-2], pk2] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = [pk1[:-2], pk2[:-2]] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = [pk1[-5], pk2] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = [pk1, pk2[-5]] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = [pk1[-5], pk2[-5]] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = [pk1 + '0', pk2] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = [pk1, pk2 + '0'] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = [pk1 + '0', pk2 + '0'] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = [pk1 + base58.b58encode(b'somefake'), pk2] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = [pk1, pk2 + base58.b58encode(b'somefake')] - assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks) + assert not bls_verifier.verify_multi_sig(multi_sig, message, pks) pks = [pk1 + 
base58.b58encode(b'somefake'),
            pk2 + base58.b58encode(b'somefake')]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)


-def test_verify_multi_signature_invalid_short_pk(bls_signer1, bls_signer2, bls_verifier):
+def test_verify_multi_signature_invalid_short_pk(bls_signer1, bls_signer2, bls_verifier, message):
     pk1 = bls_signer1.pk
     pk2 = bls_signer2.pk

-    msg = 'Hello!'
-
     sigs = []
-    sigs.append(bls_signer1.sign(msg))
-    sigs.append(bls_signer2.sign(msg))
+    sigs.append(bls_signer1.sign(message))
+    sigs.append(bls_signer2.sign(message))

     multi_sig = bls_verifier.create_multi_sig(sigs)

     pks = [pk1, '']
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = ['', pk2]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = ['', '']
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [pk1[:1], pk2]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [pk1, pk2[:1]]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [pk1[:1], pk2[:1]]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [pk1[:2], pk2]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [pk1, pk2[:2]]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [pk1[:2], pk2[:2]]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [pk1[:5], pk2]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [pk1, pk2[:5]]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [pk1[:5], pk2[:5]]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [base58.b58encode(b'1' * 10), pk2]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [pk1, base58.b58encode(b'1' * 10)]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [base58.b58encode(b'1' * 10), base58.b58encode(b'1' * 10)]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [base58.b58encode(b'1' * 2), pk2]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [pk1, base58.b58encode(b'1' * 2)]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)

     pks = [base58.b58encode(b'1' * 2), base58.b58encode(b'1' * 2)]
-    assert not bls_verifier.verify_multi_sig(multi_sig, msg, pks)
+    assert not bls_verifier.verify_multi_sig(multi_sig, message, pks)
diff --git a/crypto/test/test_multi_signature.py b/crypto/test/test_multi_signature.py
index 6d5f20b5f8..0577b57033 100644
--- a/crypto/test/test_multi_signature.py
+++ b/crypto/test/test_multi_signature.py
@@ -3,6 +3,7 @@
 import base58
 import pytest

+from common.serializers.serialization import multi_signature_value_serializer
 from crypto.bls.bls_multi_signature import MultiSignature, MultiSignatureValue
 from plenum.common.util import get_utc_epoch

@@ -53,14 +54,9 @@ def expected_sig_value_list():

 @pytest.fixture()
-def expected_single_sig_value():
+def expected_single_sig_value(expected_sig_value_dict):
     # we expected it always in sorted order by keys
-    return \
-        str(ledger_id) + \
-        pool_state_root_hash + \
-        state_root_hash + \
-        str(timestamp) + \
-        txn_root_hash
+    return multi_signature_value_serializer.serialize(expected_sig_value_dict)


 @pytest.fixture()
diff --git a/ledger/genesis_txn/genesis_txn_file_util.py b/ledger/genesis_txn/genesis_txn_file_util.py
index 682ce92858..8ff3a9aa6a 100644
--- a/ledger/genesis_txn/genesis_txn_file_util.py
+++ b/ledger/genesis_txn/genesis_txn_file_util.py
@@ -13,13 +13,6 @@ def genesis_txn_path(base_dir, transaction_file):
     return os.path.join(base_dir, genesis_txn_file(transaction_file))


-def update_genesis_txn_file_name_if_outdated(base_dir, transaction_file):
-    old_named_path = os.path.join(base_dir, transaction_file)
-    new_named_path = os.path.join(base_dir, genesis_txn_file(transaction_file))
-    if not os.path.exists(new_named_path) and os.path.isfile(old_named_path):
-        os.rename(old_named_path, new_named_path)
-
-
 def create_genesis_txn_init_ledger(data_dir, txn_file):
     from ledger.genesis_txn.genesis_txn_initiator_from_file import GenesisTxnInitiatorFromFile
     initiator = GenesisTxnInitiatorFromFile(data_dir, txn_file)
diff --git a/plenum/cli/cli.py b/plenum/cli/cli.py
index d9b3a3841d..60f6731638 100644
--- a/plenum/cli/cli.py
+++ b/plenum/cli/cli.py
@@ -7,8 +7,7 @@
 from jsonpickle import json

 from ledger.compact_merkle_tree import CompactMerkleTree
-from ledger.genesis_txn.genesis_txn_file_util import create_genesis_txn_init_ledger, \
-    update_genesis_txn_file_name_if_outdated
+from ledger.genesis_txn.genesis_txn_file_util import create_genesis_txn_init_ledger
 from ledger.genesis_txn.genesis_txn_initiator_from_file import GenesisTxnInitiatorFromFile
 from ledger.ledger import Ledger
 from plenum.cli.command import helpCmd, statusNodeCmd, statusClientCmd, \
@@ -69,7 +68,7 @@
 firstValue, randomString, bootstrapClientKeys, \
     getFriendlyIdentifier, \
     normalizedWalletFileName, getWalletFilePath, \
-    getLastSavedWalletFileName, updateWalletsBaseDirNameIfOutdated
+    getLastSavedWalletFileName
 from stp_core.common.log import \
     getlogger, Logger
 from plenum.server.node import Node
@@ -259,8 +258,6 @@ def __init__(self, looper, basedirpath: str, ledger_base_dir: str, nodeReg=None,
         tp = loadPlugins(self.basedirpath)
         self.logger.debug("total plugins loaded in cli: {}".format(tp))

-        updateWalletsBaseDirNameIfOutdated(self.config)
-
         self.restoreLastActiveWallet()

         self.checkIfCmdHandlerAndCmdMappingExists()
diff --git a/plenum/client/client.py b/plenum/client/client.py
index 60dd48c9ef..4d04b3b386 100644
--- a/plenum/client/client.py
+++ b/plenum/client/client.py
@@ -6,6 +6,7 @@
 import copy
 import os
+import random
 import time
 from collections import deque, OrderedDict
 from functools import partial
@@ -284,22 +285,19 @@ def submitReqs(self, *reqs: Request) -> List[Request]:
         errs = []

         for request in reqs:
-            if (self.mode == Mode.discovered and self.hasSufficientConnections) or \
-                    (self.hasAnyConnections and
-                     (request.txn_type in self._read_only_requests or request.isForced())):
-
-                recipients = \
-                    {r.name
-                     for r in self.nodestack.remotes.values()
-                     if self.nodestack.isRemoteConnected(r)}
+            is_read_only = request.txn_type in self._read_only_requests
+            if self.can_send_request(request):
+                recipients = self._connected_node_names
+                if is_read_only and len(recipients) > 1:
+                    recipients = random.sample(list(recipients), 1)

                 logger.debug('Client {} sending request {} to recipients {}'
                              .format(self, request, recipients))

-                stat, err_msg = self.send(request, *recipients)
+                stat, err_msg = self.sendToNodes(request, names=recipients)

                 if stat:
-                    self.expectingFor(request, recipients)
+                    self._expect_replies(request, recipients)
                 else:
                     errs.append(err_msg)
                     logger.debug(
@@ -344,29 +342,40 @@ def handleOneNodeMsg(self, wrappedMsg, excludeFromCli=None) -> None:
             self.ledgerManager.processCatchupRep(cMsg, frm)
         elif msg[OP_FIELD_NAME] == REQACK:
             self.reqRepStore.addAck(msg, frm)
-            self.gotExpected(msg, frm)
+            self._got_expected(msg, frm)
         elif msg[OP_FIELD_NAME] == REQNACK:
             self.reqRepStore.addNack(msg, frm)
-            self.gotExpected(msg, frm)
+            self._got_expected(msg, frm)
         elif msg[OP_FIELD_NAME] == REJECT:
             self.reqRepStore.addReject(msg, frm)
-            self.gotExpected(msg, frm)
+            self._got_expected(msg, frm)
         elif msg[OP_FIELD_NAME] == REPLY:
             result = msg[f.RESULT.nm]
             identifier = msg[f.RESULT.nm][f.IDENTIFIER.nm]
             reqId = msg[f.RESULT.nm][f.REQ_ID.nm]
-            numReplies = self.reqRepStore.addReply(identifier, reqId, frm,
+            numReplies = self.reqRepStore.addReply(identifier,
+                                                   reqId,
+                                                   frm,
                                                    result)
-            self.gotExpected(msg, frm)
+
+            self._got_expected(msg, frm)
             self.postReplyRecvd(identifier, reqId, frm, result, numReplies)

     def postReplyRecvd(self, identifier, reqId, frm, result, numReplies):
-        if not self.txnLog.hasTxn(identifier, reqId) and numReplies > self.f:
-            replies = self.reqRepStore.getReplies(identifier, reqId).values()
-            reply = checkIfMoreThanFSameItems(replies, self.f)
+        if not self.txnLog.hasTxn(identifier, reqId):
+            reply, _ = self.getReply(identifier, reqId)
             if reply:
                 self.txnLog.append(identifier, reqId, reply)
                 return reply
+            # Reply is not verified
+            key = (identifier, reqId)
+            if key not in self.expectingRepliesFor and numReplies == 1:
+                # only one node was asked, but its reply cannot be confirmed,
+                # so ask other nodes
+                recipients = self._connected_node_names.difference({frm})
+                self.resendRequests({
+                    (identifier, reqId): recipients
+                }, force_expect=True)

     def _statusChanged(self, old, new):
         # do nothing for now
@@ -593,20 +602,33 @@ def hasSufficientConnections(self):
     def hasAnyConnections(self):
         return len(self.nodestack.conns) > 0

-    def hasMadeRequest(self, identifier, reqId: int):
-        return self.reqRepStore.hasRequest(identifier, reqId)
-
-    def isRequestSuccessful(self, identifier, reqId):
-        acks = self.reqRepStore.getAcks(identifier, reqId)
-        nacks = self.reqRepStore.getNacks(identifier, reqId)
-        f = getMaxFailures(len(self.nodeReg))
-        if len(acks) > f:
-            return True, "Done"
-        elif len(nacks) > f:
-            # TODO: What if the the nacks were different from each node?
-            return False, list(nacks.values())[0]
-        else:
-            return None
+    def can_send_write_requests(self):
+        if not Mode.is_done_discovering(self.mode):
+            return False
+        if not self.hasSufficientConnections:
+            return False
+        return True
+
+    def can_send_read_requests(self):
+        if not Mode.is_done_discovering(self.mode):
+            return False
+        if not self.hasAnyConnections:
+            return False
+        return True
+
+    def can_send_request(self, request):
+        if not Mode.is_done_discovering(self.mode):
+            return False
+        if self.hasSufficientConnections:
+            return True
+        if not self.hasAnyConnections:
+            return False
+        if request.isForced():
+            return True
+        is_read_only = request.txn_type in self._read_only_requests
+        if is_read_only:
+            return True
+        return False

     def pendReqsTillConnection(self, request, signer=None):
         """
@@ -626,104 +648,108 @@ def flushMsgsPendingConnection(self):
         tmp = deque()
         while self.reqsPendingConnection:
             req, signer = self.reqsPendingConnection.popleft()
-            if (self.hasSufficientConnections and self.mode == Mode.discovered) or (
-                    req.isForced() and self.hasAnyConnections):
+            if self.can_send_request(req):
                 self.send(req, signer=signer)
             else:
                 tmp.append((req, signer))
         self.reqsPendingConnection.extend(tmp)

-    def expectingFor(self, request: Request, nodes: Optional[Set[str]] = None):
-        nodes = nodes or {r.name for r in self.nodestack.remotes.values()
-                          if self.nodestack.isRemoteConnected(r)}
+    def _expect_replies(self, request: Request,
+                        nodes: Optional[Set[str]] = None):
+        nodes = nodes if nodes else self._connected_node_names
         now = time.perf_counter()
         self.expectingAcksFor[request.key] = (nodes, now, 0)
         self.expectingRepliesFor[request.key] = (copy.copy(nodes), now, 0)
-        self.startRepeating(self.retryForExpected,
+        self.startRepeating(self._retry_for_expected,
                             self.config.CLIENT_REQACK_TIMEOUT)

-    def gotExpected(self, msg, frm):
+    @property
+    def _connected_node_names(self):
+        return {
+            remote.name
+            for remote in self.nodestack.remotes.values()
+            if self.nodestack.isRemoteConnected(remote)
+        }
+
+    def _got_expected(self, msg, sender):
+
+        def drop(req, register):
+            key = (req.get(f.IDENTIFIER.nm), req.get(f.REQ_ID.nm))
+            if key in register:
+                received = register[key][0]
+                if sender in received:
+                    received.remove(sender)
+                if not received:
+                    register.pop(key)
+
         if msg[OP_FIELD_NAME] == REQACK:
-            container = msg
-            colls = (self.expectingAcksFor,)
+            drop(msg, self.expectingAcksFor)
         elif msg[OP_FIELD_NAME] == REPLY:
-            container = msg[f.RESULT.nm]
-            # If an REQACK sent by node was lost, the request when sent again
-            # would fetch the reply or the client might just lose REQACK and not
-            # REPLY so when REPLY received, request does not need to be resent
-            colls = (self.expectingAcksFor, self.expectingRepliesFor)
+            drop(msg[f.RESULT.nm], self.expectingAcksFor)
+            drop(msg[f.RESULT.nm], self.expectingRepliesFor)
         elif msg[OP_FIELD_NAME] in (REQNACK, REJECT):
-            container = msg
-            colls = (self.expectingAcksFor, self.expectingRepliesFor)
+            drop(msg, self.expectingAcksFor)
+            drop(msg, self.expectingRepliesFor)
         else:
             raise RuntimeError("{} cannot retry {}".format(self, msg))

-        idr = container.get(f.IDENTIFIER.nm)
-        reqId = container.get(f.REQ_ID.nm)
-        key = (idr, reqId)
-        for coll in colls:
-            if key in coll:
-                if frm in coll[key][0]:
-                    coll[key][0].remove(frm)
-                if not coll[key][0]:
-                    coll.pop(key)
-
-        if not (self.expectingAcksFor or self.expectingRepliesFor):
-            self.stopRetrying()
-
-    def stopRetrying(self):
-        self.stopRepeating(self.retryForExpected, strict=False)
-
-    def _filterExpected(self, now, queue, retryTimeout, maxRetry):
-        deadRequests = []
-        aliveRequests = {}
-        notAnsweredNodes = set()
-        for requestKey, (expectedFrom, lastTried, retries) in queue.items():
-            if now < lastTried + retryTimeout:
+        if not self.expectingAcksFor and not self.expectingRepliesFor:
+            self._stop_expecting()
+
+    def _stop_expecting(self):
+        self.stopRepeating(self._retry_for_expected, strict=False)
+
+    def _filter_expected(self, now, queue, retry_timeout, max_retry):
+        dead_requests = []
+        alive_requests = {}
+        not_answered_nodes = set()
+        for requestKey, (expected_from, last_tried, retries) in queue.items():
+            if now < last_tried + retry_timeout:
                 continue
-            if retries >= maxRetry:
-                deadRequests.append(requestKey)
+            if retries >= max_retry:
+                dead_requests.append(requestKey)
                 continue
-            if requestKey not in aliveRequests:
-                aliveRequests[requestKey] = set()
-            aliveRequests[requestKey].update(expectedFrom)
-            notAnsweredNodes.update(expectedFrom)
-        return deadRequests, aliveRequests, notAnsweredNodes
+            if requestKey not in alive_requests:
+                alive_requests[requestKey] = set()
+            alive_requests[requestKey].update(expected_from)
+            not_answered_nodes.update(expected_from)
+        return dead_requests, alive_requests, not_answered_nodes

-    def retryForExpected(self):
+    def _retry_for_expected(self):
         now = time.perf_counter()

-        requestsWithNoAck, aliveRequests, notAckedNodes = \
-            self._filterExpected(now,
-                                 self.expectingAcksFor,
-                                 self.config.CLIENT_REQACK_TIMEOUT,
-                                 self.config.CLIENT_MAX_RETRY_ACK)
+        requests_with_no_ack, alive_requests, not_acked_nodes = \
+            self._filter_expected(now,
+                                  self.expectingAcksFor,
+                                  self.config.CLIENT_REQACK_TIMEOUT,
+                                  self.config.CLIENT_MAX_RETRY_ACK)

-        requestsWithNoReply, aliveRequests, notRepliedNodes = \
-            self._filterExpected(now,
-                                 self.expectingRepliesFor,
-                                 self.config.CLIENT_REPLY_TIMEOUT,
-                                 self.config.CLIENT_MAX_RETRY_REPLY)
+        requests_with_no_reply, alive_requests, not_replied_nodes = \
+            self._filter_expected(now,
+                                  self.expectingRepliesFor,
+                                  self.config.CLIENT_REPLY_TIMEOUT,
+                                  self.config.CLIENT_MAX_RETRY_REPLY)

-        for requestKey in requestsWithNoAck:
+        for request_key in requests_with_no_ack:
             logger.debug('{} have got no ACKs for {} and will not try again'
-                         .format(self, requestKey))
-            self.expectingAcksFor.pop(requestKey)
+                         .format(self, request_key))
+            self.expectingAcksFor.pop(request_key)

-        for requestKey in requestsWithNoReply:
+        for request_key in requests_with_no_reply:
             logger.debug('{} have got no REPLYs for {} and will not try again'
-                         .format(self, requestKey))
-            self.expectingRepliesFor.pop(requestKey)
+                         .format(self, request_key))
+            self.expectingRepliesFor.pop(request_key)

-        if notAckedNodes:
+        if not_acked_nodes:
             logger.debug('{} going to retry for {}'
                          .format(self, self.expectingAcksFor.keys()))
-        for nm in notAckedNodes:
+
+        for node_name in not_acked_nodes:
             try:
-                remote = self.nodestack.getRemote(nm)
+                remote = self.nodestack.getRemote(node_name)
             except RemoteNotFound:
                 logger.warning('{}{} could not find remote {}'
-                               .format(CONNECTION_PREFIX, self, nm))
+                               .format(CONNECTION_PREFIX, self, node_name))
                 continue
             logger.debug('Remote {} of {} being joined since REQACK for not '
                          'received for request'.format(remote, self))
@@ -734,7 +760,7 @@ def retryForExpected(self):
             # self.nodestack.connect(name=remote.name)
             self.nodestack.maintainConnections(force=True)

-        if aliveRequests:
+        if alive_requests:
             # Need a delay in case connection has to be established with some
             # nodes, a better way is not to assume the delay value but only
             # send requests once the connection is established. Also it is
@@ -743,10 +769,10 @@ def retryForExpected(self):
             # the value in stats of the stack and look for changes in count of
             # `message_reject_rx` but that is not very helpful either since
             # it does not record which node rejected
-            delay = 3 if notAckedNodes else 0
-            self._schedule(partial(self.resendRequests, aliveRequests), delay)
+            delay = 3 if not_acked_nodes else 0
+            self._schedule(partial(self.resendRequests, alive_requests), delay)

-    def resendRequests(self, keys):
+    def resendRequests(self, keys, force_expect=False):
         for key, nodes in keys.items():
             if not nodes:
                 continue
@@ -759,6 +785,8 @@ def resendRequests(self, keys):
             if key in queue:
                 _, _, retries = queue[key]
                 queue[key] = (nodes, now, retries + 1)
+            elif force_expect:
+                queue[key] = (nodes, now, 1)

     def sendLedgerStatus(self, nodeName: str):
         ledgerStatus = LedgerStatus(
@@ -776,7 +804,7 @@ def send(self, msg: Any, *rids: Iterable[int], signer: Signer = None):
     def sendToNodes(self, msg: Any, names: Iterable[str]):
         rids = [rid for rid, r in self.nodestack.remotes.items()
                 if r.name in names]
-        self.send(msg, *rids)
+        return self.send(msg, *rids)

     @staticmethod
     def verifyMerkleProof(*replies: Tuple[Reply]) -> bool:
diff --git a/plenum/client/pool_manager.py b/plenum/client/pool_manager.py
index c0cbb17d34..4e89833fc7 100644
--- a/plenum/client/pool_manager.py
+++ b/plenum/client/pool_manager.py
@@ -1,8 +1,6 @@
 import collections
 import json

-from ledger.genesis_txn.genesis_txn_file_util import \
-    update_genesis_txn_file_name_if_outdated
 from ledger.util import F
 from stp_core.network.exceptions import RemoteNotFound
@@ -25,8 +23,6 @@ def __init__(self):
         self._ledgerFile = None
         TxnStackManager.__init__(self, self.name, self.basedirpath,
                                  isNode=False)
-        update_genesis_txn_file_name_if_outdated(self.basedirpath,
-                                                 self.ledgerFile)
         _, cliNodeReg, nodeKeys = self.parseLedgerForHaAndKeys(self.ledger)
         self.nodeReg = cliNodeReg
         self.addRemoteKeysFromLedger(nodeKeys)
diff --git a/plenum/client/wallet.py b/plenum/client/wallet.py
index bd68b9aed5..d105fe6a35 100644
--- a/plenum/client/wallet.py
+++ b/plenum/client/wallet.py
@@ -6,9 +6,6 @@
 import jsonpickle
 from jsonpickle import JSONBackend
-from jsonpickle import tags
-from jsonpickle.unpickler import loadclass
-from jsonpickle.util import importable_name
 from libnacl import crypto_secretbox_open, randombytes, \
     crypto_secretbox_NONCEBYTES, crypto_secretbox
 from plenum.common.constants import CURRENT_PROTOCOL_VERSION
@@ -41,17 +38,6 @@ def decrypt(self, key) -> 'Wallet':
                                   ("lastReqId", int)])


-def getClassVersionKey(cls):
-    """
-    Gets the wallet class version key for use in a serialized representation
-    of the wallet.
-
-    :param cls: the wallet class
-    :return: the class version key
-    """
-    return 'classver/{}'.format(importable_name(cls))
-
-
 class Wallet:
     def __init__(self,
                  name: str=None,
@@ -418,6 +404,9 @@ def loadWallet(self, fpath):
         return wallet


+WALLET_RAW_MIGRATORS = []
+
+
 class WalletCompatibilityBackend(JSONBackend):
     """
     Jsonpickle backend providing conversion of raw representations
@@ -425,21 +414,11 @@ class WalletCompatibilityBackend(JSONBackend):
     to the current version.
     """

-    def _getUpToDateClassName(self, pickledClassName):
-        return pickledClassName.replace('sovrin_client', 'indy_client')
-
     def decode(self, string):
         raw = super().decode(string)
         # Note that backend.decode may be called not only for the whole object
         # representation but also for representations of non-string keys of
         # dictionaries.
-        # Here we assume that if the string represents a class instance and
-        # this class contains makeRawCompatible method then this class is
-        # a wallet class supporting backward compatibility
-        if isinstance(raw, dict) and tags.OBJECT in raw:
-            clsName = raw[tags.OBJECT]
-            cls = loadclass(self._getUpToDateClassName(clsName))
-            if hasattr(cls, 'makeRawCompatible') \
-                    and callable(getattr(cls, 'makeRawCompatible')):
-                cls.makeRawCompatible(raw)
+        for migrator in WALLET_RAW_MIGRATORS:
+            migrator(raw)
         return raw
diff --git a/plenum/common/test_network_setup.py b/plenum/common/test_network_setup.py
index a4657b241f..399bb251ff 100644
--- a/plenum/common/test_network_setup.py
+++ b/plenum/common/test_network_setup.py
@@ -2,6 +2,8 @@
 import ipaddress
 import os
 from collections import namedtuple
+import fileinput
+import shutil

 from ledger.genesis_txn.genesis_txn_file_util import create_genesis_txn_init_ledger
@@ -12,7 +14,7 @@
 from plenum.common.keygen_utils import initNodeKeysForBothStacks, init_bls_keys
 from plenum.common.constants import STEWARD, TRUSTEE
-from plenum.common.util import hexToFriendly
+from plenum.common.util import hexToFriendly, is_hostname_valid
 from plenum.common.signer_did import DidSigner
 from stp_core.common.util import adict
 from plenum.common.sys_util import copyall
@@ -70,31 +72,17 @@ def bootstrapTestNodesCore(
             else:
                 _localNodes = {int(_) for _ in localNodes}
         except BaseException as exc:
-            raise RuntimeError(
-                'nodeNum must be an int or set of ints') from exc
+            raise RuntimeError('nodeNum must be an int or set of ints') from exc

-        baseDir = cls.setup_base_dir(config, envName)
+        baseDir = cls.setup_base_dir(config, envName) if _localNodes else cls.setup_clibase_dir(config, envName)
+        poolLedger = cls.init_pool_ledger(appendToLedgers, baseDir, config, envName)
+        domainLedger = cls.init_domain_ledger(appendToLedgers, baseDir, config, envName, domainTxnFieldOrder)

-        poolLedger = cls.init_pool_ledger(appendToLedgers, baseDir, config,
-                                          envName)
-
-        domainLedger = cls.init_domain_ledger(appendToLedgers, baseDir, config,
-                                              envName, domainTxnFieldOrder)
-
-        trustee_txn = Member.nym_txn(
-            trustee_def.nym,
-            trustee_def.name,
-            verkey=trustee_def.verkey,
-            role=TRUSTEE)
+        trustee_txn = Member.nym_txn(trustee_def.nym, trustee_def.name, verkey=trustee_def.verkey, role=TRUSTEE)
         domainLedger.add(trustee_txn)

         for sd in steward_defs:
-            nym_txn = Member.nym_txn(
-                sd.nym,
-                sd.name,
-                verkey=sd.verkey,
-                role=STEWARD,
-                creator=trustee_def.nym)
+            nym_txn = Member.nym_txn(sd.nym, sd.name, verkey=sd.verkey, role=STEWARD, creator=trustee_def.nym)
             domainLedger.add(nym_txn)

         key_dir = os.path.expanduser(baseDir)
@@ -108,13 +96,10 @@ def bootstrapTestNodesCore(
                 if nd.ip != '127.0.0.1':
                     paramsFilePath = os.path.join(baseDir, nodeParamsFileName)
-                    print('Nodes will not run locally, so writing '
-                          '{}'.format(paramsFilePath))
-                    TestNetworkSetup.writeNodeParamsFile(
-                        paramsFilePath, nd.name, nd.port, nd.client_port)
+                    print('Nodes will not run locally, so writing {}'.format(paramsFilePath))
+                    TestNetworkSetup.writeNodeParamsFile(paramsFilePath, nd.name, nd.port, nd.client_port)

-                print("This node with name {} will use ports {} and {} for "
-                      "nodestack and clientstack respectively"
+                print("This node with name {} will use ports {} and {} for nodestack and clientstack respectively"
                       .format(nd.name, nd.port, nd.client_port))
             else:
                 verkey = nd.verkey
@@ -126,8 +111,7 @@ def bootstrapTestNodesCore(
             poolLedger.add(node_txn)

         for cd in client_defs:
-            txn = Member.nym_txn(
-                cd.nym, cd.name, verkey=cd.verkey, creator=trustee_def.nym)
+            txn = Member.nym_txn(cd.nym, cd.name, verkey=cd.verkey, creator=trustee_def.nym)
             domainLedger.add(txn)

         poolLedger.stop()
@@ -142,11 +126,9 @@ def init_pool_ledger(cls, appendToLedgers, baseDir, config, envName):
         return pool_ledger

     @classmethod
-    def init_domain_ledger(cls, appendToLedgers, baseDir, config, envName,
-                           domainTxnFieldOrder):
+    def init_domain_ledger(cls, appendToLedgers, baseDir, config, envName, domainTxnFieldOrder):
         domain_txn_file = cls.domain_ledger_file_name(config, envName)
-        domain_ledger = create_genesis_txn_init_ledger(
-            baseDir, domain_txn_file)
+        domain_ledger = create_genesis_txn_init_ledger(baseDir, domain_txn_file)
         if not appendToLedgers:
             domain_ledger.reset()
         return domain_ledger
@@ -174,35 +156,28 @@ def setup_clibase_dir(cls, config, network_name):
         return cli_base_net

     @classmethod
-    def bootstrapTestNodes(cls, config, startingPort,
-                           nodeParamsFileName, domainTxnFieldOrder):
-
-        parser = argparse.ArgumentParser(
-            description="Generate pool transactions for testing")
-
+    def bootstrapTestNodes(cls, config, startingPort, nodeParamsFileName, domainTxnFieldOrder):
+        parser = argparse.ArgumentParser(description="Generate pool transactions for testing")
         parser.add_argument('--nodes', required=True,
                             help='node count should be less than 100',
-                            type=cls._bootstrapArgsTypeNodeCount,
-                            )
+                            type=cls._bootstrapArgsTypeNodeCount)
         parser.add_argument('--clients', required=True, type=int,
                             help='client count')
-        parser.add_argument('--nodeNum', type=int,
+        parser.add_argument('--nodeNum', type=int, nargs='+',
                             help='the number of the node that will '
                                  'run on this machine')
         parser.add_argument('--ips',
-                            help='IPs of the nodes, provide comma separated'
-                                 ' IPs, if no of IPs provided are less than '
-                                 'number of nodes then the '
-                                 'remaining nodes are assigned the loopback '
-                                 'IP, i.e 127.0.0.1',
-                            type=cls._bootstrapArgsTypeIps)
-
+                            help='IPs/hostnames of the nodes, provide comma '
+                                 'separated IPs, if no of IPs provided are less'
+                                 ' than number of nodes then the remaining '
+                                 'nodes are assigned the loopback IP, '
+                                 'i.e 127.0.0.1',
+                            type=cls._bootstrap_args_type_ips_hosts)
         parser.add_argument('--network',
                            help='Network name (default sandbox)',
                            type=str,
                            default="sandbox",
                            required=False)
-
         parser.add_argument(
             '--appendToLedgers',
             help="Determine if ledger files needs to be erased "
@@ -211,12 +186,13 @@ def bootstrapTestNodes(cls, config, startingPort,

         args = parser.parse_args()

-        if args.nodeNum:
-            assert 0 <= args.nodeNum <= args.nodes, \
-                "nodeNum should be less ore equal to nodeCount"
+        if isinstance(args.nodeNum, int):
+            assert 1 <= args.nodeNum <= args.nodes, "nodeNum should be less or equal to nodeCount"
+        elif isinstance(args.nodeNum, list):
+            bad_idxs = [x for x in args.nodeNum if not (1 <= x <= args.nodes)]
+            assert not bad_idxs, "nodeNum should be less or equal to nodeCount"

-        steward_defs, node_defs = cls.gen_defs(
-            args.ips, args.nodes, startingPort)
+        steward_defs, node_defs = cls.gen_defs(args.ips, args.nodes, startingPort)
         client_defs = cls.gen_client_defs(args.clients)
         trustee_def = cls.gen_trustee_def(1)
         cls.bootstrapTestNodesCore(config, args.network, args.appendToLedgers,
@@ -224,10 +200,18 @@ def bootstrapTestNodes(cls, config, startingPort,
                                    steward_defs, node_defs, client_defs,
                                    args.nodeNum, nodeParamsFileName)

-        # copy configs to client folder
-        basedir = cls.setup_base_dir(config, args.network)
-        clidir = cls.setup_clibase_dir(config, args.network)
-        copyall(basedir, clidir)
+        # edit NETWORK_NAME in config
+        for line in fileinput.input(['/etc/indy/indy_config.py'], inplace=True):
+            if 'NETWORK_NAME' not in line:
+                print(line, end="")
+        with open('/etc/indy/indy_config.py', 'a') as cfgfile:
+            cfgfile.write("NETWORK_NAME = '{}'".format(args.network))
+
+        # in case of client only delete unnecessary key dir
+        if args.nodeNum is None:
+            key_dir = cls.setup_clibase_dir(config, args.network)
+            key_dir = os.path.join(key_dir, "keys")
+            shutil.rmtree(key_dir, ignore_errors=True)

     @staticmethod
     def _bootstrapArgsTypeNodeCount(nodesStrArg):
@@ -246,18 +230,21 @@ def _bootstrapArgsTypeNodeCount(nodesStrArg):
         return n

     @staticmethod
-    def _bootstrapArgsTypeIps(ipsStrArg):
+    def _bootstrap_args_type_ips_hosts(ips_hosts_str_arg):
         ips = []
-        for ip in ipsStrArg.split(','):
-            ip = ip.strip()
+        for arg in ips_hosts_str_arg.split(','):
+            arg = arg.strip()
             try:
-                ipaddress.ip_address(arg)
+                ipaddress.ip_address(arg)
             except ValueError:
-                raise argparse.ArgumentTypeError(
-                    "'{}' is an invalid IP address".format(ip)
-                )
+                if not is_hostname_valid(arg):
+                    raise argparse.ArgumentTypeError(
+                        "'{}' is not a valid IP or hostname".format(arg)
+                    )
+                else:
+                    ips.append(arg)
             else:
-                ips.append(ip)
+                ips.append(arg)
         return ips

     @classmethod
diff --git a/plenum/common/util.py b/plenum/common/util.py
index abddb191e5..eb97a8ce86 100644
--- a/plenum/common/util.py
+++ b/plenum/common/util.py
@@ -10,6 +10,7 @@
 import math
 import os
 import random
+import re
 import time
 from binascii import unhexlify, hexlify
 from collections import Counter, defaultdict
@@ -57,6 +58,7 @@ def randomStr(size):
     assert (size > 0), "Expected random string size cannot be less than 1"
     # Approach 1
     rv = randombytes(size // 2).hex()
+    return rv if size % 2 == 0 else rv + hex(randombytes_uniform(15))[-1]

     # Approach 2 this is faster than Approach 1, but lovesh had a doubt
@@ -68,6 +70,16 @@ def randomStr(size):
     return randomStr(size)


+def random_from_alphabet(size, alphabet):
+    """
+    Takes *size* random elements from provided alphabet
+    :param size:
+    :param alphabet:
+    """
+    import random
+    return list(random.choice(alphabet) for _ in range(size))
+
+
 def randomSeed(size=32):
     return randomString(size)

@@ -496,6 +508,17 @@ def is_network_ip_address_valid(ip_address):
     return True


+def is_hostname_valid(hostname):
+    # Taken from https://stackoverflow.com/a/2532344
+    if len(hostname) > 255:
+        return False
+    if hostname[-1] == ".":
+        hostname = hostname[:-1]  # strip exactly one dot from the right,
+        # if present
+    allowed = re.compile("(?!-)[A-Z\d-]{1,63}(?
Replicas:
         return Replicas(self, self.monitor)
@@ -1581,15 +1583,14 @@ def allLedgersCaughtUp(self):
         last_caught_up_3PC = self.ledgerManager.last_caught_up_3PC
         if compare_3PC_keys(self.master_last_ordered_3PC,
                             last_caught_up_3PC) > 0:
-            self.master_replica.caught_up_till_3pc(last_caught_up_3PC)
+            for replica in self.replicas:
+                replica.caught_up_till_3pc(last_caught_up_3PC)
             logger.info('{}{} caught up till {}'
                         .format(CATCH_UP_PREFIX, self, last_caught_up_3PC),
                         extra={'cli': True})
-
         # TODO: Maybe a slight optimisation is to check result of
         # `self.num_txns_caught_up_in_last_catchup()`
         self.processStashedOrderedReqs()
-
         if self.is_catchup_needed():
             logger.info('{} needs to catchup again'.format(self))
             self.start_catchup()
@@ -2053,6 +2054,14 @@ def checkPerformance(self):
         if not self.isParticipating:
             return

+        last_num_ordered = self._last_performance_check_data.get('num_ordered')
+        num_ordered = sum(num for num, _ in self.monitor.numOrderedRequests)
+        nothing_changed = num_ordered == last_num_ordered
+        if nothing_changed:
+            return
+
+        self._last_performance_check_data['num_ordered'] = num_ordered
+
         if self.instances.masterId is not None:
             self.sendNodeRequestSpike()
             if self.monitor.isMasterDegraded():
@@ -2083,7 +2092,8 @@ def sendNodeRequestSpike(self):
             self.nodeRequestSpikeMonitorData,
             requests,
             self.config.notifierEventTriggeringConfig['nodeRequestSpike'],
-            self.name
+            self.name,
+            self.config.SpikeEventsEnabled
         )

     def _create_instance_change_msg(self, view_no, suspicion_code):
diff --git a/plenum/server/notifier_plugin_manager.py b/plenum/server/notifier_plugin_manager.py
index 72ab947235..ece3bf4fa1 100644
--- a/plenum/server/notifier_plugin_manager.py
+++ b/plenum/server/notifier_plugin_manager.py
@@ -56,20 +56,32 @@ def sendMessageUponSuspiciousSpike(
             historicalData: Dict,
             newVal: float,
             config: Dict,
-            nodeName: str):
+            nodeName: str,
+            enabled: bool):
         assert 'value' in historicalData
         assert 'cnt' in historicalData
         assert 'minCnt' in config
         assert 'coefficient' in config
+        assert 'minActivityThreshold' in config
+        assert 'enabled' in config
+
+        if not (enabled and config['enabled']):
+            logger.debug('Suspicious Spike check is disabled')
+            return None

         coefficient = config['coefficient']
         minCnt = config['minCnt']
+        val_thres = config['minActivityThreshold']
         val = historicalData['value']
         cnt = historicalData['cnt']
         historicalData['value'] = \
             val * (cnt / (cnt + 1)) + newVal / (cnt + 1)
         historicalData['cnt'] += 1

+        if val < val_thres:
+            logger.debug('Current activity {} is below threshold level {}'.format(val, val_thres))
+            return None
+
         if cnt < minCnt:
             logger.debug('Not enough data to detect a {} spike'.format(event))
             return None
diff --git a/plenum/server/primary_selector.py b/plenum/server/primary_selector.py
index c7daf22b06..3a58c5e268 100644
--- a/plenum/server/primary_selector.py
+++ b/plenum/server/primary_selector.py
@@ -81,7 +81,7 @@ def get_msgs_for_lagged_nodes(self) -> List[ViewChangeDone]:
     def decidePrimaries(self):
         if self.node.is_synced and self.master_replica.isPrimary is None:
             self._send_view_change_done_message()
-        self._startSelection()
+        self._start_selection()

     # Question: Master is always 0, until we change that rule why incur cost
     # of a method call, also name is confusing
@@ -138,20 +138,20 @@ def _processViewChangeDoneMessage(self,
                               logger.debug)
             return False

-        self._startSelection()
+        self._start_selection()

     def _verify_view_change(self):
         if not self.has_acceptable_view_change_quorum:
-            return False
+            return "has no view change quorum or no message from next primary"

         rv = self.has_sufficient_same_view_change_done_messages
         if rv is None:
-            return False
+            return "there are not sufficient same ViewChangeDone messages"

         if not self._verify_primary(*rv):
-            return False
+            return "failed to verify primary"

-        return True
+        return None

     def _verify_primary(self, new_primary, ledger_info):
         """
@@ -257,12 +257,13 @@ def is_behind_for_view(self) -> bool:
             return True
         return False

-    def _startSelection(self):
+    def _start_selection(self):
+
+        error = self._verify_view_change()

-        if not self._verify_view_change():
-            logger.debug('{} cannot start primary selection found failure in '
-                         'primary verification. This can happen due to lack '
-                         'of appropriate ViewChangeDone messages'.format(self))
+        if error:
+            logger.debug('{} cannot start primary selection because {}'
+                         .format(self, error))
             return

         if not self.node.is_synced:
diff --git a/plenum/server/propagator.py b/plenum/server/propagator.py
index 9f00cc4296..787185d41e 100644
--- a/plenum/server/propagator.py
+++ b/plenum/server/propagator.py
@@ -241,12 +241,12 @@ def forward(self, request: Request):
         :param request: the REQUEST to propagate
         """
         key = request.key
-        logger.debug('{} forwarding request {} to {} replicas'.format(
-            self, key, self.replicas.sum_inbox_len))
-
+        num_replicas = self.replicas.num_replicas
+        logger.debug('{} forwarding request {} to {} replicas'
+                     .format(self, key, num_replicas))
         self.replicas.pass_message(ReqKey(*key))
         self.monitor.requestUnOrdered(*key)
-        self.requests.mark_as_forwarded(request, self.replicas.num_replicas)
+        self.requests.mark_as_forwarded(request, num_replicas)

     # noinspection PyUnresolvedReferences
     def recordAndPropagate(self, request: Request, clientName):
diff --git a/plenum/server/replica.py b/plenum/server/replica.py
index 2b5f87408c..f17b26e8d0 100644
--- a/plenum/server/replica.py
+++ b/plenum/server/replica.py
@@ -965,6 +965,10 @@ def processPrepare(self, prepare: Prepare, sender: str) -> None:
         :param prepare: a PREPARE msg
         :param sender: name of the node that sent the PREPARE
         """
+        key = (prepare.viewNo, prepare.ppSeqNo)
+        logger.debug("{} received PREPARE{} from {}"
+                     .format(self, key, sender))
+
         # TODO move this try/except up higher
         if self.isPpSeqNoStable(prepare.ppSeqNo):
             self.discard(prepare,
diff --git a/plenum/test/bls/helper.py b/plenum/test/bls/helper.py
index 540d44b4d3..9c6d589ac9 100644
--- a/plenum/test/bls/helper.py
+++ b/plenum/test/bls/helper.py
@@ -5,14 +5,16 @@
 from plenum.common.constants import DOMAIN_LEDGER_ID, ALIAS, BLS_KEY
 from plenum.common.keygen_utils import init_bls_keys
 from plenum.common.messages.node_messages import Commit, Prepare, PrePrepare
-from plenum.common.util import get_utc_epoch, randomString
+from plenum.common.util import get_utc_epoch, randomString, random_from_alphabet
 from plenum.test.helper import sendRandomRequests, waitForSufficientRepliesForRequests
 from plenum.test.node_catchup.helper import waitNodeDataEquality, ensureClientConnectedToNodesAndPoolLedgerSame
-from plenum.test.pool_transactions.helper import updateNodeData, buildPoolClientAndWallet, new_client
+from plenum.test.pool_transactions.helper import updateNodeData, new_client
+

 def generate_state_root():
     return base58.b58encode(os.urandom(32))
+

 def check_bls_multi_sig_after_send(looper, txnPoolNodeSet, client, wallet,
                                    saved_multi_sigs_count):
@@ -154,7 +156,12 @@ def change_bls_key(looper, txnPoolNodeSet, tdirWithPoolTxns,
                    steward_client, steward_wallet,
                    add_wrong=False):
     new_blspk = init_bls_keys(tdirWithPoolTxns, node.name)
-    key_in_txn = new_blspk if not add_wrong else randomString(32)
+
+    key_in_txn = \
+        new_blspk \
+        if not add_wrong \
+        else ''.join(random_from_alphabet(32, base58.alphabet))
+
     node_data = {
         ALIAS: node.name,
         BLS_KEY: key_in_txn
diff --git a/plenum/test/bls/test_add_bls_key.py b/plenum/test/bls/test_add_bls_key.py
index 57a973c563..98650d2c8e 100644
--- a/plenum/test/bls/test_add_bls_key.py
+++ b/plenum/test/bls/test_add_bls_key.py
@@ -10,7 +10,7 @@
 # As we use tests with Module scope, results from previous tests are accumulated, so
 # rotating BLS keys one by one, eventually we will have all keys changed
-def test_update_bls_one_node(looper, txnPoolNodeSet, tdirWithPoolTxns,
+def test_add_bls_one_node(looper, txnPoolNodeSet, tdirWithPoolTxns,
                             poolTxnClientData, stewards_and_wallets):
     '''
@@ -24,7 +24,7 @@ def test_update_bls_one_node(looper, txnPoolNodeSet, tdirWithPoolTxns,
                                   stewards_and_wallets=stewards_and_wallets)


-def test_update_bls_two_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
+def test_add_bls_two_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
                              poolTxnClientData, stewards_and_wallets):
     '''
@@ -38,7 +38,7 @@ def test_update_bls_two_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
                                   stewards_and_wallets=stewards_and_wallets)


-def test_update_bls_three_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
+def test_add_bls_three_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
                                poolTxnClientData, stewards_and_wallets):
     '''
@@ -56,7 +56,7 @@ def test_update_bls_three_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
                                   stewards_and_wallets=stewards_and_wallets)


-def test_update_bls_all_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
+def test_add_bls_all_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
                              poolTxnClientData, stewards_and_wallets):
     '''
diff --git a/plenum/test/bls/test_add_incorrect_bls_key.py b/plenum/test/bls/test_add_incorrect_bls_key.py
index 70f208d4e7..3fc9913494 100644
--- a/plenum/test/bls/test_add_incorrect_bls_key.py
+++ b/plenum/test/bls/test_add_incorrect_bls_key.py
@@ -25,7 +25,7 @@ def test_add_incorrect_bls_one_node(looper, txnPoolNodeSet, tdirWithPoolTxns,
                                   add_wrong=True)


-def test_update_bls_two_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
+def test_add_incorrect_bls_two_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
                              poolTxnClientData, stewards_and_wallets):
     '''
@@ -40,7 +40,7 @@ def test_update_bls_two_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
                                   add_wrong=True)


-def test_update_bls_three_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
+def test_add_incorrect_bls_three_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
                                 poolTxnClientData, stewards_and_wallets):
     '''
@@ -59,7 +59,7 @@ def test_update_bls_three_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns,
                                  add_wrong=True)


-def test_update_bls_all_nodes(looper,
txnPoolNodeSet, tdirWithPoolTxns, poolTxnClientData, stewards_and_wallets): ''' diff --git a/plenum/test/bls/test_no_state_proof.py b/plenum/test/bls/test_no_state_proof.py index 659f084a76..f06a67c344 100644 --- a/plenum/test/bls/test_no_state_proof.py +++ b/plenum/test/bls/test_no_state_proof.py @@ -15,7 +15,7 @@ def test_make_proof_bls_disabled(looper, txnPoolNodeSet, req = reqs[0] for node in txnPoolNodeSet: - key = node.reqHandler.prepare_buy_key(req.identifier, req.reqId) + key = node.reqHandler.prepare_buy_key(req.identifier) proof = node.reqHandler.make_proof(key) assert not proof @@ -27,7 +27,7 @@ def test_make_result_bls_disabled(looper, txnPoolNodeSet, req = reqs[0] for node in txnPoolNodeSet: - key = node.reqHandler.prepare_buy_key(req.identifier, req.reqId) + key = node.reqHandler.prepare_buy_key(req.identifier) proof = node.reqHandler.make_proof(key) result = node.reqHandler.make_result(req, {TXN_TYPE: "buy"}, diff --git a/plenum/test/bls/test_state_proof.py b/plenum/test/bls/test_state_proof.py index 69c8097075..62483d34c6 100644 --- a/plenum/test/bls/test_state_proof.py +++ b/plenum/test/bls/test_state_proof.py @@ -17,7 +17,7 @@ def check_result(txnPoolNodeSet, req, client, should_have_proof): for node in txnPoolNodeSet: - key = node.reqHandler.prepare_buy_key(req.identifier, req.reqId) + key = node.reqHandler.prepare_buy_key(req.identifier) proof = node.reqHandler.make_proof(key) txn_time = get_utc_epoch() @@ -47,7 +47,7 @@ def test_make_proof_bls_enabled(looper, txnPoolNodeSet, req = reqs[0] for node in txnPoolNodeSet: - key = node.reqHandler.prepare_buy_key(req.identifier, req.reqId) + key = node.reqHandler.prepare_buy_key(req.identifier) proof = node.reqHandler.make_proof(key) assert proof diff --git a/plenum/test/bls/test_update_incorrect_bls_key.py b/plenum/test/bls/test_update_incorrect_bls_key.py index 1955f6b6e7..a2c0b1b990 100644 --- a/plenum/test/bls/test_update_incorrect_bls_key.py +++ b/plenum/test/bls/test_update_incorrect_bls_key.py 
@@ -10,7 +10,7 @@ # As we use tests with Module scope, results from previous tests are accumulated, so # rotating BLS keys one by one, eventually we will have all keys changed -def test_add_incorrect_bls_one_node(looper, txnPoolNodeSet, tdirWithPoolTxns, +def test_update_incorrect_bls_one_node(looper, txnPoolNodeSet, tdirWithPoolTxns, poolTxnClientData, stewards_and_wallets): ''' @@ -29,7 +29,7 @@ def test_add_incorrect_bls_one_node(looper, txnPoolNodeSet, tdirWithPoolTxns, add_wrong=True) -def test_update_bls_two_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns, +def test_update_incorrect_bls_two_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns, poolTxnClientData, stewards_and_wallets): ''' @@ -44,7 +44,7 @@ def test_update_bls_two_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns, add_wrong=True) -def test_update_bls_three_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns, +def test_update_incorrect_bls_three_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns, poolTxnClientData, stewards_and_wallets): ''' @@ -59,7 +59,7 @@ def test_update_bls_three_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns, add_wrong=True) -def test_update_bls_all_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns, +def test_update_incorrect_bls_all_nodes(looper, txnPoolNodeSet, tdirWithPoolTxns, poolTxnClientData, stewards_and_wallets): ''' diff --git a/plenum/test/client/test_client_can_send.py b/plenum/test/client/test_client_can_send.py new file mode 100644 index 0000000000..293633669f --- /dev/null +++ b/plenum/test/client/test_client_can_send.py @@ -0,0 +1,113 @@ +import pytest + +from plenum.test.delayers import reset_delays_and_process_delayeds, lsDelay, \ + reset_delays_and_process_delayeds_for_client +from plenum.test.exceptions import TestException +from plenum.test.helper import random_requests +from plenum.test.pool_transactions.helper import buildPoolClientAndWallet +from stp_core.loop.eventually import eventually +from plenum.test.pool_transactions.conftest import looper, clientAndWallet1, \ + 
client1, wallet1, client1Connected, steward1, stewardWallet, stewardAndWallet1 + + +def new_client(poolTxnClientData, tdirWithPoolTxns): + client, wallet = buildPoolClientAndWallet(poolTxnClientData, + tdirWithPoolTxns) + return (client, wallet) + + +def slow_catch_up(nodes, timeout): + for node in nodes: + node.nodeIbStasher.delay(lsDelay(timeout)) + node.clientIbStasher.delay(lsDelay(timeout)) + + +def cancel_slow_catch_up(nodes): + reset_delays_and_process_delayeds(nodes) + reset_delays_and_process_delayeds_for_client(nodes) + + +def can_send_write_requests(client): + if not client.can_send_write_requests(): + raise TestException('Client must be able to send write requests') + + +def can_send_read_requests(client): + if not client.can_send_read_requests(): + raise TestException('Client must be able to send read requests') + + +def can_send_request(client, req): + if not client.can_send_request(req): + raise TestException('Client must be able to send a request {}'.format(req)) + + +def test_client_can_send_write_requests_no_catchup(looper, + poolTxnClientData, tdirWithPoolTxns, + txnPoolNodeSet): + client, _ = new_client(poolTxnClientData, tdirWithPoolTxns) + looper.add(client) + looper.run(eventually(can_send_write_requests, client)) + + +def test_client_can_send_read_requests_no_catchup(looper, + poolTxnClientData, tdirWithPoolTxns, + txnPoolNodeSet): + client, _ = new_client(poolTxnClientData, tdirWithPoolTxns) + looper.add(client) + looper.run(eventually(can_send_read_requests, client)) + + +def test_client_can_send_request_no_catchup(looper, + poolTxnClientData, tdirWithPoolTxns, + txnPoolNodeSet): + client, _ = new_client(poolTxnClientData, tdirWithPoolTxns) + looper.add(client) + req = random_requests(1)[0] + looper.run(eventually(can_send_request, client, req)) + + +def test_client_can_not_send_write_requests_until_catchup(looper, + poolTxnClientData, tdirWithPoolTxns, + txnPoolNodeSet): + slow_catch_up(txnPoolNodeSet, 60) + + client, _ = 
new_client(poolTxnClientData, tdirWithPoolTxns) + looper.add(client) + + with pytest.raises(TestException): + looper.run(eventually(can_send_write_requests, client, timeout=4)) + + cancel_slow_catch_up(txnPoolNodeSet) + looper.run(eventually(can_send_write_requests, client)) + + +def test_client_can_not_send_read_requests_until_catchup(looper, + poolTxnClientData, tdirWithPoolTxns, + txnPoolNodeSet): + slow_catch_up(txnPoolNodeSet, 60) + + client, _ = new_client(poolTxnClientData, tdirWithPoolTxns) + looper.add(client) + + with pytest.raises(TestException): + looper.run(eventually(can_send_read_requests, client, timeout=4)) + + cancel_slow_catch_up(txnPoolNodeSet) + looper.run(eventually(can_send_read_requests, client)) + + +def test_client_can_not_send_request_until_catchup(looper, + poolTxnClientData, tdirWithPoolTxns, + txnPoolNodeSet): + slow_catch_up(txnPoolNodeSet, 60) + + client, _ = new_client(poolTxnClientData, tdirWithPoolTxns) + looper.add(client) + req = random_requests(1)[0] + + with pytest.raises(TestException): + looper.run(eventually(can_send_request, client, req, timeout=4)) + + cancel_slow_catch_up(txnPoolNodeSet) + looper.run(eventually(can_send_request, client, req)) diff --git a/plenum/test/client/test_client_resends_not_confirmed_request.py b/plenum/test/client/test_client_resends_not_confirmed_request.py new file mode 100644 index 0000000000..0a08874bb7 --- /dev/null +++ b/plenum/test/client/test_client_resends_not_confirmed_request.py @@ -0,0 +1,50 @@ +import random + +from stp_core.common.log import getlogger +from plenum.test.client.conftest import passThroughReqAcked1 +from plenum.test.helper import send_signed_requests, \ + waitForSufficientRepliesForRequests +from plenum.test.malicious_behaviors_client import \ + genDoesntSendRequestToSomeNodes + +logger = getlogger() + +nodeCount = 4 +clientFault = genDoesntSendRequestToSomeNodes("AlphaC") +reqAcked1 = passThroughReqAcked1 +nodes_with_bls = 0 + + +def
test_client_resends_not_confirmed_request(looper, + client1, + wallet1, + nodeSet): + """ + Check that client resends request to all nodes if it was previously sent + to one node but reply cannot be verified + """ + client = client1 + wallet = wallet1 + + initial_submit_count = client.spylog.count(client.submitReqs) + initial_resent_count = client.spylog.count(client.resendRequests) + + def sign_and_send(op): + signed = wallet.signOp(op) + return send_signed_requests(client, [signed]) + + buy = {'type': 'buy', 'amount': random.randint(10, 100)} + requests = sign_and_send(buy) + waitForSufficientRepliesForRequests(looper, client, requests=requests) + + buy = {'type': 'get_buy'} + client._read_only_requests.add('get_buy') + requests = sign_and_send(buy) + waitForSufficientRepliesForRequests(looper, client, requests=requests) + + # submitReqs should be called twice: first for buy and then for get_buy + assert initial_submit_count + 2 == \ + client.spylog.count(client.submitReqs) + + assert initial_resent_count + 1 == \ + client.spylog.count(client.resendRequests) diff --git a/plenum/test/client/test_client_sends_get_request_to_one_node.py b/plenum/test/client/test_client_sends_get_request_to_one_node.py index 14d228d8d6..57db7c093c 100644 --- a/plenum/test/client/test_client_sends_get_request_to_one_node.py +++ b/plenum/test/client/test_client_sends_get_request_to_one_node.py @@ -1,9 +1,9 @@ import random +from plenum.test.spy_helpers import getAllArgs from stp_core.common.log import getlogger from stp_core.loop.eventually import eventually - from plenum.test.client.conftest import passThroughReqAcked1 from plenum.test.helper import stopNodes, send_signed_requests from plenum.test.malicious_behaviors_client import \ @@ -15,12 +15,45 @@ clientFault = genDoesntSendRequestToSomeNodes("AlphaC") reqAcked1 = passThroughReqAcked1 + def test_client_sends_get_request_to_one_node(looper, client1, wallet1,
nodeSet): """ - Check that read only equest can be sent + Check that client sends read only request to one node only + """ + client = client1 + wallet = wallet1 + + def sign_and_send(op): + signed = wallet.signOp(op) + send_signed_requests(client, [signed]) + + logger.info("Send set request") + buy = {'type': 'buy', 'amount': random.randint(10, 100)} + sign_and_send(buy) + + send_args = getAllArgs(client, client.send) + rids = send_args[0]['rids'] + assert len(rids) > 1 + + logger.info("Send get request") + get_buy = {'type': 'get_buy'} + client._read_only_requests.add('get_buy') + sign_and_send(get_buy) + + send_args = getAllArgs(client, client.send) + rids = send_args[0]['rids'] + assert len(rids) == 1 + + +def test_client_can_send_get_request_to_one_node(looper, + client1, + wallet1, + nodeSet): + """ + Check that read only request can be sent without having connection to all nodes """ client = client1 diff --git a/plenum/test/delayers.py b/plenum/test/delayers.py index d8ae5f3458..d1b1d97428 100644 --- a/plenum/test/delayers.py +++ b/plenum/test/delayers.py @@ -242,3 +242,7 @@ def delay_3pc_messages(nodes, inst_id, delay=None, min_delay=None, def reset_delays_and_process_delayeds(nodes): for node in nodes: node.reset_delays_and_process_delayeds() + +def reset_delays_and_process_delayeds_for_client(nodes): + for node in nodes: + node.reset_delays_and_process_delayeds_for_clients() \ No newline at end of file diff --git a/plenum/test/exceptions.py b/plenum/test/exceptions.py index 66f58da534..133a432035 100644 --- a/plenum/test/exceptions.py +++ b/plenum/test/exceptions.py @@ -1,2 +1,5 @@ class NotFullyConnected(Exception): pass + +class TestException(Exception): + pass diff --git a/plenum/test/helper.py b/plenum/test/helper.py index b3f6eee8ab..dd70aed497 100644 --- a/plenum/test/helper.py +++ b/plenum/test/helper.py @@ -300,15 +300,12 @@ async def aSetupClient(looper: Looper, def randomOperation(): return { "type": "buy", - "amount": random.randint(10, 100) + 
"amount": random.randint(10, 100000) } def random_requests(count): - return [{ - "type": "buy", - "amount": random.randint(10, 100) - } for _ in range(count)] + return [randomOperation() for _ in range(count)] def random_request_objects(count, protocol_version): @@ -920,6 +917,7 @@ def chk_all_funcs(looper, funcs, acceptable_fails=0, retry_wait=None, # TODO: Move this logic to eventuallyAll def chk(): fails = 0 + last_ex = None for func in funcs: try: func() @@ -927,7 +925,8 @@ def chk(): fails += 1 if fails >= acceptable_fails: logger.debug('Too many fails, the last one: {}'.format(repr(ex))) - assert fails <= acceptable_fails + last_ex = ex + assert fails <= acceptable_fails, str(last_ex) kwargs = {} if retry_wait: diff --git a/plenum/test/monitoring/test_no_check_if_no_new_requests.py b/plenum/test/monitoring/test_no_check_if_no_new_requests.py new file mode 100644 index 0000000000..3e34c98d82 --- /dev/null +++ b/plenum/test/monitoring/test_no_check_if_no_new_requests.py @@ -0,0 +1,35 @@ +from stp_core.loop.looper import Looper +from plenum.test.test_node import TestNodeSet +from plenum.test.helper import sendReqsToNodesAndVerifySuffReplies + + +def test_not_check_if_no_new_requests(looper: Looper, + nodeSet: TestNodeSet, + wallet1, client1): + """ + Checks that node does not do performance check if there were no new + requests since previous check + """ + + # Ensure that nodes participating, because otherwise they do not do check + for node in list(nodeSet): + assert node.isParticipating + + # Check that first performance checks passes, but further do not + for node in list(nodeSet): + assert node.checkPerformance() + assert not node.checkPerformance() + assert not node.checkPerformance() + assert not node.checkPerformance() + + # Send new request and check that after it nodes can do + # performance check again + num_requests = 1 + sendReqsToNodesAndVerifySuffReplies(looper, + wallet1, + client1, + num_requests, + nodeSet.f) + + for node in list(nodeSet): + 
assert node.checkPerformance() diff --git a/plenum/test/node_catchup/test_node_catchup_causes_no_desync.py b/plenum/test/node_catchup/test_node_catchup_causes_no_desync.py new file mode 100644 index 0000000000..b898170942 --- /dev/null +++ b/plenum/test/node_catchup/test_node_catchup_causes_no_desync.py @@ -0,0 +1,95 @@ +from plenum.common.util import compare_3PC_keys +from plenum.test.delayers import pDelay, cDelay, ppDelay +from plenum.test.node_catchup.test_node_reject_invalid_txn_during_catchup import \ + get_any_non_primary_node +from stp_core.common.log import getlogger +from plenum.test.helper import sendReqsToNodesAndVerifySuffReplies +from plenum.test.node_catchup.helper import \ + waitNodeDataEquality, \ + waitNodeDataInequality +from plenum.test.pool_transactions.helper import \ + disconnect_node_and_ensure_disconnected, \ + reconnect_node_and_ensure_connected + +# noinspection PyUnresolvedReferences +from plenum.test.pool_transactions.conftest import \ + clientAndWallet1, client1, wallet1, client1Connected, looper +from stp_core.loop.eventually import eventually + +logger = getlogger() +txnCount = 5 + + +def make_master_replica_lag(node): + + # class AbysmalBox(list): + # def append(self, object) -> None: + # pass + # + # node.replicas._master_replica.inBox = AbysmalBox() + node.nodeIbStasher.delay(ppDelay(1200, 0)) + node.nodeIbStasher.delay(pDelay(1200, 0)) + node.nodeIbStasher.delay(cDelay(1200, 0)) + + +def compare_last_ordered_3pc(node): + last_ordered_by_master = node.replicas._master_replica.last_ordered_3pc + comparison_results = { + compare_3PC_keys(replica.last_ordered_3pc, last_ordered_by_master) + for replica in node.replicas if not replica.isMaster + } + assert len(comparison_results) == 1 + return comparison_results.pop() + + +def backup_replicas_run_forward(node): + assert compare_last_ordered_3pc(node) < 0 + + +def replicas_synced(node): + assert compare_last_ordered_3pc(node) == 0 + + +def test_node_catchup_causes_no_desync(looper, + 
txnPoolNodeSet, + client1, + wallet1, + client1Connected): + """ + Checks that transactions received by catchup do not + break performance monitoring + """ + + client, wallet = client1, wallet1 + lagging_node = get_any_non_primary_node(txnPoolNodeSet) + rest_nodes = set(txnPoolNodeSet).difference({lagging_node}) + + # Make the master replica lag by delaying all messages sent to it + make_master_replica_lag(lagging_node) + + # Send some requests and check that all replicas except the master executed them + sendReqsToNodesAndVerifySuffReplies(looper, wallet, client, 5) + waitNodeDataInequality(looper, lagging_node, *rest_nodes) + looper.run(eventually(backup_replicas_run_forward, lagging_node)) + + # Disconnect lagging node, send some more requests and start it back + # After restart it should be in such a state that it needs to perform catchup + disconnect_node_and_ensure_disconnected(looper, + txnPoolNodeSet, + lagging_node, + stopNode=False) + looper.removeProdable(lagging_node) + sendReqsToNodesAndVerifySuffReplies(looper, wallet, client, 5) + looper.add(lagging_node) + reconnect_node_and_ensure_connected(looper, txnPoolNodeSet, lagging_node) + + # Check that catchup is done + waitNodeDataEquality(looper, lagging_node, *rest_nodes) + + # Send some more requests to ensure that backup and master replicas + # are in the same state + sendReqsToNodesAndVerifySuffReplies(looper, wallet, client, 5) + looper.run(eventually(replicas_synced, lagging_node)) + + # Check that master is not considered to be degraded + assert not lagging_node.monitor.isMasterDegraded() diff --git a/plenum/test/plugin/test_notifier_plugin_manager.py b/plenum/test/plugin/test_notifier_plugin_manager.py index d5a0cf01b7..5978d2a603 100644 --- a/plenum/test/plugin/test_notifier_plugin_manager.py +++ b/plenum/test/plugin/test_notifier_plugin_manager.py @@ -55,11 +55,13 @@ def testPluginManagerSendMessageUponSuspiciousSpikeFailsOnMinCnt( newVal = 10 config = { 'coefficient': 2, - 'minCnt': 10 + 'minCnt': 10, +
'minActivityThreshold': 0, + 'enabled': True } assert pluginManagerWithImportedModules\ .sendMessageUponSuspiciousSpike(topic, historicalData, - newVal, config, name)is None + newVal, config, name, enabled=True) is None def testPluginManagerSendMessageUponSuspiciousSpikeFailsOnCoefficient( @@ -73,11 +75,13 @@ def testPluginManagerSendMessageUponSuspiciousSpikeFailsOnCoefficient( newVal = 15 config = { 'coefficient': 2, - 'minCnt': 10 + 'minCnt': 10, + 'minActivityThreshold': 0, + 'enabled': True } assert pluginManagerWithImportedModules\ .sendMessageUponSuspiciousSpike(topic, historicalData, - newVal, config, name) is None + newVal, config, name, enabled=True) is None def testPluginManagerSendMessageUponSuspiciousSpike( @@ -91,21 +95,26 @@ def testPluginManagerSendMessageUponSuspiciousSpike( newVal = 25 config = { 'coefficient': 2, - 'minCnt': 10 + 'minCnt': 10, + 'minActivityThreshold': 0, + 'enabled': True } sent, found = pluginManagerWithImportedModules\ .sendMessageUponSuspiciousSpike(topic, historicalData, - newVal, config, name) + newVal, config, name, enabled=True) assert sent == 3 def testNodeSendNodeRequestSpike(pluginManagerWithImportedModules, testNode): def mockProcessRequest(obj, inc=1): obj.nodeRequestSpikeMonitorData['accum'] += inc + testNode.config.SpikeEventsEnabled = True testNode.config.notifierEventTriggeringConfig['nodeRequestSpike'] = { 'coefficient': 3, 'minCnt': 1, - 'freq': 60 + 'freq': 60, + 'minActivityThreshold': 0, + 'enabled': True } mockProcessRequest(testNode) assert testNode.sendNodeRequestSpike() is None @@ -121,10 +130,79 @@ def testMonitorSendClusterThroughputSpike(pluginManagerWithImportedModules, testNode.monitor.clusterThroughputSpikeMonitorData['accum'] = [1] testNode.monitor.notifierEventTriggeringConfig['clusterThroughputSpike'] = { - 'coefficient': 3, 'minCnt': 1, 'freq': 60} + 'coefficient': 3, 'minCnt': 1, 'freq': 60, 'minActivityThreshold': 0, 'enabled': True} assert testNode.monitor.sendClusterThroughputSpike() is 
None testNode.monitor.clusterThroughputSpikeMonitorData['accum'] = [2] assert testNode.monitor.sendClusterThroughputSpike() is None testNode.monitor.clusterThroughputSpikeMonitorData['accum'] = [4.6] sent, found = testNode.monitor.sendClusterThroughputSpike() assert sent == 3 + + +def test_suspicious_spike_check_disabled_config(pluginManagerWithImportedModules): + topic = randomText(10) + name = randomText(10) + hdata = {'value': 10, 'cnt': 10} + nval = 25 + config = {'coefficient': 2, 'minCnt': 10, 'minActivityThreshold': 2, 'enabled': True} + + sent, _ = pluginManagerWithImportedModules.sendMessageUponSuspiciousSpike(topic, hdata, nval, + config, name, enabled=True) + assert sent == 3 + + config['enabled'] = False + hdata['value'] = 10 + hdata['cnt'] = 10 + assert pluginManagerWithImportedModules.sendMessageUponSuspiciousSpike(topic, hdata, nval, + config, name, enabled=True) is None + + +def test_suspicious_spike_check_disabled_func(pluginManagerWithImportedModules): + topic = randomText(10) + name = randomText(10) + hdata = {'value': 10, 'cnt': 10} + nval = 25 + config = {'coefficient': 2, 'minCnt': 10, 'minActivityThreshold': 2, 'enabled': True} + + sent, _ = pluginManagerWithImportedModules.sendMessageUponSuspiciousSpike(topic, hdata, nval, + config, name, enabled=True) + assert sent == 3 + + hdata['value'] = 10 + hdata['cnt'] = 10 + assert pluginManagerWithImportedModules.sendMessageUponSuspiciousSpike(topic, hdata, nval, + config, name, enabled=False) is None + + +def test_no_message_from_0_to_1(pluginManagerWithImportedModules): + topic = randomText(10) + name = randomText(10) + hdata = {'value': 0, 'cnt': 10} + nval = 1 + config = {'coefficient': 2, 'minCnt': 10, 'minActivityThreshold': 0, 'enabled': True} + + sent, _ = pluginManagerWithImportedModules.sendMessageUponSuspiciousSpike(topic, hdata, nval, + config, name, enabled=True) + assert sent == 3 + + hdata = {'value': 0, 'cnt': 10} + config = {'coefficient': 2, 'minCnt': 10, 'minActivityThreshold': 2, 
'enabled': True} + assert pluginManagerWithImportedModules.sendMessageUponSuspiciousSpike(topic, hdata, nval, + config, name, enabled=True) is None + + +def test_no_message_from_1_to_0(pluginManagerWithImportedModules): + topic = randomText(10) + name = randomText(10) + hdata = {'value': 1, 'cnt': 10} + nval = 0 + config = {'coefficient': 2, 'minCnt': 10, 'minActivityThreshold': 0, 'enabled': True} + + sent, _ = pluginManagerWithImportedModules.sendMessageUponSuspiciousSpike(topic, hdata, nval, + config, name, enabled=True) + assert sent == 3 + + hdata = {'value': 1, 'cnt': 10} + config = {'coefficient': 2, 'minCnt': 10, 'minActivityThreshold': 2, 'enabled': True} + assert pluginManagerWithImportedModules.sendMessageUponSuspiciousSpike(topic, hdata, nval, + config, name, enabled=True) is None diff --git a/plenum/test/primary_selection/test_primary_selector.py b/plenum/test/primary_selection/test_primary_selector.py index dadda1ce2a..dd3c7ca44b 100644 --- a/plenum/test/primary_selection/test_primary_selector.py +++ b/plenum/test/primary_selection/test_primary_selector.py @@ -261,7 +261,7 @@ def test_process_view_change_done(tmpdir): selector._processViewChangeDoneMessage(msg, 'Node3') assert selector._verify_primary(msg.name, msg.ledgerInfo) - selector._startSelection() + selector._start_selection() assert selector._view_change_done # Since the FakeNode does not have setting of mode # assert node.is_primary_found() diff --git a/plenum/test/script/test_bootstrap_test_node.py b/plenum/test/script/test_bootstrap_test_node.py index 7d9e598510..85db3eb5a1 100644 --- a/plenum/test/script/test_bootstrap_test_node.py +++ b/plenum/test/script/test_bootstrap_test_node.py @@ -1,3 +1,7 @@ +from argparse import ArgumentTypeError + +import pytest + from plenum.common.test_network_setup import TestNetworkSetup from plenum.common.txn_util import getTxnOrderedFields from plenum.common.util import randomString @@ -5,18 +9,58 @@ portsStart = 9600 -def testBootstrapTestNode(tconf): - # 
TODO: Need to add some asserts +@pytest.fixture(scope='module') +def params(tconf): steward_defs, node_defs = TestNetworkSetup.gen_defs( ips=None, nodeCount=4, starting_port=portsStart) client_defs = TestNetworkSetup.gen_client_defs(clientCount=1) trustee_def = TestNetworkSetup.gen_trustee_def(1) nodeParamsFile = randomString() + return steward_defs, node_defs, client_defs, trustee_def, nodeParamsFile + + +def testBootstrapTestNode(params, tconf): + # TODO: Need to add some asserts + steward_defs, node_defs, client_defs, trustee_def, nodeParamsFile = params TestNetworkSetup.bootstrapTestNodesCore( config=tconf, envName="test", appendToLedgers=False, domainTxnFieldOrder=getTxnOrderedFields(), trustee_def=trustee_def, steward_defs=steward_defs, node_defs=node_defs, client_defs=client_defs, localNodes=1, nodeParamsFileName=nodeParamsFile) + + +def test_check_valid_ip_host(params, tconf): + _, _, client_defs, trustee_def, nodeParamsFile = params + + valid = [ + '34.200.79.65,52.38.24.189', + 'ec2-54-173-9-185.compute-1.amazonaws.com,ec2-52-38-24-189.compute-1.amazonaws.com', + 'ec2-54-173-9-185.compute-1.amazonaws.com,52.38.24.189,34.200.79.65', + '52.38.24.189,ec2-54-173-9-185.compute-1.amazonaws.com,34.200.79.65', + 'ledger.net,ledger.net' + ] + + invalid = [ + '34.200.79()3.65,52.38.24.189', + '52.38.24.189,ec2-54-173$-9-185.compute-1.amazonaws.com,34.200.79.65', + '52.38.24.189,ec2-54-173-9-185.com$pute-1.amazonaws.com,34.200.79.65', + '52.38.24.189,ec2-54-173-9-185.com&pute-1.amazonaws.com,34.200.79.65', + '52.38.24.189,ec2-54-173-9-185.com*pute-1.amazonaws.com,34.200.79.65', + ] + for v in valid: + assert v.split(',') == TestNetworkSetup._bootstrap_args_type_ips_hosts(v) + steward_defs, node_defs = TestNetworkSetup.gen_defs( + ips=None, nodeCount=2, starting_port=portsStart) + TestNetworkSetup.bootstrapTestNodesCore( + config=tconf, envName="test", appendToLedgers=False, + domainTxnFieldOrder=getTxnOrderedFields(), + trustee_def=trustee_def, 
steward_defs=steward_defs, + node_defs=node_defs, client_defs=client_defs, localNodes=1, + nodeParamsFileName=nodeParamsFile) + + for v in invalid: + with pytest.raises(ArgumentTypeError): + TestNetworkSetup._bootstrap_args_type_ips_hosts(v) diff --git a/plenum/test/test_node.py b/plenum/test/test_node.py index 17f11741f4..95f0035798 100644 --- a/plenum/test/test_node.py +++ b/plenum/test/test_node.py @@ -48,6 +48,7 @@ from plenum.common.messages.node_message_factory import node_message_factory from plenum.server.replicas import Replicas from hashlib import sha256 +from plenum.common.messages.node_messages import Reply logger = getlogger() @@ -58,14 +59,13 @@ class TestDomainRequestHandler(DomainRequestHandler): def prepare_buy_for_state(txn): from common.serializers.serialization import domain_state_serializer identifier = txn.get(f.IDENTIFIER.nm) - request_id = txn.get(f.REQ_ID.nm) - value = domain_state_serializer.serialize({TXN_TYPE: "buy"}) - key = TestDomainRequestHandler.prepare_buy_key(identifier, request_id) + value = domain_state_serializer.serialize({"amount": txn['amount']}) + key = TestDomainRequestHandler.prepare_buy_key(identifier) return key, value @staticmethod - def prepare_buy_key(identifier, request_id): - return sha256('{}:{}'.format(identifier, request_id).encode()).digest() + def prepare_buy_key(identifier): + return sha256('{}:buy'.format(identifier).encode()).digest() def _updateStateWithSingleTxn(self, txn, isCommitted=False): typ = txn.get(TXN_TYPE) @@ -181,6 +181,12 @@ def resetDelays(self): for r in self.replicas: r.outBoxTestStasher.resetDelays() + def resetDelaysClient(self): + logger.debug("{} resetting delays for client".format(self)) + self.nodestack.resetDelays() + self.clientstack.resetDelays() + self.clientIbStasher.resetDelays() + def force_process_delayeds(self): c = self.nodestack.force_process_delayeds() c += self.nodeIbStasher.force_unstash() @@ -190,10 +196,21 @@ def force_process_delayeds(self): "{} processed in 
total".format(self, c)) return c + def force_process_delayeds_for_client(self): + c = self.clientstack.force_process_delayeds() + c += self.clientIbStasher.force_unstash() + logger.debug("{} forced processing of delayed messages for clients, " + "{} processed in total".format(self, c)) + return c + def reset_delays_and_process_delayeds(self): self.resetDelays() self.force_process_delayeds() + def reset_delays_and_process_delayeds_for_clients(self): + self.resetDelaysClient() + self.force_process_delayeds_for_client() + def whitelistNode(self, nodeName: str, *codes: int): if nodeName not in self.whitelistedClients: self.whitelistedClients[nodeName] = set() @@ -251,10 +268,27 @@ def getDomainReqHandler(self): self.reqProcessors, self.bls_bft.bls_store) + def processRequest(self, request, frm): + if request.operation[TXN_TYPE] == 'get_buy': + self.send_ack_to_client(request.key, frm) + + identifier = request.identifier + buy_key = self.reqHandler.prepare_buy_key(identifier) + result = self.reqHandler.state.get(buy_key) + + res = { + f.IDENTIFIER.nm: identifier, + f.REQ_ID.nm: request.reqId, + "buy": result + } + + self.transmitToClient(Reply(res), frm) + else: + super().processRequest(request, frm) node_spyables = [Node.handleOneNodeMsg, Node.handleInvalidClientMsg, - Node.processRequest, + Node.processRequest.__name__, Node.processOrdered, Node.postToClientInBox, Node.postToNodeInBox, diff --git a/plenum/test/testable.py b/plenum/test/testable.py index 3a0d3bbbd5..d4387f1ffb 100644 --- a/plenum/test/testable.py +++ b/plenum/test/testable.py @@ -59,10 +59,6 @@ def count(self, method: SpyableMethod) -> int: def spy(func, is_init, should_spy, spy_log=None): sig = inspect.signature(func) - paramNames = [k for k in sig.parameters] - # TODO Find a better way - if paramNames and paramNames[0] == "self": - paramNames = paramNames[1:] # sets up spylog, but doesn't spy on init def init_only(self, *args, **kwargs): @@ -89,19 +85,10 @@ def wrap(self, *args, **kwargs): r = ex 
raise finally: - params = {} - if kwargs: - for k, v in kwargs.items(): - params[k] = v - if args: - for i, nm in enumerate(paramNames[:len(args)]): - params[nm] = args[i] - - used_log = spy_log - - if hasattr(self, 'spylog'): - used_log = self.spylog - + bound = sig.bind(self, *args, **kwargs) + params = dict(bound.arguments) + params.pop('self', None) + used_log = self.spylog if hasattr(self, 'spylog') else spy_log used_log.append(Entry(start, time.perf_counter(), func.__name__,