Update 0.4.0 branch #217

Status: Closed · wants to merge 36 commits

Commits
ce23cb4 add evm version capabilities checker to NetworkEnv (charles-cooper, Mar 14, 2024)
d83a13c Fix UnboundLocalError: local variable 'snapshot_id' referenced before… (DanielSchiavini, May 6, 2024)
25405a2 Merge pull request #203 from DanielSchiavini/block_id (charles-cooper, May 6, 2024)
a05cd0e feat(etherscan): retry on rate limit (#200) (DanielSchiavini, May 6, 2024)
38633ba fix(readme): Titanoboa is not the largest anymore (#198) (DanielSchiavini, May 6, 2024)
528d685 add some comments (charles-cooper, May 6, 2024)
ed545f2 Merge pull request #185 from charles-cooper/feat/check-evm-version (charles-cooper, May 6, 2024)
df7f9ac feat[test]: propagate dev reason to parent stack (DanielSchiavini, May 7, 2024)
665676e fix leveldb global (charles-cooper, May 7, 2024)
f84bad9 fix rpc db init (charles-cooper, May 7, 2024)
fd7efcb check propagation disabled when another reason is given (DanielSchiavini, May 7, 2024)
dfda318 Merge pull request #207 from DanielSchiavini/dev-reason-propagation (charles-cooper, May 7, 2024)
1a1d1c4 fix creation of RawEvents (cfcfs, May 8, 2024)
a5b8545 add set_code, get_storage, set_storage to boa.env (DanielSchiavini, May 8, 2024)
49a5411 Remove `from_journal` (DanielSchiavini, May 8, 2024)
79ef0a6 Move set checks to NetworkEnv (DanielSchiavini, May 8, 2024)
837c037 Unused fixture (DanielSchiavini, May 8, 2024)
31f3554 test RawEvent event_data (cfcfs, May 8, 2024)
eeb490b Block set_balance in network mode (DanielSchiavini, May 8, 2024)
570bf5d Unintended change (DanielSchiavini, May 8, 2024)
f5e9afc nit: rename to expected_log (cfcfs, May 8, 2024)
7a448e0 fix dynamic_source_filename return for contracts without filename (cfcfs, May 8, 2024)
513cb35 Update workflow (DanielSchiavini, May 8, 2024)
92356b2 Update .github/workflows/integration.yaml (DanielSchiavini, May 8, 2024)
d9dd896 Merge pull request #211 from DanielSchiavini/master (charles-cooper, May 8, 2024)
5442db2 change contract filename checking (cfcfs, May 8, 2024)
e32b85d Merge pull request #212 from cfcfs/fix/coverage-source-filename (charles-cooper, May 8, 2024)
9a85898 Merge pull request #209 from cfcfs/fix/raw-event (charles-cooper, May 8, 2024)
1102f07 test names (DanielSchiavini, May 8, 2024)
4eaa59e Merge pull request #210 from DanielSchiavini/feat/forked-state (charles-cooper, May 8, 2024)
18ab5b6 Fix integration tests (DanielSchiavini, May 10, 2024)
4fcf590 No HTTPError for eth_call (DanielSchiavini, May 10, 2024)
b5b1f9e Check for None, add comments (DanielSchiavini, May 10, 2024)
e021876 Style (DanielSchiavini, May 10, 2024)
0ca0ea7 Check for None (DanielSchiavini, May 10, 2024)
1bf16f9 Merge pull request #213 from DanielSchiavini/fix/integration-tests (charles-cooper, May 10, 2024)
19 changes: 10 additions & 9 deletions .github/workflows/integration.yaml
@@ -5,19 +5,23 @@
name: integration

on:
pull_request_target:
push: # all
pull_request_target:

jobs:
integration:
name: "integration tests (Alchemy: fork mode and Sepolia)"
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- name: Checkout (push)
uses: actions/checkout@v4
if: github.event_name != 'pull_request_target'

# given we use the pull_request_trigger, only allow contributors to run tests with secrets
- name: Check if the user is a contributor
uses: actions/github-script@v7
if: github.event_name == 'pull_request_target'
with:
script: |
const { actor: username, repo: { owner, repo } } = context;
@@ -26,16 +30,13 @@ jobs:
core.setFailed(username + ' is not a contributor');
}

# this will check out the base branch, not the head branch
- uses: actions/checkout@v4
- name: Checkout ${{ github.event.pull_request.head.repo.full_name }}@${{ github.event.pull_request.head.sha }}
uses: actions/checkout@v4
if: github.event_name == 'pull_request_target'
with:
fetch-depth: 0 # we need the history to be able to merge

# now merge the head branch into the base branch, so we can run the tests with the head branch's changes
- name: Merge head branch
run: |
git fetch origin ${{ github.head_ref }}
git merge origin/${{ github.head_ref }} --no-edit
repository: ${{ github.event.pull_request.head.repo.full_name }}
ref: ${{ github.event.pull_request.head.sha }}

- name: Setup Python 3.11
uses: actions/setup-python@v4
4 changes: 3 additions & 1 deletion README.md
@@ -36,7 +36,9 @@ If you are running titanoboa on a local [Vyper](https://github.com/vyperlang/vyp

## Background

Titanoboa (/ˌtaɪtənəˈboʊə/;[1] lit. 'titanic boa') is an extinct genus of very large snakes that lived in what is now La Guajira in northeastern Colombia. They could grow up to 12.8 m (42 ft), perhaps even 14.3 m (47 ft) long and reach a body mass of 730–1,135 kg (1,610–2,500 lb). This snake lived during the Middle to Late Paleocene epoch, around 60 to 58 million years ago, following the extinction of all non-avian dinosaurs. Although originally thought to be an apex predator, the discovery of skull bones revealed that it was more than likely specialized in preying on fish. The only known species is Titanoboa cerrejonensis, the largest snake ever discovered,[2] which supplanted the previous record holder, Gigantophis garstini.[3]
Titanoboa ([/ˌtaɪtənəˈboʊə/](https://en.wikipedia.org/wiki/Help:IPA/English); lit. 'titanic boa') is an [extinct](https://en.wikipedia.org/wiki/Extinction) [genus](https://en.wikipedia.org/wiki/Genus) of giant [boid](https://en.wikipedia.org/wiki/Boidae) (the family that includes all boas and [anacondas](https://en.wikipedia.org/wiki/Anaconda)) snake that lived during the [middle](https://en.wikipedia.org/wiki/Selandian) and [late](https://en.wikipedia.org/wiki/Thanetian) [Paleocene](https://en.wikipedia.org/wiki/Paleocene). Titanoboa was first discovered in the early 2000s by the [Smithsonian Tropical Research Institute](https://en.wikipedia.org/wiki/Smithsonian_Tropical_Research_Institute) who, along with students from the [University of Florida](https://en.wikipedia.org/wiki/University_of_Florida), recovered 186 fossils of Titanoboa from [La Guajira](https://en.wikipedia.org/wiki/La_Guajira) in northeastern [Colombia](https://en.wikipedia.org/wiki/Colombia). It was named and described in 2009 as Titanoboa cerrejonensis, the largest snake ever found at that time. It was originally known only from thoracic vertebrae and ribs, but later expeditions collected parts of the skull and teeth. Titanoboa is in the subfamily [Boinae](https://en.wikipedia.org/wiki/Boinae), being most closely related to other extant boines from Madagascar and the Pacific.

Titanoboa could grow up to 12.8 m (42 ft) long, perhaps even up to 14.3 m (47 ft) long, and weigh around 730–1,135 kg (1,610–2,500 lb). The discovery of Titanoboa cerrejonensis supplanted the previous record holder, [Gigantophis garstini](https://en.wikipedia.org/wiki/Gigantophis), which is known from the [Eocene](https://en.wikipedia.org/wiki/Eocene) of [Egypt](https://en.wikipedia.org/wiki/Egypt). Titanoboa evolved following the extinction of all non-avian [dinosaurs](https://en.wikipedia.org/wiki/Dinosaur), being one of the largest reptiles to evolve after the [Cretaceous–Paleogene extinction event](https://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene_extinction_event). Its vertebrae are very robust and wide, with a pentagonal shape in anterior view, as in other members of Boinae. Although originally thought to be an [apex predator](https://en.wikipedia.org/wiki/Apex_predator), the discovery of skull bones revealed that it was more than likely specialized in [preying on fish](https://en.wikipedia.org/wiki/Piscivore).

## Usage / Quick Start

13 changes: 13 additions & 0 deletions boa/contracts/base_evm_contract.py
@@ -41,10 +41,18 @@ def address(self) -> Address:
return self._address


# TODO: allow only ErrorDetail in here.
# Currently this is list[str|ErrorDetail] (see _trace_for_unknown_contract below)
class StackTrace(list):
def __str__(self):
return "\n\n".join(str(x) for x in self)

@property
def dev_reason(self) -> str | None:
if self.last_frame is None or isinstance(self.last_frame, str):
return None
return self.last_frame.dev_reason

@property
def last_frame(self):
return self[-1]
@@ -75,6 +83,11 @@ def _handle_child_trace(computation, env, return_trace):
child_trace = _trace_for_unknown_contract(child, env)
else:
child_trace = child_obj.stack_trace(child)

if child_trace.dev_reason is not None and return_trace.dev_reason is None:
# Propagate the dev reason from the child frame to the parent
return_trace.last_frame.dev_reason = child_trace.dev_reason

return StackTrace(child_trace + return_trace)
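The propagation rule in this hunk can be exercised in isolation. The sketch below uses a hypothetical `Frame` dataclass as a stand-in for titanoboa's `ErrorDetail` to show that a child frame's dev reason bubbles up only when the parent has none:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Frame:
    # hypothetical stand-in for titanoboa's ErrorDetail
    message: str
    dev_reason: Optional[str] = None


class StackTrace(list):
    def __str__(self):
        return "\n\n".join(str(x) for x in self)

    @property
    def last_frame(self):
        return self[-1]

    @property
    def dev_reason(self) -> Optional[str]:
        # string frames come from unknown contracts and carry no dev reason
        if isinstance(self.last_frame, str):
            return None
        return self.last_frame.dev_reason


def propagate(child: StackTrace, parent: StackTrace) -> StackTrace:
    # copy the child's dev reason upward only if the parent has none
    if child.dev_reason is not None and parent.dev_reason is None:
        parent.last_frame.dev_reason = child.dev_reason
    return StackTrace(child + parent)


child = StackTrace([Frame("assert failed", dev_reason="dev: bad input")])
parent = StackTrace([Frame("external call reverted")])
trace = propagate(child, parent)
assert trace.dev_reason == "dev: bad input"
```

The guard (`parent.dev_reason is None`) is what the later commit "check propagation disabled when another reason is given" tests: a parent that already carries its own reason keeps it.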


1 change: 1 addition & 0 deletions boa/contracts/vyper/event.py
@@ -30,5 +30,6 @@ def __repr__(self):
return f"{self.event_type.name}({args})"


@dataclass
class RawEvent:
event_data: Any
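This one-line change is what the "fix creation of RawEvents" commit delivers: without the `@dataclass` decorator, a plain class body with only an annotation has no generated `__init__`, so `RawEvent(event_data)` raises a `TypeError`. A minimal check:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class RawEvent:
    event_data: Any


# the generated __init__ accepts event_data positionally
ev = RawEvent({"topics": [], "data": b""})
assert ev.event_data["topics"] == []
```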
8 changes: 8 additions & 0 deletions boa/contracts/vyper/vyper_contract.py
@@ -143,6 +143,14 @@ def __init__(
with anchor_compiler_settings(self.compiler_data):
_ = compiler_data.bytecode, compiler_data.bytecode_runtime

if (capabilities := getattr(env, "capabilities", None)) is not None:
compiler_evm_version = self.compiler_data.settings.evm_version
if not capabilities.check_evm_version(compiler_evm_version):
msg = "EVM version mismatch! tried to deploy "
msg += f"{compiler_evm_version} but network only has "
msg += f"{capabilities.describe_capabilities()}"
raise Exception(msg)

@cached_property
def abi(self):
return build_abi_output(self.compiler_data)
2 changes: 1 addition & 1 deletion boa/coverage.py
@@ -65,7 +65,7 @@ def _contract_for_frame(self, frame):

def dynamic_source_filename(self, filename, frame):
contract = self._contract_for_frame(frame)
if contract is None:
if contract is None or contract.filename is None:
return None

return str(contract.filename)
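The fix above extends the guard to contracts whose `filename` is `None` (e.g. contracts compiled from source strings); returning `None` tells the coverage plugin there is no dynamic source file, instead of producing the string `"None"`. A sketch with stand-in contract objects:

```python
from types import SimpleNamespace


def dynamic_source_filename(contract):
    # mirror of the guard: no contract at all, or a contract without a filename
    if contract is None or contract.filename is None:
        return None
    return str(contract.filename)


assert dynamic_source_filename(None) is None
assert dynamic_source_filename(SimpleNamespace(filename=None)) is None
assert dynamic_source_filename(SimpleNamespace(filename="a.vy")) == "a.vy"
```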
11 changes: 10 additions & 1 deletion boa/environment.py
@@ -311,9 +311,18 @@ def _hook_trace_computation(self, computation, contract=None):
child_contract = self._lookup_contract_fast(child.msg.code_address)
self._hook_trace_computation(child, child_contract)

def get_code(self, address):
def get_code(self, address: _AddressType) -> bytes:
return self.evm.get_code(Address(address))

def set_code(self, address: _AddressType, code: bytes) -> None:
self.evm.set_code(Address(address), code)

def get_storage(self, address: _AddressType, slot: int) -> int:
return self.evm.get_storage(Address(address), slot)

def set_storage(self, address: _AddressType, slot: int, value: int) -> None:
self.evm.set_storage(Address(address), slot, value)

# function to time travel
def time_travel(
self,
26 changes: 22 additions & 4 deletions boa/explorer.py
@@ -1,18 +1,36 @@
import json
from time import sleep
from typing import Optional

import requests

SESSION = requests.Session()


def _fetch_etherscan(uri: str, api_key: Optional[str] = None, **params) -> dict:
def _fetch_etherscan(
uri: str, api_key: Optional[str] = None, num_retries=10, backoff_ms=400, **params
) -> dict:
"""
Fetch data from Etherscan API.
Offers a simple caching mechanism to avoid redundant queries.
Retries if rate limit is reached.
:param uri: Etherscan API URI
:param api_key: Etherscan API key
:param num_retries: Number of retries
:param backoff_ms: Backoff in milliseconds
:param params: Additional query parameters
:return: JSON response
"""
if api_key is not None:
params["apikey"] = api_key

res = SESSION.get(uri, params=params)
res.raise_for_status()
data = res.json()
for _ in range(num_retries):
res = SESSION.get(uri, params=params)
res.raise_for_status()
data = res.json()
if data.get("result") != "Max rate limit reached":
break
sleep(backoff_ms / 1000)

if int(data["status"]) != 1:
raise ValueError(f"Failed to retrieve data from API: {data}")
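The retry loop can be distilled into a generic helper. This sketch fakes a session that rate-limits twice before succeeding (the helper and fake names are illustrative, not the real titanoboa API):

```python
import time


def fetch_with_retry(fetch, num_retries=10, backoff_ms=400):
    # retry while the API reports its rate-limit sentinel value
    for _ in range(num_retries):
        data = fetch()
        if data.get("result") != "Max rate limit reached":
            break
        time.sleep(backoff_ms / 1000)
    if int(data["status"]) != 1:
        raise ValueError(f"Failed to retrieve data from API: {data}")
    return data


calls = {"n": 0}


def fake_fetch():
    # rate-limited on the first two attempts, succeeds on the third
    calls["n"] += 1
    if calls["n"] < 3:
        return {"status": "0", "result": "Max rate limit reached"}
    return {"status": "1", "result": "0x6001"}


data = fetch_with_retry(fake_fetch, backoff_ms=1)
assert data["result"] == "0x6001"
```

Note that if every attempt is rate-limited, the final status check still raises, so the caller always sees either a valid payload or an error.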
76 changes: 71 additions & 5 deletions boa/network.py
@@ -8,7 +8,7 @@
from eth_account import Account
from requests.exceptions import HTTPError

from boa.environment import Env
from boa.environment import Env, _AddressType
from boa.rpc import (
RPC,
EthereumRPC,
@@ -80,6 +80,62 @@ def send_transaction(self, tx_data):
return {"hash": txhash}


class Capabilities:
"""
Describes the capabilities of a chain (right now, EVM opcode support)
"""

def __init__(self, rpc):
self._rpc = rpc

def _get_capability(self, hex_bytecode):
try:
self._rpc.fetch("eth_call", [{"to": None, "data": hex_bytecode}])
return True
except RPCError:
return False

@cached_property
def has_push0(self):
# PUSH0
return self._get_capability("0x5f")

@cached_property
def has_mcopy(self):
# PUSH1 0 DUP1 DUP1 MCOPY
return self._get_capability("0x600080805E")

@cached_property
def has_transient(self):
# PUSH1 0 DUP1 TLOAD
return self._get_capability("0x60005C")

@cached_property
def has_cancun(self):
return self.has_shanghai and self.has_mcopy and self.has_transient

@cached_property
def has_shanghai(self):
return self.has_push0

def describe_capabilities(self):
if not self.has_shanghai:
return "pre-shanghai"
if not self.has_cancun:
return "shanghai"
return "cancun"

def check_evm_version(self, evm_version):
if evm_version == "cancun":
return self.has_cancun
if evm_version == "shanghai":
return self.has_shanghai
# don't care about pre-shanghai since there aren't really new
# opcodes between constantinople and shanghai (and pre-constantinople
# is no longer even supported by vyper compiler).
return True
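The probe works by `eth_call`ing tiny bytecode snippets that exercise version-specific opcodes (PUSH0 for Shanghai; MCOPY and TLOAD for Cancun): a node that lacks the opcode rejects the call with an RPC error. A self-contained sketch against a fake RPC that only supports Shanghai opcodes:

```python
from functools import cached_property


class RPCError(Exception):
    pass


class FakeRPC:
    # pretends to be a shanghai node: PUSH0 ok, MCOPY/TLOAD unsupported
    SUPPORTED = {"0x5f"}

    def fetch(self, method, params):
        if params[0]["data"] not in self.SUPPORTED:
            raise RPCError("invalid opcode")
        return "0x"


class Capabilities:
    def __init__(self, rpc):
        self._rpc = rpc

    def _get_capability(self, hex_bytecode):
        try:
            self._rpc.fetch("eth_call", [{"to": None, "data": hex_bytecode}])
            return True
        except RPCError:
            return False

    @cached_property
    def has_push0(self):
        return self._get_capability("0x5f")  # PUSH0

    @cached_property
    def has_mcopy(self):
        return self._get_capability("0x600080805E")  # PUSH1 0 DUP1 DUP1 MCOPY

    @cached_property
    def has_transient(self):
        return self._get_capability("0x60005C")  # PUSH1 0 TLOAD

    @property
    def has_shanghai(self):
        return self.has_push0

    @property
    def has_cancun(self):
        return self.has_shanghai and self.has_mcopy and self.has_transient

    def describe_capabilities(self):
        if not self.has_shanghai:
            return "pre-shanghai"
        if not self.has_cancun:
            return "shanghai"
        return "cancun"

    def check_evm_version(self, evm_version):
        if evm_version == "cancun":
            return self.has_cancun
        if evm_version == "shanghai":
            return self.has_shanghai
        return True


caps = Capabilities(FakeRPC())
assert caps.describe_capabilities() == "shanghai"
```

Each probe is a `cached_property`, so the network is queried at most once per opcode; this is what lets `VyperContract.__init__` (above) refuse to deploy, say, Cancun bytecode to a Shanghai-only chain.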


class NetworkEnv(Env):
"""
An Env object which can be swapped in via `boa.set_env()`.
@@ -117,31 +173,32 @@ def __init__(
self._gas_price = None

self.tx_settings = TransactionSettings()
self.capabilities = Capabilities(rpc)

@cached_property
def _rpc_has_snapshot(self):
try:
snapshot_id = self._rpc.fetch("evm_snapshot", [])
self._rpc.fetch("evm_revert", [snapshot_id])
return True
except RPCError:
except (RPCError, HTTPError):
return False

# OVERRIDES
@contextlib.contextmanager
def anchor(self):
if not self._rpc_has_snapshot:
raise RuntimeError("RPC does not have `evm_snapshot` capability!")
block_number = self.evm.patch.block_number
snapshot_id = self._rpc.fetch("evm_snapshot", [])
try:
block_id = self.evm.patch.block_id
snapshot_id = self._rpc.fetch("evm_snapshot", [])
yield
# note we cannot call super.anchor() because vm/accountdb fork
# state is reset after every txn.
finally:
self._rpc.fetch("evm_revert", [snapshot_id])
# wipe forked state
self._reset_fork(block_id)
self._reset_fork(block_number)

# add account, or "Account-like" object. MUST expose
# `sign_transaction` or `send_transaction` method!
@@ -438,3 +495,12 @@ def _send_txn(self, from_, to=None, gas=None, value=None, data=None):

t_obj = TraceObject(trace) if trace is not None else None
return receipt, t_obj

def set_balance(self, address, value):
raise NotImplementedError("Cannot use set_balance in network mode")

def set_code(self, address: _AddressType, code: bytes) -> None:
raise NotImplementedError("Cannot use set_code in network mode")

def set_storage(self, address: _AddressType, slot: int, value: int) -> None:
raise NotImplementedError("Cannot use set_storage in network mode")
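Two behaviors from this file can be sketched together: `anchor()` wraps a block in `evm_snapshot`/`evm_revert` so state changes are rolled back on exit, and mutators like `set_balance` are blocked outright because a remote chain's state cannot be patched locally. The classes below are illustrative stand-ins, with snapshot semantics faked over a dict:

```python
import contextlib


class FakeRPC:
    # toy evm_snapshot/evm_revert semantics over a dict of balances
    def __init__(self):
        self.state = {"alice": 100}
        self._snapshots = {}
        self._next_id = 0

    def fetch(self, method, params):
        if method == "evm_snapshot":
            self._next_id += 1
            self._snapshots[self._next_id] = dict(self.state)
            return self._next_id
        if method == "evm_revert":
            self.state = self._snapshots.pop(params[0])
            return True


class NetworkEnv:
    def __init__(self, rpc):
        self._rpc = rpc

    @contextlib.contextmanager
    def anchor(self):
        snapshot_id = self._rpc.fetch("evm_snapshot", [])
        try:
            yield
        finally:
            # roll back whatever happened inside the block
            self._rpc.fetch("evm_revert", [snapshot_id])

    def set_balance(self, address, value):
        raise NotImplementedError("Cannot use set_balance in network mode")


env = NetworkEnv(FakeRPC())
with env.anchor():
    env._rpc.state["alice"] = 0  # mutate inside the anchor
assert env._rpc.state["alice"] == 100  # rolled back on exit
```

Taking the snapshot inside the `try` in the real PR (and reverting in `finally`) ensures a failed `evm_snapshot` call cannot leave `snapshot_id` unbound, which is the `UnboundLocalError` fixed early in this branch.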
3 changes: 3 additions & 0 deletions boa/test/plugin.py
@@ -56,6 +56,9 @@ def pytest_collection_modifyitems(config, items):


def pytest_fixture_setup(fixturedef, request):
if request.node.get_closest_marker("ignore_isolation"):
return

global _id
task_id = _id
_id += 1
11 changes: 10 additions & 1 deletion boa/util/leveldb.py
@@ -5,7 +5,8 @@


class LevelDB(BaseDB):
# Creates db as a class variable to avoid level db lock error
_GLOBAL = None

def __init__(self, db_path, max_open_files: int = None) -> None:
self.db = plyvel.DB(
db_path,
@@ -14,6 +15,14 @@ def __init__(self, db_path, max_open_files: int = None) -> None:
max_open_files=max_open_files,
)

@classmethod
# Creates db as a class variable to avoid level db lock error
# create the singleton db object
def create(cls, *args, **kwargs):
if cls._GLOBAL is None:
cls._GLOBAL = cls(*args, **kwargs)
return cls._GLOBAL

def __getitem__(self, key: bytes) -> bytes:
v = self.db.get(key)
if v is None:
25 changes: 15 additions & 10 deletions boa/vm/fork.py
@@ -37,28 +37,33 @@ class CachingRPC(RPC):
def __init__(self, rpc: RPC, cache_file: str = DEFAULT_CACHE_DIR):
# (default to memory db plyvel not found or cache_file is None)
self._rpc = rpc
self._init_mem_db()
if cache_file is not None:

self._cache_file = cache_file
self._init_db()

# _loaded is a cache for the constructor.
# reduces fork time after the first fork.
_loaded: dict[tuple[str, str], "CachingRPC"] = {}
_pid: int = os.getpid() # so we can detect if our fds are bad

def _init_db(self):
if self._cache_file is not None:
try:
from boa.util.leveldb import LevelDB

print("(using leveldb)", file=sys.stderr)

cache_file = os.path.expanduser(cache_file)
cache_file = os.path.expanduser(self._cache_file)
# use CacheDB as an additional layer over disk
# (ideally would use leveldb lru cache but it's not configurable
# via LevelDB API).
self._db = CacheDB(LevelDB(cache_file), cache_size=1024 * 1024) # type: ignore
leveldb = LevelDB.create(cache_file)
self._db = CacheDB(leveldb, cache_size=1024 * 1024) # type: ignore
return
except ImportError:
# plyvel not found
pass

# _loaded is a cache for the constructor.
# reduces fork time after the first fork.
_loaded: dict[tuple[str, str], "CachingRPC"] = {}
_pid: int = os.getpid() # so we can detect if our fds are bad

def _init_mem_db(self):
self._db = MemoryDB(lrudict(1024 * 1024))

@property
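The `create()` classmethod from the leveldb change is a classic process-wide singleton: LevelDB takes an exclusive file lock, so opening the same database twice in one process raises a lock error, and routing construction through a cached class attribute sidesteps that. A minimal sketch, with the plyvel handle replaced by a dict:

```python
class LevelDB:
    _GLOBAL = None  # cached singleton instance

    def __init__(self, db_path):
        # the real code opens plyvel.DB(db_path), which holds an exclusive lock
        self.db_path = db_path
        self.db = {}

    @classmethod
    def create(cls, *args, **kwargs):
        # only the first call constructs; later calls reuse the same handle
        if cls._GLOBAL is None:
            cls._GLOBAL = cls(*args, **kwargs)
        return cls._GLOBAL


a = LevelDB.create("/tmp/boa-cache-db")  # illustrative path
b = LevelDB.create("/tmp/boa-cache-db")
assert a is b
```

One design consequence: arguments to later `create()` calls are ignored, so every caller in the process shares the first database path — which is exactly what `CachingRPC._init_db` wants for its shared fork cache.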