
AVM: Adding bmodexp #6140

Open: wants to merge 7 commits into base: master
Conversation

mangoplane (Author)

Summary

Adds the new opcode bmodexp as described in issue #6139 to support modular exponentiation involving byte strings of up to 4096 bytes. Closes #6139

Test Plan

  • Relevant tests added to assembler_test.go, evalStateful_test.go, & eval_test.go
  • Opcode is tested with a range of test vectors with function TestBytesModExp, covering panic cases, edge cases, acceptance cases and failure cases. Test vectors were generated manually or with Python.
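For readers reproducing the vectors: the opcode's documented semantics (A raised to the Bth power modulo C, with A, B, C interpreted as big-endian unsigned integers and failure on a zero modulus) correspond to `math/big`'s `Exp`, so expected outputs can be derived with a standalone sketch like the following. This is illustrative, not the PR's implementation:

```go
package main

import (
	"fmt"
	"math/big"
)

// bmodexp derives an expected test vector for the opcode's documented
// semantics: A^B mod C on big-endian unsigned integers, failing when C is zero.
func bmodexp(a, b, c []byte) ([]byte, error) {
	base := new(big.Int).SetBytes(a)
	exp := new(big.Int).SetBytes(b)
	mod := new(big.Int).SetBytes(c)
	if mod.Sign() == 0 {
		return nil, fmt.Errorf("bmodexp: modulus is zero")
	}
	return new(big.Int).Exp(base, exp, mod).Bytes(), nil
}

func main() {
	out, err := bmodexp([]byte{0x04}, []byte{0x03}, []byte{0x07})
	fmt.Println(out, err) // 4^3 mod 7 = 1
}
```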

@CLAassistant

CLAassistant commented Sep 21, 2024

CLA assistant check
All committers have signed the CLA.

@mangoplane mangoplane changed the title New opcode modexp AVM: Adding bmodexp Sep 21, 2024

codecov bot commented Sep 21, 2024

Codecov Report

Attention: Patch coverage is 83.72093% with 7 lines in your changes missing coverage. Please review.

Project coverage is 56.26%. Comparing base (39f7485) to head (468210e).

| Files with missing lines | Patch % | Lines |
|---|---:|---|
| data/transactions/logic/opcodes.go | 75.86% | 4 Missing, 3 Partials ⚠️ |
Additional details and impacted files
```
@@            Coverage Diff             @@
##           master    #6140      +/-   ##
==========================================
- Coverage   56.28%   56.26%   -0.03%
==========================================
  Files         494      494
  Lines       69958    69999      +41
==========================================
+ Hits        39375    39384       +9
- Misses      27912    27936      +24
- Partials     2671     2679       +8
```


@jannotti (Contributor) left a comment:


This is looking quite good so far.

Review comment on data/transactions/logic/eval.go (outdated, resolved):

```go
prev := last - 1  // y
pprev := last - 2 // x

if len(cx.Stack[last].Bytes) > maxStringSize || len(cx.Stack[prev].Bytes) > maxStringSize || len(cx.Stack[pprev].Bytes) > maxStringSize {
```
Contributor:

I'm doing this from phone, but maxStringSize is the AVM max of 4096? That's unusual for the bmath functions, they normally have a maximum of 128 bytes. I suppose you want more because RSA uses really large keys?

At any rate, it's impossible to supply inputs that are larger than this, so there's no need to check in the opcode.

Author:

That makes sense. Yes, the size is intended to support RSA which has really large keys. Is it okay if we allow the opcode to support this size?

Contributor:

It's ok with me, assuming we can get the cost function to properly account for long inputs. It seems like bmodexp wouldn't be very interesting if it can't handle common RSA key sizes.

I'd like to support bigger inputs on the other b opcodes too, if we can adjust cost functions appropriately. They were first done before we could have the cost depend on size. b+, for example, would almost certainly be easy to adjust, b* might be more complex than simple length dependence. I'm somewhat worried that bmodexp is going to be tricky. Anyway, that should be a separate PR someday. (Do your RSA implementations require operating on RSA keys with any other operations?)

Contributor:

If it ends up being fast enough, we could pick a cost that accounts for the worst case, which I suppose would be three very long inputs.

Or perhaps only scales based on one of the parameters, but accounts for the worst case with the others. (I think bmodexp should be linear with respect to the length of the exponent, since it basically performs one operation per bit.)

Contributor:

I just read your discussion on cost from the issue more closely. The dependence on the square of the lengths is unfortunate because it implies a custom cost function. Currently, all costs are "data directed", which is nice because it means we can create specs automatically - we can generate text that describes the cost from the constants provided in the opcode table.

I suppose we can add a way to provide both a Go function and a textual description directly in the .go source code. That is somewhat more fragile from the standpoint of modifications to the way we present the spec, but it doesn't seem so bad. It's probably also necessary if we ever want to support larger inputs to b*, which I suspect is where this is really coming from.

Author:

> If it ends up being fast enough, we could pick a cost that accounts for the worst case, which I suppose would be three very long inputs.

This is very tempting, and would mean minimal complexity. I anticipate bmodexp rarely being used, except for applications where the inputs are large such as RSA.

> Or perhaps only scales based on one of the parameters, but accounts for the worst case with the others. (I think bmodexp should be linear with respect to the length of the exponent, since it basically performs one operation per bit.)

This is a good idea that might simplify the calculation to allow an existing linear cost model to be used.

> Do your RSA implementations require operating on RSA keys with any other operations?

The only operators for RSA besides modexp are the less than and equal operators. These are efficiently implemented with a 512 bit digit partitioning algorithm available in Puya-Bignumber.

I will explore the suggestions to try to linearise the cost model, and figure out the max cost to see if it's cheap enough that we can use a constant. In my opinion, the complexity introduced by a non-linear cost model isn't worth it if we can find a bounding linear model that's good enough. I'll present the results of the linear model in the discussion thread and go from there.

Resolved review threads (outdated): data/transactions/logic/evalStateful_test.go, data/transactions/logic/eval_test.go, data/transactions/logic/opcodes.go
@jannotti jannotti self-assigned this Sep 21, 2024
@mangoplane (Author)

I have addressed your feedback with a recent update, per my understanding. Let me know if there's anything that I may have misinterpreted.

@giuliop (Contributor)

giuliop commented Sep 24, 2024

bmodexp is useful also in different scenarios with smaller inputs so we should not penalize those cost-wise, for instance all ZKP protocols based on elliptic curves use it and they try to operate on the smallest field possible while preserving security for efficiency.

As a concrete example, smart contract verifiers for zk-proofs based on the plonk protocol generated by AlgoPlonk call a teal implementation of it 5 times, using 32-byte inputs for both curve BN254 and BLS12-381.

Considering that a BN254 verifier consumes ~145,000 opcode budget and a BLS12-381 verifier ~185,000, bmodexp would really help bring down that cost.

@giuliop (Contributor)

giuliop commented Sep 24, 2024

> I'm doing this from phone, but maxStringSize is the AVM max of 4096? That's unusual for the bmath functions, they normally have a maximum of 128 bytes. I suppose you want more because RSA uses really large keys?

Isn't the maximum 64 bytes?

If we can have a plausible linear model for the smaller inputs currently supported by the b-operations (64 bytes?) which breaks for very large inputs, a solution we might consider for consistency is to offer bmodexp that operates on bigint like the other b-operations and add a separate fixed-cost opcode that operates on 4096 byte strings, e.g. modexpstring

@mangoplane (Author)

mangoplane commented Sep 24, 2024

> I'm doing this from phone, but maxStringSize is the AVM max of 4096? That's unusual for the bmath functions, they normally have a maximum of 128 bytes. I suppose you want more because RSA uses really large keys?

> Isn't the maximum 64 bytes?

> If we can have a plausible linear model for the smaller inputs currently supported by the b-operations (64 bytes?) which breaks for very large inputs, a solution we might consider for consistency is to offer bmodexp that operates on bigint like the other b-operations and add a separate fixed-cost opcode that operates on 4096 byte strings, e.g. modexpstring

You make some good points. However, in my opinion, we should offer a single opcode to reduce complexity. With a sufficiently accurate cost model, such as the one proposed in this GitHub comment, the estimated cost will closely reflect the actual cost within a small margin of error. For example, using the log-polynomial model, the cost for 64-byte inputs is 105, which is intuitively reasonable and relatively small. To simplify the cost in that range, we could use a piecewise function where the cost is 105 for all inputs up to 64 bytes in length, and longer inputs follow the advanced cost model. The log-polynomial model also accurately describes the cost for much larger inputs.

Additionally, this seems to align with the long-term vision of allowing byte math opcodes, such as b+, to support up to 4096 bytes each, if I'm not mistaken based on the above discussion.

The opcode should work for inputs up to 4096 bytes, with several test cases exceeding the 64-byte limit. As a sanity check I think it's worth adding another test case to verify the maximum supported length of 4096 bytes.

@jannotti (Contributor)

Just to close the loop, is the suggestion that `x = exponent_length * max(base_length, modulus_length)**2` will work for the cost function (with appropriate scaling)?

I would be on board with that, and I'll just have to write an "escape hatch" to allow an arbitrary cost function to be written.

@jannotti jannotti mentioned this pull request Sep 25, 2024
@algorandskiy (Contributor)

Based on this eval, a log-poly formula gives a better approximation.

@giuliop (Contributor)

giuliop commented Sep 25, 2024

> Just to close the loop, is the suggestion that `x = exponent_length * max(base_length, modulus_length)**2` will work for the cost function (with appropriate scaling)?

Looks reasonable to me; the number of iterations is exponent_length and the work performed per iteration is, to a first approximation, (base * base) mod modulus, so proportional to max(base_length, modulus_length)**2

@mangoplane (Author)

mangoplane commented Sep 25, 2024

> Just to close the loop, is the suggestion that `x = exponent_length * max(base_length, modulus_length)**2` will work for the cost function (with appropriate scaling)?
>
> I would be on board with that, and I'll just have to write an "escape hatch" to allow an arbitrary cost function to be written.

I tried out that model after reading EIPS-2565, but I found it highly inaccurate ($R^2$ of 0.5) compared to the log-poly formula ($R^2$ of 0.96)

`x = c1*log(C) + c2*log(A)*log(C) + c3*log(B)*log(C) + c4*log(A)*log(B)*log(C)`

where c1, c2, c3, and c4 are coefficients calculated by linear regression.

I think this is because the log transformation causes any exponents and multiplications to become coefficients and additions, respectively. The algorithm used for modexp is likely very advanced, making use of all the best approaches known.

It's possible that my data isn't accurate, although I don't see how that could be. Perhaps it's worth trying to reproduce my results. I can provide my benchmark code upon request.

@giuliop (Contributor)

giuliop commented Sep 26, 2024

I benchmarked bmodexp on my own and exponent_length * max(base_length, modulus_length)**2 looks reasonable to me.

I benchmarked using the same byte length for all three inputs, base, mode, and exp and I get:

| Benchmark | Iterations | Time (ns/op) | Extra (extra/op) |
|---|---:|---:|---:|
| BenchmarkBModExp/modexp_32_bytes_inputs-12 | 78,057 | 15,019 | 4.000 |
| BenchmarkBModExp/modexp_64_bytes_inputs-12 | 18,218 | 66,383 | 4.000 |
| BenchmarkBModExp/modexp_128_bytes_inputs-12 | 3,776 | 322,610 | 4.000 |
| BenchmarkBModExp/modexp_256_bytes_inputs-12 | 556 | 2,239,315 | 4.000 |
| BenchmarkBModExp/modexp_512_bytes_inputs-12 | 63 | 18,215,421 | 4.000 |
| BenchmarkBModExp/modexp_1024_bytes_inputs-12 | 8 | 142,717,656 | 4.000 |
| BenchmarkBModExp/modexp_2048_bytes_inputs-12 | 1 | 1,147,616,375 | 4.000 |
| BenchmarkBModExp/modexp_4096_bytes_inputs-12 | 1 | 9,566,156,542 | 4.000 |

If we divide the ns/op by inputs_length**3 we get:

| Inputs Len | Cost (ns/op) | Inputs Len³ | Cost / Inputs Len³ |
|---:|---:|---:|---:|
| 32 | 15,019 | 32,768 | 0.46 |
| 64 | 66,383 | 262,144 | 0.25 |
| 128 | 322,610 | 2,097,152 | 0.15 |
| 256 | 2,239,315 | 16,777,216 | 0.13 |
| 512 | 18,215,421 | 134,217,728 | 0.14 |
| 1024 | 142,717,656 | 1,073,741,824 | 0.13 |
| 2048 | 1,147,616,375 | 8,589,934,592 | 0.13 |
| 4096 | 9,566,156,542 | 68,719,476,736 | 0.14 |

Looks like the cost stabilizes after inputs length of 128 bytes.
I think we can make the cost proportional to exponent_length * max(base_length, modulus_length)**2 and perhaps add a constant factor to account for the higher relative cost of small inputs
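The proposal above can be sketched as a cost function in Go. The scaling divisor and constant term below are illustrative placeholders, not values tuned against the benchmarks:

```go
package main

import "fmt"

// proposedCost sketches the shape suggested above:
// cost proportional to exponent_length * max(base_length, modulus_length)^2,
// plus a constant term to cover the higher relative cost of small inputs.
// scaleDiv and baseCost are illustrative placeholders, not tuned values.
const (
	scaleDiv = 100 // scaling divisor (placeholder)
	baseCost = 200 // constant term for small-input overhead (placeholder)
)

func proposedCost(baseLen, expLen, modLen int) int {
	n := baseLen
	if modLen > n {
		n = modLen
	}
	return expLen*n*n/scaleDiv + baseCost
}

func main() {
	fmt.Println(proposedCost(32, 32, 32))    // small inputs: dominated by the constant term
	fmt.Println(proposedCost(256, 256, 256)) // large inputs: dominated by the cubic growth
}
```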

This is the benchmark function I'm using:

```go
func BenchmarkBModExp(b *testing.B) {
	for _, byteLen := range []int{32, 64, 128, 256, 512, 1024, 2048, 4096} {
		var base, exp, mod []byte
		for _, input := range []*[]byte{&base, &exp, &mod} {
			// create random inputs of the given length, we are using math/rand without seeding,
			// so it will generate the same pseudorandom numbers for each run
			*input = make([]byte, byteLen)
			for i := range *input {
				(*input)[i] = byte(rand.Intn(256))
			}
		}
		ops := fmt.Sprintf("byte 0x%x; byte 0x%x; byte 0x%x; bmodexp; pop", base, exp, mod)
		b.Run(fmt.Sprintf("modexp_%d_bytes_inputs", byteLen), func(b *testing.B) {
			benchmarkOperation(b, "", ops, "int 1")
		})
	}
}
```

@algorandskiy (Contributor)

algorandskiy commented Sep 26, 2024

@mangoplane could you commit the benchmarks into crypto_test.go - that's where other cost evaluation benchmarks for crypto opcodes live - I'll try to repro/replay the notebook.

@giuliop how about re-running modexp_1024+ (like -count=64 I guess) to get a better avg value?

@giuliop (Contributor)

giuliop commented Sep 27, 2024

> @giuliop how about re-running modexp_1024+ (like -count=64 I guess) to get a better avg value?

Sure, I re-ran using -benchtime=64x to get 64 runs (except for 4096 bytes, which would time out, so I ran it 32 times) and the results are in line with before:

| Benchmark | Trials | Time per op (ns/op) | Extra (extra/op) |
|---|---:|---:|---:|
| BenchmarkBModExp1/modexp_1024_bytes_inputs-12 | 64 | 142,368,141 | 4.000 |
| BenchmarkBModExp1/modexp_2048_bytes_inputs-12 | 64 | 1,139,738,240 | 4.000 |
| BenchmarkBModExp1/modexp_4096_bytes_inputs-12 | 32 | 9,443,533,249 | 4.000 |

Looks to me like we can use exponent_length * max(base_length, modulus_length)**2 and not make it more complicated than that

@mangoplane (Author)

Thanks @giuliop for the insightful benchmarks. I replicated your results when the base length is at least that of the modulus. For smaller bases, the cost is overestimated, explaining the low $R^2$ I had for my test data. Since most bmodexp applications (like RSA) involve base length ≥ modulus length, I agree that exponent_length * max(base_length, modulus_length)**2 is a sufficient approximation.

And @algorandskiy, I have provided my benchmark code to reproduce the results of the notebook. Note that it isn't using a seed, so each run will produce slightly different results, but the overall trend should be the same.

@giuliop (Contributor)

giuliop commented Oct 2, 2024

Looking back at the benchmark, the 4096-byte case for all three parameters takes 9 sec to run, so it's not feasible on the AVM unfortunately.
@jannotti what would you consider the upper limit in terms of how long an opcode can run?

@mangoplane (Author)

I'm about to modify opcodes.go and its dependencies to accommodate a custom cost function. Below is my current plan.

  • Extend the linearCost type with a field called customCost of type CustomCost.
  • CustomCost is a type that holds a compute function, which calculates the cost from the stack and is defined on a per-opcode basis, plus a string that describes the function for documentation generation.
  • Update linearCost.compute so that CustomCost.compute is applied to the input when customCost is defined; otherwise, the cost is computed as usual. A similar adjustment must be made to linearCost.docCost.

This would involve minimal changes to the codebase, if I'm not mistaken. The opcodes currently assume a constant cost, a linear cost based on a single input, or a linear cost dependent on an immediate (enum values that follow the opcode to describe its configuration). This addition keeps that structure, with special exceptions only when a CustomCost instance is supplied.
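Roughly, the plan could look like this. The type and field names mirror the ones proposed above, but the surrounding linearCost definition is a simplified stand-in, not the actual go-algorand source:

```go
package main

import "fmt"

// StackValue stands in for the AVM stack element type (hypothetical here).
type StackValue struct{ Bytes []byte }

// CustomCost pairs a per-opcode cost function with a textual description
// used when generating the spec, as proposed above.
type CustomCost struct {
	compute func(stack []StackValue) int
	docCost string
}

// linearCost is a simplified stand-in for the existing cost model:
// baseCost plus chunkCost per chunkSize bytes of the top stack element,
// with an optional customCost that overrides the linear computation when set.
type linearCost struct {
	baseCost   int
	chunkCost  int
	chunkSize  int
	customCost *CustomCost
}

func (lc linearCost) compute(stack []StackValue) int {
	if lc.customCost != nil {
		return lc.customCost.compute(stack)
	}
	cost := lc.baseCost
	if lc.chunkCost > 0 && lc.chunkSize > 0 {
		last := len(stack) - 1
		cost += lc.chunkCost * ((len(stack[last].Bytes) + lc.chunkSize - 1) / lc.chunkSize)
	}
	return cost
}

func main() {
	plain := linearCost{baseCost: 10}
	custom := linearCost{customCost: &CustomCost{
		compute: func(stack []StackValue) int { return 10 * len(stack[len(stack)-1].Bytes) },
		docCost: "10 per byte of A",
	}}
	stack := []StackValue{{Bytes: make([]byte, 8)}}
	fmt.Println(plain.compute(stack), custom.compute(stack)) // 10 80
}
```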

Interested in hearing everyone's thoughts on this approach. If people think it's a good idea, I am happy to proceed with its implementation.

@jannotti (Contributor)

jannotti commented Oct 3, 2024

> Looking back at the benchmark, the 4096-byte case for all three parameters takes 9 sec to run, so it's not feasible on the AVM unfortunately. @jannotti what would you consider the upper limit in terms of how long an opcode can run?

That's a problem! We aim for 10,000 txn/sec! The most you can execute in a single txn is 16 x 20,000 "ticks" (by pooling logicsigs). A "tick" is about 15 ns, so a single program can run for about 320,000 x 15e-9 ≈ 0.0048 secs.

So, is it worth getting the cost function "right" when that simply means that large operations will be (rightly) impossible, or should we drastically limit the lengths and perhaps use a simpler cost? Either way, operations on really large keys seem impossible.

All is not lost, RSA keys seem to be at most 4k BITS, not BYTES. Does that end up being 8^3 cheaper because of the squaring of the multiplicand length, and the factor of 8 from the exponent?

9/512 = 0.017 which starts to become almost feasible.

@algorandskiy (Contributor)

I reproduced the results and found that the log-poly model fits my data points very well:

```
log_A coefficient: 0.0
log_B coefficient: 0.30924400959422704
log_C coefficient: 1.2483621313283038
log_A_log_B coefficient: 0.0
log_A_log_C coefficient: 0.0
log_B_log_C coefficient: 0.10928549552529052
log_A_log_B_log_C coefficient: 8.62725548783673e-05
Intercept: 0.0
R-squared (on log scale): 0.9964
Root Mean Squared Error: 98143.7211 gas units
```

(plot: fitted log-poly model vs. benchmark data points)

@giuliop (Contributor)

giuliop commented Oct 6, 2024

> All is not lost, RSA keys seem to be at most 4k BITS, not BYTES. Does that end up being 8^3 cheaper because of the squaring of the multiplicand length, and the factor of 8 from the exponent?
>
> 9/512 = 0.017 which starts to become almost feasible.

Indeed, benchmarking at 512-byte length for all parameters gives 0.018 sec:

| Inputs Len (bytes) | ns/op | Inputs Len³ | ns / Inputs Len³ |
|---:|---:|---:|---:|
| 32 | 15,019 | 32,768 | 0.46 |
| 64 | 66,383 | 262,144 | 0.25 |
| 128 | 322,610 | 2,097,152 | 0.15 |
| 256 | 2,239,315 | 16,777,216 | 0.13 |
| 512 | 18,215,421 | 134,217,728 | 0.14 |

That means though that 512 bytes is still too large given that the max for a single program is 20,000 * 16 * 15ns = 0.0048 sec , so the max length for bmodexp's parameters can be 256 bytes.

This assumes we want to cap all parameters at the same size.
@mangoplane for your use case what is the max size of each of the three parameters?

If the assumption that we want to cap all parameters at the same size holds, I guess we have two options:

  1. Introduce bmodexp for up to 64 bytes parameters for consistency with all other b-operations
  2. Introduce bmodexp for up to 256 bytes parameters (maybe calling it modexp)

I would suggest doing option 1 now and then doing the work to graduate all b-operations to 256-byte parameters.

@giuliop (Contributor)

giuliop commented Oct 6, 2024

For the cost function I would use this formula to determine the actual opcode cost, i.e., the number of ticks:

`max(base_length, mod_length)^1.63 * exp_length / 15 + 200`

where length is the number of bytes.

This is how the benchmarking looks; here I am keeping base and mod at the same byte length and varying the exp length. I calculate the number of 'ticks' by dividing the ns/op reported by go test by 15.

| Base & Mod Len | Exp Len | Average ns/op | Average 'ticks' | Formula 'ticks' |
|---:|---:|---:|---:|---:|
| 32 | 16 | 6,445 | 430 | 503 |
| 32 | 24 | 9,274 | 618 | 654 |
| 32 | 32 | 12,665 | 844 | 806 |
| 64 | 16 | 18,432 | 1,229 | 1,138 |
| 64 | 32 | 33,163 | 2,211 | 2,076 |
| 64 | 48 | 52,550 | 3,503 | 3,013 |
| 64 | 64 | 70,248 | 4,683 | 3,951 |
| 128 | 32 | 82,731 | 5,515 | 6,005 |
| 128 | 64 | 160,046 | 10,670 | 11,810 |
| 128 | 96 | 239,196 | 15,946 | 17,615 |
| 128 | 128 | 315,648 | 21,043 | 23,420 |
| 256 | 64 | 579,067 | 38,604 | 36,135 |
| 256 | 128 | 1,166,991 | 77,799 | 72,070 |
| 256 | 192 | 1,652,136 | 110,142 | 108,006 |
| 256 | 256 | 2,121,371 | 141,425 | 143,941 |

This is the function used for the benchmarking:

```go
func BenchmarkBModExp2(b *testing.B) {
	// Define the base and mod lengths, and corresponding exp lengths
	for _, byteLen := range []int{32, 64, 128, 256} {
		var expLens []int
		switch byteLen {
		case 32:
			expLens = []int{16, 24, 32}
		case 64:
			expLens = []int{16, 32, 48, 64}
		case 128:
			expLens = []int{32, 64, 96, 128}
		case 256:
			expLens = []int{64, 128, 192, 256}
		}

		for _, expLen := range expLens {
			// Generate base and mod of length byteLen
			base := make([]byte, byteLen)
			mod := make([]byte, byteLen)
			for _, input := range []*[]byte{&base, &mod} {
				for i := range *input {
					(*input)[i] = byte(mathrand.Intn(256))
				}
			}

			// Generate exp of varying lengths
			exp := make([]byte, expLen)
			for i := range exp {
				exp[i] = byte(mathrand.Intn(256))
			}

			ops := fmt.Sprintf("byte 0x%x; byte 0x%x; byte 0x%x; bmodexp; pop", base, exp, mod)

			b.Run(fmt.Sprintf("modexp_base&mod=%d_bytes_exp=%d_bytes", byteLen, expLen), func(b *testing.B) {
				benchmarkOperation(b, "", ops, "int 1")
			})
		}
	}
}
```

@mangoplane (Author)

mangoplane commented Oct 8, 2024

> Indeed, benchmarking at 512-byte length for all parameters gives 0.018 sec [...]
>
> That means though that 512 bytes is still too large given that the max for a single program is 20,000 * 16 * 15ns = 0.0048 sec, so the max length for bmodexp's parameters can be 256 bytes.
>
> This assumes we want to cap all parameters at the same size. @mangoplane for your use case what is the max size of each of the three parameters?
>
> If the assumption that we want to cap all parameters at the same size holds, I guess we have two options:
>
> 1. Introduce `bmodexp` for up to 64 bytes parameters for consistency with all other b-operations
> 2. Introduce `bmodexp` for up to 256 bytes parameters (maybe calling it `modexp`)
>
> I would suggest doing option 1 now and then doing the work to graduate all b-operations to 256-byte parameters

Thanks for your input and feedback everyone. I have what I believe is a solution that satisfies the tick limit, without ruling out many useful applications of the new opcode. Interested to hear your thoughts @jannotti @giuliop @algorandskiy :

In practice, it's rare for the exponent in bmodexp to be long; in RS256, for example, the exponent is 3 bytes. Furthermore, there's no need to cap input lengths for time if the cost model is accurate: the dynamic cost budget, capped at 16 × 20,000 ticks, would be exceeded before the time limit is surpassed. Large limits combined with this cost model let developers allocate lengths as they see fit, fitting them within the cost constraints. For RS256, the exponent would be small while the other arguments are large, fitting well within the budget estimated by the cost model and therefore within the execution time limit.

Therefore, I propose option 3: introduce bmodexp for up to 1,024-byte inputs, paired with a robust cost model that prevents exceeding the tick limit. This approach places the responsibility on developers to adjust input lengths until the cost is low enough to fit within 16 LSIGs, keeping them below the maximum tick budget. This flexibility makes RSA viable, given that the exponent is only 3 bytes in RS256.

Another feasible option is to set the limits low for bmodexp but to have a specialized RS256 opcode that sets the exponent low and allows larger modulus and signature inputs than can be accommodated by bmodexp.
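As a rough sanity check of the budget argument, plugging an RS256-shaped call (3-byte exponent, 256-byte base and modulus) into the cost formula discussed in this thread, `((len(B) * max(len(A), len(C))^1.63) / 15) + 200`, lands comfortably inside the pooled 16 × 20,000-tick budget:

```go
package main

import (
	"fmt"
	"math"
)

// costTicks evaluates the cost formula discussed in this thread:
// (len(exp) * max(len(base), len(mod))^1.63 / 15) + 200
func costTicks(baseLen, expLen, modLen int) int {
	m := math.Max(float64(baseLen), float64(modLen))
	return int(math.Pow(m, 1.63)*float64(expLen)/15) + 200
}

func main() {
	// RS256-shaped inputs: 3-byte exponent (e.g. 65537), 256-byte modulus/signature.
	rs256 := costTicks(256, 3, 256)
	fmt.Println(rs256, rs256 < 16*20000) // a small fraction of the 320,000-tick pool
}
```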

@giuliop (Contributor)

giuliop commented Oct 9, 2024

I think option 3 can work; developers can always check the sizes of their inputs at runtime if needed, and handle in code the cases where the opcode budget would be exceeded.

@mangoplane (Author)

mangoplane commented Oct 11, 2024

> I think option 3 can work, developers can always check at runtime if needed the sizes of their inputs and manage in the code the cases where the opcode budget would be exceeded

That sounds like a plan. I'll get to work implementing the cost model, following my earlier suggestion of adding a customCost field to linearCost that optionally holds a custom cost function; that should work and keep the code reasonably clean.

@mangoplane (Author)

mangoplane commented Oct 14, 2024

Hey there,

I've pushed my changes to handle non-linear opcode costs, following the plan I outlined earlier. The tests were extended to also verify that the cost is accurately calculated. I performed a sanity check for documentation generation and confirmed that the docs were generated without errors, containing the following information:

TEAL_opcodes_v11.md sanity check:

```markdown
## bmodexp

- Bytecode: 0xe6
- Stack: ..., A: []byte, B: []byte, C: []byte → ..., []byte
- A raised to the Bth power modulo C. A, B and C are interpreted as big-endian unsigned integers limited to 4096 bytes. Fail if C is zero.
- **Cost**: ((len(B) * max(len(A), len(C)) ^ 1.63) / 15) + 200
- Availability: v11
```

langspec_v11.json sanity check:

```json
{
  "Opcode": 230,
  "Name": "bmodexp",
  "Args": ["[]byte", "[]byte", "[]byte"],
  "Returns": ["[]byte"],
  "Size": 1,
  "DocCost": "((len(B) * max(len(A), len(C)) ^ 1.63) / 15) + 200",
  "Doc": "A raised to the Bth power modulo C. A, B and C are interpreted as big-endian unsigned integers limited to 4096 bytes. Fail if C is zero.",
  "IntroducedVersion": 11,
  "Groups": ["Byte Array Arithmetic"]
}
```

I thought it might be worthwhile to leave the bmodexp argument size limits as they are, since it should be impossible to exceed the time limit with the current cost model; the responsibility lies with the developer to allocate sizes within the budget, as discussed. Let me know your thoughts.

@mangoplane mangoplane force-pushed the new-opcode-modexp branch 3 times, most recently from 218ce12 to a72572d Compare October 14, 2024 23:37
```go
expLength := float64(len(stack[prev].Bytes))
modLength := float64(len(stack[last].Bytes))
baseLength := float64(len(stack[pprev].Bytes))
cost := (math.Pow(math.Max(baseLength, modLength), 1.63) * expLength / 15) + 200
```
Reviewer:

Wouldn't it be better to declare those parameters (1.63, 15, and 200) as constants? I think it would make it easier to "tune" them in one place, if necessary, instead of replacing them in all the consumers.

Maybe also some comments on how those constants have been chosen would be helpful for future readers.
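A sketch of the suggested refactor, with hypothetical constant names and the parameter values discussed in this thread:

```go
package main

import (
	"fmt"
	"math"
)

// Named constants for the bmodexp cost model, per the review suggestion,
// so the parameters can be tuned in one place. The names are hypothetical;
// the values are the ones fitted from the benchmarks in this thread.
const (
	bmodexpCostExponent = 1.63 // fitted growth exponent for max(base, mod) length
	bmodexpCostDivisor  = 15   // ~ns per cost tick, from the benchmark scaling
	bmodexpCostBase     = 200  // constant floor covering small-input overhead
)

func bmodexpCost(baseLen, expLen, modLen float64) int {
	return int(math.Pow(math.Max(baseLen, modLen), bmodexpCostExponent)*expLen/bmodexpCostDivisor) + bmodexpCostBase
}

func main() {
	fmt.Println(bmodexpCost(64, 64, 64))
}
```

With these parameters the function reproduces the "Formula 'ticks'" column from the earlier benchmark table, e.g. ≈3,951 for a 64-byte base/mod and a 64-byte exponent.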

Author:

Thanks for the feedback, which I have addressed in the latest commit. It should be clearer now. Let me know your thoughts.

Reviewer:

Thanks for addressing this. Looks good!

@cusma

cusma commented Nov 15, 2024

@jannotti I was wondering if it wouldn't be appropriate to have the proposed opcode "cost function" parameters in the consensus configuration (instead of hardcoding them in opcodes.go - 468210e).

@jannotti (Contributor)

> @jannotti I was wondering if it wouldn't be appropriate to have the proposed opcode "cost function" parameters in the consensus configuration (instead of hardcoding them in opcodes.go - 468210e).

We need to be able to tie cost functions to AVM versions, rather than consensus versions, so that existing code does not change behavior. That's why all the costs are set in the logic package, based on the version of the bytecode running.

@cusma

cusma commented Nov 15, 2024

> We need to be able to tie cost functions to AVM versions, rather than consensus versions, so that existing code does not change behavior. That's why all the costs are set in the logic package, based on the version of the bytecode running.

Right! My bad, I forgot about the decoupled bytecode version requirement. Thanks for clarifying!

Successfully merging this pull request may close these issues.

New opcode: modexp
6 participants