2 Comments

This is awesome!

"An attacker contract is designed to maximize a profit function" - in 2018 I was experimenting with using an OpenAI Gym reinforcement learner to do the same. One of the conclusions I reached was that the far more important thing in smart contract security is to find a *positive* attack rather than a *maximal* attack. Reinforcement learning wasn't great for that, as you have to use curiosity-based methods to have some positive value function to optimize.

How are you thinking about this here?

author

Great question. I agree with you, especially in the context of a whitehat operation. It doesn't matter much whether you can drain all of the protocol's funds if your objective is to find the bug, report it to the developers, and collect a bounty. Even then, the profit function can be capped, e.g. to `profit > 0` or `profit > 1% of TVL`, so that you get results faster and can apply a mitigation as soon as possible.
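To make the capping idea concrete, here is a minimal sketch of what an early-exit condition on top of an attack-search loop might look like. All names here (`found_viable_attack`, `search_for_attack`, the candidate format) are hypothetical illustrations, not from any specific tool:

```python
# Hypothetical sketch: stop searching once an attack is merely *positive*
# (or exceeds a small fraction of TVL), instead of maximizing profit.

def found_viable_attack(profit: int, tvl: int, cap_fraction: float = 0.01) -> bool:
    """Capped objective: any strictly positive profit that also clears
    cap_fraction * TVL counts as a success worth reporting."""
    return profit > 0 and profit >= int(tvl * cap_fraction)

def search_for_attack(candidates, tvl):
    # candidates: iterable of (attack, profit) pairs produced by some
    # search procedure (fuzzer, RL agent, symbolic engine, ...).
    for attack, profit in candidates:
        if found_viable_attack(profit, tvl):
            return attack, profit  # report early; don't keep optimizing
    return None
```

The point of the early return is exactly the trade-off discussed above: a positive, reportable attack found quickly is worth more to a whitehat than a maximal one found slowly.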
