The RERO approach to software engineering -- Release Early, Release Often -- essentially means publishing a program as soon as it works, even though it's probably not optimal and you definitely haven't weeded out all of the bugs yet. This allows you to see the program fail in practice, letting you identify problems that you might not have thought of on your own; then you fix those and release it again, constantly refining it in the process.
Here's a good look at what this means, ideologically and practically, especially in contrast to traditional notions of quality control. (You should consider reading the entire series; it's quite enlightening.)
So if it works, why not extend this principle to other fields beyond software engineering?
I have long argued for an experimental and evidence-based approach to education, with short feedback-and-updating cycles as a key element. But what about, say, laws and regulations? What about currencies, or types of government?
If you had asked me about any of these in the past, I would have been extremely sceptical. Those are sensitive, high-impact areas, where even small failures can have immense effects. We're not talking about a few thousand people not being able to access their e-mail inboxes for half a day, here.
Then the DAO thing happened -- and witnessing the aftermath is slowly making me come around.
In case you haven't been following it, here's my short take on what happened:
Ethereum is a blockchain-based digital currency like Bitcoin, but with the added bonus that you can run programs on the blockchain. This enables the implementation of "smart contracts" that automatically execute when triggered by other blockchain events, such as the receipt of funds. Using this technology, the makers of Slock.it released code for a "Decentralized Autonomous Organization", intended to be a fully automated investment fund controlled by votes from token holders, with no central authority to mess things up. This seemed like such a revolutionary idea that it was able to raise a record 140 million dollars in crowdfunding before going live this month -- and then someone found an exploit and started draining money (Ether, Ethereum's currency) from the DAO.
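The exploit was, as later analyses established, a recursive-call (or "reentrancy") pattern: the DAO paid out funds via an external call before updating its internal balances, so the recipient could call back in and withdraw again. Here is a minimal Python simulation of that pattern -- all class and variable names are illustrative, not actual DAO code:

```python
class VulnerableFund:
    """Pays out BEFORE updating the balance -- the core flaw."""
    def __init__(self, deposits):
        self.balances = dict(deposits)   # holder -> tokens
        self.total = sum(deposits.values())

    def withdraw(self, holder, receive_callback):
        amount = self.balances.get(holder, 0)
        if amount > 0 and self.total >= amount:
            receive_callback(amount)      # external call happens first (bug!)
            self.balances[holder] = 0     # state update happens too late
            self.total -= amount


class Attacker:
    def __init__(self, fund, holder, rounds):
        self.fund, self.holder = fund, holder
        self.rounds, self.stolen = rounds, 0

    def receive(self, amount):
        self.stolen += amount
        if self.rounds > 0:               # re-enter before the balance is reset
            self.rounds -= 1
            self.fund.withdraw(self.holder, self.receive)

    def attack(self):
        self.fund.withdraw(self.holder, self.receive)


fund = VulnerableFund({"attacker": 10, "others": 90})
thief = Attacker(fund, "attacker", rounds=5)
thief.attack()
print(thief.stolen)  # 60 -- six payouts of 10, against a balance of only 10
```

The standard fix is to reverse the order of operations (update state, then pay out), which is exactly the kind of lesson that only surfaced once the code was running "in the wild".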
The jury is still out on whether the attacker will ever manage to use this money. The debate is still raging over whether it's okay for the DAO creators (who explicitly announced a completely hands-off approach before) to fork the blockchain and reverse the transactions. This is critical, because much of the appeal of the DAO (and cryptocurrencies in general) rests on the assumption that no central authority can alter, block or reverse transactions on the blockchain. If the way the algorithm is coded allows something, then it's allowed by definition; if you coded the smart contract in a way that allows for exploits, then there is (or should be) no recourse to any "higher authority".
This is why I think what's happening with the DAO is so important: because thanks to its creators' RERO mentality in releasing the requisite code, we're now able to watch our first large-scale experiment in governance by algorithm fail in real time, and learn from its failures.
And there is much to learn. This Bloomberg article pretty much cuts to the heart of the matter. The main strength of smart contracts is that, like any other computer program, they do exactly what the code says, meaning they can't be tricked, can't be bribed or extorted, and can be arbitrarily complex because they don't suffer the cognitive limits constraining administration-by-humans. But this "inhuman efficiency" is also their main weakness: if there is any way to use the code that you didn't foresee, there's no human authority to appeal to saying "wait, no, I obviously didn't mean it like that". In the blog post I linked above, Ethereum inventor Vitalik Buterin implies that the problem of reliably translating intentions into isomorphic code might be AI-complete; the creators of Slock.it suggest helping matters by embedding a hash of a plain English contract in the smart contract (but of course the interpretation of that would again require human legal assistance, negating the whole point of a smart contract).
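The hash-embedding idea is simple in principle: the contract commits to the exact wording of a legal text without storing the text itself. A minimal sketch (the legal text is hypothetical, and how the digest gets written into a smart contract's storage is outside this sketch; note that Ethereum tooling typically uses Keccak-256, for which Python's standardized SHA3-256 is only a close stand-in, not byte-identical):

```python
import hashlib

# Hypothetical plain-English contract text, agreed on off-chain.
legal_text = b"The token holders agree that funds are disbursed only by majority vote."

# Commit to the text by its digest; anyone holding the original wording
# can recompute the hash and verify it matches the on-chain commitment.
digest = hashlib.sha3_256(legal_text).hexdigest()
print(len(digest))  # 64 hex characters (256 bits)

# Verification: even a one-character change breaks the match.
tampered = legal_text.replace(b"majority", b"minority")
print(hashlib.sha3_256(tampered).hexdigest() == digest)  # False
```

The commitment proves *which* text was agreed on, but, as noted above, says nothing about what the text *means* -- that interpretation still falls back to human lawyers.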
Those are real problems, and they are very much recognized by the relevant people in the field. But I don't see anyone retreating before the size of the problem, or saying we should stop experimenting with smart contracts and DAOs until the control problem is solved. This is why the reaction to the DAO exploit actually increased my trust in the RERO model, even for critical areas like currencies and contract law: because everybody involved knew they were participating in a huge, risky experiment, because they thought the direction was promising even though it was only the first step and likely to fail. If you look at the way the creators and most stakeholders in the DAO are reacting, you'll see that it's less like damage control and more like the expected next step in an agile development process. Sure, the DAO code could have been audited (even) more closely before the launch; and sure, some of the exploits might have been fixed by that; but the only way to really know how a project like that will behave in the wild is to put it out there, and then update on what happens.
Why is this important? On the one hand, because smart contracts are important. As soon as at least some of them can be shown to work reliably, they have the potential to seriously disrupt many aspects of the economy (particularly the B2B part) by cutting out the middleman; and if decentralized autonomous organizations of various shades really take off, they will be one of the biggest challenges regulators have ever faced. If you follow that line of thought, the economy of the future might be one where objects connected to the Internet of Things autonomously negotiate for the resources they need, such as grid power or bandwidth, with other programs on the supplier side, creating an Economy of Things (https://slock.it/solutions.html) that could be arbitrarily many layers deep; and even in transactions between humans, governmental regulators will have an increasingly hard time controlling, tracking and taxing economic activity.
On the other hand, as I said at the beginning, the aftermath of the DAO exploit can teach us a lot about how to apply the RERO mentality to other critical fields. One important takeaway is that participation should be voluntary -- like patients agreeing to experimental treatments knowing that they might fail. Another is that the program (or law, or whatever form it takes) needs to be transparent and open-source, so that some failure modes can be detected by the crowd before the launch, and others traced back to the source of the problem after an exploit. A third, and possibly the most controversial, takeaway is that people who find exploits need to be able to do so without fear of disproportionate retribution -- we will need white-hat hackers for every level of social organisation.
For now, the first large-scale instance of a DAO has failed -- but in dealing with the failure, its creators are doing us a service by highlighting both the problems facing us and the way out, through iterative improvement of running systems "in the wild".