CryptoNote v 2.0

Nicolas van Saberhagen

October 17, 2013

1 Introduction

"Bitcoin" [1] has been a successful implementation of the concept of p2p electronic cash. Both professionals and the general public have come to appreciate the convenient combination of public transactions and proof-of-work as a trust model. Today, the user base of electronic cash is growing at a steady pace; customers are attracted to low fees and the anonymity provided by electronic cash, and merchants value its predictable and decentralized emission. Bitcoin has effectively proved that electronic cash can be as simple as paper money and as convenient as credit cards.

Unfortunately, Bitcoin suffers from several deficiencies. For example, the system's distributed nature is inflexible, preventing the implementation of new features until almost all of the network users update their clients. Some critical flaws that cannot be fixed rapidly deter Bitcoin's widespread propagation. In such inflexible models, it is more efficient to roll out a new project rather than perpetually fix the original project.

In this paper, we study and propose solutions to the main deficiencies of Bitcoin. We believe that a system taking into account the solutions we propose will lead to a healthy competition among different electronic cash systems. We also propose our own electronic cash, "CryptoNote", a name emphasizing the next breakthrough in electronic cash.

2 Bitcoin drawbacks and some possible solutions

2.1 Traceability of transactions

Privacy and anonymity are the most important aspects of electronic cash. Peer-to-peer payments seek to be concealed from third parties' view, a distinct difference when compared with traditional banking. In particular, T. Okamoto and K. Ohta described six criteria of ideal electronic cash, which included "privacy: relationship between the user and his purchases must be untraceable by anyone" [30]. From their description, we derived two properties which a fully anonymous electronic cash model must satisfy in order to comply with the requirements outlined by Okamoto and Ohta:

Untraceability: for each incoming transaction all possible senders are equiprobable.

Unlinkability: for any two outgoing transactions it is impossible to prove they were sent to the same person.

Unfortunately, Bitcoin does not satisfy the untraceability requirement. Since all the transactions that take place between the network's participants are public, any transaction can be unambiguously traced to a unique origin and final recipient. Even if two participants exchange funds in an indirect way, a properly engineered path-finding method will reveal the origin and final recipient.

It is also suspected that Bitcoin does not satisfy the second property. Some researchers have stated ([33, 35, 29, 31]) that a careful blockchain analysis may reveal a connection between the users of the Bitcoin network and their transactions. Although a number of methods are disputed [25], it is suspected that a lot of hidden personal information can be extracted from the public database.

Bitcoin's failure to satisfy the two properties outlined above leads us to conclude that it is not an anonymous but a pseudo-anonymous electronic cash system. Users were quick to develop solutions to circumvent this shortcoming. Two direct solutions were "laundering services" [2] and the development of distributed methods [3, 4].
Both solutions are based on the idea of mixing several public transactions and sending them through some intermediary address, which in turn suffers the drawback of requiring a trusted third party.

Recently, a more creative scheme was proposed by I. Miers et al. [28]: "Zerocoin". Zerocoin utilizes cryptographic one-way accumulators and zero-knowledge proofs which permit users to "convert" bitcoins to zerocoins and spend them using anonymous proof of ownership instead of explicit public-key based digital signatures. However, such knowledge proofs have a constant but inconvenient size - about 30kb (based on today's Bitcoin limits) - which makes the proposal impractical. The authors admit that the protocol is unlikely to ever be accepted by the majority of Bitcoin users [5].

2.2 The proof-of-work function

Bitcoin creator Satoshi Nakamoto described the majority decision making algorithm as "one-CPU-one-vote" and used a CPU-bound pricing function (double SHA-256) for his proof-of-work scheme. Since users vote for the single history of transactions order [1], the reasonableness and consistency of this process are critical conditions for the whole system.

The security of this model suffers from two drawbacks. First, it requires 51% of the network's mining power to be under the control of honest users. Secondly, the system's progress (bug fixes, security fixes, etc.) requires the overwhelming majority of users to support and agree to the changes (this occurs when the users update their wallet software) [6]. Finally, this same voting mechanism is also used for collective polls about the implementation of some features [7].

This permits us to conjecture the properties that must be satisfied by the proof-of-work pricing function. Such a function must not enable a network participant to have a significant advantage over another participant; it requires a parity between common hardware and high cost of custom devices. From recent examples [8], we can see that the SHA-256 function used in the Bitcoin architecture does not possess this property as mining becomes more efficient on GPUs and ASIC devices when compared to high-end CPUs.

Therefore, Bitcoin creates favourable conditions for a large gap between the voting power of participants as it violates the "one-CPU-one-vote" principle since GPU and ASIC owners possess a much larger voting power when compared with CPU owners. It is a classical example of the Pareto principle where 20% of a system's participants control more than 80% of the votes.

One could argue that such inequality is not relevant to the network's security since it is not the small number of participants controlling the majority of the votes but the honesty of these participants that matters. However, such an argument is somewhat flawed since it is rather the possibility of cheap specialized hardware appearing, rather than the participants' honesty, which poses a threat. To demonstrate this, let us take the following example. Suppose a malevolent individual gains significant mining power by creating his own mining farm through the cheap hardware described previously. Suppose that the global hashrate decreases significantly, even for a moment; he can now use his mining power to fork the chain and double-spend. As we shall see later in this article, it is not unlikely for the previously described event to take place.

2.3 Irregular emission

Bitcoin has a predetermined emission rate: each solved block produces a fixed amount of coins. Approximately every four years this reward is halved.
The original intention was to create a limited smooth emission with exponential decay, but in fact we have a piecewise linear emission function whose breakpoints may cause problems to the Bitcoin infrastructure.

When a breakpoint occurs, miners start to receive only half of the value of their previous reward. The absolute difference between 12.5 and 6.25 BTC (projected for the year 2020) may seem tolerable. However, the 50 to 25 BTC drop that took place on November 28, 2012 felt inappropriate to a significant number of members of the mining community. Figure 1 shows a dramatic decrease in the network's hashrate at the end of November, exactly when the halving took place. This event could have been the perfect moment for the malevolent individual described in the proof-of-work function section to carry out a double-spending attack [36].

Fig. 1. Bitcoin hashrate chart (source: http://bitcoin.sipa.be)

2.4 Hardcoded constants

Bitcoin has many hard-coded limits, where some are natural elements of the original design (e.g. block frequency, maximum amount of money supply, number of confirmations) whereas others seem to be artificial constraints. It is not so much the limits, as the inability to quickly change them if necessary, that causes the main drawbacks. Unfortunately, it is hard to predict when the constants may need to be changed, and replacing them may lead to terrible consequences.

A good example of a hardcoded limit change leading to disastrous consequences is the block size limit set to 250kb. This limit was sufficient to hold about 10000 standard transactions. In early 2013, this limit had almost been reached and an agreement was reached to increase the limit. The change was implemented in wallet version 0.8 and ended with a 24-blocks chain split and a successful double-spend attack [9]. While the bug was not in the Bitcoin protocol, but rather in the database engine, it could have been easily caught by a simple stress test if there was no artificially introduced block size limit.

Constants also act as a form of centralization point. Despite the peer-to-peer nature of Bitcoin, an overwhelming majority of nodes use the official reference client [10] developed by a small group of people. This group makes the decisions to implement changes to the protocol and most people accept these changes irrespective of their "correctness". Some decisions caused heated discussions and even calls for boycott [11], which indicates that the community and the developers may disagree on some important points. It therefore seems logical to have a protocol with user-configurable and self-adjusting variables as a possible way to avoid these problems.

2.5 Bulky scripts

The scripting system in Bitcoin is a heavy and complex feature. It potentially allows one to create sophisticated transactions [12], but some of its features are disabled due to security concerns and some have never even been used [13]. The script (including both senders' and receivers' parts) for the most popular transaction in Bitcoin looks like this: <sig> <pubKey> OP_DUP OP_HASH160 <pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG. The script is 164 bytes long whereas its only purpose is to check if the receiver possesses the secret key required to verify his signature.

Read the rest of the white paper here: https://cryptonote.org/whitepaper.pdf
As part of my ongoing effort to develop stupid shit for Garlicoin, I present you: W-addresses!
“Wait, what?!” I hear you asking? Well… (buckle up, this is another one of my technical posts that goes on, and on…)

For some time now, I have been using native SegWit (Pay-to-Witness-Public-Key-Hash, P2WPKH) transactions for myself. Mostly because they have a 75% fee subsidy on signature data (which comes out to a ~50% fee subsidy on the entire transaction, depending on the type of transaction) and I am Dutch after all ;-)

It turns out that Garlicoin Core kind of supports them and kind of does not. If you manually register the transaction redeem script to your wallet (using the addwitnessaddress command) it will start recognizing them on the blockchain, but it gets kind of confused on how to deal with them, so it registers them all as ‘change’ transactions. Still, this means you can receive coins using these types of transactions and pay with them in all the ways you can with regular Garlicoins, except your transactions are cheaper.

However, sending coins using native SegWit is quite a hassle. You can basically only do it by creating your own raw transactions (createrawtransaction, edit it to make it native SegWit, fundrawtransaction, signrawtransaction, sendrawtransaction). On top of this, any change addresses the wallet creates are still legacy/normal Garlicoin addresses, so you will end up with a bunch of unspent transaction outputs (UTXOs) for which you have to pay full fees anyway. I decided we (I) could do better than this.

But first a few steps back. What is this native SegWit anyway and weren’t people already using SegWit? Wasn’t there a user that just after mainnet launched accidentally made a SegWit transaction? So what the hell am I talking about? To understand this, you will need to know a few things about what SegWit is and how Bitcoin/Garlicoin transactions work in general. Note that this bit gets really technical, so if you are not interested, you might want to skip ahead. A lot.

First thing to understand is that addresses are not really a thing if you look at the blockchain. While nodes and explorers will interpret parts of a transaction as addresses, in reality addresses are just an abstraction around Bitcoin Script and an easy way to send coins, instead of asking people “hey, can you send some coins to the network in such a way that only the private key that corresponds to public key XYZ can unlock them?”. Let’s look at an example: say I ask you to send coins to my address GR1Vcgj2r6EjGQJHHGkAUr1XnidA19MrxC. What ends up happening is that you send out a transaction where you say that the coins are locked in the blockchain and can only be unlocked by successfully executing the following script:
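(The script here is the standard P2PKH template; the 20-byte value is the HASH160 of the recipient's public key, shown as a placeholder:)

OP_DUP OP_HASH160 <20-byte pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG

And Base58Check-decoding the address itself gives, roughly:

26 <20-byte pubKeyHash> <4-byte checksum>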
The first byte (0x26, or 38) is the version byte. This tells the clients how to interpret the rest of the data. In our case 38 means Pay-to-Public-Key-Hash (P2PKH), or in other words the script mentioned above. The part after that is the HASH160 (RIPEMD-160 of SHA-256) of the public key, and the final 4 bytes are a checksum to verify you did not make a typo when entering the address.

Enter SegWit. What SegWit exactly is depends on who you are talking to; however, it mostly is a different transaction format/protocol. The main change of SegWit is that signature data is no longer included in the transaction (fixing transaction malleability). Instead, signature data is sent separately from the transaction itself and outside of the (main) blocks.

This is not really that much of an issue, except for the fact that people wanted to enable SegWit as a soft-fork instead of a hard-fork. This means that somehow unupgraded nodes needed a way to deal with these new transaction types without being able to verify them. The solution turned out to be to make use of an implementation detail of Bitcoin Script: if a piece of script executes without any errors, the last bit of data determines whether the transaction is valid (non-zero) or invalid (zero). This is what they used to implement SegWit. So what does this look like? Well, you might be surprised how simple a P2WPKH (the SegWit way of doing a P2PKH transaction) script looks:
00 4e9856671c3abb2f03b7d80b9238e8f5ecd9f050
Yes. That’s it. The first byte is the Witness program version byte. I.e. it tells you how the other data should be interpreted (very similar to how addresses work). Then there is the hash of the public key. As you can see, SegWit does not actually use Bitcoin Script. Mostly because it needs old nodes to ‘just accept’ its transactions. However interestingly enough, while the transaction format changed, the transaction data is pretty much the same:
I want to pay to a hash of a public key
This hash is XYZ.
This means that these kinds of SegWit transactions need a new way of addressing them. Now, you might think that this is where the ‘3’ addresses on Bitcoin or the ‘M’ addresses on Garlicoin come in. However, that is not the case. These addresses are what are called Pay-to-Script-Hash (P2SH) addresses. Their script looks like this (the standard P2SH template, sketched with the 20-byte redeem-script hash as a placeholder):
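OP_HASH160 <20-byte redeem script hash> OP_EQUAL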
Huh? Yeah, these are a very special type of transaction, that kind of goes back to the “hey, can you send some coins to the network in such a way that only the private key that corresponds to public key XYZ can unlock them?” issue. These transactions are a way to have arbitrary smart contracts (within the limits of Bitcoin Script) determine whether a transaction output can be spent or not, without the sender of the coins having to deal with your scripts. Basically they use a hash of the “real” script, which whoever owns the coins has to provide when they want to spend them, as well as the specific inputs required for the script. This functionality is for example used in multi-signature (MultiSig) wallets, without requiring someone sending money to these wallets to deal with random bits of information like how many signatures are required, how many private keys belong to the wallet, etc. This same method is used for so-called P2SH-wrapped SegWit transactions (or P2SH-P2WPKH). Consider our earlier SegWit transaction output script:
00 4e9856671c3abb2f03b7d80b9238e8f5ecd9f050
Or 00144e9856671c3abb2f03b7d80b9238e8f5ecd9f050 in low-level hex. The P2SH script for this would be (sketched; the 20-byte value is the HASH160 of the redeem script above):
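OP_HASH160 <HASH160 of 00144e9856671c3abb2f03b7d80b9238e8f5ecd9f050> OP_EQUAL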
Which would give us address MNX1uHyAQMXsGiGt5wACiyMUgjHn1qk1Kw. This is what is now widely known and used as SegWit. But this is P2SH-wrapped SegWit, not native or "real" SegWit. Native would be using the data-only SegWit script directly.

The reason for using the P2SH variant is mostly about compatibility. While SegWit nodes understand these newer transactions, they were never officially assigned a way to convert them to addresses. Hence, they will show up in blockchain explorers as Unparsed address [0] or something similar. Then there is also the whole thing about old nodes and relaying non-standard transactions, but I will skip that bit for now.

Bitcoin is using/going to use new Bech32 addresses for native SegWit transactions, which look completely different from the old Base58-encoded addresses. However, right now, especially on Garlicoin, you cannot really use them and have to use the P2SH variant. But why not use these new cool transaction types without having to deal with all that useless and complex P2SH wrapping, right? Right? … Well, I went ahead and gave them their own (unofficial) address space.

So last Thursday I made some changes to Garlicoin Core, to make dealing with these native SegWit transactions a lot easier. In short, the changes consist of:
Assigning address version byte 73 to them, in other words addresses starting with a ‘W’ (for ‘witness’).
Allowing the use of ‘W’ addresses when sending coins.
Make the wallet automatically recognize the SegWit transaction type for any newly generated address.
Add the getwitnessaddress command, which decodes a version 38 ‘G’ address and re-encodes the same data as a version 73 ‘W’ address (unfortunately it is not as simple as just changing the first letter; see the sketch after this list). Note that this can be any address, not just your own. (That said, you should not send SegWit transactions to people not expecting them; always use the address given to you.)
Added the usewitnesschangeaddress configuration setting, to automatically use the cheaper SegWit transaction outputs for transaction change outputs.
(Since using the 'W' address only changes the way coins are sent to you, and the private key used for both transaction types is the same:) When receiving coins, they all show up under the original ‘G’ address, whether a SegWit or legacy/normal transaction type was used. The idea behind this is that both are actually the same "physical" (?) address; just the way coins are sent to it differs. Address book entries are also merged and default to the ‘G’ address type.
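For the curious, here is a minimal Python sketch of what such a conversion involves (illustrative only, not the actual Garlicoin Core code; the version bytes 38 and 73 are the ones mentioned above). The checksum depends on the version byte, which is why you cannot just swap the first letter:

import hashlib

ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58check_decode(addr):
    n = 0
    for c in addr:
        n = n * 58 + ALPHABET.index(c)
    raw = n.to_bytes((n.bit_length() + 7) // 8, 'big')
    # Restore leading zero bytes (encoded as leading '1' characters).
    raw = b'\x00' * (len(addr) - len(addr.lstrip('1'))) + raw
    payload, checksum = raw[:-4], raw[-4:]
    if hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] != checksum:
        raise ValueError('bad checksum')
    return payload[0], payload[1:]          # (version byte, key hash)

def b58check_encode(version, keyhash):
    payload = bytes([version]) + keyhash
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    raw = payload + checksum
    n = int.from_bytes(raw, 'big')
    out = ''
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    return '1' * (len(raw) - len(raw.lstrip(b'\x00'))) + out

version, keyhash = b58check_decode('GR1Vcgj2r6EjGQJHHGkAUr1XnidA19MrxC')
assert version == 38                        # 'G' address
print(b58check_encode(73, keyhash))         # same key hash, 'W' address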
Anyway, I don’t expect people to actually use this, or that it will get merged into mainline or anything. I actually mostly made this for myself, but thought I should share anyway. I guess it was also a nice opportunity to talk a bit about transactions and SegWit. :-) Btw, I also changed my pool to allow mining to ‘W’ addresses, to make coin consolidation cheaper (due to the SegWit fee subsidy). Right now this is only for instant payout though (as I would have to update the wallet node the pool is using for daily payout, which I haven’t done yet). Also note that you can actually mine to a ‘W’ address (and therefore use cheaper transactions) even if you are running the official, non-patched version of Garlicoin Core, however:
You need to manually convert your ‘G’ address to a ‘W’ address.
You need to run the addwitnessaddress command (Help -> Debug Window -> Console) to make the wallet recognize SegWit transactions (you can ignore the ‘M’ address it produces).
The wallet might get a bit confused as it does not really understand how it got the coins. This is mostly notable in the ‘Coin Control’ window if you have it enabled. Apart from that everything should still work though.
Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)
We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in a written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know. Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here: https://youtu.be/tPImTXFb_U8
BEGIN TRANSCRIPT:
Connor: 0:02:19.68,0:02:45.10 Alright, so thank you Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain, who are also the lead developers of the Satoshi’s Vision client. So Daniel and Steve, do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started? Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering, and specifically for Bitcoin SV I am the technical director of the project, which means that I'm a bit less hands-on than Daniel, but I handle a lot of the liaison with the miners that commissioned the project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98 Great, so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in, and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now, Bitcoin itself is ten years old, but in the past little over a year, what has the process been like for you guys working with the multiple development teams, and, you know, why is it important that the Satoshi’s Vision client exists today? Steve: 0:04:17.66,0:06:03.46
I mean, yes, well, we’ve been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff, but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the backend of Bitcoin Unlimited, and Bitcoin SV is free to do whatever they want in their backend, and if they interoperate on a non-consensus level, great. If they don't, it's not such a big problem - there will obviously be bridges between the two. So, yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59 I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing? Daniel: 0:06:19.59,0:07:55.55
Sure, yeah, so our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing - particularly not so much at the unit test level but at the system test level. So setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren’t any other side effects. Because, you know, it was quite a rush to release the first version, we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that, but we’re not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes, and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two - a slightly different numbering scheme to the Testnet Three that everyone's probably used to; that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post-activation so that we can test all of the consensus changes. The third one is a performance test network, which I think most people have probably heard us refer to before as the Gigablock Testnet. I get my tongue tied every time I try to say that word, so I've started calling it the performance test network, and I think we're planning on having two of those: one that we can just do our own stuff with and experiment on, without having to worry about external unknown factors going on - having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests - but the other one (which I think might still be a work in progress, so Daniel might be able to answer that) is one where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00 I think that was my next question - I saw that you posted on Twitter about the revived Gigablock testnet initiative, and it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were going down - what does that revived Gigablock test initiative look like? Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain’s help, and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there are, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team’s been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been, in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all, and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance-related work which, as Daniel mentioned, the results of our performance testing fed into - what tasks we were gonna start working on for the performance-related stuff. Now that work is still in progress - for some of the items that we identified, the code is done and is going through the QA process, but it’s not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it’s been QA first, performance second. The performance enhancements are close and on the horizon, but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level of the software. There are kind of two groups of them, mainly. Ones that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there are other ones that interface it with the outside world. One of those in particular we're working closely with another group on, to make a compatible change - it's not consensus-changing or anything like that - but having the same interface on multiple different implementations will be very helpful, right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45 Obviously for Bitcoin SV one of the main things that you guys wanted to do, that some of the other developer groups weren't willing to do right now, is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about - a lot of the objection to either removing the block size limit entirely or increasing it on a larger scale is this idea of the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the “infinite block attack” - is it something that really exists, is it something that miners themselves should be more proactive on preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size? Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something where there are probably two schools of thought. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is already raised when the software improves and can handle it, and you don’t run into the limit. Obviously we’re from the latter school of thought. As I said before, we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer-to-peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB, then obviously you know something's wrong, so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128 MB, you know it’s kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack - and let’s not call it the infinite block attack, let's just call it the large block attack - is that it takes a lot of time to validate. We've gotten around that by having parallel pipelines for blocks to come in, so you've got a block that's coming in, it's got a node stuck on it for two hours or whatever, downloading and validating it. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then, you know, the problem kind of goes away.
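[A minimal Python sketch of the size check Steve describes above - our illustration of the idea, not Bitcoin SV source code; MAX_BLOCK_SIZE and the function name are assumed:]

MAX_BLOCK_SIZE = 128 * 1000 * 1000   # assumed consensus limit, in bytes

def receive_message(sock, declared_length):
    # Pointless to download a block message larger than any valid block.
    if declared_length > MAX_BLOCK_SIZE:
        raise ConnectionError('declared size exceeds block size limit; dropping peer')
    received = b''
    while len(received) < declared_length:
        chunk = sock.recv(65536)
        if not chunk:
            raise ConnectionError('peer hung up mid-message')
        received += chunk
        # Peer is sending more than it declared; something is wrong.
        if len(received) > declared_length:
            raise ConnectionError('payload exceeds declared size; dropping peer')
    return received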
Cory: 0:18:26.55,0:18:48.27 Are there any concerns with the propagation of those larger blocks? Because there are a lot of questions around, you know, what practical size of scaling Bitcoin SV could do right now, and concerns around propagating those blocks across the whole network. Steve: 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and Xthin exist, so a 32MB block does not send 32MB of data in most cases - almost all cases. The concern here that I think I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall, and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they’ve given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the block size that that can sustain is pretty large. So we're looking at a couple of options - it may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk, to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side, send it over splitters - to send it over multiple links - and reassemble it on the other side, so we can sort of transit the Great Firewall without too much trouble. But I mean getting back to the core of your question - yes, there is a theoretical limit to block size propagation time, and that's kind of where Moore's Law comes in. Put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128MB blocks are going to be an issue though with the speed of the internet that we have nowadays.
Connor: 0:21:34.99,0:22:17.84 One of the other changes that you guys are introducing is increasing the max script size, so I think right now it’s going from 201 to 500 [opcodes]. So I guess a few of the questions we got were: #1, why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and #2, specifically we had a question about how certain you are that there are no remaining n-squared bugs or vulnerabilities left in script execution? Steve: 0:22:15.50,0:25:36.79
It's interesting, the decision - we were initially planning on removing that cap altogether, and the next cap that comes into play after that (the next effective cap) is a 10,000 byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that, when the primary criticism I think that was leveled against us was that it’s dangerous to increase that limit to unlimited. We did that because we’re being conservative. We did some research into these n-squared bugs, sorry – attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation code path is properly parallelized (parts of it already are), then we’ll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through. Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up, but everything else will keep ticking along. There are other mitigations for this as well - I mean you could always put a time limit on script execution if you wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job I think is to provide the tools for the miners, and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
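[A minimal Python sketch of the "lanes" idea Steve describes - our illustration, not the actual implementation; validate_script and is_standard_template are stand-ins:]

import queue
import threading
import time

standard_lane = queue.Queue()
nonstandard_lane = queue.Queue()

def is_standard_template(tx):
    # Stand-in for an IsStandard-style template check.
    return tx.get('template') == 'p2pkh'

def validate_script(tx):
    # Stand-in for full script evaluation; a hostile non-standard
    # script could spin here for a long time.
    time.sleep(tx.get('cost', 0))

def worker(lane):
    while True:
        tx = lane.get()
        validate_script(tx)
        lane.task_done()

# Threads reserved for well-known templates, one lane for everything else.
for _ in range(3):
    threading.Thread(target=worker, args=(standard_lane,), daemon=True).start()
threading.Thread(target=worker, args=(nonstandard_lane,), daemon=True).start()

def enqueue(tx):
    lane = standard_lane if is_standard_template(tx) else nonstandard_lane
    lane.put(tx)

enqueue({'template': 'p2pkh'})              # keeps flowing
enqueue({'template': 'weird', 'cost': 60})  # only clogs the non-standard lane
standard_lane.join()                        # standard lane drains immediately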
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer-to-peer network, doesn't have to accept that transaction - you can reject it. If it looks suspicious to the node, it can just say, you know, we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can’t do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities there could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60 There’s a lot of discussion about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled? Steve: 0:26:42.01,0:28:17.01
Well, I mean one of the most significant things is that, other than two, which are minor variants of MUL and DIV, they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why are you putting OP_MUL back in if you're planning on changing them to big number operations instead of the 32-bit limit that they're currently imposed upon. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We’ve got add, divide, subtract, modulo – it’s odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine - or rather not completing it, but putting it back the way that it was meant to be.
Connor: 0:28:20.42,0:29:22.62 Big Num vs 32 Bit. I've seen - Daniel, I think I saw you answer this on Reddit a little while ago - the new opcodes use logical shifts and Satoshi’s version used arithmetic shifts. The general question that I think a lot of people keep bringing up, maybe in a rhetorical way, is: why not restore it back to the way Satoshi had it exactly - what are the benefits of changing it now to operate a little bit differently? Daniel: 0:29:18.75,0:31:12.15
Yeah, there are two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. So the new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic-based shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values, so the values that represent numbers. They're little-endian, which means they're swapped around compared to many other systems - compared to what I'd consider normal, which is big-endian. And if you start shifting that properly as a number, then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement bitwise shift with arithmetic, so we chose to make them bitwise operators - that's what we proposed.
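[A short Python sketch of Daniel's point - our illustration, not the proposed opcode semantics. On little-endian script numbers, a bitwise shift of the raw bytes differs from an arithmetic shift of the number, but the arithmetic shift can be built from the bitwise primitive:]

def bitwise_lshift(data, n):
    # Shift the byte string left as a plain bit string, width preserved.
    width = 8 * len(data)
    value = int.from_bytes(data, 'big')
    return ((value << n) % (1 << width)).to_bytes(len(data), 'big')

def arithmetic_lshift_le(data, n):
    # Arithmetic shift of a little-endian number, built on the bitwise
    # primitive: reverse to big-endian, shift, reverse back.
    return bitwise_lshift(data[::-1], n)[::-1]

num = (5).to_bytes(2, 'little')                                # little-endian 5
print(int.from_bytes(arithmetic_lshift_le(num, 7), 'little'))  # 640 (5 << 7)
print(int.from_bytes(bitwise_lshift(num, 7), 'little'))        # 128, not 640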
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was actually made in May, or rather a consequence of decisions that were made in May. So in May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was also made then. So that was not a decision that we made unilaterally; it was a decision that was made collectively with all of the BCH developers - well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, was it, I think - and this is a single operator that multiplies the value by two, right - but it was pointed out that that can very easily be achieved by just doing a multiply by two instead of having a separate operator for it, so we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum, yeah.
Steve: 0:32:17.59,0:33:47.20
There was an appetite around for keeping the operators minimal. I mean the idea to replace OP_SUBSTR, OP_LEFT, and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word kind of carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry, big-endian is not really applicable - just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have just been nonsensical and very difficult for anyone to work with. So yeah, I mean it's a bit like P2SH - it wasn't a part of the original Satoshi protocol, but once some things are done they're done, and, you know, if you want to make forward progress you've got to work within the framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones then it gets really complicated, you know, big number implementations, because then you can't change the behavior of the existing opcodes - and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other point is you don't know what scripts are out there, because of P2SH - there could be scripts that you don't know the content of, and you don't know what effect changing the behavior of these operators would have on them. The big number thing is tricky; I don't know what the options are - it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That’s something we've reached out to the other implementation teams about - we actually really would like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year, or when, but we’re certainly willing to put a lot of resources into it, and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done and getting it done safely.
Connor: 0:35:19.30,0:35:57.49 Kind of along this similar vein, you know, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for “non-standard scripts” as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that, and so what are your thoughts about non-standard scripts and the entirety of an IsStandard check? Steve: 0:35:58.31,0:37:35.73
I’d actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the words well-known script template, there’s already a check in Bitcoin that kind of tells you if it's well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So I mean standard transactions as a concept is meaningful to an arbitrary degree, I suppose, but yeah, I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I’ve had with CoinGeek they’re quite keen on making their miners accept, you know, at least initially, a wider variety of transactions eventually.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments through, and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46 Connor mentioned that there are some people that disagree with Bitcoin SV and what they're doing - a lot of questions around, you know, why November? Why implement these changes in November - they think that maybe with a six-month delay there might not be a split. Well, first off, what do you think about the idea of a potential split, and I guess what is the urgency for November? Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change, I think on August 16th or 17th, something like that, and their client as well, and it included CTOR and it included DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - I mean CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction, or even a non-P2SH transaction, in the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but what we're doing - I mean Bitcoin ABC already published their spec for May and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't - come November, you know, it's a bit like SegWit - once SegWit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy and it's going to cause a lot of economic disruption. So yeah, that's it. We're putting our changes in because it's not gonna make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with a no-change release, right, but the November upgrade is there, so we should use it while we can - adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61 Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been kind of floated around, because of Ryan Charles, is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down, and the computation time stays the same but maybe the cost is lesser - do you kind of share his view on that, or what are your concerns with it? Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this? There are different ways to look at it, right. I'm an engineer - my specialization is software, so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise, I kind of agree with the ones saying it's a subsidy - it looks very much like it to me - but yeah, that's not my area. What I can talk about is the software - so adding DSV adds really quite a lot of complexity to the code, right, and it's a big change to add that. And what are we going to do - every time someone comes up with an idea we’re going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like, how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. And DSV - my main consideration at the beginning was that, you know, if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and you know it's easier to test that it works properly and correctly. It's almost like adding (?) code to a microprocessor - why would you do that if you can implement it already in the script that is there?
Steve: 0:43:36.16,0:46:09.71
It’s actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions that we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - it's a combination of SPLIT, and SWAP and DROP opcodes to achieve it. So at the really primitive script level we've got this philosophy of let's keep it minimal, and at this other (?) philosophy it’s all let's just add a new opcode for every primitive function, and Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard-code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?) and not bother executing a script at all. But once we've done that we take away all of the flexibility for people to innovate, so it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel like they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction, so a gigabyte script, then you could do any kind of crypto that you wanted even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller, so a Rabin signature script shrinks from 100MB to a couple hundred bytes.
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed - was released originally - with script. I mean it didn't have to be - instead of having a transaction with script you could have accounts, and you could say transfer, you know, so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, yeah, it's really amazing what you can do with script. I'm really looking forward to seeing some very interesting applications from that. I mean Awemany's zero-conf script was really interesting, right. I mean it relies on DSV, which is a problem (and there are some other things that I don't like about it), but him diving in and using script to solve this problem was really cool; it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I asked a question of a couple of people in our research team that have been working on the Rabin signature stuff this morning, actually, as I wasn't sure where they were up to with it. They're actually working on a proof of concept (which I believe is pretty close to done) which is a Rabin signature script - it will use smaller signatures so that it can fit within the current limits, but it will be, you know, effectively the same algorithm (as DSV). So I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
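[For context: Rabin signature verification reduces to modular arithmetic - square the signature and compare it to the (padded) message hash - which is why OP_MUL and big-number support come up here. A toy Python sketch, not the research team's proof of concept; real schemes choose the padding so that a modular square root exists:]

import hashlib

def rabin_verify(n, message, signature, pad):
    # Valid iff signature^2 mod n equals the (padded) message digest.
    h = int.from_bytes(hashlib.sha256(message + pad).digest(), 'big') % n
    return (signature * signature) % n == h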
Cory: 0:48:13.61,0:48:57.63 Based on your responses I think I kind of already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. I was gonna kind of turn that into: with the plan that Bitcoin SV is on, do you guys see a potential one final release, you know, that there are gonna be no new opcodes ever released (like maybe five years down the road we just solidify the base protocol and move forward with that), or are you guys more of the idea of being open-ended, that we can introduce new opcodes under appropriate testing? Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place. In my opinion it's the cryptographic primitive functions: for example, CHECKSIG uses ECDSA with a specific elliptic curve, and HASH256 uses SHA-256. At some point in the future those are going to no longer be as secure as we would like them to be, and we'll replace them with different hash functions and verification functions, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, with the full scripting language, some solution is implemented and we discover that it's really useful, and over a period measured in years, not days, we find a lot of transactions are using this feature - then maybe we should look at introducing an opcode to optimize it. But optimizing before we even know if it's going to be useful? Yeah, that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miners' point of view, the question is whether it makes more sense for them to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? So ultimately these are going to be miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script - but as we're already seeing, miners are actually starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have got some really bright people on their staff who question and challenge all of the changes, study them and produce their own reports. We've been lucky to be able to talk to some of those people and have some really fascinating technical discussions with them.
I posted this on another forum and was referred here to try to get hold of a Bitcoin developer to help solve my problem. IMO, there is a bug in the blockchain. Either the blockchain is reporting transactions to me that never existed, or I have unconfirmed transactions in my ledger that need to be confirmed and are not confirming. The transactions are from 2013 and 2014 when I first started mining. I am running the latest version of Bitcoin Core and have fully synced the blockchain.

Background: Years ago I mined a little bit of BTC and forgot about it. With the prices going astronomical I wanted to open my old wallet up and sell some. I opened it in Bitcoin Core and I can see a bunch of transactions, but they're all unconfirmed and my balance is reporting zero.

Address: 1BWCNpA3MGYHS3sbbVpGW7jY1Ean1Y3sX4

One of many transaction IDs:
Status: 0/unconfirmed, not in memory pool
Date: 1/23/2014 22:01
Credit: 0.10630472 BTC
Net amount: +0.10630472 BTC
Transaction ID: c16587ae806c2392635a20843a78f8f6a1275c6990a797f8266e3b9d8a29bd1e
Transaction total size: 225 bytes
Output index: 0

When I try to rebroadcast the raw transaction I get this:

Missing parents for c16587ae806c2392635a20843a78f8f6a1275c6990a797f8266e3b9d8a29bd1e while inserting: [c677f8172824b4bb761f0ce51e23235f4a6613c62e74e847936f95440fae6b6c]

If I decode it I get this:

{
  "lock_time": 0,
  "size": 225,
  "inputs": [
    {
      "prev_out": {
        "index": 0,
        "hash": "c677f8172824b4bb761f0ce51e23235f4a6613c62e74e847936f95440fae6b6c"
      },
      "script": "47304402204f1602b609027990a8e17355bdb9d967882aed3ac85e06c9311d33a3228ba9d90220097941a24457d36508f8d17e94400184c849f44c48296aab09e4deb9d23e4e2f012103db4cc04dac3ee0cb4ab0afc108eb1f398ab659be127240a672b1abe139f84b60"
    }
  ],
  "version": 1,
  "vin_sz": 1,
  "hash": "c16587ae806c2392635a20843a78f8f6a1275c6990a797f8266e3b9d8a29bd1e",
  "vout_sz": 2,
  "out": [
    {
      "script_string": "OP_DUP OP_HASH160 7336d1277adaf305dddde5cedc686bb1e4988bda OP_EQUALVERIFY OP_CHECKSIG",
      "address": "1BWCNpA3MGYHS3sbbVpGW7jY1Ean1Y3sX4",
      "value": 10630472,
      "script": "76a9147336d1277adaf305dddde5cedc686bb1e4988bda88ac"
    },
    {
      "script_string": "OP_DUP OP_HASH160 95a28eec6c32896699df4ca36c880d7e42e504c5 OP_EQUALVERIFY OP_CHECKSIG",
      "address": "1EeCRLCksdBRJ7SUkAAFKk1TssVv62hoTQ",
      "value": 89379528,
      "script": "76a91495a28eec6c32896699df4ca36c880d7e42e504c588ac"
    }
  ]
}

Does this offer any more clues? This is the raw transaction in hex:

01000000016c6bae0f44956f9347e8742ec613664a5f23231ee50c1f76bbb4242817f877c6000000006a47304402204f1602b609027990a8e17355bdb9d967882aed3ac85e06c9311d33a3228ba9d90220097941a24457d36508f8d17e94400184c849f44c48296aab09e4deb9d23e4e2f012103db4cc04dac3ee0cb4ab0afc108eb1f398ab659be127240a672b1abe139f84b60ffffffff024835a200000000001976a9147336d1277adaf305dddde5cedc686bb1e4988bda88acc8d25305000000001976a91495a28eec6c32896699df4ca36c880d7e42e504c588ac00000000

I really need help ASAP so I can sell some coins. How do I resolve this issue, or at the bare minimum how do we correct the blockchain so this issue doesn't affect others? Any help is appreciated!
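One sanity check anyone can run on the quoted data: a txid is the double-SHA256 of the raw serialized transaction, with the bytes reversed for display. The following Python snippet should reproduce the quoted transaction ID from the raw hex above, assuming the hex is intact:

import hashlib

raw_hex = "01000000016c6bae0f44956f9347e8742ec613664a5f23231ee50c1f76bbb4242817f877c6000000006a47304402204f1602b609027990a8e17355bdb9d967882aed3ac85e06c9311d33a3228ba9d90220097941a24457d36508f8d17e94400184c849f44c48296aab09e4deb9d23e4e2f012103db4cc04dac3ee0cb4ab0afc108eb1f398ab659be127240a672b1abe139f84b60ffffffff024835a200000000001976a9147336d1277adaf305dddde5cedc686bb1e4988bda88acc8d25305000000001976a91495a28eec6c32896699df4ca36c880d7e42e504c588ac00000000"
raw = bytes.fromhex(raw_hex)
txid = hashlib.sha256(hashlib.sha256(raw).digest()).digest()[::-1].hex()
print(txid)   # expected: c16587ae...8a29bd1e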
Proposal: Invalidate bitcoin balances which haven't been spent in multiple years.
There are currently a lot of bitcoin balances which haven't moved for many years. Those pose a risk to the bitcoin market because some are very large, and if they were to be sold on an exchange, they could cause bitcoin prices to crash to zero. There is speculation that many of these coins are no longer looked after by their owners, who might have lost them, moved on from bitcoin, or died. I propose that all these balances be zeroed after an inactivity timeout of 4 years (210,000 blocks), and the "freed" coins added to mining rewards. This would be achieved by making CHECKSIG operations always pass after 4 years, such that a miner may direct the coins to himself. Thoughts?
Why is pay-to-pubkey used for generation transactions?
According to the wiki, generation transactions use pay-to-pubkey, where OP_CHECKSIG is used directly and thus the public key owning the coins is exposed. In contrast, regular transactions only include the hash of the pubkey, as an additional layer of protection: it makes the public key "owning" the bitcoins invisible. This way, even if ECDSA is fully broken, no one can steal the coins unless the output has been used as an input (which discloses the pubkey and, if change addresses are used, conveniently drains the output so there is nothing left to steal). Thus, if my understanding is correct, the lack of this protection for generation transactions would allow people to steal all BTC from all addresses ever used as a destination for "raw" mining (this does not apply to mining pool payouts) if ECDSA is ever broken. Does anyone have any idea why this layer of protection was omitted for generation transactions?
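To make the distinction concrete, here is a small Python sketch constructing both script forms. The opcode byte values (OP_DUP=0x76, OP_HASH160=0xa9, OP_EQUALVERIFY=0x88, OP_CHECKSIG=0xac) are the standard ones; the pubkey is a placeholder 33-byte value, and note that some OpenSSL builds ship without ripemd160 support:

import hashlib

def hash160(data: bytes) -> bytes:
    # HASH160 = RIPEMD160(SHA256(data)); ripemd160 availability depends on the build.
    return hashlib.new('ripemd160', hashlib.sha256(data).digest()).digest()

def p2pk_script(pubkey: bytes) -> bytes:
    # <push pubkey> OP_CHECKSIG - the public key itself sits on-chain from day one.
    return bytes([len(pubkey)]) + pubkey + bytes([0xac])

def p2pkh_script(pubkey: bytes) -> bytes:
    # OP_DUP OP_HASH160 <push 20-byte hash> OP_EQUALVERIFY OP_CHECKSIG -
    # only the hash is published until the output is spent.
    return bytes([0x76, 0xa9, 0x14]) + hash160(pubkey) + bytes([0x88, 0xac])

pubkey = bytes.fromhex("02" + "11" * 32)   # placeholder compressed-pubkey bytes
print(p2pk_script(pubkey).hex())
print(p2pkh_script(pubkey).hex())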
Make 0-conf double spends infeasible by fixing the r value ahead of time, thereby putting the coins up for grabs when a second transaction is signed
I'm proposing a new kind of transaction that would make double-spending 0-conf transactions uneconomical, thus reducing the risk of double spending for the recipient. Theory: creating an ECDSA signature involves choosing a random number k. From this number, a value r is derived that is revealed as part of the signature. Quoting from Wikipedia:
As the standard notes, it is crucial to select different k for different signatures, otherwise the equation in step 6 can be solved for d_A, the private key
This is how, a while back, coins were stolen from blockchain.info wallets: a bug in their random number generation caused the same k to be used twice, revealing the private key. But we can use this to our advantage to discourage double spending. What if the r value that must be used to sign the spending transaction were already embedded in the unspent output? Then signing two different transactions that both spend the same output means revealing the private key and putting the coins up for grabs by anyone (effectively the next miner to mine a block). This does not guarantee the recipient of a 0-conf transaction will receive their coins, but it does make double spending uneconomical, as the attacker would not get the coins back for themselves; this should reduce the risk of double spending for the recipient. (A sketch of the key-recovery algebra follows the list below.) Implementation: we could introduce a new operation to the Bitcoin scripting language, similar to OP_CHECKSIG, but with one more input: the r value. When a transaction is created that spends such an output, the specified r must be used in the signature or the transaction is invalid. One problem is that r must be determined by the recipient of the output while the output is created by the sender. I see 3 potential solutions for this:
1. Use a P2SH address that has the r value embedded. The problem with this is that each output spent to this address would have to use the same r. So it is crucial that coins are spent to this address no more than once.
2. When accepting coins, specify the r together with the address that you want to receive the coins to, for example by using the payment protocol, BIP 70.
3. When planning to do a 0-conf transaction, the sender should first move their coins into a new output that fixes the value of r in advance.
To me (2) seems the most promising. Note: I'm no crypto expert so it would be great if someone could verify and confirm that this would work.
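To see why a fixed r is fatal for a double spender, here is the standard nonce-reuse key recovery written out in Python. The values are toy numbers: r would normally be the x-coordinate of k*G on secp256k1, but any nonzero value demonstrates the algebra:

# secp256k1 group order
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d = 0x1C0FFEE                     # "private key" (demo value)
k = 0xDEADBEEF                    # nonce reused across both signatures
r = 0x12345678                    # stands in for (k*G).x mod n
h1, h2 = 111, 222                 # two message hashes (demo values)

s1 = pow(k, -1, n) * (h1 + r * d) % n
s2 = pow(k, -1, n) * (h2 + r * d) % n

# Anyone holding both signatures can recover k, then d:
k_rec = (h1 - h2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - h1) * pow(r, -1, n) % n
assert (k_rec, d_rec) == (k, d)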
Relative CHECKLOCKTIMEVERIFY (was CLTV proposal) | Matt Corallo | Mar 16 2015
Matt Corallo on Mar 16 2015: In building some CLTV-based contracts, it is often also useful to have a method of requiring, instead of locktime-is-at-least-N, locktime-is-at-least-N-plus-the-height-of-my-input. I.e. you could imagine an OP_RELATIVECHECKLOCKTIMEVERIFY that reads (does not pop) the top stack element, adds the height of the output being spent and then has identical semantics to CLTV. A slightly different API (and different name) was described by maaku at http://www.reddit.com/Bitcoin/comments/2z2l91/time_to_lobby_bitcoins_core_devs_sf_bitcoin_devs/cpgc154 which does a better job of saving softfork-available opcode space. There are two major drawbacks to adding such an operation, however. 1) More transaction information is exposed inside the script (prior to CLTV we only had the sig-checking operation exposed; with CLTV and RCLTV/OP_CHECK_MATURITY_VERIFY we expose two more functions). 2) Bitcoin Core's mempool invariant of "all transactions in the mempool could be thrown into one oversized block and, aside from block size, it would be valid" becomes harder to enforce. Currently, during reorgs, coinbase spends need to be checked (specifically, anything spending THE coinbase 100 blocks ago needs to be checked) and locktime transactions need to be checked. With such a new operation, any script which used this new opcode during its execution would need to be re-evaluated during reorgs. I think both of these requirements are reasonable and not particularly cumbersome, and the value of such an operation is quite nice for some protocols (including setting up a contest interval in a sidechain data validation operation). Thoughts? Matt On 10/01/14 13:08, Peter Todd wrote:
BIP:
Title: OP_CHECKLOCKTIMEVERIFY
Author: Peter Todd <pete at petertodd.org>
Status: Draft
Type: Standards Track
Created: 2014-10-01
==Abstract==
This BIP describes a new opcode (OP_CHECKLOCKTIMEVERIFY) for the Bitcoin scripting system that allows a transaction output to be made unspendable until some point in the future.

==Summary==
CHECKLOCKTIMEVERIFY re-defines the existing NOP2 opcode. When executed it compares the top item on the stack to the nLockTime field of the transaction containing the scriptSig. If that top stack item is greater than the transaction's nLockTime the script fails immediately; otherwise script evaluation continues as though a NOP was executed. The nLockTime field in a transaction prevents the transaction from being mined until either a certain block height, or block time, has been reached. By comparing the argument to CHECKLOCKTIMEVERIFY against the nLockTime field, we indirectly verify that the desired block height or block time has been reached; until that block height or block time has been reached the transaction output remains unspendable.

==Motivation==
The nLockTime field in transactions makes it possible to prove that a transaction output can be spent in the future: a valid signature for a transaction with the desired nLockTime can be constructed, proving that it is possible to spend the output with that signature when the nLockTime is reached. An example where this technique is used is in micro-payment channels, where the nLockTime field proves that should the receiver vanish the sender is guaranteed to get all their escrowed funds back when the nLockTime is reached. However the nLockTime field is insufficient if you wish to prove that a transaction output ''can-not'' be spent until some time in the future, as there is no way to prove that the secret keys corresponding to the pubkeys controlling the funds have not been used to create a valid signature.

===Escrow===
If Alice and Bob jointly operate a business they may want to ensure that all funds are kept in 2-of-2 multisig transaction outputs that require the co-operation of both parties to spend. However, they recognise that in exceptional circumstances, such as either party getting "hit by a bus", they need a backup plan to retrieve the funds. So they appoint their lawyer, Lenny, to act as a third party. With a standard 2-of-3 CHECKMULTISIG, at any time Lenny could conspire with either Alice or Bob to steal the funds illegitimately. Equally, Lenny may prefer not to have immediate access to the funds, to discourage bad actors from attempting to get the secret keys from him by force. However, with CHECKLOCKTIMEVERIFY the funds can be stored in scriptPubKeys of the form:
IF
    <now + 3 months> CHECKLOCKTIMEVERIFY DROP
    <Lenny's pubkey> CHECKSIGVERIFY
    1
ELSE
    2
ENDIF
<Alice's pubkey> <Bob's pubkey> 2 CHECKMULTISIG
At any time the funds can be spent with the following scriptSig:
0 <Alice's sig> <Bob's sig> 0
After 3 months have passed Lenny and one of either Alice or Bob can spend the funds with the following scriptSig:
0 <Alice's or Bob's sig> <Lenny's sig> 1
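To make the two paths concrete, here is a toy Python model of the branch logic. Signature checks are stubbed to booleans; this is a sketch of who must sign, and when, in each branch, not an interpreter:

def escrow_spend(take_if_branch, tx_nlocktime, timeout,
                 lenny_signed, alice_signed, bob_signed):
    if take_if_branch:                    # scriptSig selector was 1
        if tx_nlocktime < timeout:        # CHECKLOCKTIMEVERIFY fails early
            return False
        if not lenny_signed:              # <Lenny's pubkey> CHECKSIGVERIFY
            return False
        required = 1                      # 1-of-2 over Alice/Bob
    else:                                 # scriptSig selector was 0
        required = 2                      # 2-of-2 over Alice/Bob
    return alice_signed + bob_signed >= required   # 2 CHECKMULTISIG

assert escrow_spend(False, 0, 100, False, True, True)    # Alice+Bob, any time
assert escrow_spend(True, 150, 100, True, True, False)   # Lenny + one, after timeout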
===Non-interactive time-locked refunds===
There exist a number of protocols where a transaction output is created that requires the co-operation of both parties to spend the output. To ensure the failure of one party does not result in the funds becoming lost, refund transactions are set up in advance using nLockTime. These refund transactions need to be created interactively and, additionally, are currently vulnerable to transaction mutability. CHECKLOCKTIMEVERIFY can be used in these protocols, replacing the interactive setup with a non-interactive setup, and additionally making transaction mutability a non-issue.

====Two-factor wallets====
Services like GreenAddress store bitcoins with 2-of-2 multisig scriptPubKeys such that one keypair is controlled by the user and the other keypair is controlled by the service. To spend funds the user uses locally installed wallet software that generates one of the required signatures, and then uses a 2nd-factor authentication method to authorize the service to create a second, SIGHASH_NONE signature that is locked until some time in the future, which is sent to the user for storage. If the user needs to spend their funds and the service is not available, they wait until the nLockTime expires. The problem is there exist numerous occasions when the user will not have a valid signature for some or all of their transaction outputs. With CHECKLOCKTIMEVERIFY, rather than creating refund signatures on demand, scriptPubKeys of the following form are used instead:
IF
    <service pubkey> CHECKSIGVERIFY
ELSE
    <expiry time> CHECKLOCKTIMEVERIFY DROP
ENDIF
<user pubkey> CHECKSIG
Now the user is always able to spend their funds without the co-operation of the service, by waiting for the expiry time to be reached.

====Micropayment Channels====
Jeremy Spilman style micropayment channels first set up a deposit controlled by 2-of-2 multisig, tx1, and then adjust a second transaction, tx2, that spends the output of tx1 to payor and payee. Prior to publishing tx1, a refund transaction is created, tx3, to ensure that should the payee vanish the payor can get their deposit back. The process by which the refund transaction is created is currently vulnerable to transaction mutability attacks and, additionally, requires the payor to store the refund. Using the same scriptPubKey as in the two-factor wallets example solves both these issues.

===Trustless Payments for Publishing Data===
The PayPub protocol makes it possible to pay for information in a trustless way by first proving that an encrypted file contains the desired data, and secondly crafting scriptPubKeys used for payment such that spending them reveals the encryption keys to the data. However the existing implementation has a significant flaw: the publisher can delay the release of the keys indefinitely. This problem can be solved interactively with the refund transaction technique; with CHECKLOCKTIMEVERIFY the problem can be solved non-interactively using scriptPubKeys of the following form:
IF
    HASH160 <Hash160(encryption key)> EQUALVERIFY
    <publisher pubkey> CHECKSIG
ELSE
    <expiry time> CHECKLOCKTIMEVERIFY DROP
    <buyer pubkey> CHECKSIG
ENDIF
The buyer of the data is now making a secure offer with an expiry time. If the publisher fails to accept the offer before the expiry time is reached, the buyer can cancel the offer by spending the output.

===Proving sacrifice to miners' fees===
Proving the sacrifice of some limited resource is a common technique in a variety of cryptographic protocols. Proving sacrifices of coins to mining fees has been proposed as a ''universal public good'' to which the sacrifice could be directed, rather than simply destroying the coins. However doing so is non-trivial, and even the best existing technique - announce-commit sacrifices - could encourage mining centralization. CHECKLOCKTIMEVERIFY can be used to create outputs that are provably spendable by anyone (thus going to mining fees, assuming miners behave optimally and rationally) but only at a time sufficiently far into the future that large miners can't profitably sell the sacrifices at a discount.

===Replacing the nLockTime field entirely===
As an aside, note how if the SignatureHash() algorithm could optionally cover part of the scriptSig, the signature could require that the scriptSig contain CHECKLOCKTIMEVERIFY opcodes, and additionally require that they be executed. (The CODESEPARATOR opcode came very close to making this possible in v0.1 of Bitcoin.) This per-signature capability could replace the per-transaction nLockTime field entirely, as a valid signature would now be the proof that a transaction output ''can'' be spent.

==Detailed Specification==
Refer to the reference implementation, reproduced below, for the precise semantics and detailed rationale for those semantics.
case OP_NOP2:
{
    // CHECKLOCKTIMEVERIFY
    //
    // (nLockTime -- nLockTime)
    if (!(flags & SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY))
        break; // not enabled; treat as a NOP

    if (stack.size() < 1)
        return false;

    // Note that elsewhere numeric opcodes are limited to
    // operands in the range -2**31+1 to 2**31-1, however it is
    // legal for opcodes to produce results exceeding that
    // range. This limitation is implemented by CScriptNum's
    // default 4-byte limit.
    //
    // If we kept to that limit we'd have a year 2038 problem,
    // even though the nLockTime field in transactions
    // themselves is uint32 which only becomes meaningless
    // after the year 2106.
    //
    // Thus as a special case we tell CScriptNum to accept up
    // to 5-byte bignums, which are good until 2**32-1, the
    // same limit as the nLockTime field itself.
    const CScriptNum nLockTime(stacktop(-1), 5);

    // In the rare event that the argument may be < 0 due to
    // some arithmetic being done first, you can always use
    // 0 MAX CHECKLOCKTIMEVERIFY.
    if (nLockTime < 0)
        return false;

    // There are two times of nLockTime: lock-by-blockheight
    // and lock-by-blocktime, distinguished by whether
    // nLockTime < LOCKTIME_THRESHOLD.
    //
    // We want to compare apples to apples, so fail the script
    // unless the type of nLockTime being tested is the same as
    // the nLockTime in the transaction.
    if (!(
        (txTo.nLockTime <  LOCKTIME_THRESHOLD && nLockTime <  LOCKTIME_THRESHOLD) ||
        (txTo.nLockTime >= LOCKTIME_THRESHOLD && nLockTime >= LOCKTIME_THRESHOLD)
    ))
        return false;

    // Now that we know we're comparing apples-to-apples, the
    // comparison is a simple numeric one.
    if (nLockTime > (int64_t)txTo.nLockTime)
        return false;

    // Finally the nLockTime feature can be disabled and thus
    // CHECKLOCKTIMEVERIFY bypassed if every txin has been
    // finalized by setting nSequence to maxint. The
    // transaction would be allowed into the blockchain, making
    // the opcode ineffective.
    //
    // Testing if this vin is not final is sufficient to
    // prevent this condition. Alternatively we could test all
    // inputs, but testing just this input minimizes the data
    // required to prove correct CHECKLOCKTIMEVERIFY execution.
    if (txTo.vin[nIn].IsFinal())
        return false;

    break;
}
Spoonnet: another experimental hardfork | Johnson Lau | Feb 06 2017
Johnson Lau on Feb 06 2017: Finally got some time over the Chinese New Year holiday to code and write this up. This is not the same as my previous forcenet ( https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013472.html ). It is much simpler. Trying to activate it on testnet will get you banned. Trying to activate it on mainnet before consensus is reached will make you lose money. This proposal includes the following features:
A fixed starting time. Not dependent on miner signalling. However, it requires at least 51% of miners to actually build the new block format in order to get activated.
It has no mechanism to prevent a split. If 49% of miners insist on the original chain, they could keep going. Split prevention is a social problem, not a technical one.
It is compatible with the existing Stratum mining protocol. Only a pool software upgrade is needed.
A new extended and flexible header is located at the witness field of the coinbase transaction
It is backward compatible with existing light wallets
Dedicated space for miners to put anything they want, which bitcoin users could completely ignore. Merge-mining friendly.
Small header space for miners to include non-consensus enforced bitcoin related data, useful for fee estimation etc.
A new transaction weight formula to encourage responsible use of UTXO
A linear growth of actual block size until a certain limit
Sighash O(n²) protection for legacy (non-segwit) outputs
Optional anti-transaction replay
A new optional coinbase tx format that allows additional inputs, including spending of immature previous coinbase outputs
Specification [Rationales]: Activation:
A "hardfork signalling block" is a block with the sign bit of the header nVersion set [clearly invalid for old nodes; easy opt-out for light wallets]
If the median-time-past of the past 11 blocks is smaller than the HardForkTime (exact time to be determined), a hardfork signalling block is invalid.
A child of a hardfork signalling block MUST also be a hardfork signalling block
Initial hardfork signalling is optional, even if the HardForkTime has passed [requires at least 51% of miners to actually build the new block format]
HardForkTime is determined by a broad consensus of the Bitcoin community. This is the only way to prevent a split.
Extended header:
Main header refers to the original 80-byte bitcoin block header
A hardfork signalling block MUST have an additional extended header
The extended header is placed in the witness field of the coinbase transaction [There are 2 major advantages: 1. the coinbase witness is otherwise useless; 2. it significantly simplifies the implementation with its stack structure]
There must be exactly 3 witness items (Header1; Header2; Header3):
- Header1 must be exactly 32 bytes: the original transaction hash Merkle root.
- Header2 is the secondary header. It must be 36-80 bytes. The first 4 bytes must be the little-endian encoded number of transactions (minimum 1). The next 32 bytes must be the witness Merkle root (to be defined later). The rest, if any, has no consensus meaning; however, miners MUST NOT use this space for non-bitcoin purposes [the additional space allows non-consensus enforced data to be included, easily accessible to light wallets].
- Header3 is the miner dedicated space. It must not be larger than 252 bytes. Anything put here has no consensus meaning [space for merge mining; non-full nodes could completely ignore data in this space; 252 is the maximum size allowed for a single-byte CompactSize].
The main header commitment is H(Header1|H(H(Header2)|H(Header3))), where H() = dSHA256() [the hardfork is transparent to light wallets, except one more 32-byte hash is needed to connect a transaction to the root; a sketch of this commitment follows the list below]
To place the extended header, segwit becomes mandatory after the hardfork
A "backdoor" softfork to relax the size limits of Header 2 and Header 3:
A special BIP9 softfork is defined with bit-15. If this softfork is activated, full nodes will not enforce the size limit for Header 2 and Header 3. [To allow header expansion without a hardfork. Avoid miner abuse while providing flexibility. Expansion might be needed for new commitments like fraud proof commitments]
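A minimal Python sketch of the commitment structure described above (the header contents are placeholders; only the hashing layout comes from the spec):

import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def main_header_commitment(header1: bytes, header2: bytes, header3: bytes) -> bytes:
    # H(Header1 | H(H(Header2) | H(Header3))), with H() = dSHA256()
    return dsha256(header1 + dsha256(dsha256(header2) + dsha256(header3)))

h1 = bytes(32)                                   # tx Merkle root (placeholder)
h2 = (1).to_bytes(4, 'little') + bytes(32)       # tx count + witness Merkle root
h3 = b"miner data"                               # dedicated miner space
print(main_header_commitment(h1, h2, h3).hex())
# A light wallet needs only Header1 plus the single extra hash
# dsha256(dsha256(h2) + dsha256(h3)) to connect a transaction to the root.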
Anti-tx-replay:
Hardfork network version bit is 0x02000000. A tx is invalid if the highest nVersion byte is not zero, and the network version bit is not set.
Masked tx version is nVersion with the highest byte masked. If the masked version is 3 or above, the sighash for OP_CHECKSIG and similar opcodes is calculated using BIP143, except 0x02000000 is added to the nHashType (the nHashType in the signature is still a 1-byte value) [ensures a clean split of signatures; optionally fixes the O(n²) problem]
Pre-hardfork policy change: nVersion is determined by the masked tx version for policy purpose. Setting of Pre-hardfork network version bit 0x01000000 is allowed.
Only txs with masked version below 3 are counted. [because they are fixed by the BIP-143 like signature]
Each SigHashSize is defined as 1 tx weight (defined later).
SIGHASH_SCALE_FACTOR is 90 (see the BIP above)
New tx weight definition:
Weight of a transaction is the maximum of the 4 following metrics:
- The total serialised size * 2 * SIGHASH_SCALE_FACTOR (size defined by the witness tx format in BIP144)
- The adjusted size = (transaction weight by BIP141 - (number of inputs - number of non-OP_RETURN outputs) * 41) * SIGHASH_SCALE_FACTOR
- nSigOps * 50 * SIGHASH_SCALE_FACTOR. All SigOps are equal (no witness scaling). For non-segwit txs, the sigops in the output scriptPubKey are not counted, while the sigops in the input scriptPubKey are counted.
- SigHashSize as defined in the last section
Translating to the new metric, the current BIP141 limit is 360,000,000. This is equivalent to 360MB of sighashing, 2MB of serialised size, 4MB of adjusted size, or 80,000 nSigOps. See rationales in this post: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013472.html
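As a sketch, the maximum-of-four-metrics rule in Python (parameter names are illustrative, not from the proposal; the cross-check against the stated equivalences passes):

SIGHASH_SCALE_FACTOR = 90

def spoonnet_weight(serialized_size, bip141_weight, n_inputs,
                    n_non_opreturn_outputs, n_sigops, sighash_bytes):
    adjusted_size = bip141_weight - (n_inputs - n_non_opreturn_outputs) * 41
    return max(serialized_size * 2 * SIGHASH_SCALE_FACTOR,
               adjusted_size * SIGHASH_SCALE_FACTOR,
               n_sigops * 50 * SIGHASH_SCALE_FACTOR,
               sighash_bytes)               # each sighash byte counts as 1 weight

# Each stated equivalence alone saturates the 360,000,000 limit:
assert 2_000_000 * 2 * SIGHASH_SCALE_FACTOR == 360_000_000   # 2MB serialised
assert 4_000_000 * SIGHASH_SCALE_FACTOR == 360_000_000       # 4MB adjusted
assert 80_000 * 50 * SIGHASH_SCALE_FACTOR == 360_000_000     # 80,000 sigops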
Block weight growing by time (numbers for example only; exact numbers to be determined):
Block weight at HardForkTime is (5,000,000 * SIGHASH_SCALE_FACTOR)
By every 16 seconds growth of the median-time-past, the weight is increased by (1 * SIGHASH_SCALE_FACTOR)
The growth stops at (16,000,000 * SIGHASH_SCALE_FACTOR)
The growth does not depend on the actual hardfork time; it is only based on median-time-past [using median-time-past so miners have no incentive to use a fake timestamp]
The limit for serialized size is 2.5 to 8MB in about 8 years. [again, numbers for example only]
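The growth schedule above as a small Python sketch, using the example numbers (exact numbers are still to be determined):

SIGHASH_SCALE_FACTOR = 90

def block_weight_limit(median_time_past, hardfork_time):
    # Start at 5,000,000 weight units, add 1 unit per 16 seconds of
    # median-time-past growth, and cap at 16,000,000 units.
    grown = 5_000_000 + max(0, (median_time_past - hardfork_time) // 16)
    return min(grown, 16_000_000) * SIGHASH_SCALE_FACTOR

# At the hardfork the serialised-size limit works out to 2.5MB
# (weight / (2 * SIGHASH_SCALE_FACTOR)), growing toward 8MB at the cap.
print(block_weight_limit(1_500_000_000, 1_500_000_000))   # 450,000,000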
New coinbase transaction format:
The existing coinbase format is allowed, except that the new extended header is placed in the coinbase witness. No OP_RETURN witness commitment is needed.
A new coinbase format is defined. The tx may have 1 or more inputs. The outpoint of the first input MUST have an n value of 0xffffffff, and use the previous block hash as the outpoint hash [This allows paying to the child of a particular block by signing the block hash]
ScriptSig of the first (coinbase) input is not executed. The size limit is increased from 100 to 252 (same for the old coinbase format)
Additional inputs MUST provide a valid scriptSig and/or witness for spending
Additional inputs may come from immature previous coinbase outputs [this allows previous blocks paying subsequent blocks to encourage confirmations]
Witness merkle root:
If the coinbase is in the old format, the witness merkle root is the same as in BIP141, with the witness hash of the coinbase tx set to 0 (without the 32-byte witness reserved value)
If the coinbase is in new format, the witness hash of the coinbase tx is calculated by first removing the extended header
The witness merkle root is put in the extended header 2, not as an OP_RETURN output in coinbase tx.
The witness merkle root becomes mandatory. (It was optional in BIP141)
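For illustration, here is a standard Bitcoin-style Merkle root in Python, with the coinbase slot handled as described above (a sketch under the stated conventions; the hashes are placeholder values):

import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes):
    # Pairwise double-SHA256; an odd level duplicates its last entry.
    hashes = list(hashes)
    while len(hashes) > 1:
        if len(hashes) % 2:
            hashes.append(hashes[-1])
        hashes = [dsha256(hashes[i] + hashes[i + 1])
                  for i in range(0, len(hashes), 2)]
    return hashes[0]

# Old coinbase format: the coinbase witness-hash slot is zeroed, as in BIP141.
coinbase_slot = bytes(32)
other_witness_hashes = [dsha256(b"tx1"), dsha256(b"tx2")]
print(merkle_root([coinbase_slot] + other_witness_hashes).hex())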
Other consensus changes:
BIP9 will ignore the sign bit. [Setting the sign bit now is invalid so this has no real consensus impact]
An experimental implementation of the above spec can be found at https://github.com/jl2012/bitcoin/tree/spoonnet1 . Unlike my previous effort, the "forcenet", the "spoonnet" is a full hardfork that will get you banned on the existing network. I haven't had the time to test the code yet, and it has not been independently reviewed, but it passes all existing tests in Bitcoin Core. No one should use this in production, but I think it works fine on testnet like a normal bitcoind (as long as it is not activated). Things not implemented yet:
Automated testing
Post-hardfork support for old light wallets
Wallet support, especially anti-tx-replay
New p2p message to transmit secondary header (lower priority)
Full mining and mempool support (not my priority)
Potential second stage change: Relative to the actual activation time, there could be a second stage with more drastic changes to fix one or both of the following problems:
SHA256 shortcuts like ASICBoost. All fixes to ASICBoost are not very elegant. But the question is: is it acceptable to have a bitcoin-specific patent in the consensus protocol? Still, I believe the best way to solve this problem is for the patent holder(s) to kindly som...[message truncated here by reddit bot]...
For discussion: limit transaction size to mitigate CVE-2013-2292 | Gavin Andresen | Jul 20 2015
Gavin Andresen on Jul 20 2015: Draft BIP to prevent a potential CPU exhaustion attack if a significantly larger maximum blocksize is adopted:

Title: Limit maximum transaction size
Author: Gavin Andresen <gavinandresen at gmail.com>
Status: Draft
Type: Standards Track
Created: 2015-07-17

==Abstract==
Mitigate a potential CPU exhaustion denial-of-service attack by limiting the maximum size of a transaction included in a block.

==Motivation==
Sergio Demian Lerner reported that a maliciously constructed block could take several minutes to validate, due to the way signature hashes are computed for OP_CHECKSIG/OP_CHECKMULTISIG ([[https://bitcointalk.org/?topic=140078|CVE-2013-2292]]). Each signature validation can require hashing most of the transaction's bytes, resulting in O(n*m) scaling (where n is the number of signature operations and m is the number of bytes in the transaction, excluding signatures). If there are no limits on n or m the result is O(n²) scaling (a back-of-the-envelope sketch follows the alternatives list below). This potential attack was mitigated by changing the default relay and mining policies so transactions larger than 100,000 bytes were not relayed across the network or included in blocks. However, a miner not following the default policy could choose to include a transaction that filled the entire one-megabyte block and took a long time to validate.

==Specification==
After deployment, the maximum serialized size of a transaction allowed in a block shall be 100,000 bytes.

==Compatibility==
This change should be compatible with existing transaction-creation software, because transactions larger than 100,000 bytes have been considered "non-standard" (they are not relayed or mined by default) for years. Software that assembles transactions into blocks and that validates blocks must be updated to reject oversize transactions.

==Deployment==
This change will be deployed with BIP 100 or BIP 101.

==Discussion==
Alternatives to this BIP:
- A new consensus rule that limits the number of signature operations in a single transaction instead of limiting size. This might be more compatible with future opcodes that require larger-than-100,000-byte transactions, although any such future opcodes would likely require changes to the Script validation rules anyway (e.g. the 520-byte limit on data items).
- Fix the SIG opcodes so they don't re-hash variations of the transaction.
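The back-of-the-envelope cost mentioned above, in Python: under the legacy sighash algorithm each signature operation re-hashes roughly the whole transaction, so the bytes hashed scale as n*m (the numbers below are purely illustrative):

def legacy_sighash_bytes(tx_bytes: int, n_sigops: int) -> int:
    # Each CHECKSIG hashes on the order of the full transaction size.
    return n_sigops * tx_bytes

# A 1MB transaction packed with ~20,000 signature operations implies
# roughly 20GB of hashing to validate:
print(legacy_sighash_bytes(1_000_000, 20_000))   # 20,000,000,000 bytes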
Bitcoin Core 0.10.0 released | Wladimir | Feb 16 2015
Wladimir on Feb 16 2015: Bitcoin Core version 0.10.0 is now available from: https://bitcoin.org/bin/0.10.0/

This is a new major version release, bringing both new features and bug fixes. Please report bugs using the issue tracker at github: https://github.com/bitcoin/bitcoin/issues

The whole distribution is also available as torrent: https://bitcoin.org/bin/0.10.0/bitcoin-0.10.0.torrent magnet:?xt=urn:btih:170c61fe09dafecfbb97cb4dccd32173383f4e68&dn=0.10.0&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.ccc.de%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fopen.demonii.com%3A1337&ws=https%3A%2F%2Fbitcoin.org%2Fbin%2F

Upgrading and downgrading

How to Upgrade: If you are running an older version, shut it down. Wait until it has completely shut down (which might take a few minutes for older versions), then run the installer (on Windows) or just copy over /Applications/Bitcoin-Qt (on Mac) or bitcoind/bitcoin-qt (on Linux).

Downgrading warning: Because release 0.10.0 makes use of headers-first synchronization and parallel block download (see further), the block files and databases are not backwards-compatible with older versions of Bitcoin Core or other software:
- Blocks will be stored on disk out of order (in the order they are received, really), which makes it incompatible with some tools or other programs. Reindexing using earlier versions will also not work anymore as a result of this.
- The block index database will now hold headers for which no block is stored on disk, which earlier versions won't support.

If you want to be able to downgrade smoothly, make a backup of your entire data directory. Without this your node will need to start syncing (or importing from bootstrap.dat) anew afterwards. It is possible that the data from a completely synchronised 0.10 node may be usable in older versions as-is, but this is not supported and may break as soon as the older version attempts to reindex. This does not affect wallet forward or backward compatibility.

Notable changes

Faster synchronization: Bitcoin Core now uses 'headers-first synchronization'. This means that we first ask peers for block headers (a total of 27 megabytes, as of December 2014) and validate those. In a second stage, when the headers have been discovered, we download the blocks. However, as we already know about the whole chain in advance, the blocks can be downloaded in parallel from all available peers. In practice, this means a much faster and more robust synchronization. On recent hardware with a decent network link, it can be as little as 3 hours for an initial full synchronization. You may notice slower progress in the very first few minutes, when headers are still being fetched and verified, but it should gain speed afterwards. A few RPCs were added/updated as a result of this:
- getblockchaininfo now returns the number of validated headers in addition to the number of validated blocks.
- getpeerinfo lists both the number of blocks and headers we know we have in common with each peer. While synchronizing, the heights of the blocks that we have requested from peers (but haven't received yet) are also listed as 'inflight'.
- A new RPC getchaintips lists all known branches of the block chain, including those we only have headers for.

Transaction fee changes: This release automatically estimates how high a transaction fee (or how high a priority) transactions require to be confirmed quickly. The default settings will create transactions that confirm quickly; see the new 'txconfirmtarget' setting to control the tradeoff between fees and confirmation times. Fees are added by default unless the 'sendfreetransactions' setting is enabled. Prior releases used hard-coded fees (and priorities), and would sometimes create transactions that took a very long time to confirm. Statistics used to estimate fees and priorities are saved in the data directory in the fee_estimates.dat file just before program shutdown, and are read in at startup. New command line options for transaction fee changes:
- -txconfirmtarget=n : create transactions that have enough fees (or priority) so they are likely to begin confirmation within n blocks (default: 1). This setting is overridden by the -paytxfee option.
- -sendfreetransactions : Send transactions as zero-fee transactions if possible (default: 0)

New RPC commands for fee estimation:

- estimatefee nblocks : Returns the approximate fee per 1,000 bytes needed for a transaction to begin confirmation within nblocks. Returns -1 if not enough transactions have been observed to compute a good estimate.
- estimatepriority nblocks : Returns the approximate priority needed for a zero-fee transaction to begin confirmation within nblocks. Returns -1 if not enough free transactions have been observed to compute a good estimate.

RPC access control changes: Subnet matching for the purpose of access control is now done by matching the binary network address, instead of with string wildcard matching. For the user this means that -rpcallowip takes a subnet specification, which can be
a single IP address (e.g. 1.2.3.4 or fe80::0012:3456:789a:bcde)
a network/CIDR (e.g. 1.2.3.0/24 or fe80::0000/64)
a network/netmask (e.g. 1.2.3.4/255.255.255.0 or fe80::0012:3456:789a:bcde/ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff)
An arbitrary number of -rpcallowip arguments can be given. An incoming connection will be accepted if its origin address matches one of them. For example:

| 0.9.x and before           | 0.10.x                              |
|----------------------------|-------------------------------------|
| -rpcallowip=192.168.1.1    | -rpcallowip=192.168.1.1 (unchanged) |
| -rpcallowip=192.168.1.*    | -rpcallowip=192.168.1.0/24          |
| -rpcallowip=192.168.*      | -rpcallowip=192.168.0.0/16          |
| -rpcallowip=* (dangerous!) | -rpcallowip=::/0 (still dangerous!) |

Using wildcards will result in the rule being rejected with the following error in debug.log:
Error: Invalid -rpcallowip subnet specification: *. Valid are a single IP (e.g. 1.2.3.4), a network/netmask (e.g. 1.2.3.4/255.255.255.0) or a network/CIDR (e.g. 1.2.3.4/24).
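The new binary subnet matching behaves like Python's ipaddress module; here is a small illustrative model (not Bitcoin Core's actual code):

import ipaddress

allowed = [ipaddress.ip_network('192.168.1.0/24'),
           ipaddress.ip_network('1.2.3.4/255.255.255.0', strict=False)]

def rpc_allowed(origin: str) -> bool:
    # Match the binary network address rather than a string wildcard.
    addr = ipaddress.ip_address(origin)
    return any(addr in net for net in allowed)

print(rpc_allowed('192.168.1.77'))   # True
print(rpc_allowed('10.0.0.1'))       # False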
REST interface: A new HTTP API is exposed when running with the -rest flag, which allows unauthenticated access to public node data. It is served on the same port as RPC, but does not need a password, and uses plain HTTP instead of JSON-RPC. Assuming a local RPC server running on port 8332, it is possible to request several resource types (see doc/REST-interface.md for the full list):
In every case, EXT can be bin (for raw binary data), hex (for hex-encoded binary) or json. For more details, see the doc/REST-interface.md document in the repository.

RPC Server "Warm-Up" Mode: The RPC server is started earlier now, before most of the expensive initialisations like loading the block index. It is available now almost immediately after starting the process. However, until all initialisations are done, it always returns an immediate error with code -28 to all calls. This new behaviour can be useful for clients to know that a server is already started and will be available soon (for instance, so that they do not have to start it themselves).

Improved signing security: For 0.10 the security of signing against unusual attacks has been improved by making the signatures constant time and deterministic. This change is a result of switching signing to use libsecp256k1 instead of OpenSSL. Libsecp256k1 is a cryptographic library optimized for the curve Bitcoin uses, which was created by Bitcoin Core developer Pieter Wuille. There exist attacks[1] against most ECC implementations where an attacker on shared virtual machine hardware could extract a private key if they could cause a target to sign using the same key hundreds of times. While using shared hosts and reusing keys are inadvisable for other reasons, it's a better practice to avoid the exposure. OpenSSL has code in their source repository for derandomization and reduction in timing leaks that we've eagerly wanted to use for a long time, but this functionality has still not made its way into a released version of OpenSSL. Libsecp256k1 achieves significantly stronger protection: as far as we're aware this is the only deployed implementation of constant time signing for the curve Bitcoin uses, and we have reason to believe that libsecp256k1 is better tested and more thoroughly reviewed than the implementation in OpenSSL. [1] https://eprint.iacr.org/2014/161.pdf

Watch-only wallet support: The wallet can now track transactions to and from wallets for which you know all addresses (or scripts), even without the private keys. This can be used to track payments without needing the private keys online on a possibly vulnerable system. In addition, it can help for (manual) construction of multisig transactions where you are only one of the signers. One new RPC, importaddress, is added which functions similarly to importprivkey, but instead takes an address or script (in hexadecimal) as argument. After using it, outputs credited to this address or script are considered to be received, and transactions consuming these outputs will be considered to be sent. The following RPCs have optional support for watch-only: getbalance, listreceivedbyaddress, listreceivedbyaccount, listtransactions, listaccounts, listsinceblock, gettransaction. See the RPC documentation for those methods for more information. Compared to using getrawtransaction, this mechanism does not require -txindex, scales better, integrates better with the wallet, and is compatible with future block chain pruning functionality. It does mean that all relevant addresses need to be added to the wallet before the payment, though.

Consensus library: Starting from 0.10.0, the Bitcoin Core distribution includes a consensus library. The purpose of this library is to make the verification functionality that is critical to Bitcoin's consensus available to other applications, e.g.
to language bindings such as [python-bitcoinlib](https://pypi.python.org/pypi/python-bitcoinlib) or alternative node implementations. This library is called libbitcoinconsensus.so (or .dll for Windows). Its interface is defined in the C header [bitcoinconsensus.h](https://github.com/bitcoin/bitcoin/blob/0.10/src/script/bitcoinconsensus.h). In its initial version the API includes two functions:
- bitcoinconsensus_verify_script verifies a script. It returns whether the indicated input of the provided serialized transaction correctly spends the passed scriptPubKey under the additional constraints indicated by flags.
- bitcoinconsensus_version returns the API version, currently at an experimental 0.
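As a hedged sketch, the library can be driven from Python via ctypes. The two function names come from the notes above; the exact C signatures are assumed from bitcoinconsensus.h and should be checked against that header before use:

import ctypes

lib = ctypes.CDLL("libbitcoinconsensus.so")

lib.bitcoinconsensus_version.restype = ctypes.c_uint
print(lib.bitcoinconsensus_version())        # expected: 0 (experimental)

# Assumed signature: (scriptPubKey, scriptPubKeyLen, txTo, txToLen, nIn, flags, &err)
lib.bitcoinconsensus_verify_script.restype = ctypes.c_int
lib.bitcoinconsensus_verify_script.argtypes = [
    ctypes.c_char_p, ctypes.c_uint,          # scriptPubKey and its length
    ctypes.c_char_p, ctypes.c_uint,          # serialized transaction and its length
    ctypes.c_uint, ctypes.c_uint,            # input index, verification flags
    ctypes.POINTER(ctypes.c_int),            # out-parameter for the error code
]

def verify_script(script_pubkey: bytes, tx: bytes, n_in: int, flags: int) -> bool:
    err = ctypes.c_int(0)
    ok = lib.bitcoinconsensus_verify_script(script_pubkey, len(script_pubkey),
                                            tx, len(tx), n_in, flags,
                                            ctypes.byref(err))
    return ok == 1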
The functionality is planned to be extended to e.g. UTXO management in upcoming releases, but the interface for existing methods should remain stable.

Standard script rules relaxed for P2SH addresses: The IsStandard() rules have been almost completely removed for P2SH redemption scripts, allowing applications to make use of any valid script type, such as "n-of-m OR y", hash-locked oracle addresses, etc. While the Bitcoin protocol has always supported these types of script, actually using them on mainnet has previously been inconvenient, as standard Bitcoin Core nodes wouldn't relay them to miners, nor would most miners include them in blocks they mined.

bitcoin-tx: It has been observed that many of the RPC functions offered by bitcoind are "pure functions", and operate independently of the bitcoind wallet. This includes many of the RPC "raw transaction" API functions, such as createrawtransaction. bitcoin-tx is a newly introduced command line utility designed to enable easy manipulation of bitcoin transactions. A summary of its operation may be obtained via "bitcoin-tx --help". Transactions may be created or signed in a manner similar to the RPC raw tx API. Transactions may be updated, deleting inputs or outputs, or appending new inputs and outputs. Custom scripts may be easily composed using a simple text notation, borrowed from the bitcoin test suite. This tool may be used for experimenting with new transaction types, signing multi-party transactions, and many other uses. Long term, the goal is to deprecate and remove "pure function" RPC API calls, as those do not require a server round-trip to execute. Other utilities, "bitcoin-key" and "bitcoin-script", have been proposed, making key and script operations easily accessible via the command line.

Mining and relay policy enhancements: Bitcoin Core's block templates are now for version 3 blocks only, and any mining software relying on its getblocktemplate must be updated in parallel to use libblkmaker either version 0.4.2 or any version from 0.5.1 onward. If you are solo mining, this will affect you the moment you upgrade Bitcoin Core, which must be done prior to BIP66 achieving its 951/1001 status. If you are mining with the stratum mining protocol: this does not affect you. If you are mining with the getblocktemplate protocol to a pool: this will affect you at the pool operator's discretion, which must be no later than BIP66 achieving its 951/1001 status. The prioritisetransaction RPC method has been added to enable miners to manipulate the priority of transactions on an individual basis. Bitcoin Core now supports BIP 22 long polling, so mining software can be notified immediately of new templates rather than having to poll periodically. Support for BIP 23 block proposals is now available in Bitcoin Core's getblocktemplate method. This enables miners to check the basic validity of their next block before expending work on it, reducing risks of accidental hardforks or mining invalid blocks. Two new options to control mining policy:
- -datacarrier=0/1 : Relay and mine "data carrier" (OP_RETURN) transactions if this is 1.
- -datacarriersize=n : Maximum size, in bytes, we consider acceptable for "data carrier" outputs.

The relay policy has changed to more properly implement the desired behavior of not relaying free (or very low fee) transactions unless they have a priority above the AllowFreeThreshold(), in which case they are relayed subject to the rate limiter.

BIP 66, strict DER encoding for signatures: Bitcoin Core 0.10 implements BIP 66, which introduces block version 3 and a new consensus rule which prohibits non-DER signatures. Such transactions have been non-standard since Bitcoin v0.8.0 (released in February 2013), but were technically still permitted inside blocks. This change breaks the dependency on OpenSSL's signature parsing, and is required if implementations want to remove all of OpenSSL from the consensus code. The same miner-voting mechanism as in BIP 34 is used: when 751 out of a sequence of 1001 blocks have version number 3 or higher, the new consensus rule becomes active for those blocks. When 951 out of a sequence of 1001 blocks have version number 3 or higher, it becomes mandatory for all blocks. Backward compatibility with current mining software is NOT provided, thus miners should read the first paragraph of "Mining and relay policy enhancements" above.

0.10.0 Change log

Detailed release notes follow. This overview includes changes that affect external behavior, not code moves, refactors or string updates.

RPC:
f923c07 Support IPv6 lookup in bitcoin-cli even when IPv6 only bound on localhost
b641c9c Fix addnode "onetry": Connect with OpenNetworkConnection
f6984e8 Add "chain" to getmininginfo, improve help in getblockchaininfo
99ddc6c Add nLocalServices info to RPC getinfo
cf0c47b Remove getwork() RPC call
2a72d45 prioritisetransaction
e44fea5 Add an option -datacarrier to allow users to disable relaying/mining data carrier transactions
2ec5a3d Prevent easy RPC memory exhaustion attack
d4640d7 Added argument to getbalance to include watchonly addresses and fixed errors in balance calculation
83f3543 Added argument to listaccounts to include watchonly addresses
952877e Showing 'involvesWatchonly' property for transactions returned by 'listtransactions' and 'listsinceblock'. It is only appended when the transaction involves a watchonly address
d7d5d23 Added argument to listtransactions and listsinceblock to include watchonly addresses
f87ba3d added includeWatchonly argument to 'gettransaction' because it affects balance calculation
0fa2f88 added includedWatchonly argument to listreceivedbyaddress/...account
6c37f7f getrawchangeaddress: fail when keypool exhausted and wallet locked
ff6a7af getblocktemplate: longpolling support
c4a321f Add peerid to getpeerinfo to allow correlation with the logs
1b4568c Add vout to ListTransactions output
b33bd7a Implement "getchaintips" RPC command to monitor blockchain forks
733177e Remove size limit in RPC client, keep it in server