So back in 2014 I put 60 bucks into a bitcoin mining service. Lost the seed but I have all the txIDs on Blockchains website.
So when I was a kid I put 60 bucks into the bitcoin mining service hashflare.io, then deposited the proceeds into Dark Wallet. A few years later, when bitcoin started taking off, I tried to recover the wallet, but I had lost the seed. While going through hashflare, though, I found all the TxIDs from my hashflare withdrawals. I followed those onto Blockchain's website, and now I can see the amount I have. Is there any way to transfer it off of Blockchain and into my current wallet?
https://github.com/gridcoin-community/Gridcoin-Research/releases/tag/184.108.40.206

Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper: 240 pull requests merged. Essentially a complete rewrite that was started with the scraper (the "neural net" rewrite) in "Denise" has now been completed. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.

Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which include beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.

We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for the v11 code (the new protocol, which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the amount of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.
Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now. Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two week (14 day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals. Once a beacon has been validated and is a v11 protocol beacon, the normal 180 day expiration rules apply. Note, however, that the 180 day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you do not lose any earned research rewards if you do not stake a block within 180 days and keep your beacon up-to-date. The transition height is also when the team requirement will be relaxed for the network.
Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.
The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use. There are so many goodies here it is hard to summarize them all. I would like to thank all of the contributors to this release, but especially thank @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with testing and repeated builds for all architectures. The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!
Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes instead of using a two- or three-point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For the long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.
Network magnitude unit pinned to a static value of 0.25
Max research reward allowed per block raised to 16384 GRC (from 12750 GRC)
New CPIDs begin accruing research rewards from the first superblock that contains the CPID instead of from the time of the beacon advertisement
500 GRC research reward limit for a CPID's first stake
6-month expiration for unclaimed rewards
10-block spacing requirement between research reward claims
Rolling 5-day payment-per-day limit
Legacy tolerances for floating-point error and time drift
The need to include a valid copy of a CPID's magnitude in a claim
10-block emission adjustment interval for the magnitude unit
One-time beacon activation requires that participants temporarily change their usernames to a verification code at one whitelisted BOINC project
Verification codes of pending beacons expire after 3 days
Self-service beacon removal
Burn fee for beacon advertisement increased from 0.00001 GRC to 0.5 GRC
Rain addresses derived from beacon keys instead of a default wallet address
Beacon expiration determined as of the current block instead of the previous block
The ability for developers to remove beacons
The ability to sign research reward claims with non-current but unexpired beacons
As a reminder:
Beacons expire after 6 months pass (180 days)
Beacons can be renewed after 5 months pass (150 days)
Renewed beacons must be signed with the same key as the original beacon
Magnitudes less than 1 include two fractional places
Magnitudes greater than or equal to 1 but less than 10 include one fractional place
A valid superblock must match a scraper convergence
Superblock popularity election mechanics
Yes/no/abstain and single-choice response types (no user-facing support yet)
To create a poll, a maximum of 250 UTXOs for a single address must add up to 100000 GRC. These are selected from the largest downwards.
Burn fee for creating polls scaled by the number of UTXOs claimed
50 GRC for a poll contract
0.001 GRC per claimed UTXO
Burn fee for casting votes scaled by the number of UTXOs claimed
0.01 GRC for a vote contract
0.01 GRC to claim magnitude
0.01 GRC per claimed address
0.001 GRC per claimed UTXO
Maximum length of a poll title: 80 characters
Maximum length of a poll question: 100 characters
Maximum length of a poll discussion website URL: 100 characters
Maximum number of poll choices: 20
Maximum length of a poll choice label: 100 characters
Magnitude, CPID count, and participant count poll weight types
The ability for developers to remove polls and votes
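For concreteness, the poll and vote burn fee schedules listed above work out as follows. This is an illustrative helper based only on the numbers in these notes, not Gridcoin's actual code:

```python
# Illustrative burn-fee calculators for v11 polls and votes. The constants
# come from the release notes above; the function names are made up here.

def poll_fee(num_utxos: int) -> float:
    """50 GRC for the poll contract plus 0.001 GRC per claimed UTXO."""
    return 50 + 0.001 * num_utxos

def vote_fee(num_addresses: int, num_utxos: int, claims_magnitude: bool) -> float:
    """0.01 GRC vote contract, 0.01 per claimed address,
    0.001 per claimed UTXO, 0.01 extra to claim magnitude."""
    fee = 0.01 + 0.01 * num_addresses + 0.001 * num_utxos
    if claims_magnitude:
        fee += 0.01
    return fee

print(round(poll_fee(250), 3))        # at the 250-UTXO cap: 50.25
print(round(vote_fee(2, 10, True), 3))  # 0.05
```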
[220.127.116.11] 2020-09-03, mandatory, "Fern"
Backport newer uint256 types from Bitcoin #1570 (@cyrossignol)
Implement project level rain for rainbymagnitude #1580 (@jamescowens)
Upgrade utilities (Update checker and snapshot downloader application) #1576 (@iFoggz)
Provide fees collected in the block by the miner #1601 (@iFoggz)
Add support for generating legacy superblocks from scraper stats #1603 (@cyrossignol)
Port of the Bitcoin Logger to Gridcoin #1600 (@jamescowens)
Implement zapwallettxes #1605 (@jamescowens)
Implements a global event filter to suppress help question mark #1609 (@jamescowens)
Add next target difficulty to RPC output #1615 (@cyrossignol)
Add caching for block hashes to CBlock #1624 (@cyrossignol)
Make toolbars and tray icon red for testnet #1637 (@jamescowens)
Add an rpc call convergencereport #1643 (@jamescowens)
Implement newline filter on config file read in #1645 (@jamescowens)
Implement beacon status icon/button #1646 (@jamescowens)
Add gridcointestnet.png #1649 (@caraka)
Add precision to support magnitudes less than 1 #1651 (@cyrossignol)
Replace research accrual calculations with superblock snapshots #1657 (@cyrossignol)
Publish example gridcoinresearch.conf as a md document to the doc directory #1662 (@jamescowens)
Add options checkbox to disable transaction notifications #1666 (@jamescowens)
Add support for self-service beacon deletion #1695 (@cyrossignol)
Add support for type-specific contract fee amounts #1698 (@cyrossignol)
Add verifiedbeaconreport and pendingbeaconreport #1696 (@jamescowens)
Add preliminary testing option for block v11 height on testnet #1706 (@cyrossignol)
Add verified beacons manifest part to superblock validator #1711 (@cyrossignol)
Implement beacon, vote, and superblock display categories/icons in UI transaction model #1717 (@jamescowens)
Possibly scammed, need any technical advice please! :)
Hi bitcoin! Situation: I am a professional poker player and ventured deep into the abyss of shady/dodgy sites. Why I did so is still a mystery to me. Long story short, I finally made two withdrawals totaling approximately $3,500 to $4,000 via BTC. Next day, I load up my Ledger and see I have my balance. Awesome! Oh wait... it's not spendable. Surely it just hasn't confirmed yet, right? No big deal. Then I look at the transactions:

https://blockstream.info/tx/35c1ff29f22c251b67829ce6046a7441aa81dd67d1b6b3fffb3c518fa7a19b2b and
https://blockstream.info/tx/e65394e7a7c8fce0eeabef3709368ad032bee7a531fed6ac002823c4ad697970

Previous withdrawals were sent to me with a more normal fee structure. These were sent with what looks like a near-zero fee. With my limited technical understanding of BTC, this means the transaction will either get stuck for a VERY long time, or it will never confirm and eventually be returned to him. This person has blocked me on socials and has said on the Discord server for the site that "the site is better off without him" (basically, as a good professional player, he didn't want me beating his small community of players). I'm ok with this as long as I don't get scammed. Is there anything I can do at this point?

Edit #1: Thanks a ton to u/jcoinner for the extensive help with spending the unspent coins via CPFP (child pays for parent). The transaction appears to be confirmed! Using CPFP has successfully spent the unconfirmed coins back to a different wallet of mine. The txid is 6d65c98ea01bad8d98045794729b7d1b93936a11faad0e3bd126e9223d2ee297, and it appears to show a confirm, and my coins are spendable. I believe this person's intention was "send a transaction that's very likely to fail; if it does, I'll scam, and if it doesn't... oh well." Thank you so much reddit!
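The CPFP trick that rescued these coins works because miners evaluate a parent and its child as a fee package: a generous child fee can "pull in" a stuck near-zero-fee parent. A quick back-of-the-envelope calculator (sizes in vbytes, fees in satoshis; the numbers below are made up for illustration):

```python
# CPFP math: the package feerate is (parent_fee + child_fee) divided by
# (parent_vsize + child_vsize). This computes the child fee needed to lift
# the package to a target feerate. Not wallet code, just arithmetic.

def cpfp_child_fee(parent_fee, parent_vsize, child_vsize, target_feerate):
    """Fee the child must pay so the package reaches target_feerate (sat/vB)."""
    package_vsize = parent_vsize + child_vsize
    needed = target_feerate * package_vsize - parent_fee
    return max(needed, 0)  # parent may already pay enough on its own

# A 200 vB parent that paid only 200 sat (1 sat/vB), rescued by a 150 vB
# child, aiming for 20 sat/vB across the whole package:
print(cpfp_child_fee(parent_fee=200, parent_vsize=200,
                     child_vsize=150, target_feerate=20))  # 6800
```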
Since Bitcoin Core does not support mining from version 0.13.0, I think there must be some mining strategy, but I cannot find it in Bitcoin Core's source code. I want to ask some details about mining:

When packaging a new block, the transaction pool generally holds about 10,000 to 20,000 transactions, far more than a block can accommodate (up to roughly 3,000). How does the mining pool select transactions from the pool? Is it a knapsack algorithm, a greedy algorithm, or some other more complicated algorithm?

After the transactions are chosen, are they sorted by some rule (maybe by txid)?

If all the nonces, timestamps, and coinbase variations in the current block have been tried and a new block still cannot be mined, what does the mining pool do? Will it adjust the order of the transactions? If so, are there any adjustment rules?

It would be great if someone with relevant technical expertise could explain. Thanks again!
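To the selection question: Bitcoin Core's block assembler uses a greedy strategy, sorting candidates by ancestor-package feerate rather than solving a knapsack problem. A simplified sketch (it ignores parent/child dependencies and sigop limits, and the names are illustrative, not Core's actual API):

```python
# Simplified greedy block template construction. Core's real BlockAssembler
# (miner.cpp) greedily picks the highest *ancestor* feerate package; this toy
# version treats every transaction as independent.

from dataclasses import dataclass

MAX_BLOCK_WEIGHT = 4_000_000  # consensus weight limit (BIP141)

@dataclass
class MempoolTx:
    txid: str
    fee: int      # satoshis
    weight: int   # weight units

def assemble_block(mempool, reserved_weight=4000):
    """Pick transactions greedily by feerate until the block is full."""
    budget = MAX_BLOCK_WEIGHT - reserved_weight  # leave room for the coinbase
    chosen = []
    for tx in sorted(mempool, key=lambda t: t.fee / t.weight, reverse=True):
        if tx.weight <= budget:
            chosen.append(tx)
            budget -= tx.weight
    return chosen

mempool = [
    MempoolTx("a", fee=50_000, weight=800),    # 62.5 sat/WU
    MempoolTx("b", fee=10_000, weight=400),    # 25 sat/WU
    MempoolTx("c", fee=1_000, weight=4_000),   # 0.25 sat/WU
]
print([tx.txid for tx in assemble_block(mempool)])  # ['a', 'b', 'c']
```

As for ordering: within a block there is no txid sorting requirement in Bitcoin consensus; the only rule is that a transaction must appear after every transaction it spends from. And pools never "run out" of block candidates in practice, because changing the coinbase's extranonce (or simply updating the timestamp or transaction set) yields a fresh merkle root and a fresh search space.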
Hello Again, I'm researching solo mining, but I need help in finding information on how the Coinbase Transaction is Generated: It should be explained within the 'getblocktemplate' page on the Bitcoin Wiki (https://en.bitcoin.it), but this only explains how to build a coinbase transaction with the help of "coinbasetxn"; and it's my understanding that "coinbasetxn" is only supported within Mining Pools, not the Bitcoin Wallet - remember, I'm researching solo mining. What's also making things complicated is "Segregated Witness" (Segwit): With Segwit, all transactions now have a 'txid' and a 'hash'; sometimes, the 'txid' and 'hash' will match, but most of the time (including in some Coinbase Transactions) they don't. And I can't find anything online on how to generate a Coinbase Transaction while accounting for Segwit. Can anyone find an example algorithm that can build a Coinbase Transaction in a solo mining environment while also accounting for Segwit? You can share it in any code you like, but I'd prefer Python.
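On the segwit part of the question: BIP141 requires that a segwit block's coinbase carry an extra OP_RETURN output committing to the merkle root of all wtxids, where the coinbase's own wtxid is counted as 32 zero bytes and a 32-byte "witness reserved value" rides in the coinbase's witness field. A minimal Python sketch of just that commitment calculation (toy helper names; not a complete coinbase builder):

```python
# Witness commitment per BIP141. The commitment output's scriptPubKey is
# OP_RETURN followed by a 36-byte push: the 4-byte header 0xaa21a9ed and
# double-SHA256(witness merkle root || witness reserved value).

import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes):
    """Bitcoin-style merkle root over raw 32-byte hashes."""
    if not hashes:
        raise ValueError("need at least one hash")
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # odd count: duplicate the last hash
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def witness_commitment(wtxids, reserved=b"\x00" * 32):
    """wtxids: raw wtxids of all non-coinbase txs, in block order."""
    root = merkle_root([b"\x00" * 32] + list(wtxids))  # coinbase wtxid = 0
    return sha256d(root + reserved)

def commitment_script(commitment: bytes) -> bytes:
    # OP_RETURN (0x6a), push 36 bytes (0x24), header 0xaa21a9ed, commitment
    return b"\x6a\x24\xaa\x21\xa9\xed" + commitment
```

The block header's merkle root, by contrast, is still built from plain txids, which is why `getblocktemplate` can report differing `txid` and `hash` for the same transaction.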
Need help recovering my Bitcoin change wallet after transaction, only have pre-transaction wallet.dat
I did some mining back in 2018 that got me 0.3949 BTC. I backed up my wallet in 2018 and have made only one transaction since then, on 2 Feb 2019, to Multicards Bitcoin Paygate for a domain purchase of 0.0305 BTC. The transaction was done using Bitcoin Core on one of my Windows PCs. That transaction was successful, I got the domain, and I'm pretty sure I saw the remaining balance of 0.3644 in my Bitcoin Core app. Since 2 Feb 2019, I lost track of that PC's hard drive, and I do not recall backing up my wallet after my 2 Feb 2019 transaction. Transaction details here: https://www.blockonomics.co/api/tx?txid=ba89799ef6048bcd6d6c20c56772e1d75353a9d47c3bca9830f29433b8acb95e&addr=3Fpr6RfSuH99k7rRSXmgAwKv55zZQY8RnG

I recently reinstalled Bitcoin Core and restored my 2018 wallet.dat backup, but my balance is 0.000 BTC. It shows the 2 Feb 2019 transaction to two addresses. I'm assuming one was for the recipient (Multicards) and the other was a change address, the larger 0.3644 amount. I'm assuming it is a change address because no other activity has occurred in that wallet since 2 Feb 2019. I need help finding a way to access that change address. I've searched all the wallet.dat files I could find, but none of them have the change address. I've tried Bitcoin Core's debug console listaccounts and nothing shows up. I've tried dumpprivkey for my 2018 wallet, but I'm not sure what to do with the one line it returned. I've tried the pywallet --dumpwallet command, but the resulting txt file doesn't include the change address.

UPDATE & PROBLEM SOLVED! At the time of my post, I was using Bitcoin Core v0.15.1, which was released in November 2017, well over a year before the transaction in question. I updated to the latest Bitcoin Core v0.20.0, did a full sync, and voila, the 0.3644 BTC in the change address appears in my total balance (see 3rd screenshot posted).
Based on the helpful comments, I think the 2019 transaction put the change in a segwit address which Bitcoin Core v0.15.1 was not able to read. Sorry for wasting everyone's time but I learned to... 1. Use an up to date Bitcoin Core version 2. Backup my wallet.dat after each transaction and note version used in ledger. 3. Stay calm. https://preview.redd.it/enh0oof7rc351.png?width=1258&format=png&auto=webp&s=bc7f3aabf7b17bd33c2e94e43348ed4eb25bdd1e https://preview.redd.it/k6y1drf7rc351.png?width=1059&format=png&auto=webp&s=86a0bf43d355a807e79c601cdcc041001e164571 https://preview.redd.it/6j8ajz0fye351.png?width=1222&format=png&auto=webp&s=566ab22fd2166290b35e4a208947a6bd6b63a4dd
Yesterday there was a transaction that received a lot of attention, as it spent the coinbase reward from a very early block. Here is the txID for it: f38d6f043c070ce9805ee81f46db4d32d0c9f148d62bbfbc0378bc5847c7dc70 Something interesting about this transaction that I haven't seen mentioned much online is that whoever spent those coins is now officially the HODL world champion! What is meant by this is that, of all now-spent UTXOs, the coinbase reward they spent in that block now holds the record for being the longest-held. This is a pretty cool title to hold; the individual who owned that UTXO had been sitting on it since the absolute earliest days of the Bitcoin network. When that block was mined, BTC had no value beyond fascinating a handful of crypto and computer nerds around the world. When spent, the output was worth almost $500,000 USD. That's quite the HODL! In total, this UTXO was held for 627,404 blocks, which is about 11 years, 3 months, and 11 days. For more info, and a list of all the runners-up, see this post on Stack Exchange: https://bitcoin.stackexchange.com/questions/88517/what-was-the-longest-held-utxo-ever-spent/96055#96055
Odarhom - Release Notes - Short Overview - First Draft
Odarhom Masternodes

Odarhom brings along a masternode system for Bitcore. The collateral for one masternode is 2,100 BTX. This allows up to 10,000 masternodes to support the network. The masternodes receive half of all generated bitcores. It is possible to set up a masternode with version 0.90.8.x or higher. A governance system is included in the new core and can be activated later, if necessary.

Datacarriersize

Odarhom increases the default datacarriersize up to 220 bytes. More info can be found here | here (no. 2) | here (no. 3).

Command fork system

Different forks can be activated remotely in the future. This way we can ensure that all critical updates are only activated once all important network participants are ready.

Wallet changes

Odarhom introduces full support for segwit in the wallet and user interfaces. A new `-addresstype` argument has been added, which supports `legacy`, `p2sh-segwit` (default), and `bech32` addresses. It controls what kind of addresses are produced by `getnewaddress`, `getaccountaddress`, and `createmultisigaddress`. A `-changetype` argument has also been added, with the same options, and by default equal to `-addresstype`, to control which kind of change is used. A new `address_type` parameter has been added to the `getnewaddress` and `addmultisigaddress` RPCs to specify which type of address to generate. A `change_type` argument has been added to the `fundrawtransaction` RPC to override the `-changetype` argument for specific transactions. All segwit addresses created through `getnewaddress` or `*multisig` RPCs explicitly get their redeemscripts added to the wallet file. This means that downgrading after creating a segwit address will work, as long as the wallet file is up to date. All segwit keys in the wallet get an implicit redeemscript added, without it being written to the file.
This means recovery of an old backup will work, as long as you use new software. All keypool keys that are seen used in transactions explicitly get their redeemscripts added to the wallet files. This means that downgrading after recovering from a backup that includes a segwit address will work. Note that some RPCs do not yet support segwit addresses. Notably, `signmessage`/`verifymessage` doesn't support segwit addresses, nor does `importmulti` at this time. Support for segwit in those RPCs will continue to be added in future versions. P2WPKH change outputs are now used by default if any destination in the transaction is a P2WPKH or P2WSH output. This is done to ensure the change output is as indistinguishable from the other outputs as possible in either case.

BIP173 (Bech32) address support ("btx..." addresses)

Full support for native segwit addresses (BIP173 / Bech32) has now been added. This includes the ability to send to BIP173 addresses (including non-v0 ones), and generating these addresses (including as default new addresses, see above). A checkbox has been added to the GUI to select whether a Bech32 address or P2SH-wrapped address should be generated when using segwit addresses. When launched with `-addresstype=bech32` it is checked by default. When launched with `-addresstype=legacy` it is unchecked and disabled.

HD wallets by default

Due to a backward-incompatible change in the wallet database, wallets created with version 0.15.2 will be rejected by previous versions. Also, version 0.15.2 will only create hierarchical deterministic (HD) wallets. Note that this only applies to new wallets; wallets made with previous versions will not be upgraded to be HD.

Replace-By-Fee by default in GUI

The send screen now uses BIP125 RBF by default, regardless of `-walletrbf`. There is a checkbox to mark the transaction as final.
The RPC default remains unchanged: to use RBF, launch with `-walletrbf=1` or use the `replaceable` argument for individual transactions.

Wallets directory configuration (`-walletdir`)

Odarhom now has more flexibility in where the wallets directory can be located. Previously, wallet database files were stored at the top level of the bitcoin data directory. The behavior is now:

For new installations (where the data directory doesn't already exist), wallets will now be stored in a new `wallets/` subdirectory inside the data directory by default.
For existing nodes (where the data directory already exists), wallets will be stored in the data directory root by default. If a `wallets/` subdirectory already exists in the data directory root, then wallets will be stored in the `wallets/` subdirectory by default.
The location of the wallets directory can be overridden by specifying a `-walletdir=<path>` option, where `<path>` can be an absolute path to a directory or directory symlink. Care should be taken when choosing the wallets directory location, as if it becomes unavailable during operation, funds may be lost.

Support for signalling pruned nodes (BIP159)

Pruned nodes can now signal BIP159's NODE_NETWORK_LIMITED using service bits, in preparation for full BIP159 support in later versions. This would allow pruned nodes to serve the most recent blocks. However, the current change does not yet include support for connecting to these pruned peers.

GUI changes

We have added a new wallet design. The option to reuse a previous address has now been removed.
This was justified by the need to "resend" an invoice, but now that we have the request history, that need should be gone. Support for searching by TXID has been added, rather than just address and label. A "Use available balance" option has been added to the send coins dialog, to add the remaining available wallet balance to a transaction output. A toggle for unblinding the password fields on the password dialog has been added.

Security

We changed the coinbase maturity via command fork from 100 to 576 blocks. We have also bumped the default protocol version to 80004. It is possible to disconnect the old version later via command fork.

Hash algorithms

Odarhom already supports many hash algorithms, so new mining hash algorithms can be added later with an update. A final decision will be agreed with the community. Odarhom can work with timetravel10, scrypt, nist5, lyra2z, x11, x16r.

Sources

Bitcoin Core, Dash Core, FXTC Core, LTC Core, PIVX Core, Bitcoin Cash Core
Technical: A Brief History of Payment Channels: from Satoshi to Lightning Network
Who cares about political tweets from some random country's president when payment channels are much more interesting and actually capable of carrying value? So let's have a short history of various payment channel techs!
Generation 0: Satoshi's Broken nSequence Channels
Because Satoshi's Vision included payment channels, except his implementation sucked so hard we had to go fix it and added RBF as a by-product. Originally, the plan for nSequence was that mempools would replace any transaction spending certain inputs with another transaction spending the same inputs, but only if the nSequence field of the replacement was larger. Since 0xFFFFFFFF was the highest value that nSequence could get, this would mark a transaction as "final" and not replaceable on the mempool anymore. In fact, this "nSequence channel" I will describe is the reason why we have this weird rule about nLockTime and nSequence. nLockTime actually only works if nSequence is not 0xFFFFFFFF i.e. final. If nSequence is 0xFFFFFFFF then nLockTime is ignored, because this is the "final" version of the transaction. So what you'd do would be something like this:
You go to a bar and promise the bartender to pay by the time the bar closes. Because this is the Bitcoin universe, time is measured in blockheight, so the closing time of the bar is indicated as some future blockheight.
For your first drink, you'd make a transaction paying to the bartender for that drink, paying from some coins you have. The transaction has an nLockTime equal to the closing time of the bar, and a starting nSequence of 0. You hand over the transaction and the bartender hands you your drink.
For your succeeding drink, you'd remake the same transaction, adding the payment for that drink to the transaction output that goes to the bartender (so that output keeps getting larger, by the amount of payment), and having an nSequence that is one higher than the previous one.
Eventually you have to stop drinking. It comes down to one of two possibilities:
You drink until the bar closes. Since it is now the nLockTime indicated in the transaction, the bartender is able to broadcast the latest transaction and tells the bouncers to kick you out of the bar.
You wisely consider the state of your liver. So you re-sign the last transaction with a "final" nSequence of 0xFFFFFFFF i.e. the maximum possible value it can have. This allows the bartender to get his or her funds immediately (nLockTime is ignored if nSequence is 0xFFFFFFFF), so he or she tells the bouncers to let you out of the bar.
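The bar-tab flow above can be modeled as a toy state machine. Plain dicts stand in for transactions here, with no real signing or serialization; this only captures the replacement logic which, as explained below, miners were never actually bound to honor:

```python
# Toy model of Satoshi's nSequence channel. Each drink produces a new
# replacement transaction with a larger nSequence; setting nSequence to the
# maximum marks the transaction final, which makes nLockTime ignored.

SEQUENCE_FINAL = 0xFFFFFFFF

def open_channel(funding_outpoint, bartender, closing_height):
    return {
        "input": funding_outpoint,
        "nLockTime": closing_height,  # bar closing time, in block height
        "nSequence": 0,
        "paid": 0,                    # running total owed to the bartender
        "payee": bartender,
    }

def buy_drink(state, price):
    """Each drink bumps the amount owed and the replacement counter."""
    state = dict(state)
    state["paid"] += price
    state["nSequence"] += 1
    return state

def close_early(state):
    """Settle before closing time by marking the tx final."""
    state = dict(state)
    state["nSequence"] = SEQUENCE_FINAL  # nLockTime is now ignored
    return state

ch = open_channel("txid:0", "bartender", closing_height=700_000)
for price in (5, 5, 7):
    ch = buy_drink(ch, price)
ch = close_early(ch)
print(ch["paid"], hex(ch["nSequence"]))  # 17 0xffffffff
```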
Now that of course is a payment channel. Individual payments (purchases of alcohol, so I guess buying coffee is not in scope for payment channels). Closing is done by creating a "final" transaction that is the sum of the individual payments. Sure there's no routing and channels are unidirectional and channels have a maximum lifetime but give Satoshi a break, he was also busy inventing Bitcoin at the time. Now if you noticed I called this kind of payment channel "broken". This is because the mempool rules are not consensus rules, and cannot be validated (nothing about the mempool can be validated onchain: I sigh every time somebody proposes "let's make block size dependent on mempool size", mempool state cannot be validated by onchain data). Fullnodes can't see all of the transactions you signed, and then validate that the final one with the maximum nSequence is the one that actually is used onchain. So you can do the below:
Become friends with Jihan Wu, because he owns >51% of the mining hashrate (he totally reorged Bitcoin to reverse the Binance hack right?).
Slip Jihan Wu some of the more interesting drinks you're ordering as an incentive to cooperate with you. So say you end up ordering 100 drinks, you split it with Jihan Wu and give him 50 of the drinks.
When the bar closes, Jihan Wu quickly calls his mining rig and tells them to mine the version of your transaction with nSequence 0. You know, that first one where you pay for only one drink.
Because fullnodes cannot validate nSequence, they'll accept even the nSequence=0 version and confirm it, immutably adding you paying for a single alcoholic drink to the blockchain.
The bartender, pissed at being cheated, takes out a shotgun from under the bar and shoots at you and Jihan Wu.
Jihan Wu uses his mystical chi powers (actually the combined exhaust from all of his mining rigs) to slow down the shotgun pellets, making them hit you as softly as petals drifting in the wind.
The bartender mutters some words, clothes ripping apart as he or she (hard to believe it could be a she but hey) turns into a bear, ready to maul you for cheating him or her of the payment for all the 100 drinks you ordered from him or her.
Steely-eyed, you stand in front of the bartender-turned-bear, daring him to touch you. You've watched Revenant, you know Leonardo di Caprio could survive a bear mauling, and if some posh actor can survive that, you know you can too. You make a pose. "Drunken troll logic attack!"
I think I got sidetracked here.
Bears are bad news.
You can't reasonably invoke "Satoshi's Vision" and simultaneously reject the Lightning Network because it's not onchain. Satoshi's Vision included a half-assed implementation of payment channels with nSequence, where the onchain transaction represented multiple logical payments, exactly what modern offchain techniques do (except modern offchain techniques actually work). nSequence (the field, but not its modern meaning) has been in Bitcoin since BitCoin For Windows Alpha 0.1.0. And its original intent was payment channels. You can't get nearer to Satoshi's Vision than being a field that Satoshi personally added to transactions on the very first public release of the BitCoin software, like srsly.
Miners can totally bypass mempool rules. In fact, the reason why nSequence has been repurposed to indicate "optional" replace-by-fee is because miners are already incentivized by the nSequence system to always follow replace-by-fee anyway. I mean, what do you think those drinks you passed to Jihan Wu are, other than the fee you pay him to mine a specific version of your transaction?
Satoshi made mistakes. The original design for nSequence is one of them. Today, we no longer use nSequence in this way. So diverging from Satoshi's original design is part and parcel of Bitcoin development, because over time, we learn new lessons that Satoshi never knew about. Satoshi was an important landmark in this technology. He will not be the last, or most important, that we will remember in the future: he will only be the first.
Incentive-compatible time-limited unidirectional channel; or, Satoshi's Vision, Fixed (if transaction malleability hadn't been a problem, that is). Now, we know the bartender will turn into a bear and maul you if you try to cheat the payment channel, and now that we've revealed you're good friends with Jihan Wu, the bartender will no longer accept a payment channel scheme that lets you cooperate with a miner to cheat him. Fortunately, Jeremy Spilman proposed a better way that would not let you cheat the bartender. First, you and the bartender perform this ritual:
You get some funds and create a transaction that pays to a 2-of-2 multisig between you and the bartender. You don't broadcast this yet: you just sign it and get its txid.
You create another transaction that spends the above transaction. This transaction (the "backoff") has an nLockTime equal to the closing time of the bar, plus one block. You sign it and give this backoff transaction (but not the above transaction) to the bartender.
The bartender signs the backoff and gives it back to you. It is now valid since it's spending a 2-of-2 of you and the bartender, and both of you have signed the backoff transaction.
Now you broadcast the first transaction onchain. You and the bartender wait for it to be deeply confirmed, then you can start ordering.
The above is probably vaguely familiar to LN users. It's the funding process of payment channels! The first transaction, the one that pays to a 2-of-2 multisig, is the funding transaction that backs the payment channel funds. So now you start ordering in this way:
For your first drink, you create a transaction spending the funding transaction output and sending the price of the drink to the bartender, with the rest returning to you.
You sign the transaction and pass it to the bartender, who serves your first drink.
For your succeeding drinks, you recreate the same transaction, adding the price of the new drink to the sum that goes to the bartender and reducing the money returned to you. You sign the transaction and give it to the bartender, who serves you your next drink.
At the end:
If the bar closing time is reached, the bartender signs the latest transaction, completing the needed 2-of-2 signatures and broadcasting it to the Bitcoin network. Since the backoff transaction's nLockTime is the closing time + 1, the backoff can't be used yet at closing time.
If you decide you want to leave early because your liver is crying, you just tell the bartender to go ahead and close the channel (which the bartender can do at any time by just signing and broadcasting the latest transaction: the bartender won't do that because he or she is hoping you'll stay and drink more).
If you ended up just hanging around the bar and never ordering, then at closing time + 1 you broadcast the backoff transaction and get your funds back in full.
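The ordering flow above can be sketched in a few lines. This is a toy model of the channel state only; the class, field names, and amounts are all illustrative, and no real transactions or signatures are involved:

```python
class SpilmanChannel:
    """Toy model of a unidirectional (Spilman) payment channel's state.

    Only the latest state matters: each new "drink" replaces the previous
    partially signed transaction with one paying more to the bartender.
    """

    def __init__(self, funding_sats):
        self.funding = funding_sats   # locked in the 2-of-2 funding output
        self.to_bartender = 0         # total paid so far

    def order_drink(self, price_sats):
        # A new state pays more to the bartender, less back to the customer.
        if self.to_bartender + price_sats > self.funding:
            raise ValueError("channel exhausted")
        self.to_bartender += price_sats

    def latest_state(self):
        # What the bartender would co-sign and broadcast at closing time.
        return {"to_bartender": self.to_bartender,
                "to_customer": self.funding - self.to_bartender}

channel = SpilmanChannel(funding_sats=100_000)
channel.order_drink(5_000)
channel.order_drink(5_000)
channel.latest_state()  # {'to_bartender': 10000, 'to_customer': 90000}
```

Note that only the most recent state ever needs to be kept: every older state pays the bartender less, so he has no reason to sign it.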
Now, even if you pass 50 drinks to Jihan Wu, you can't give him the first transaction (the one which pays for only one drink) and ask him to mine it: it's spending a 2-of-2 and the copy you have only contains your own signature. You need the bartender's signature to make it valid, but he or she sure as hell isn't going to cooperate in something that would lose him or her money, so a signature from the bartender validating old state where he or she gets paid less isn't going to happen.

So, problem solved, right? Right? Okay, let's try it. So you get your funds, put them in a funding tx, get the backoff tx, confirm the funding tx...

Once the funding transaction confirms deeply, the bartender laughs uproariously. He or she summons the bouncers, who surround you menacingly. "I'm refusing service to you," the bartender says.

"Fine," you say. "I was leaving anyway." You smirk. "I'll get back my money with the backoff transaction, and post about your poor service on reddit so you get negative karma, so there!"

"Not so fast," the bartender says. His or her voice chills your bones. It looks like your exploitation of the Satoshi nSequence payment channel is still fresh in his or her mind. "Look at the txid of the funding transaction that got confirmed."

"What about it?" you ask nonchalantly, as you flip open your desktop computer and open a reputable blockchain explorer. What you see shocks you.

"What the --- the txid is different! You--- you changed my signature?? But how? I put the only copy of my private key in a sealed envelope in a cast-iron box inside a safe buried in the Gobi desert, protected by a clan of nomads who have dedicated their lives and their children's lives to keeping my private key safe in perpetuity!"

"Didn't you know?" the bartender asks. "The components of the signature are just very large numbers. The sign of one of the signature components can be changed, from positive to negative, or negative to positive, and the signature will remain valid. Anyone can do that, even if they don't know the private key. But because Bitcoin includes the signatures in the transaction when it's generating the txid, this little change also changes the txid." He or she chuckles. "They say they'll fix it by separating the signatures from the transaction body. They're saying that this kind of signature malleability won't affect transaction IDs anymore after they do this, but I bet I can get my good friend Jihan Wu to delay this 'SepSig' plan for a good while yet. Friendly guy, this Jihan Wu; it turns out all I had to do was slip him 51 drinks and he was willing to mine a tx with the signature signs flipped."

His or her grin widens. "I'm afraid your backoff transaction won't work anymore, since it spends a txid that does not exist and will never be confirmed. So here's the deal. You pay me 99% of the funds in the funding transaction, in exchange for me signing the transaction that spends with the txid that you see onchain. Refuse, and you lose 100% of the funds, and every other HODLer, including me, benefits from the reduction in coin supply. Accept, and you get to keep 1%. I lose nothing if you refuse, so I won't care if you do, but consider the difference of getting zilch vs. getting 1% of your funds." His or her eyes glow. "GENUFLECT RIGHT NOW." Lesson learned?
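The bartender's trick can be demonstrated directly. Below is a toy, from-scratch ECDSA over secp256k1 (for illustration only; never use this for real keys): given any valid signature (r, s), the pair (r, n - s) also verifies, because negating s negates the recovered curve point, which leaves its x coordinate -- and hence r -- unchanged.

```python
import hashlib, random

# secp256k1 domain parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def inv(a, m):
    return pow(a, m - 2, m)          # modular inverse (m is prime)

def point_add(p, q):
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0:
        return None                  # p + (-p) = point at infinity
    if p == q:
        lam = (3 * p[0] * p[0]) * inv(2 * p[1], P) % P
    else:
        lam = (q[1] - p[1]) * inv(q[0] - p[0], P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def point_mul(k, p):
    r = None
    while k:
        if k & 1:
            r = point_add(r, p)
        p = point_add(p, p)
        k >>= 1
    return r

def sign(z, d):
    k = random.randrange(1, N)
    r = point_mul(k, G)[0] % N
    s = inv(k, N) * (z + r * d) % N
    return (r, s)

def verify(z, sig, Q):
    r, s = sig
    w = inv(s, N)
    pt = point_add(point_mul(z * w % N, G), point_mul(r * w % N, Q))
    return pt[0] % N == r

d = 12345                            # toy private key
Q = point_mul(d, G)                  # public key
z = int.from_bytes(hashlib.sha256(b"tx body").digest(), "big")
r, s = sign(z, d)
assert verify(z, (r, s), Q)
assert verify(z, (r, N - s), Q)      # flipped-sign signature is also valid!
```

In pre-SegWit Bitcoin, that second, equally valid signature produces a different txid for the same logical transaction, which is exactly what breaks any pre-signed child like the backoff transaction.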
Payback's a bitch.
Transaction malleability is a bitchier bitch. It's why we needed SegWit to fix the bug. Sure, MtGox claimed they were attacked this way because someone kept messing with their transaction signatures and thus they lost track of where their funds went, but really, the bigger impetus for fixing transaction malleability was to support payment channels.
Yes, including the signatures in the hash that ultimately defines the txid was a mistake. Satoshi made a lot of those. So we're just reiterating the lesson "Satoshi was not an infinite being of infinite wisdom" here. Satoshi just gets a pass because of how awesome Bitcoin is.
CLTV-protected Spilman Channels
Using CLTV for the backoff branch. This variation is simply Spilman channels, but with the backoff transaction replaced with a backoff branch in the SCRIPT you pay to. It only became possible after OP_CHECKLOCKTIMEVERIFY (CLTV) was enabled in 2015. Now as we saw in the Spilman Channels discussion, transaction malleability means that any pre-signed offchain transaction can easily be invalidated by flipping the sign of the signature of the funding transaction while the funding transaction is not yet confirmed. This can be avoided by simply putting any special requirements into an explicit branch of the Bitcoin SCRIPT. Now, the backoff branch is supposed to create a maximum lifetime for the payment channel, and prior to the introduction of OP_CHECKLOCKTIMEVERIFY this could only be done by having a pre-signed nLockTime transaction. With CLTV, however, we can now make the branches explicit in the SCRIPT that the funding transaction pays to. Instead of paying to a 2-of-2 in order to set up the funding transaction, you pay to a SCRIPT which is basically "2-of-2, OR this singlesig after a specified lock time". With this, there is no backoff transaction that is pre-signed and which refers to a specific txid. Instead, you can create the backoff transaction later, using whatever txid the funding transaction ends up being confirmed under. Since the funding transaction is immutable once confirmed, it is no longer possible to change the txid afterwards.
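As a sketch, the funding output's SCRIPT would look something like the following. The opcode names are real Bitcoin opcodes, but this exact template and the angle-bracket placeholders are illustrative, not a tested production script:

```python
# "2-of-2, OR this singlesig after a specified lock time", as explicit branches:
CLTV_SPILMAN_SCRIPT = [
    "OP_IF",
        # Cooperative branch: 2-of-2 multisig between customer and bartender.
        "OP_2", "<customer_pubkey>", "<bartender_pubkey>", "OP_2",
        "OP_CHECKMULTISIG",
    "OP_ELSE",
        # Backoff branch: after closing time + 1, the customer alone
        # can take the remaining funds back.
        "<closing_time_plus_1>", "OP_CHECKLOCKTIMEVERIFY", "OP_DROP",
        "<customer_pubkey>", "OP_CHECKSIG",
    "OP_ENDIF",
]
```

Because the timeout lives in the SCRIPT itself rather than in a pre-signed transaction, there is no stored txid to invalidate by malleating the funding transaction's signature.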
Todd Micropayment Networks
The old hub-spoke model (that isn't how LN today actually works). One of the more direct predecessors of the Lightning Network was the hub-spoke model discussed by Peter Todd. In this model, instead of payers directly having channels to payees, payers and payees connect to a central hub server. This allows any payer to pay any payee, using the same channel for every payee on the hub. Similarly, this allows any payee to receive from any payer, using the same channel.

Remember the above Spilman example? When you open a channel to the bartender, you have to wait around for the funding tx to confirm. This will take an hour at best. Now consider that you have to make channels for everyone you want to pay to. That's not very scalable. So the Todd hub-spoke model has a central "clearing house" that transports money from payers to payees. The "Moonbeam" project takes this model.

Of course, this reveals to the hub who the payer and payee are, and thus the hub can potentially censor transactions. Generally, though, it was considered that a hub would more efficiently censor by just not maintaining a channel with the payer or payee that it wants to censor (since the money it owned in the channel would just be locked uselessly if the hub won't process payments to/from the censored user). In any case, the ability of the central hub to monitor payments means that it can surveil the payer and payee, and then sell this private transactional data to third parties. This loss of privacy would be intolerable today.

Peter Todd also proposed that there might be multiple hubs that could transport funds to each other on behalf of their users, providing somewhat better privacy. Another point of note is that at the time such networks were proposed, only unidirectional (Spilman) channels were available. Thus, while one could be a payer, or payee, you would have to use separate channels for your income versus for your spending.
Worse, if you wanted to transfer money from your income channel to your spending channel, you had to close both and reshuffle the money between them, both onchain activities.
Poon-Dryja Lightning Network
Bidirectional two-participant channels. The Poon-Dryja channel mechanism has two important properties:
No time limit.
Both the original Satoshi and the two Spilman variants are unidirectional: there is a payer and a payee, and if the payee wants to do a refund, or wants to pay for a different service or product the payer is providing, then they can't use the same unidirectional channel. The Poon-Dryja mechanism, however, allows channels to be bidirectional: you are not a payer or a payee on the channel; you can receive or send at any time as long as both you and the channel counterparty are online. Further, unlike either of the Spilman variants, there is no time limit on the lifetime of a channel. Instead, you can keep the channel open for as long as you want.

Both properties, together, form a very powerful scaling property that I believe most people have not appreciated. With unidirectional channels, as mentioned before, if you both earn and spend over the same network of payment channels, you would have separate channels for earning and spending. You would then need to perform onchain operations to "reverse" the directions of your channels periodically. Secondly, since Spilman channels have a fixed lifetime, even if you never used either channel, you would have to periodically "refresh" it by closing and reopening it. With bidirectional, indefinite-lifetime channels, you may instead open some channels when you first begin managing your own money, then close them only after your lawyers have executed your last will and testament on how the money in your channels gets divided up among your heirs: that's just two onchain transactions in your entire lifetime. That is the potentially very powerful scaling property that bidirectional, indefinite-lifetime channels allow.

I won't discuss the transaction structure needed for Poon-Dryja bidirectional channels --- it's complicated and you can easily get explanations with cute graphics elsewhere. There is a weakness of Poon-Dryja that people tend to gloss over (because it was fixed very well by RustyReddit):
You have to store all the revocation keys of a channel. This implies you are storing 1 revocation key for every channel update, so if you perform millions of updates over your entire lifetime, you'd be storing several megabytes of keys, for only a single channel.

RustyReddit fixed this by requiring that the revocation keys be generated from a "seed" revocation key, where every key is just the repeated application of SHA256 to that seed. For example, suppose I tell you that my first revocation key is SHA256(SHA256(seed)). You can store that in O(1) space. Then for the next revocation, I tell you SHA256(seed). From SHA256(seed), you yourself can compute SHA256(SHA256(seed)) (i.e. the previous revocation key). So you can remember just the most recent revocation key, and from there you'd be able to compute every previous revocation key. When you start a channel, you perform SHA256 on your seed several million times, then use the result as the first revocation key, removing one layer of SHA256 for every new revocation key you need to generate. RustyReddit not only came up with this, but also suggested an efficient O(log n) storage structure, the shachain, so that you can quickly look up any revocation key in the past in case of a breach. People no longer really talk about this O(n) revocation storage problem anymore because it was solved very very well by this mechanism.
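The hash-chain idea is easy to show with hashlib. This is a minimal sketch of the derivation scheme only (a small depth is used for speed; the real shachain structure and actual LN key derivation differ in detail):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def revocation_key(seed: bytes, update_index: int, depth: int = 1000) -> bytes:
    """Key for update i is SHA256 applied (depth - i) times to the seed,
    so each newly revealed key is one hash *closer* to the seed."""
    key = seed
    for _ in range(depth - update_index):
        key = sha256(key)
    return key

seed = b"my secret channel seed"
k0 = revocation_key(seed, 0)   # first revocation key, revealed first
k1 = revocation_key(seed, 1)   # revealed on the next channel update

# The counterparty stores only the most recent key (k1); any older key
# is recomputable on demand by hashing forward:
assert sha256(k1) == k0
```

The counterparty's storage is O(1) per channel (just the latest key), while any historical key is one or more SHA256 applications away.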
Another thing I want to emphasize is that while the Lightning Network paper and many of the earlier presentations developed from the old Peter Todd hub-and-spoke model, the modern Lightning Network takes the logical conclusion of removing a strict separation between "hubs" and "spokes". Any node on the Lightning Network can very well work as a hub for any other node. Thus, while you might operate as "mostly a payer", "mostly a forwarding node", "mostly a payee", you still end up being at least partially a forwarding node ("hub") on the network, at least part of the time. This greatly reduces the problems of privacy inherent in having only a few hub nodes: forwarding nodes cannot get significantly useful data from the payments passing through them, because the distance between the payer and the payee can be so large that it would be likely that the ultimate payer and the ultimate payee could be anyone on the Lightning Network. Lessons learned?
We can decentralize if we try hard enough!
"Hubs bad" can be made "hubs good" if everybody is a hub.
Smart people can solve problems. It's kinda why they're smart.
After LN, there's also the Decker-Wattenhofer Duplex Micropayment Channels (DMC). This post is long enough as-is, LOL. But for now: it uses a novel "decrementing nSequence channel", using the new relative-timelock semantics of nSequence (not the broken one originally by Satoshi). It actually uses multiple such "decrementing nSequence" constructs, terminating in a pair of Spilman channels, one in each direction (thus "duplex"). Maybe I'll discuss it some other time. The realization that channel constructions could actually hold more channel constructions inside them (the way Decker-Wattenhofer puts a pair of Spilman channels inside a series of "decrementing nSequence channels") led to the further thought behind Burchert-Decker-Wattenhofer channel factories. Basically, you could host multiple two-participant channel constructs inside a larger multiparticipant "channel" construct (i.e. host multiple channels inside a factory). Further, we have the Decker-Russell-Osuntokun or "eltoo" construction. I'd argue that this is "nSequence done right". I'll write more about this later, because this post is long enough. Lessons learned?
Bitcoin offchain scaling is more powerful than you ever thought.
Xthinner/Blocktorrent development status update -- Jan 12, 2019
Xthinner is a new block propagation protocol which I have been working on. It takes advantage of LTOR to give about 99.6% compression for blocks, as long as all of the transactions in the block were previously transmitted. That's about 13 bits (1.6 bytes) per transaction. Xthinner is designed to be fault-tolerant, and to handle situations in which the sender's and recipient's mempools are not well synchronized, with gracefully degrading performance -- missing transactions or other decoding errors can be detected and corrected with one or (rarely) two additional round trips of communication. My expectation is that when it is finished, it will perform about 4x to 6x better than Compact Blocks and Xthin for block propagation. Relative to Graphene, I expect Xthinner to perform similarly under ideal circumstances (better than Graphene v1, slightly worse than Graphene v2), but much better under strenuous conditions (i.e. mempool desynchrony). The current development status of Xthinner is as follows:
Detailed informal writeup of the encoding scheme -- done 2018-09-29
Modify TxMemPool to allow iterating on a view sorted by TxId -- done 2018-11-26
Basic C++ segment encoder -- done 2018-11-26
Basic C++ segment decoder -- done 2018-11-26
Checksums for error detection -- done 2018-12-09
Serialization/deserialization -- done 2018-12-09
Prefilled transactions, coinbase handling, and non-mempool transactions -- done 2018-12-25
Missing/extra transactions, re-requests, and handling mempool desynchrony for segment decoding -- done 2019-01-12
Block transmission coupling the block header with one or more Xthinner segments -- 50% done 2019-01-12
Missing/extra transactions, re-requests, and handling mempool desynchrony for block decoding -- done 2019-01-12
Integration with Bitcoin ABC networking code
Networking testing on regtest/testnet/mainnet with real blocks
Write BIP/BUIP and formal spec
Bitcoin ABC pull request and beginning of code review
Unit tests, performance tests, benchmarks -- started
Bitcoin Unlimited pull request and beginning of code review
Alpha release of binaries for testing or low-security block relay networks
Merging code into ABC/BU, disabled-by-default
Complete security review
Enable by default in ABC and/or BU
(Optional) parallelize encoding/decoding of blocks
Following is the debugging output from a test run done with coherent sender/recipient mempools and a 1.25 million tx block, edited for readability:
Testing Xthinner on a block with 1250003 transactions with sender mempool size 2500000 and recipient mempool size 2500000
Tx/Block creation took 262 sec, 104853 ns/tx (mempool)
CTOR block sorting took 2467 ms, 987 ns/tx (mempool)
Encoding is 1444761 pushBytes, 2889520 1-bit commands, 103770 checksum bytes total 1910345 bytes, 12.23 bits/tx
Single-threaded encoding took 2924 ms, 1169 ns/tx (mempool)
Serialization/deserialization took 1089 ms, 435 ns/tx (mempool)
Single-threaded decoding took 1912314 usec, 764 ns/tx (mempool)
Filling missing slots and handling checksum errors took 0 rounds and 12 usec, 0 ns/tx (mempool)
Blocks match! *** No errors detected
If each transaction were 400 bytes on average, this block would be 500 MB, and it was encoded in 1.9 MB of data, a 99.618% reduction in size. Real-world performance is likely to be somewhat worse than this, as it's not likely that 100% of the block's transactions will always be in the recipient's mempool, but the performance reduction from mempool desynchrony is smooth and predictable. If the recipient is missing 10% of the sender's transactions, and has another 10% that the sender does not have, the transaction list can still be successfully transmitted and decoded, although in that case it usually takes 2.5 round trips to do so, and the overall compression ratio ends up being around 71% instead of 99.6%. Anybody who wishes can view the WIP Xthinner code here.

Once Xthinner is finished, I intend to start working on Blocktorrent. Blocktorrent is a method for breaking a block into small independently verifiable chunks for transmission, where each chunk is about one IP packet (a bit less than 1500 bytes) in size. In the same way that Bittorrent was faster than Napster, Blocktorrent should be faster than Xthinner. Currently, one of the big limitations on block propagation performance is that a node cannot forward the first byte of a block until the last byte of the block has been received and completely validated. Blocktorrent will change that, and allow nodes to forward each IP packet shortly after that packet was received, regardless of whether any other packets have also been received and regardless of the order in which the packets are received. This should dramatically improve the bandwidth utilization efficiency of nodes during block propagation, and should reduce the block propagation latency for reaching the full network quite a lot -- my current estimate is about 10x improvement over Xthinner. Blocktorrent achieves this partial validation of small chunks by taking advantage of Bitcoin blocks' Merkle tree structure.
Chunks of transactions are transmitted in a packet along with enough data from the rest of the Merkle tree's internal nodes to allow for that chunk of transactions to be validated back to the Merkle root, the block header, and the mining PoW, thereby ensuring that the packet being forwarded is not invalid spam data used solely for a DoS attack. (Forwarding DoS attacks to other nodes is bad.) Each chunk will contain an Xthinner segment to encode TXIDs. My performance target with Blocktorrent is to be able to propagate a 1 GB block in about 5-10 seconds to all nodes in the network that have 100 Mbps connectivity and quad-core CPUs. Blocktorrent will probably perform a bit worse than FIBRE at small block sizes, but better at very large block sizes, all without the trust and centralized infrastructure that FIBRE uses.
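As a back-of-the-envelope illustration of why sorted txids compress so well (this is not the actual Xthinner encoding, which uses 1-bit push/pop commands against a sorted mempool view, but it shows the same underlying redundancy): in a sorted list of N random hashes, each entry is pinned down by only a few leading bytes.

```python
import hashlib

def unique_prefix_bytes(sorted_hashes):
    """For each hash in a sorted list, how many leading bytes are needed
    to distinguish it from its immediate neighbours."""
    needed = []
    for i, h in enumerate(sorted_hashes):
        longest_shared = 0
        for j in (i - 1, i + 1):
            if 0 <= j < len(sorted_hashes):
                shared = 0
                while shared < len(h) and h[shared] == sorted_hashes[j][shared]:
                    shared += 1
                longest_shared = max(longest_shared, shared)
        needed.append(longest_shared + 1)
    return needed

# 100,000 pseudo-txids (SHA256 outputs), sorted as LTOR/CTOR requires:
txids = sorted(hashlib.sha256(str(i).encode()).digest() for i in range(100_000))
avg = sum(unique_prefix_bytes(txids)) / len(txids)
# avg comes out to roughly 2.5-3 bytes per txid, versus 32 bytes raw
```

A real encoder still has to signal how many bytes each prefix carries and recover from mempool mismatches, which is where the extra command bits, checksums, and re-request round trips in the protocol above come in.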
Block propagation time and block processing time (to prepare & validate) are crucial factors. Every node (miner) has an economic incentive to propagate its block as quickly as possible, so that other nodes are more likely to build on its fork. But at the same time, having a very large number of transactions in the block increases the block propagation time, so a node has to optimally balance the number of transactions it includes (block size) against transaction fees plus block reward for the best outcome. Since BSV's scaling approach expects logical blocks of gigabyte/terabyte sizes in the future, the problem outlined above can be a huge obstacle to getting there. This problem will be exacerbated when block sizes get too big, and ultimately rational, economically motivated nodes will begin to ration the number of transactions in a block. I believe the time complexity of block propagation is currently at O(~n), where n is the number of transactions, as there is currently no block compression (like Graphene). Block processing time complexity is also at O(~n), as most of the processing is serial. Compact Blocks (BIP 152) as implemented currently in Bitcoin SV already does a basic level of block compression by:
mostly including shortened transaction IDs (txids); in an asynchronous network, the mempools held by peer nodes are typically mostly similar
also including a few full transactions: those a node perceives its peer may not possess. It uses a simple heuristic: "if I didn't possess this transaction before this block was made available, my peer is likely unaware of that transaction too"
Typically a Compact block is about 10-15% of the full uncompressed legacy block, and this reduces the effective propagation time. While this is probably good enough for Bitcoin Core, as they are not seeking to increase the block size, it's certainly not enough for Bitcoin SV. Graphene, which uses Bloom filters and Invertible Bloom Lookup Tables (IBLTs), seems to provide an efficient solution to the transaction set reconciliation problem, and it offers additional compression over Compact Blocks: a Graphene block is ~10% of the size of a typical Compact block (from the author's empirical tests). With the above information and certain assumptions, we can quickly calculate the demands of a terabyte node and its feasibility with current hardware & bandwidth limitations. Assumptions:
Node software re-architecture is a must.
Consumer grade hardware and horizontal scaling strategies will be preferred as opposed to specialized h/w and vertical scaling.
Co-location of mining nodes is NOT favored: the pursuit of high bandwidth & low RTT/latency could lead us down this path. We should desire a solution that can be deployed in different geographies and hence remain decentralized (economic & geo-redundancy) & robust (high availability etc.)
Set a bandwidth limit to facilitate the above; let's go with a most optimistic 10 Gbps (bits, not bytes)
Hypothetically use Graphene's compression metrics, as it's the best performer of the lot.
Lack of parallel transaction processing & parallel block processing is not a hindrance for this back-of-the-napkin terabyte block feasibility estimate.
A 1-terabyte block is produced in the near future, when h/w & b/w improvements are only marginally better.
1 TB block ==> 100-150 GB Compact block ==> 10-15 GB Graphene block. Let's conservatively go with the low end of a 10 GB Graphene-compressed block: 10 GB = 80 Gb, and 80 Gb / 10 Gb/s = 8 s, so we still need 8 full seconds to propagate this block one hop to the next immediate peer. Also, note that we conveniently ignored the massive parallelization that would be needed for transaction and block processing, which would likely involve techniques like mempool and UTXO set sharding in the node architecture. But the point to take home is that 8 seconds is exorbitant, and we need a better workable compression algorithm irrespective of other architectural improvements under the outlined assumptions.

The above led me to begin work on an "ultra compression" algorithm, which is a stateful protocol and highly parallelizable (it places high memory & CPU demands), and fits with the goal of a horizontally scalable architecture built on affordable consumer-grade h/w. The outline of the algorithm looks promising and seems to compress the block by a factor of thousands, if not more, especially for the block publisher; and although the block size grows as we head farther from the publishing node, it's still reasonable IMO.

Now, before I go further down this rabbit hole, I wanted you guys to poke holes in my assumptions, requirements & calculation outlines. Subsequently I will publish a (semi-formal) paper detailing the ultra compression algorithm and how it fits with the overall node architecture per the ideas expressed above. I would appreciate it if someone could point me to alternative practical solutions that have already been vetted and are in the dev pipeline. Note:
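The bandwidth arithmetic above, written out explicitly (the ratios and link speed are the assumptions stated earlier, not measurements):

```python
block_bytes     = 1e12     # assumed 1 TB block
compact_ratio   = 0.10     # Compact block ~10% of the raw block (low end)
graphene_ratio  = 0.10     # Graphene block ~10% of a Compact block
link_bits_per_s = 10e9     # optimistic 10 Gbps link

graphene_bytes = block_bytes * compact_ratio * graphene_ratio  # 10 GB
hop_seconds = graphene_bytes * 8 / link_bits_per_s
print(hop_seconds)  # 8.0 seconds per hop
```

Even with the most optimistic compression and bandwidth assumptions in the list, a single hop costs 8 seconds, before any validation work.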
I am not a Bitcoin Dev (SV/Core/Cash), I am a fresh mind, an outsider.
I believe Bitcoin SV's approach to scaling is the only feasible one, if any is. Core+Lightning is crippled. Cash has deviated too much from the original protocol already.
Although it may seem like I have been critical of the tera-node feasibility, I want nothing more than seeing Bitcoin SV succeed.
Over the last several days I've been looking in detail at numerous aspects of the now-infamous CTOR change that is scheduled for the November hard fork. I'd like to offer a concrete overview of what exactly CTOR is, what the code looks like, how well it works, what the algorithms are, and the outlook. If anyone finds the change mysterious or unclear, then hopefully this will help them out. This document is placed into the public domain.
What is TTOR? CTOR? AOR?
Currently in Bitcoin Cash, there are many possible ways to order the transactions in a block. There is only a partial ordering requirement: transactions must be ordered causally -- if a transaction spends an output from another transaction in the same block, then the spending transaction must come after. This is known as the Topological Transaction Ordering Rule (TTOR), since it can be mathematically described as a topological ordering of the graph of transactions held inside the block.

The November 2018 hard fork will change to a Canonical Transaction Ordering Rule (CTOR). This CTOR will enforce that for a given set of transactions in a block, there is only one valid order (hence "canonical"). Any future blocks that deviate from this ordering rule will be deemed invalid. The specific canonical ordering that has been chosen for November is a dictionary (lexicographic) ordering based on the transaction ID. You can see an example of it in this testnet block (explorer here, provided this testnet is still alive). Note that the txids are all in dictionary order, except for the coinbase transaction, which always comes first. The precise canonical ordering rule can be described as "coinbase first, then ascending lexicographic order based on txid". (If you want to have your bitcoin node join this testnet, see the instructions here. Hopefully we can get a public faucet and ElectrumX server running soon, so light wallet users can play with the testnet too.)

Another ordering rule that has been suggested is removing restrictions on ordering (except that the coinbase must come first) -- this is known as the Any Ordering Rule (AOR). There are no serious proposals to switch to AOR, but it will be important in the discussions below.
Two changes: removing the old order (TTOR->AOR), and installing a new order (AOR->CTOR)
The proposed November upgrade combines two changes in one step:
Removing the old causal rule: a spending transaction may now come before the transaction that creates the output it spends, within the same block.
Adding a new rule that fixes the ordering of all transactions in the block.
In this document I am going to distinguish these two steps (TTOR->AOR, AOR->CTOR) as I believe it helps to clarify the way different components are affected by the change.
Code changes in Bitcoin ABC
In Bitcoin ABC, several thousand lines of code have been changed from version 0.17.1 to version 0.18.1 (the current version at time of writing). The differences can be viewed here, on github. The vast majority of these changes appear to be various refactorings, code style changes, and so on. The relevant bits of code that deal with the November hard fork activation can be found by searching for "MagneticAnomaly"; the variable magneticanomalyactivationtime sets the time at which the new rules will activate. The main changes relating to transaction ordering are found in the file src/validation.cpp:
Function ConnectBlock previously had one loop, that would process each transaction in order, removing spent transaction outputs and adding new transaction outputs. This was only compatible with TTOR. Starting in November, it will use the two-loop OTI algorithm (see below). The new construction has no ordering requirement.
Function ApplyBlockUndo, which is used to undo orphaned blocks, is changed to work with any order.
When orphaning a block, transactions will be returned to the mempool using addForBlock that now works with any ordering (src/txmempool.cpp).
Serial block processing (one thread)
One of the most important steps in validating blocks is updating the unspent transaction outputs (UTXO) set. It is during this process that double spends are detected and invalidated. The standard way to process a block in bitcoin is to loop through transactions one-by-one, removing spent outputs and then adding new outputs. This straightforward approach requires exact topological order and fails otherwise (therefore it automatically verifies TTOR). In pseudocode:
    for tx in transactions:
        remove_utxos(tx.inputs)
        add_utxos(tx.outputs)
Note that modern implementations do not apply these changes immediately, rather, the adds/removes are saved into a commit. After validation is completed, the commit is applied to the UTXO database in batch. By breaking this into two loops, it becomes possible to update the UTXO set in a way that doesn't care about ordering. This is known as the outputs-then-inputs (OTI) algorithm.
    for tx in transactions:
        add_utxos(tx.outputs)
    for tx in transactions:
        remove_utxos(tx.inputs)
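To make the contrast concrete, here is a small runnable version of both updates (with hypothetical helper names and string outpoints; real implementations track outpoints and amounts). The serial loop rejects a block in which a child precedes its parent, while OTI accepts it:

```python
def serial_update(utxos, transactions):
    # One loop: remove each tx's inputs, then add its outputs.
    # Requires topological order; fails if an input's UTXO doesn't exist yet.
    utxos = set(utxos)
    for tx in transactions:
        for i in tx["inputs"]:
            if i not in utxos:
                raise ValueError("missing UTXO (block not topologically ordered?)")
            utxos.remove(i)
        utxos.update(tx["outputs"])
    return utxos

def oti_update(utxos, transactions):
    # Outputs-then-inputs: two loops, no ordering requirement at all.
    utxos = set(utxos)
    for tx in transactions:
        utxos.update(tx["outputs"])
    for tx in transactions:
        for i in tx["inputs"]:
            if i not in utxos:
                raise ValueError("missing or double-spent UTXO")
            utxos.remove(i)
    return utxos

parent = {"inputs": ["coin:0"], "outputs": ["a:0"]}
child  = {"inputs": ["a:0"],    "outputs": ["b:0"]}

# Child listed before parent: serial fails, OTI succeeds.
block = [child, parent]
print(oti_update({"coin:0"}, block))   # {'b:0'}
try:
    serial_update({"coin:0"}, block)
except ValueError as e:
    print("serial:", e)
```

Double spends are still caught by OTI: a twice-spent outpoint is absent from the set on its second removal.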
The UTXO updates actually form a significant fraction of the time needed for block processing. It would be helpful if they could be parallelized. There are some concurrent algorithms for block validation that require quasi-topological order to function correctly. For example, multiple workers could process the standard loop shown above, starting at the beginning. A worker temporarily pauses if the utxo does not exist yet, since it's possible that another worker will soon create that utxo. There are issues with such order-sensitive concurrent block processing algorithms:
Since TTOR would be a consensus rule, parallel validation algorithms must also verify that TTOR is respected. The naive approach described above actually is able to succeed for some non-topological orders; therefore, additional checks would have to be added in order to enforce TTOR.
The worst-case performance can be that only one thread is active at a time. Consider the case of a block that is one long chain of dependent transactions.
In contrast, the OTI algorithm's loops are fully parallelizable: the worker threads can operate in an independent manner and touch transactions in any order. Until recently, OTI was thought to be unable to verify TTOR, so one reason to remove TTOR was that it would allow changing to parallel OTI. It turns out however that this is not true: Jonathan Toomim has shown that TTOR enforcement is easily added by recording new UTXOs' indices within-block, and then comparing indices during the remove phase. In any case, it appears to me that any concurrent validation algorithm would need such additional code to verify that TTOR is being exactly respected; thus for concurrent validation TTOR is a hindrance at best.
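A sketch of that index-recording idea (my reading of Toomim's proposal, not his code): during the add phase, record the in-block position at which each new output is created; during the remove phase, a spend of an in-block output violates TTOR unless its creator has a strictly smaller position. Both loops remain order-independent per transaction:

```python
def oti_with_ttor_check(utxos, transactions):
    """OTI update that also enforces TTOR by recording in-block
    creation indices and comparing them during the remove phase."""
    utxos = set(utxos)
    created_at = {}  # outpoint -> index of the tx that created it
    for idx, tx in enumerate(transactions):
        for o in tx["outputs"]:
            created_at[o] = idx
        utxos.update(tx["outputs"])
    for idx, tx in enumerate(transactions):
        for i in tx["inputs"]:
            if i not in utxos:
                raise ValueError("missing or double-spent UTXO")
            if i in created_at and created_at[i] >= idx:
                raise ValueError("TTOR violated: spend precedes creation")
            utxos.remove(i)
    return utxos

parent = {"inputs": ["coin:0"], "outputs": ["a:0"]}
child  = {"inputs": ["a:0"],    "outputs": ["b:0"]}
print(oti_with_ttor_check({"coin:0"}, [parent, child]))  # {'b:0'}
```

With the order reversed ([child, parent]) the same call raises a TTOR violation, since the child sits at a smaller index than its parent.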
Advanced parallel techniques
With Bitcoin Cash blocks scaling to large sizes, it may one day be necessary to scale onto advanced server architectures involving sharding. There has been a lot of discussion of this possibility, but it is really too early to start optimizing for sharding. I would note that at this scale, TTOR is not going to be helpful, and CTOR may or may not lead to performance optimizations.
Block propagation (graphene)
A major bottleneck that exists in Bitcoin Cash today is block propagation. During the stress test, it was noticed that the largest blocks (~20 MB) could take minutes to propagate across the network. This is a serious concern, since propagation delays mean increased orphan rates, which in turn complicate the economics and incentives of mining. 'Graphene' is a set reconciliation technique using bloom filters and invertible bloom lookup tables. It drastically reduces the amount of bandwidth required to communicate a block. Unfortunately, the core graphene mechanism does not provide ordering information, and so if many orderings are possible then ordering information needs to be appended. For large blocks, this ordering information makes up the majority of the graphene message.

To reduce the size of ordering information while keeping TTOR, miners could optionally decide to order their transactions in a canonical ordering (Gavin's order, for example) and the graphene protocol could be hard-coded so that this kind of special order is transmitted in one byte. This would add a significant technical burden on mining software (to create blocks in such a specific unusual order) as well as on graphene (which must detect this order and be able to reconstruct it). It is not clear to me whether it would be possible to efficiently parallelize sorting algorithms that reconstruct these orderings.

The adoption of CTOR gives an easy solution to all this: there is only one ordering, so no extra ordering information needs to be appended. The ordering is recovered with a comparison sort, which parallelizes better than a topological sort. This should simplify the graphene codebase and it removes the need to start considering support for various optional ordering encodings.
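To see why ordering information dominates, note that transmitting one of n! possible orderings requires about log2(n!) bits. A back-of-the-envelope estimate (my own arithmetic, not a figure from the graphene literature):

```python
import math

def ordering_info_bytes(n):
    # Bytes needed to select one ordering out of n! possibilities:
    # log2(n!) bits, computed via lgamma to avoid huge integers.
    return math.lgamma(n + 1) / math.log(2) / 8

# A ~20 MB block holds very roughly 80,000 small transactions.
n = 80_000
print(f"~{ordering_info_bytes(n) / 1024:.0f} KiB of ordering data")
```

For 80,000 transactions this works out to roughly 145 KiB, which dwarfs the few tens of kilobytes that graphene's set-reconciliation payload itself can occupy.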
Reversibility and technical debt
Can the change to CTOR be undone at a later time? Yes and no. For block validators / block explorers that look over historical blocks, the removal of TTOR will permanently rule out usage of the standard serial processing algorithm. This is not really a problem (aside from the one-time annoyance), since OTI appears to be just as efficient in serial, and it parallelizes well. For anything that deals with new blocks (like graphene, network protocol, block builders for mining, new block validation), it is not a problem to change the ordering at a later date (to AOR / TTOR or back to CTOR again, or something else). These changes would add no long term technical debt, since they only involve new blocks. For past-block validation it can be retroactively declared that old blocks (older than a few months) have no ordering requirement.
Summary and outlook
Removing TTOR is the most disruptive part of the upgrade, as other block processing software needs to be updated in kind to handle transactions coming in any order. These changes are however quite small and they naturally convert the software into a form where concurrency is easy to introduce.
In the near term, TTOR / CTOR will show no significant performance differences for block validation. Note that right now, block validation is not the limiting factor in Bitcoin Cash, anyway.
In the medium term, software switching over to concurrent block processing will likely want to use an any-order algorithm (like OTI). Although some additional code may be needed to enforce ordering rules in concurrent validation, there will still be no performance differences between TTOR / AOR / CTOR.
In the very long term, it is perhaps possible that CTOR will show advantages for full nodes with sharded UTXO databases, if that ever becomes necessary. It's probably way too early to care about this.
With TTOR removed, the further addition of CTOR is actually a very minor change. It does not require any other ecosystem software to be updated (they don't care about order). Not only that, we aren't stuck with CTOR: the ordering can be quite easily changed again in the future, if need be.
The primary near-term improvement from the CTOR will be in allowing a simple and immediate enhancement of the graphene protocol. This impacts a scaling bottleneck that matters right now: block propagation. By avoiding the topic of complex voluntary ordering schemes, this will allow graphene developers to stop worrying about how to encode ordering, and focus on optimizing the mechanisms for set reconciliation.
Taking a broader view, graphene is not the magic bullet for network propagation. Even with the CTOR-improved graphene, we might not see vastly better performance right away. There is also work needed in the network layer to simply move the messages faster between nodes. In the last stress test, we also saw limitations on mempool performance (tx acceptance and relaying). I hope both of these fronts see optimizations before the next stress test, so that a fresh set of bottlenecks can be revealed.
I did it as a tribute to our missing Satoshi: we are missing Satoshi, and now the blockchain is missing 1 Satoshi too, for all time.
EDIT: Users have pointed out in the comments that this isn't actually the only time coins have been destroyed; there are several different ways coins have been destroyed in the past. sumBTC also points out that the satoshi wasn't destroyed; it was never created in the first place. Another interesting way to destroy coins is by creating a duplicate transaction. This is again done with a modified client. For example, see block 91722 and block 91880. They both contain txid e3bf3d07. The newer transaction essentially overwrites the old transaction, destroying the previous one and the associated coins. This issue was briefly discussed on Github #612 and determined to not be a big deal. In 2012 it was realized that duplicated transactions could be used as part of an attack on the network, so this was fixed and is no longer possible. Provably burning coins was actually added as a feature in 2014 via OP_RETURN. Transactions marked with this opcode MAY be pruned from a client's unspent transaction database. Whether or not these coins still exist is a matter of opinion. Finally, at least 1,000 blocks forfeited transaction fees due to a software bug. Forfeited transaction fees are lost forever and are unaccounted for in any wallet.
Between block 162705 and block 169899, 193 blocks claimed less than allowed due to a bug, resulting in a total loss of 9.66184623 BTC.
Between block 180324 and block 249185, another 836 blocks claimed less than allowed, resulting in a total loss of 0.52584193 BTC.
Bitpay payments not being paid: "invalid transaction associated to this invoice as a result of the stress test"
"There is an invalid transaction associated to this invoice as a result of the stress test that recently occurred on the bitcoin network. There is likely a valid transaction which would mark the invoice as paid in full. Our developers are actively working on building a solution to issues such as this one. Once completed, we will be able to apply the valid transaction to your invoice. Please let us know if you would like to wait for a solution from our development team or if you would like us to issue a refund. Thank you for your patience." Is this happening to anyone else using Bitpay? P.S. I paid for a service through Bitpay. This has happened to two payments I sent through Bitpay in the last 48 hours, and both have 100+ confirmations.
My Bittrex account was wiped even though I had 2FA and 2-step verification on Gmail
So I logged in to my Bittrex today and, to my surprise, it's been wiped. There was about a bitcoin's worth of tokens on the account when I last logged in on 07/06/19. I checked the USER ACTIVITY and I see someone from a French IP, 18.104.22.168, logged in at 07/31/19 17:43:17. At 17:44:53 he somehow verified a new IP, and by 18:15:34, after converting everything to BTC, he had withdrawn it all. Here is the TXID: 497b4ba9ccdf2b9a30a801b789f8bdb93d924cf24cfa49aed341ebb63b28b9da. How could this happen? I use the Authy app for my 2FA, with a unique password not used anywhere else. It's extremely unlikely my Authy was compromised, but even if it was.... I have 2-step verification on my Gmail. I recall getting the alert that someone had attempted to log into my email, which I said no to, and I straight away changed the password and thought nothing more of it, as the 2-step verification did its job. I checked when I last changed the password, and this was 24 days ago according to Gmail, which coincides with the date of the hack. However, I checked my Gmail for emails from Bittrex around that date and there were none: no "Bittrex New IP Address Verification" email, no withdrawal emails, none in the trash. So it's not possible that he had access to my Gmail, which means he somehow tricked or bypassed Bittrex's security features to get into my account and withdraw the funds. This means it's Bittrex's fault and not mine, as I had all possible security measures in place. This is my leading theory; please tell me if anyone has a better one. Bittrex, as this is due to your negligence, please just reimburse me my funds: 0.96658512 BTC
WitLess Mining - Removing Signatures from Bitcoin Cash
A Selfish Miner Variant to Remove Signatures from Bitcoin Cash WitLess Mining is a hypothetical adversarial hybrid fork leveraging a variant of the selfish miner strategy to remove signatures from Bitcoin Cash. By orphaning blocks produced by miners unwilling to blindly accept WitLess blocks without validation, a miner or cartel of collaborating miners with a substantial, yet less than majority, share of the total Bitcoin Cash network hash power can alter the Nash equilibrium of Bitcoin Cash’s economic incentives, enticing otherwise honest miners to engage in non-validated mining. Once a majority of network hash power has switched to non-validated mining it will be possible to steal arbitrary UTXOs using invalid signatures - even non-existent signatures. As miners would risk losing all of their prior rewards and fees were signatures to be released that prove their malfeasance, it could even be possible to steal coins using non-existent transactions, leaving victims no evidence to prove the theft occurred. WitLess Mining introduces two new data structures, the WitLess Transaction (wltx) and the WitLess Transaction Input (wltxin). These data structures are modifications of their standard counterpart data structures, Transaction (tx) and Transaction Input (txin), and can be used as drop-in replacements to create a WitLess Block (wlblock). These new structures provide WitLess Miners signature-withheld (WitLess) transaction data sufficient to reliably update their local UTXO sets based on the transactions contained within a WitLess block while preventing validation of the transaction signature scripts. The specific mechanism by which WitLess Mining transaction and block data will be communicated among WitLess miners is left as an exercise for the reader. The author suggests it may be possible to extend the existing Bitcoin Cash gossip network protocol to handle the new WitLess data structures. 
Until WitLess Mining becomes well-adopted, it may be preferable to implement an out-of-band mechanism for releasing WitLess transactions and blocks as a service. In order to offset potential revenue reduction due to the selfish mining strategy, the WitLess Mining cartel might provide a distribution service under a subscription model, offering earlier updates for higher tiers. An advanced distribution system could even implement a per-block bidding option, creating a WitLess information market. Regardless of the distribution mechanism chosen, the risk of having their blocks orphaned will provide a strong economic incentive for rational short-term profit-maximizing agents to seek out WitLess transaction and block data. To encourage other segments of the Bitcoin Cash ecosystem to adopt WitLess Mining, the WitLess data structures are designed specifically to facilitate malicious fee-based transaction replacement:
The lock_time field of wltx can be used to override the corresponding field in the standard form of a transaction, allowing the sender to introduce an arbitrary delay before their transaction becomes valid for inclusion in a block.
The sequence field of wltxin can be used to override the corresponding field in the standard form of a transaction input, allowing the sender to set a lower sequence number thereby enabling replacement even when the standard form indicates it is a final version.
It is expected that fee-based transaction replacement will be particularly popular among malicious users wishing to defraud 0-conf accepting merchants as well as the vulnerable merchants themselves. The feature is likely to encourage higher fees from the users resulting in their WitLess transaction data fetching a premium price under subscription- or market-based distribution. Malicious users may also be interested in subscribing directly to a WitLess Mining distribution service in order to receive notification when the cartel is in a position to reliably orphan non-compliant blocks, during which time their efforts will be most effective.
WitLess Block - wlblock
The wlblock is an alternate serialization of a standard block, containing the set of wltx as a direct replacement of the tx records. The hashMerkleRoot of a wlblock should be identical to the corresponding value in the standard block, and can be verified to apply to a set of txid by constructing a Merkle root of txid_commitments from the wltx set. The same proof-of-work validation that applies to the standard block header also ensures legitimacy of the wltx set, thanks to a WitLess Commitment included as an input to the coinbase tx.
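The construction referenced here is the ordinary Bitcoin Merkle tree. A sketch of that standard computation (double-SHA256 over concatenated pairs, duplicating the last entry of an odd-length level):

```python
import hashlib

def dsha256(b):
    # Bitcoin's standard double-SHA256.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    """Standard Bitcoin Merkle root: hash concatenated pairs level by
    level, duplicating the last entry when a level has odd length."""
    assert leaves
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# With a single leaf, the root is the leaf itself:
leaf = dsha256(b"example")
assert merkle_root([leaf]) == leaf
```

Running the same routine over the txid_commitments of a wlblock would reproduce the hashMerkleRoot claimed in the standard block header.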
WitLess Transaction - wltx
- Transaction data format version, as it appears in the corresponding tx
- Always 0x5052; indicates that the transaction is WitLess
- Number of WitLess transaction inputs (never zero)
- A list of 1 or more WitLess transaction inputs, or sources for coins
- Number of transaction outputs, as it appears in the corresponding tx
- A list of 1 or more transaction outputs, or destinations for coins, as it appears in the corresponding tx
- The block number or timestamp at which this transaction is unlocked; this can vary from the corresponding tx, with the higher of the two taking precedence
Each wltx can be referenced by a wltxid, generated in a way similar to the standard txid.
WitLess Transaction Input - wltxin
- The previous output transaction reference, as it appears in the corresponding txin
- The length of the signature script: always 32 or 0, rather than the value in the corresponding txin
- Only for the first wltxin of a transaction, the txid of the tx containing the corresponding txin; omitted for all subsequent wltxin entries
- Transaction version as defined by the sender, intended for replacement of transactions when the sender wants to defraud 0-conf merchants; this can vary from the corresponding txin, with the lower of the two taking precedence
WitLess Commitment Structure
A new block rule is added which requires a commitment to the wltxid. The wltxid of the coinbase WitLess transaction is assumed to be 0x828ef3b079f9c23829c56fe86e85b4a69d9e06e5b54ea597eef5fb3ffef509fe. A witless root hash is calculated with all those wltxid as leaves, in a way similar to the hashMerkleRoot in the block header. The commitment is recorded in a scriptPubKey of the coinbase tx. It must be at least 42 bytes, with the first 10 bytes being 0x6a284353573e3d534e43, that is:
    1 byte   - OP_RETURN (0x6a)
    1 byte   - Push the following 40 bytes (0x28)
    8 bytes  - WitLess Commitment header (0x4353573e3d534e43)
    32 bytes - WitLess Commitment hash: Double-SHA256(witless root hash)
    43rd byte onwards - Optional data with no consensus meaning
If more than one scriptPubKey matches the pattern, the one with the highest output index is assumed to be the WitLess commitment.
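Following the byte layout above, a sketch that builds and recognizes the commitment scriptPubKey (hypothetical helper names):

```python
import hashlib

def dsha256(b):
    # Bitcoin's standard double-SHA256.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# OP_RETURN (0x6a), push 40 bytes (0x28), 8-byte WitLess Commitment header.
HEADER = bytes.fromhex("6a284353573e3d534e43")

def build_commitment(witless_root, extra=b""):
    # 42-byte minimum: header prefix plus the 32-byte commitment hash;
    # anything from the 43rd byte onward carries no consensus meaning.
    return HEADER + dsha256(witless_root) + extra

def is_commitment(script_pubkey):
    return len(script_pubkey) >= 42 and script_pubkey[:10] == HEADER

spk = build_commitment(b"\x00" * 32)
assert is_commitment(spk) and len(spk) == 42
```

To pick the commitment out of a coinbase, one would scan its outputs and, per the rule above, keep the matching scriptPubKey with the highest output index.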
Diskcoin will open source and Diskcoin Foundation will launch the Node Staking Reward Plan
Dear Diskcoin community members, Diskcoin will be open-sourced on Oct. 20, 2019. There is a big potential risk to the whole Bitcoin system, since its hashrate is concentrated in places with lower electricity prices. Diskcoin is a new attempt after Bitcoin. We hope to attract the power of the open source community; no matter which open source community you are in, we welcome you to join. Diskcoin is very inclusive and scalable, and you can easily port code from other cryptocurrencies (such as BTC, BCH, Burst, etc.). Meanwhile, in order to better stimulate the development of the node ecosystem, the Diskcoin Foundation will launch a Node Staking Reward Plan. All issued Diskcoin rewards will be distributed by the Foundation, and all miners are welcome to submit applications for rewards. The Node Staking Reward Plan rules:
The plan is mainly for mining pools, wallets, exchanges and other institutions. Each stake needs to be more than 1000 DISC; the more Diskcoin you stake, the more rewards you will receive;
The settlement time of the reward is after the start of the new DC;
The reward plan will start from block height 27001;
The stake needs to last for at least 2 DCs to receive a reward. The reward for DC (N-2) is settled at the beginning of DC (N). For example, staking statistics start at block height 27001, settlement occurs after block 30,600, and the reward is issued for the 16th DC (block heights 27001~28800). If you unstake before block 30,600, there is no reward;
The txid of the stake participating in the reward plan must be submitted online;
The reward will be issued to the stake out address;
The reward rate for the 16th DC is 2%. For example, if you staked 1000 DISC in the 16th DC and do not unstake until the end of the 17th DC, the 16th DC reward is 1000 × 2% = 20 DISC. The reward rate is then reduced by 10% for every five DCs thereafter; that is, the reward rate for the 21st DC is 1.8%. For example, if you staked 1000 DISC in the 21st DC and do not unstake until the end of the 22nd DC, the 21st DC reward is 18 DISC. For more details: https://www.diskcoin.org/staker
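The schedule above can be sketched as follows (my reading of the rules, not official Diskcoin code): 2% for the 16th through 20th DC, with the rate multiplied by 0.9 for every five DCs thereafter.

```python
def reward_rate(dc):
    # 2% for DCs 16-20; reduced by 10% for every five DCs thereafter.
    assert dc >= 16, "the plan starts with the 16th DC"
    steps = (dc - 16) // 5
    return 0.02 * (0.9 ** steps)

def staking_reward(staked_disc, dc):
    # Reward for one DC, rounded to 8 decimal places of DISC.
    return round(staked_disc * reward_rate(dc), 8)

print(staking_reward(1000, 16))  # 20.0 DISC
print(staking_reward(1000, 21))  # 18.0 DISC, matching the example above
```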
The TXID will be displayed in the gray box above the transfer information. This is how anyone can find the full details of every transaction recorded on the blockchain. Source: blockchain.com. In this screenshot you can see all the information on the transfer of 10 thousand BTC carried out in May 2010 by one of the early bitcoin developers. Once you've sent a cryptocurrency payment from Wirex to an external crypto address, transfer details (such as the amount of crypto sent, the sending/receiving crypto address, and the date of transfer) can be found on the blockchain. This information is publicly available and given its own transaction ID, or TXID. If you have been given a TXID by your bitcoin wallet, it's probably already in its "searchable" format (reverse byte order). You also use a TXID when you want to use an existing output as an input in a new transaction: to refer to an existing output, you use the txid it was created in, along with the vout number for that output.
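The "reverse byte order" mentioned above is a simple transformation: a txid is computed as the double-SHA256 of the serialized transaction, but wallets and block explorers display those 32 bytes in reversed order. A sketch of the conversion:

```python
def to_display_order(txid_hex):
    """Convert a txid hex string between internal byte order and the
    reversed form shown by block explorers. The conversion is its own
    inverse, so the same function maps in either direction."""
    return bytes.fromhex(txid_hex)[::-1].hex()

# Reversing twice returns the original txid:
h = "00" * 31 + "ff"
assert to_display_order(to_display_order(h)) == h
```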