They check whether they're running on testnet by accessing the
m_rpc_server object, which does not exist when in RPC mode.
Also, fix hard_fork_info being called with the wrong API.
43bca0d blockchain_utilities: new blockchain_dump diagnostic tool (moneromooo-monero)
5f397e4 Add functions to iterate through blocks, txes, outputs, key images (moneromooo-monero)
0a5a5e8 db_bdb: record numbers for recno databases start at 1 (moneromooo-monero)
50dfdc0 db_bdb: DB_KEYEMPTY is also not found for non-top recno fields (moneromooo-monero)
572780e blockchain_db: use the DNE exceptions where appropriate (moneromooo-monero)
Displays the current block height and target, net hash, basic hard fork
info, and connections.
Useful as a basic, user-friendly "what's going on here" command.
The wallet and the daemon applied different height considerations
when selecting outputs to use. This can leak information on which
input in a ring signature is the real one.
Found and originally fixed by smooth on Aeon.
Using the major version would cause older daemons to reject those
blocks, as they fail to deserialize blocks with a major version
other than 1. There is no such restriction on the minor
version, so switching allows older daemons to coexist with
newer ones until the actual fork date, by which time most will
hopefully have updated already.
Also, for the same reason, we consider a vote for 0 to be a
vote for 1, since older daemons set the minor version to 0.
33affd2 blockchain: on hardfork 2, require mixin 2 at least if possible (moneromooo-monero)
434e0f4 hardfork: make the voting window a week (moneromooo-monero)
0a7421b hardfork: rescan speedup (moneromooo-monero)
fec98b8 hardfork: remove use of GNU extension for initializing object (moneromooo-monero)
4bbf944 blockchain: on hardfork 2, allow miners to claim less money than allowed (moneromooo-monero)
088bc56 hardfork: change window semantics to not count the newly added block (moneromooo-monero)
198f557 blockchain: use different hard fork settings for testnet and mainnet (moneromooo-monero)
This allows knowing the hard fork a block must obey in order to be
added to the blockchain. The previous semantics would use that new
block's version vote to determine this hard fork, which made it
impossible to use the rules to validate transactions entering the
tx pool (and made it impossible to validate a block before adding
it to the blockchain).
3c10239 unbound: use the mini event fallback implementation (moneromooo-monero)
4e138a0 dns_utils: remove unnecessary string conversion (moneromooo-monero)
f928468 dns_utils: factor the fetching code for different DNS record types (moneromooo-monero)
4ef0da1 dns_utils: simplify string handling and fix leak (moneromooo-monero)
ae5f28c dns_utils: add a const where possible (moneromooo-monero)
f43d465 dns_utils: lock access to the singleton (moneromooo-monero)
5990344 dns: make ctor private (moneromooo-monero)
This ensures one can't instantiate a DNSResolver object by
mistake, but must use the singleton. A separate create static
function is added for cases where a new object is explicitly
needed.
0a4bc84 Added ref10 shen_ed25519_ref code, which includes code that can replace crypto-ops with a version straight from Bernstein's ref 10 (ShenNoether)
0d70fdc revert to 776b4fc91a (ShenNoether)
b01f286 Added shen_ed25519_ref to crypto ops subfolder, the point is to directly have bitmonero's crypto code come from bernstein et al's ref 10 code (ShenNoether)
f197599 wallet: encrypt the cache file (moneromooo-monero)
98c76a3 chacha8: add a key generation variant that takes a pointer and size (moneromooo-monero)
It contains private data, such as a record of transactions.
The key is derived from the view and spend secret keys.
The encryption is currently one-shot, so it may require a lot of
memory for large wallet caches.
7c4d6f1 simplewallet: Use default log file name when executable's file path is unknown (warptangent)
b5b0f08 epee: Don't set log file name when process path name isn't found (warptangent)
Default to "simplewallet.log" in the current directory when the file
path isn't obtained from epee.
Previously in this situation, it defaulted to a file named ".log"
("" + ".log") in the current directory.
(Thanks to @sammy007 for reporting the bug.)
A still earlier version used "" + "/" + ".log" = "/.log", which resulted
in silently not logging in most cases, due to lack of permission.
Test:
PATH=$PATH:</path/to/simplewallet/folder> && simplewallet --wallet-file /dev/null
This results in epee not finding the executable's file path, so
simplewallet will now use a default log filename.
The height function apparently used to return the index of
the last block, rather than the height of the chain. This no
longer seems to be the case, judging by the code, so we remove the
now-incorrect comment, as well as a couple of +/- 1 adjustments
which would now cause the median calculation to differ from the
original blockchain_storage version.
This removes the need for a lengthy blockchain rescan when
a transaction doesn't end up in the chain after being accepted
by the daemon, or when, for any other reason, the wallet's idea of
spent and unspent outputs gets out of sync with the blockchain's.
The original code removed a tx's key images from the blockchain
when an input that was neither to-key nor gen was found in that tx.
Additionally, the remainder of the tx data was added to the blockchain
only after the double spend check passed.
2634307 daemon: omit extra set of <> in error message (moneromooo-monero)
0822933 daemon: print a decoded tx in print_tx (moneromooo-monero)
1d678b1 daemon: fix print_tx not finding transactions (moneromooo-monero)
It was only used by the older blockchain_storage.
We also move the code to the calling blockchain level, to avoid
replicating the code in every DB implementation. This also makes
the get_random_out method obsolete, and we delete it.
Pros:
- smaller on the blockchain
- shorter integrated addresses
Cons:
- less sparseness
- less ability to embed actual information
The boolean argument to encrypt payment IDs is now gone from the
RPC calls, since the decision is made based on the length of the
payment ID passed.
A payment ID may be encrypted using the tx secret key and the
receiver's public view key. The receiver can decrypt it with
the tx public key and the receiver's secret view key.
Using integrated addresses now causes payment IDs to be
encrypted. Payment IDs used manually are not encrypted by default,
but can be encrypted using the new 'encrypt_payment_id' field
in the transfer and transfer_split RPC calls. It is not possible
to use an encrypted payment ID with a manual simplewallet
transfer/transfer_new command, though this is just a limitation
due to input parsing.
If there are no blocks in the database (m_height == 0):
Don't assign incorrect block range to check.
Skip average block size check.
Test:
Run blockchain_converter with an existing source blockchain.bin and
a non-existent LMDB destination database.
The converter creates a BlockchainLMDB instance with zero height, due to
not being initialized with a genesis block, as is normally done by
Blockchain::init(). While different from the behavior of bitmonerod,
blockchain_import, and blockchain_export, the initialization hasn't been
strictly necessary.
The db batch size estimation normally uses an average block size, or a
default minimum block size, whichever is greater. In this case, as
there are no existing blocks to check for an average block size, the
default should be used.
It should avoid many of the issues with sending more than half the
wallet's contents due to change.
Actual output selection is still random. Changing this would
improve the matching of transaction amounts to output sizes,
but may have non-obvious effects on blockchain analysis.
Mapped to the new transfer_new command in simplewallet, and
transfer uses the existing algorithm.
To use in RPC, add "new_algorithm: true" in the transfer_split
JSON command. It is not used in the transfer command.
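For illustration, the params of such a transfer_split call might look like the following (the fields other than "new_algorithm" are the wallet RPC's usual transfer parameters, and the address/amount are placeholders):
  {"destinations": [{"amount": 1000000000000, "address": "4..."}], "mixin": 3, "unlock_time": 0, "new_algorithm": true}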
boost doesn't support %zu for size_t, and the previous change
to %u could technically lose bits (though it would require splitting
a transfer into 4 billion transactions, which seems unlikely).
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks (see the sketch after this list).
2. Optim: Cache verified txs and return the result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk-querying the database, and multi-thread when possible.
4. Optim: Disable double spend check on block verification; double spends are already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on Ubuntu/VMs??? Reason: TBD).
7. Optim: Removed looped full-tx hash computation when retrieving transactions from the pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads).
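As an illustration of the fan-out in item 1, here is a minimal sketch using std::async; compute_long_hash is a stand-in for the real per-block PoW function, and the real code bounds the thread count (see --prep-blocks-threads):

  #include <array>
  #include <cstdint>
  #include <future>
  #include <string>
  #include <vector>

  // Stand-ins: in the real code these would be the block type and the
  // (expensive) long-hash/PoW function.
  using block_blob = std::string;
  using hash256 = std::array<std::uint8_t, 32>;

  hash256 compute_long_hash(const block_blob& b)
  {
      // Placeholder body so the sketch compiles; the real function runs the PoW hash.
      hash256 h{};
      for (std::size_t i = 0; i < b.size(); ++i)
          h[i % h.size()] ^= static_cast<std::uint8_t>(b[i]);
      return h;
  }

  // Hash a group of blocks in parallel, keeping results in block order.
  std::vector<hash256> long_hashes_for_group(const std::vector<block_blob>& blocks)
  {
      std::vector<std::future<hash256>> jobs;
      jobs.reserve(blocks.size());
      for (const auto& b : blocks)
          jobs.push_back(std::async(std::launch::async, [&b] { return compute_long_hash(b); }));

      std::vector<hash256> hashes;
      hashes.reserve(blocks.size());
      for (auto& j : jobs)
          hashes.push_back(j.get());
      return hashes;
  }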
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley DB has a very slow "pop-block" operation. This is very noticeable on the RPI2, as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley DB: possible "unable to allocate memory" bug. TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices or STABLE systems with a UPS, and/or systems with a battery-backed write cache/solid-state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with official git head version.
At the moment, you need a full resync to use this optimized version.
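Example invocation combining the new options (illustrative only; values other than the documented defaults are arbitrary):
bitmonerod --db-sync-mode=fast:async:250 --prep-blocks-threads 2 --fast-block-sync 1 --db-auto-remove-logs 1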
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD (with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes about 2.5 hours to sync the full chain.
2. The current head with the in-memory DB can process blocks up to 585K in ~4.2 hours + download time, so it usually takes about 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time, and usually takes about 36 hours to sync the full chain.
Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months (???) to 2.5 days (*Need 2A supply + Clock:1GHz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint):
1. RPI2. 12-15 hours (*Need 2A supply + Clock:1GHz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
fd73d9c Check and resize if needed at batch transaction start (warptangent)
f9e4afd blockchain_utilities: Increase debug statement's log level (warptangent)
699e4b3 blockchain_utilities: Pass expected number of blocks when starting batch (warptangent)
6e170c8 Optionally allow DB to know expected number of blocks at batch transaction start (warptangent)
This currently only affects blockchain_import and blockchain_converter.
When the number of blocks expected for the batch transaction is
provided, make an estimate of the DB space needed. If not enough free
space remains, resize the DB.
The estimate is made based on:
- the average size of the last 500 blocks, or if larger, a min. block
size of 4k
- a factor for the expanded size a block occupies in the DB across the
sub-dbs/tables
- a safety factor (1.7) to allow for a "reasonable" average block size
increase over the batch
Increase the DB size by whichever is greater: the estimated size needed
or a minimum increase size, currently 128 MB.
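For illustration, a minimal sketch of that estimate in code; the per-block expansion factor across sub-dbs is a placeholder constant here, since its actual value isn't stated above:

  #include <algorithm>
  #include <cstdint>

  // Rough DB-space estimate for a batch of blocks, following the description above.
  std::uint64_t estimate_batch_bytes(std::uint64_t avg_block_size_last_500, std::uint64_t num_blocks)
  {
      const std::uint64_t min_block_size = 4 * 1024;               // 4k minimum block size
      const double db_expansion_factor   = 4.0;                    // placeholder: expansion across sub-dbs/tables
      const double safety_factor         = 1.7;                    // allows for average block size growth over the batch
      const std::uint64_t min_increase   = 128ULL * 1024 * 1024;   // 128 MB minimum increase

      const std::uint64_t block_size = std::max(avg_block_size_last_500, min_block_size);
      const std::uint64_t estimated  =
          static_cast<std::uint64_t>(block_size * db_expansion_factor * safety_factor) * num_blocks;

      // Increase the DB size by whichever is greater: the estimate or the minimum increase.
      return std::max(estimated, min_increase);
  }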
The conservative factors in the estimate help in testing that the resize
occurs when needed, and without gratuitous size increases. For common
use, the safety factor and minimum increase size could reasonably be
increased.
For testing, setting DEFAULT_MAPSIZE (blockchain_db/lmdb/db_lmdb.h) to 1
<< 27 (128 MB) and recompiling will ensure DB resizes take place sooner
and more frequently.
dc4dbc1 simplewallet: allow creating a wallet from a public address and view secret key (moneromooo-monero)
6a0f61d account: allow creating an account from a public address and view secret key (moneromooo-monero)
e05a58a wallet2: fix write_watch_only_wallet comment description (moneromooo-monero)
4bf6f0d simplewallet: forbid seed commands for watch only wallets (moneromooo-monero)
It uses the async console handler differently from simplewallet,
and wasn't running the same exit code, causing it to never actually
exit after breaking out of the console entry loop.
The new save_watch_only saves a copy of the keys file without the
spend key. It can then be given away to be used as a normal keys
file, but with no spend ability.
Sends all the dust to your own wallet. May fail (if the fee required
is more than the dust total). May end up paying most of the dust in fees.
Unlocked dust total is now also displayed in "balance".
5680604 Replace hardcoded value with existing constant of same value (warptangent)
f37ee2f Update database resize behavior (warptangent)
f85cd8e Include database error in more error messages (warptangent)
Replace --set_log with --log-level for consistency.
Show default log level in usage.
Add --log-file for specifying log file path.
Document log file path.
Display log file path at startup.
As with display of seed, don't log view key and spend key.
Includes:
- display of viewkey at wallet creation
- "viewkey" command output
- "spendkey" command output
Add fix for compile error with multiple uses of peerid_type (uint64_t)
variable in lambda expression.
- known GCC issue: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65843
epee: replace return value of nullptr for expected boolean with false.
Fixes #231.
The random selection of a node shouldn't favor repeats that occur in the
hardcoded and DNS seed node lists.
Remove the hardcoded ":18080" address, which gives a parse error.
Test: bitmonerod --log-level 2
The seed node list displayed at startup shouldn't show duplicate
addresses (includes port).
Based on tewinget's update.
Make OpenAlias address format independent of existing DNS functions.
Add tests.
Test:
make debug-test
cd build/debug/tests/unit_tests
# test that regular DNS functions work, including IPv4 lookups.
# also test function that converts OpenAlias address format
make && ./unit_tests --gtest_filter=DNSResolver*
# test that OpenAlias addresses like donate@getmonero.org work from
# wallet tools
make && ./unit_tests --gtest_filter=AddressFromURL.Success
Add public method blockchain_storage::debug_pop_block_from_blockchain()
Ensure blockchain_import calls destructors before exit.
To test:
DATABASE=memory make release
// create blockchain.bin from blockchain.raw if needed
build/release/bin/blockchain_import --block-stop 1000
// try popping a single block
build/release/bin/blockchain_import --pop-blocks 1
Some filesystems (*cough* NTFS *cough*) aren't good with sparse files,
so this makes LMDB dynamically resize its mapsize as needed. Note: the
check interval is currently every 10 blocks (for testing) and will
probably need to change to 1000 or something. Default mapsize set to
1GiB.
Blockchain conversion tools using batching will probably segfault, I'll
fix that in the next commit.
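A minimal sketch of such a resize check against the LMDB C API (the threshold and growth amount are illustrative; the real code must also ensure no transactions are active in this process when calling mdb_env_set_mapsize):

  #include <lmdb.h>
  #include <cstddef>

  // Grow the LMDB map when the remaining space falls below a threshold.
  // Returns an MDB_* error code (0 on success or when no resize was needed).
  int resize_if_needed(MDB_env* env)
  {
      MDB_envinfo info;
      MDB_stat stat;
      int rc;

      if ((rc = mdb_env_info(env, &info)) != 0)
          return rc;
      if ((rc = mdb_env_stat(env, &stat)) != 0)
          return rc;

      const std::size_t size_used = static_cast<std::size_t>(stat.ms_psize) * (info.me_last_pgno + 1);
      const std::size_t threshold = info.me_mapsize / 10;            // resize when less than ~10% is free
      if (info.me_mapsize - size_used > threshold)
          return 0;                                                  // plenty of room left

      const std::size_t new_size = info.me_mapsize + (std::size_t(1) << 30); // grow by 1 GiB
      return mdb_env_set_mapsize(env, new_size);                     // requires no active txns in this process
  }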
Remove repeated coinbase tx in each exported block's data.
Add resume from last exported height to blockchain_export, making it the
default behavior when the file already exists.
Start reorganizing the utilities.
Various cleanup.
Update output, including referring to both height and block numbers as
zero-based instead of one-based. This better matches the block data,
rather than just some parts of the existing codebase.
Use smaller default batch sizes for importer when verifying, so progress
is saved more frequently.
Use small default batch size (1000) for importer on Windows, due to
current issue with big transaction sizes on LMDB.
file format
-----------
[4-byte magic | variable-length header | block data]

header
------
4-byte file_info length
file_info struct
  file format major version
  file format minor version
  header length (includes file_info struct)
[rest of header, padded with 0 bytes up to header length]

block data
----------
4-byte chunk/block_package length
block_package struct
  block
  txs (coinbase/miner tx included already in block)
  block_size
  cumulative_difficulty
  coins_generated
4-byte chunk/block_package length
block_package struct
[...]
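A minimal sketch of walking this layout, assuming the 4-byte length fields are little-endian and treating the serialized structs as opaque blobs (their exact encoding is defined by the export tool); locating the end of the header via the header length field inside file_info is left to the caller:

  #include <cstddef>
  #include <cstdint>
  #include <fstream>

  // Read a 4-byte little-endian length prefix; returns false on EOF/error.
  static bool read_u32(std::ifstream& in, std::uint32_t& value)
  {
      unsigned char buf[4];
      if (!in.read(reinterpret_cast<char*>(buf), 4))
          return false;
      value = std::uint32_t(buf[0]) | (std::uint32_t(buf[1]) << 8) |
              (std::uint32_t(buf[2]) << 16) | (std::uint32_t(buf[3]) << 24);
      return true;
  }

  // Walk the block data chunks, given the offset at which the header ends.
  std::size_t count_chunks(std::ifstream& in, std::streamoff block_data_start)
  {
      in.seekg(block_data_start, std::ios::beg);
      std::uint32_t chunk_len = 0;
      std::size_t count = 0;
      while (read_u32(in, chunk_len))            // 4-byte chunk/block_package length
      {
          in.seekg(chunk_len, std::ios::cur);    // skip the serialized block_package
          if (!in)
              break;
          ++count;
      }
      return count;
  }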