Structured serialization and de-serialization methods for (many new) types
which are used for requests or responses in the RPC.
The new types include the RPC requests and responses themselves, and the
structs which compose types within those.
This commit refactors some of the rpc-related functions in the
Blockchain class to be more composable. This change was made
in order to make implementing the new zmq rpc easier without
trampling on the old rpc.
New functions:
Blockchain::get_num_mature_outputs
Blockchain::get_random_outputs
Blockchain::get_output_key
Blockchain::get_output_key_mask_unlocked
Blockchain::find_blockchain_supplement (overload)
Functions which previously had this functionality inline now call these
new functions as necessary.
This might prevent some calls to terminate when the LockedTXN
dtor is called as part of stack unwinding caused by another
exception in the first place.
c867357a cryptonote_protocol: error handling on cleanup_handle_incoming_blocks (moneromooo-monero)
ce901fcb Fix blockchain_import wedge on exception in cleanup_handle_incoming_blocks (moneromooo-monero)
84fa015e core: guard against exceptions in handle_incoming_{block,tx} (moneromooo-monero)
5807529e blockchain: cap memory size of retrieved blocks (moneromooo-monero)
c1b10381 rpc: decrease memory usage a bit in getblocks.bin (moneromooo-monero)
3dd34a49 Cleanup test impact of moving blockchain_db_types() (Howard Chu)
80344740 More DB support cleanup (Howard Chu)
4c7f8ac0 DB cleanup (Howard Chu)
If monerod is started with default sync mode, set it to SAFE after
synchronization completes. Set it back to FAST if synchronization
restarts (e.g. because another peer has a longer blockchain).
If monerod is started with an explicit sync mode, none of this
automation takes effect.
5d4ef719 core: speed up output index unique set calculation (moneromooo-monero)
19d7f568 perf_timer: allow profiling more granular than millisecond (moneromooo-monero)
bda8c598 epee: add nanosecond timer and pause/restart profiling macros (moneromooo-monero)
Add get_fork_version and add_ideal_fork_version to core so
cryptonote_protocol does not need to use the Blockchain
class directly, as it's not in its dependencies, and add
those to the fake core classes in tests too.
We won't even talk to a peer which claims a wrong version
for its top block. This will avoid syncing to known bad
peers in the first place.
An IP failure is also recorded when a block fails to verify.
A sort+uniq step was done for every tx in a 200 block chunk,
causing a lot of repeated scanning as the size of the offset
map got larger with every added tx. We now do the step only
once at the end of the loop.
Doing it this way potentially uses more memory, but testing
shows that it's currently only about 2% more.
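A minimal sketch of the idea, with illustrative names rather than the actual
Blockchain code: collect the offsets for the whole chunk first, then run the
sort+unique step once at the end.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Illustrative only: gather all offsets used by a chunk of blocks, then
    // sort and deduplicate once, instead of after every transaction.
    std::vector<uint64_t> collect_offsets(const std::vector<std::vector<uint64_t>> &per_tx_offsets)
    {
      std::vector<uint64_t> offsets;
      for (const auto &tx_offsets : per_tx_offsets)
        offsets.insert(offsets.end(), tx_offsets.begin(), tx_offsets.end());

      // single sort+unique pass over the whole chunk
      std::sort(offsets.begin(), offsets.end());
      offsets.erase(std::unique(offsets.begin(), offsets.end()), offsets.end());
      return offsets;
    }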
If the number of blocks to check was not a multiple of the
number of preparation threads, the last few blocks would
not be included in the threaded long hash calculation.
Those would still get calculated when the block gets added
to the chain, however, so this was only a tiny performance
hit, rather than a security bug.
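For illustration, a sketch of how the block ranges can be split so the
remainder is not dropped (names are made up, not the actual code):

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Illustrative chunking: give each worker a contiguous [start, end) range
    // of blocks, folding any remainder into the last range so no block is
    // skipped by the threaded long hash pass.
    std::vector<std::pair<std::size_t, std::size_t>> split_ranges(std::size_t num_blocks, std::size_t threads)
    {
      if (threads == 0)
        threads = 1;
      std::vector<std::pair<std::size_t, std::size_t>> ranges;
      const std::size_t per_thread = num_blocks / threads;
      std::size_t start = 0;
      for (std::size_t i = 0; i < threads; ++i)
      {
        const std::size_t end = (i + 1 == threads) ? num_blocks : start + per_thread;
        ranges.emplace_back(start, end);
        start = end;
      }
      return ranges;
    }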
They used to be sorted by amount, which was fine before rct,
but is now suboptimal, since amounts are not known anymore.
In particular, it would give a recipient knowledge of whether
change was higher or lower than the amount received.
02d66db4 tx_pool: initialize padding in txpool meta structure (moneromooo-monero)
0722aea3 cryptonote_core: initialize checkpoint flag (moneromooo-monero)
91d41090 tx_pool: ensure txes loaded from poolstate.bin have their txid cached (moneromooo-monero)
aaeb164c tx_pool: remove transactions if they're in the blockchain (moneromooo-monero)
558cfc31 core, wallet: faster tx pool scanning (moneromooo-monero)
f065234b core: cache tx and block hashes in the respective classes (moneromooo-monero)
The txid is not saved, and we want to make sure the transactions
have their txid cached while in the pool, since get_transactions
copies the transaction object, so any txid calculation on those
copies would not benefit any later caller, since the original tx
would be left without a cached txid.
Minimum mixin 4 and enforced ringct is moved from v5 to v6.
v5 is now used for an increased minimum block size (from 60000
to 300000) to cater for larger typical/minimum transaction size.
The fee algorithm is also changed to decrease the base per kB
fee, and to add a cheap tier for transactions we do not mind
getting delayed (or even never included in a block).
b553c282 rpc: fix BUILD_TAG mispelling (BUILDTAG) (moneromooo-monero)
02097c87 core: print the "new update found" message in cyan, for visibility (moneromooo-monero)
749ebace download: check available disk space before downloading (moneromooo-monero)
f36c5f1e download: give download threads distinct names (moneromooo-monero)
f6211322 core: make update download cancellable (moneromooo-monero)
63f0e074 download: async API (moneromooo-monero)
9bf017ed http_client: allow cancelling a download (moneromooo-monero)
0d90123c http_client: allow derived class to get headers at start (moneromooo-monero)
If the blocks aren't being linked against a binary (such as
one of the blockchain utilities), the symbol will not be
NULL, but the size will be 0. This avoids a spurious warning
about the data hash.
This is to prevent unbounded memory use. Since I don't think there
is a container that has quick insert, quick lookup, and automatic
FIFO, I use two and swap every N, clearing the oldest one.
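A minimal sketch of that two-container scheme, with illustrative names and
threshold: lookups check both sets, inserts go into the current one, and once
it reaches N entries the older set is discarded.

    #include <cstddef>
    #include <string>
    #include <unordered_set>

    // Illustrative bounded "seen" cache: quick insert, quick lookup, and rough
    // FIFO behaviour by keeping two sets and dropping the older one every N
    // insertions.
    class seen_cache
    {
    public:
      explicit seen_cache(std::size_t n): m_max(n) {}

      bool seen(const std::string &key) const
      {
        return m_current.count(key) || m_old.count(key);
      }

      void add(const std::string &key)
      {
        m_current.insert(key);
        if (m_current.size() >= m_max)
        {
          m_old.swap(m_current);   // the current set becomes the "old" generation
          m_current.clear();       // the oldest entries are dropped
        }
      }

    private:
      std::size_t m_max;
      std::unordered_set<std::string> m_current, m_old;
    };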
You're wondering how this fixes core tests, aren't you...
It prevents the miner (initialized by cryptonote::core) from
breaking when trying to access arguments that were not added.
Since the tests don't use the miner directly, it makes more
sense to have cryptonote_core add those, since it also uses
the miner.
They're now used by core to determine the data directory to use
for the txpool directory.
This fixes an assert in the core tests, which don't use the RPC
server, which normally initializes the P2P code.
a427235e core: add a missing newline on a string to be logged (moneromooo-monero)
b6a2230e unit_tests: fix minor blockchain_db regression (moneromooo-monero)
c488eca5 hardfork: tone down some logs (moneromooo-monero)
bed2d9f2 Get rid of directory lock (Howard Chu)
2e913676 Handle map resizes from other processes (Howard Chu)
bf1348b7 Can't cache num_txs or num_outputs either (Howard Chu)
dc53e9ee Add a few read txns to streamline (Howard Chu)
When scanning for outputs used in a set of incoming blocks,
we expect that some of the inputs in their transactions will
not be found in the blockchain, as they could be in previous
blocks in that set. Those outputs will be scanned there at
a later point. In this case, we add a flag to control whether
an output not being found is expected or not.
Zero change is sent to a random address, which confuses the code
which determines which key to use to encrypt the payment id.
Ignore zero amounts for this purpose, so the payment id gets
encrypted with the real destination's key.
3ae79a59 core: set missing verifivation_failed flag when rejecting a tx (moneromooo-monero)
ea6549e9 core_tests: decrease trace level from trace to debug (moneromooo-monero)
0644eed7 Remove boost/foreach.cpp includes (Miguel Herranz)
36dd3e23 Replace BOOST_REVERSE_FOREACH with ranged for (Miguel Herranz)
629e3101 Replace BOOST_FOREACH with C++11 ranged for (Miguel Herranz)
poolstate.bin and p2pstate.bin are stored in .bitmonero/ if the default
P2P port is being used.
If another port is used both files are stored in
.bitmonero/PORTNUMBER/.
This replaces the epee and data_loggers logging systems with
a single one, and also adds filename:line and explicit severity
levels. Categories may be defined, and logging severity set
by category (or set of categories). epee style 0-4 log level
maps to a sensible severity configuration. Log files now also
rotate when reaching 100 MB.
To select which logs to output, use the MONERO_LOGS environment
variable, with a comma separated list of categories (globs are
supported), with their requested severity level after a colon.
If a log matches more than one such setting, the last one in
the configuration string applies. A few examples:
This one is (mostly) silent, only outputting fatal errors:
MONERO_LOGS=*:FATAL
This one is very verbose:
MONERO_LOGS=*:TRACE
This one is totally silent (logwise):
MONERO_LOGS=""
This one outputs all errors and warnings, except for the
"verify" category, which prints just fatal errors (the verify
category is used for logs about incoming transactions and
blocks, and it is expected that some/many will fail to verify,
hence we don't want the spam):
MONERO_LOGS=*:WARNING,verify:FATAL
Log levels are, in decreasing order of priority:
FATAL, ERROR, WARNING, INFO, DEBUG, TRACE
Subcategories may be added using prefixes and globs. This
example will output net.p2p logs at the TRACE level, but all
other net* logs only at INFO:
MONERO_LOGS=*:ERROR,net*:INFO,net.p2p:TRACE
Logs which are intended for the user (which Monero was using
a lot through epee, but really isn't a nice way to do things)
should use the "global" category. There are a few helper macros
for using this category, eg: MGINFO("this shows up by default")
or MGINFO_RED("this is red"), to try to keep a similar look
and feel for now.
Existing epee log macros still exist, and map to the new log
levels, but since they're used as a "user facing" UI element
as much as a logging system, they often don't map well to log
severities (ie, a log level 0 log may be an error, or may be
something we want the user to see, such as an important info).
In those cases, I tried to use the new macros. In other cases,
I left the existing macros in. When modifying logs, it is
probably best to switch to the new macros with explicit levels.
The --log-level options and set_log commands now also accept
category settings, in addition to the epee style log levels.
3ff54bdd Check for correct thread before ending batch transaction (Howard Chu)
eaf8470b Must wait for previous batch to finish before starting new one (Howard Chu)
c903c554 Don't cache block height, always get from DB (Howard Chu)
eb1fb601 Tweak default db-sync-mode to fast:async:1 (Howard Chu)
0693cff9 Use batch transactions when syncing (Howard Chu)
fsync the DB asynchronously, to allow block download/verification
to proceed while syncing. Sync after every batch. Note that
"fastest" still defaults to fastest:async:1000.
It can now be queried by RPC, so it needs to be set before
it is otherwise needed for consensus, even if no blocks had
to be added (ie, exit and restart quickly).
Continue filling until we reach the block size limit, or the
resulting coinbase decreases.
Also remove old sanity check on block size, which is now not
wanted anymore.
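A rough sketch of that filling loop, under the assumption that the pool
transactions are already sorted by fee per byte; the names and the penalty
formula shown here are illustrative stand-ins for the real get_block_reward
logic:

    #include <cstdint>
    #include <vector>

    struct pool_tx { uint64_t blob_size; uint64_t fee; };

    // Illustrative penalty: the base reward shrinks quadratically once the
    // block grows past the median size, and vanishes at twice the median.
    // The real rules use exact integer math.
    uint64_t reward_for_size(uint64_t block_size, uint64_t median_size, uint64_t base_reward)
    {
      if (block_size <= median_size)
        return base_reward;
      if (block_size > 2 * median_size)
        return 0;
      const long double excess = (long double)(block_size - median_size) / median_size;
      return (uint64_t)(base_reward * (1.0L - excess * excess));
    }

    // Keep adding pool transactions until the size limit is hit or the
    // resulting coinbase (penalised reward + fees) would decrease.
    uint64_t fill_block(const std::vector<pool_tx> &txs, uint64_t size_limit,
                        uint64_t median_size, uint64_t base_reward)
    {
      uint64_t size = 0, fees = 0;
      uint64_t best = reward_for_size(size, median_size, base_reward);
      for (const pool_tx &tx : txs)
      {
        if (size + tx.blob_size > size_limit)
          break;
        const uint64_t candidate =
            reward_for_size(size + tx.blob_size, median_size, base_reward) + fees + tx.fee;
        if (candidate < best)
          break; // coinbase would decrease: stop filling
        size += tx.blob_size;
        fees += tx.fee;
        best = candidate;
      }
      return best; // total coinbase for the chosen set of transactions
    }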
After popping blocks from the old chain, the hard fork object's
notion of the current version was not in line with the new height,
causing the first blocks from the new chain to be rejected due
to a false expectation of a newer version.
A bug in cold signing caused a spurious pubkey to be included
in transactions, so we need to ensure we use the correct one
when sending outputs from one of those.
If the block reward to use for the fee calculation can't be
calculated (should not happen in practice), use a high bound,
so we use a fee overestimate that will be accepted by the network.
The vast majority of transactions will have just one tx pubkey,
but a bug with cold wallet signing caused two such keys to be
there, with the second one being the real one.
Added a new command to the P2P protocol definitions to allow querying for support flags.
Implemented handling of new support flags command in net_node. Changed for_each callback template to include support flags. Updated print_connections command to show peer support flags.
Added p2p constant for signaling fluffy block support.
Added get_pool_transaction function to cryptonote_core.
Added new commands to cryptonote protocol for relaying fluffy blocks.
Implemented handling of fluffy block command in cryptonote protocol.
Enabled fluffy block support in node initial configuration.
Implemented get_testnet function in cryptonote_core.
Made it so that fluffy blocks only run on testnet.
Compute derivation only once per tx, instead of once per output. Approx 33% faster while using 75% as much CPU on my machine. Note old functions in cryptonote_core (lookup_acc_outs and is_out_to_acc) are still used by tests.
The fee will vary based on the base reward and the current
block size limit:
fee = (R/R0) * (M0/M) * F0
R: base reward
R0: reference base reward (10 monero)
M: block size limit
M0: minimum block size limit (60000)
F0: 0.002 monero
Starts applying at v4
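A minimal numeric sketch of that formula, assuming 1e12 atomic units per
monero; the function name and the use of floating point are illustrative (the
daemon uses exact integer arithmetic):

    #include <cstdint>

    // Illustrative only: fee = (R/R0) * (M0/M) * F0
    uint64_t dynamic_fee_per_kb(uint64_t base_reward /* R */, uint64_t block_size_limit /* M */)
    {
      const uint64_t R0 = 10000000000000ull; // reference base reward: 10 monero
      const uint64_t M0 = 60000;             // minimum block size limit
      const uint64_t F0 = 2000000000ull;     // 0.002 monero
      long double fee = (long double)base_reward / R0;
      fee *= (long double)M0 / block_size_limit;
      fee *= F0;
      return (uint64_t)fee;
    }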
25% of the outputs are selected from the last 5 days (if possible),
in order to avoid the common case of sending recently received
outputs again. 25% and 5 days are subject to review later, since
it's just a wallet level change.
4038e86 Add performance timers for ringct tx verification (moneromooo-monero)
74dfdb0 perf_timer: new class and macros to make performance logs easier (moneromooo-monero)
10be903 Brackets to prevent premature return (NanoAkron)
fb1785a Brackets to ensure doesn't function prematurely return (NanoAkron)
8ed0d72 Moved logging to target functions rather than caller (NanoAkron)
442bfd1 Added messages at log level 2 to reflect deactivation procedure (NanoAkron)
bba6af9 wallet: cold wallet transaction signing (moneromooo-monero)
9872dcb wallet: fix log confusion between bytes and kilobytes (moneromooo-monero)
d9b0bf9 cryptonote_core: make extra field removal more generic (moneromooo-monero)
98f19d4 serialization: add support for serializing std::pair and std::list (moneromooo-monero)
This change adds the ability to create a new unsigned transaction
from a watch only wallet, and save it to a file. This file can
then be moved to another computer/VM where a cold wallet may load
it, sign it, and save it. That cold wallet does not need to have
a blockchain nor daemon. The signed transaction file can then be
moved back to the watch only wallet, which can load it and send
it to the daemon.
Two new simplewallet commands to use it:
sign_transfer (on the cold wallet)
submit_transfer (on the watch only wallet)
The transfer command used on a watch only wallet now writes an
unsigned transaction set in a file called 'unsigned_monero_tx'
instead of submitting the tx to the daemon as a normal wallet does.
The signed tx file is called 'signed_monero_tx'.
Keep the immediate direct deps at the library that depends on them,
declare deps as PUBLIC so that targets that link against that library
get the library's deps as transitive deps.
Break dep cycle between blockchain_db <-> cryptonote_core.
No code refactoring, just hide cycle from cmake so that
it doesn't complain (cycles are allowed only between
static libs, not shared libs).
This is in preparation for supporting the BUILD_SHARED_LIBS cmake
built-in option for building internal libs as shared.
Since this queries block heights for blocks that may or may not
exist, queries for non existing blocks would throw an exception,
and that would slow down the loop a lot: about 7 seconds to go
through a 30 hash list.
Fix this by adding an optional return block height to block_exists
and using this instead. Actual errors will still throw an
exception.
This also cuts down on log exception spam.
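The shape of that change, sketched with an illustrative stand-in for the DB
interface: existence is reported with a boolean and the height through an
optional out-parameter, so missing blocks never throw.

    #include <cstdint>
    #include <string>
    #include <unordered_map>

    // Illustrative stand-in for the DB layer: returns false for unknown blocks
    // instead of throwing, and optionally reports the height when found.
    class block_index
    {
    public:
      bool block_exists(const std::string &hash, uint64_t *height = nullptr) const
      {
        const auto it = m_heights.find(hash);
        if (it == m_heights.end())
          return false;          // not found: no exception, no log spam
        if (height)
          *height = it->second;
        return true;
      }

      void add(const std::string &hash, uint64_t height) { m_heights[hash] = height; }

    private:
      std::unordered_map<std::string, uint64_t> m_heights;
    };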
The code used to cap at 5000 blocks per sync. It also treated 0 as 1.
Remove these checks; if specified as 0 do no periodic syncs at all.
Then the user is responsible for syncing in some external process.
When RingCT is enabled, outputs from coinbase transactions
are created as a single output, and stored as RingCT output,
with a fake mask. Their amount is not hidden on the blockchain
itself, but they are then able to be used as fake inputs in
a RingCT ring. Since the output amounts are hidden, their
"dustiness" is not an obstacle anymore to mixing, and this
makes the coinbase transactions a lot smaller, as well as
helping the TXO set to grow more slowly.
Also add a new "Null" type of rct signature, which decreases
the size required when no signatures are to be stored, as
in a coinbase tx.
This allows the key to be not the same for two outputs sent to
the same address (eg, if you pay yourself, and also get change
back). Also remove the key amounts lists and return parameters
since we don't actually generate random ones, so we don't need
to save them as we can recalculate them when needed if we have
the correct keys.
The whole rct data apart from the MLSAGs is now included in
the signed message, to avoid malleability issues.
Instead of passing the data that's not serialized as extra
parameters to the verification API, the transaction is modified
to fill all that information. This means the transaction can
not be const anymore, but it is cleaner in other ways.
Since these are needed at the same time as the output pubkeys,
this is a whole lot faster, and takes less space. Only outputs
of 0 amount store the commitment. When reading other outputs,
a fake commitment is regenerated on the fly. This avoids having
to rewrite the database to add space for fake commitments for
existing outputs.
This code relies on two things:
- LMDB must support fixed size records per key, rather than
per database (ie, all records on key 0 are the same size, all
records for non 0 keys are same size, but records from key 0
and non 0 keys do have different sizes).
- the commitment must be directly after the rest of the data
in outkey and output_data_t.
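A sketch of the read path this describes, with illustrative names (zero_commit
stands in for whatever regenerates a commitment to a public amount with a
fixed mask; the placeholder body here only makes the sketch self-contained):

    #include <array>
    #include <cstdint>

    using commitment_t = std::array<unsigned char, 32>;

    struct output_data
    {
      uint64_t amount;          // 0 for rct outputs hiding their amount
      commitment_t commitment;  // only meaningful when amount == 0
    };

    // Placeholder for the real deterministic commitment to a public amount;
    // here we just tag the bytes so the sketch compiles on its own.
    commitment_t zero_commit(uint64_t amount)
    {
      commitment_t c{};
      for (int i = 0; i < 8; ++i)
        c[i] = (amount >> (8 * i)) & 0xff;
      return c;
    }

    commitment_t get_commitment(const output_data &out)
    {
      if (out.amount == 0)
        return out.commitment;        // stored alongside the output key
      return zero_commit(out.amount); // regenerated on the fly, nothing stored
    }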
The mixRing (output keys and commitments) and II fields (key images)
can be reconstructed from vin data.
This saves some modest amount of space in the tx.
It may be suboptimal, but it's a pain to have to rebuild everything
when some of this changes.
Also, no clue why there seem to be two different code paths for
serializing a tx...
This plugs a privacy leak from the wallet to the daemon,
as the daemon could previously see which transaction input it
had not itself supplied, and therefore which one was real. Now,
the wallet requests a particular set of outputs, including the
real one.
This can result in transactions that can't be accepted if
the wallet happens to select too many outputs with non standard
unlock times. The daemon could know this and select another
output, but the wallet is blind to it. It's currently very
unlikely since I don't think anything uses non default
unlock times. The wallet requests more outputs than necessary
so it can use spares if any of the returned outputs are still
locked. If there are not enough spares to reach the desired
mixin, the transaction will fail.
This constrains the number of instances of any amount
to the unlocked ones (as defined by the default unlock time
setting: outputs with non default unlock time are not
considered, so may be counted as unlocked even if they are
not actually unlocked).
The tests for rejection of unmixable outputs in v2 are commented out,
as there are no unmixable outputs created anymore. This should be
restored at some point.
It sets the max number of threads to use for a parallel job.
This is different from the total number of threads, since monero
binaries typically start a lot of them.
When reaching the tail emission phase, the amount of coins will
eventually go over MONEY_SUPPLY, overflowing 64 bits. There was
a check added to blockchain_storage, but this was not ported to
the blockchain DB version.
Reported by smooth.
This can generate non decomposed outputs for very large block
rewards (or not so large ones if a miner decides to not quantize
the block rewards). Out of an abundance of caution, we refuse
to generate those. They are still accepted by the consensus code,
however.
d5d46e6 tests: obligatory hardfork unit build fix after interface change (moneromooo-monero)
25672d3 wallet: pass std::function by const ref, not value (moneromooo-monero)
0be6e08 wallet: do not leak owned amounts to the daemon unless --trusted-daemon (moneromooo-monero)
12146da wallet: change sweep_dust to sweep_unmixable (moneromooo-monero)
600a3cf New RPC and daemon command to get output histogram (moneromooo-monero)
f9a2fd2 wallet: handle rare case where fee adjustment can bump to the next kB (moneromooo-monero)
f26651a wallet: factor fee calculation (moneromooo-monero)
This allows appropriate action to be taken, like displaying
the reason to the user.
Do just that in simplewallet, which should help a lot in
determining why users fail to send.
Also make it so a tx which is accepted but not relayed is
seen as a success rather than a failure.
This is a list of existing output amounts along with the number
of outputs of that amount in the blockchain.
The daemon command takes:
- no parameters: all outputs with at least 3 instances
- one parameter: all outputs with at least that many instances
- two parameters: all outputs with an instance count within that range
The default starts at 3 to avoid massive spamming of all dust
outputs in the blockchain, and is the current minimum mixin
requirement.
An optional vector of amounts may be passed, to request
histogram only for those outputs.
This was meant to go in v2, but the miner tx slipped through
the cracks as it doesn't go through the main tx verification
since it doesn't get added to the pool.
tx_pool.h doxygen documentation completed.
Many notes made on areas for improvement, be that functionality or
code clarity.
Commented code and unused code removed.
The functions in src/cryptonote_core/checkpoints_create.{h,cpp} should
be member functions of the checkpoints class, if nothing else for the
sake of keeping their documentation together.
This commit covers moving those functions to be member functions of the
checkpoints class as well as documenting those functions.
All functions in src/cryptonote_core/checkpoints.h are now documented in
doxygen style.
checkpoints.cpp has been reviewed, one function has been marked for
discussion on correctness.
All functions are now documented in doxygen format. Comments have been
updated to reflect the current state of the code. Many areas for
improvement in clarity and design have been noted, as well as cruft to
be removed. These changes are not reflected in this commit both to
allow time for comment and to keep commits organized by purpose.
This is already the default for the daemon, but by checking a command
line argument and calling a Blockchain member function setter.
Initialize the variable to false so it's not dependent on an external
command-line argument check. This allows utilities like
blockchain_import to have a reasonable default without code changes.
c2a1fee simplewallet: prompt for private keys when generating wallets (moneromooo-monero)
4513b4c simplewallet: add a new --restore-from-keys option (moneromooo-monero)
We also replace the --fakechain option with an optional structure
containing details about configuration for the core/blockchain,
for test purposes. This seems more future friendly.
7658ac0 blockchain: revert handle_get_objects adding block id on tx not found (moneromooo-monero)
3a0f4d8 berkeleydb: fix delete/free mismatch (moneromooo-monero)
1642be2 minor bugfixes and refactoring (Thomas Winget)
098dcf2 unit_tests: fix mnemonics unit test testing invalid seeds (moneromooo-monero)
Locking just one db turns out to not have been a good idea, since
the pool and p2p state files have to be used anyway.
Also ensure the directory exists before trying to lock.
- Blockchain should store if it's running on testnet or not
- moved loading compiled-in block hashes to its own function for clarity
- on handle_get_objects, should now correctly return false if a block's
transactions are missing
- replace instances of BOOST_FOREACH with C++11 for loops in Blockchain.
7fc6fa3 wallet: forbid dust altogether in output selection where appropriate (moneromooo-monero)
5e1a739 blockchain: log number of outputs available for a new tx (moneromooo-monero)
bcac101 daemon: fix a few issues reported by valgrind (moneromooo-monero)
a7e8174 tx_pool: fix serialization of new relayed data (moneromooo-monero)
601ad76 hardfork: fix mixup in indexing variable in get_voting_info (moneromooo-monero)
444e22f blockchain: remove unused timer (moneromooo-monero)
7edfdd8 blockchain: fix m_sync_counter uninitialized variable use (moneromooo-monero)
d97582c epee: use generate_random_bytes for new random uuids (moneromooo-monero)
17c7c9c epee: remove dodgy random code that nobody uses (moneromooo-monero)
A boost lock is used to determine whether more than one process
wants to access the database. The boost file_lock doesn't seem
to like locking directories, so we use an arbitrary file in it.
This still allows running two daemons if they use different
database directories (ie, LMDB/BDB, or different data directories).
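A sketch of that approach using Boost.Interprocess; the lock file name and
error handling are illustrative:

    #include <boost/filesystem.hpp>
    #include <boost/interprocess/sync/file_lock.hpp>
    #include <fstream>
    #include <string>

    // Illustrative: boost::interprocess::file_lock cannot lock a directory, so
    // create and lock an arbitrary file inside it instead. The lock object
    // must be kept alive for as long as the database is open.
    bool try_lock_db_directory(const std::string &db_dir, boost::interprocess::file_lock &lock)
    {
      boost::filesystem::create_directories(db_dir);           // ensure the directory exists first
      const std::string lock_path = db_dir + "/lock.mdb";      // file name is illustrative
      std::ofstream(lock_path.c_str(), std::ios::app).flush(); // make sure the file exists
      boost::interprocess::file_lock l(lock_path.c_str());
      if (!l.try_lock())
        return false;                                          // another process holds the lock
      lock.swap(l);                                            // hand the held lock to the caller
      return true;
    }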
This is intended to avoid cases where a timed out tx will be
re-relayed by another peer for which it has not timed out yet,
which would cause the tx to stay in the network's pool for a
long time (until all peers time it out before another one
tries to relay it again).
This ensures this will be done without fail, as the error prone
matching of every return with a call to KILL_IOSERVICE leads to
hard to debug corruption when one is missing.
- only try to stop if actually started
- print number of threads before zeroing it
This fixes the suspiciously doubled "Mining has been stopped"
message on exit.
b39aae7 Tweak 45800a25e9 (hyc)
4a5a5ff blockchain: always stop the ioservice before returning (moneromooo-monero)
78b65cf db_lmdb: safety close db at exit (moneromooo-monero)
45800a2 db_lmdb: fix a strdup/delete[] mistmatch (moneromooo-monero)
If the block reward was too high, the verification failed flag
was set, but the function continued. The code which was supposed
to trap this flag and return failure failed to trap it, and,
while the block was not added to the chain, the function would
return success.
The reason for avoiding returning when the block reward problem
was detected was to be able to return any transactions to the
pool if needed. This is now mooted by moving the transaction
return code to a separate function, which is now called at all
appropriate points, making the logic much simpler, and hopefully
correct now.
We also move the hard fork version check after the prev_id check,
as a block which does not go on top of the chain might not
have the expected version there, without being invalid just for
this reason.
Last, we trap the case where a block fails to be added due to
using already spent key images, to set the verification failed
flag.
This fixes some double spending tests.
This may or may not be unneeded in normal (non test) circumstances,
to be determined later. Keeping these for now may be slower, but safer.
Block reward may now be less than the full amount allowed.
This was breaking the bitflipping test.
We now keep track of whether a block which was accepted by the core
has a lower than allowed block reward, and allow this in the test.
The check was explicit in the original version, so it seems
safer to make it explicit here, especially as it is now done
implicitly in a different place, away from the original check.
The core tests use the blockchain, and reset it to be able
to add test data to it. This does not play nice with the
databases, since those will save that data without an explicit
save call.
We add a fakechain flag that the tests will set, which tells
the core and blockchain code to use a separate database, as
well as skip a few things like checkpoints and fixup, which
only make sense for real data.
This fixes coretests, which does not register daemon specific arguments,
but uses core, which uses those arguments. Also gets rid of an unwanted
dependency on daemon code from core.
Early DB versions did not store key images for inputs if the
transaction spending them had no outputs (ie, all fee). This
is not correct, as this would allow these outputs to be double
spent. This was fixed in 533acc30ed
a few months ago, but databases having synced blocks 2021612 and
685498 with a faulty version will be missing those key images
in the spent keys database. This code checks for this, and adds
those key images if they are missing.
This allows them to be saved as a fixed (one byte) chunk whatever
the value. Using a varint will use two bytes as the high bit gets
set.
This is backward compatible with current usage (0-2 values).
d887c18 hardfork: fix more major/minor issues (moneromooo-monero)
3b47ca2 hardfork: fix rescan on load (moneromooo-monero)
4cea2b1 Add IP blocking for misbehaving nodes (adapted from Boolberry) (Javier Smooth)
9c64b12 quiet down p2p logging a bit (Javier Smooth)
53c75ab blockchain: log versions as numbers, not characters (moneromooo-monero)
edade8d hardfork: fix actual/voting confusion (moneromooo-monero)
Also add some more tests, and rename some instances of
"version" and "add" for clarity.
NOTE: the starting height values are sometimes wrong.
I suspect this is due to the hard fork reorg code being
buggy, since they're good when syncing after the fact.
However, they're not actually used by the consensus code,
so I'm ignoring this for now, but this needs debugging.
The last relayed time of a transaction is maintained, and
transactions will be relayed again if they are still in the
pool after a certain amount of time, which increases with
the transaction's age. All such transactions are resent,
whether or not they originated on the local node.
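A sketch of that policy with a made-up backoff schedule (the real pool code
may use different delays); only the shape matters: the older the transaction,
the longer the interval between relays.

    #include <ctime>

    // Illustrative: decide whether a pool transaction should be relayed again.
    bool should_relay_again(time_t now, time_t received, time_t last_relayed)
    {
      const time_t age = now - received;
      time_t period;
      if (age < 3600)            // first hour: retry every 5 minutes
        period = 300;
      else if (age < 86400)      // first day: retry every hour
        period = 3600;
      else                       // afterwards: retry every 4 hours
        period = 4 * 3600;
      return now - last_relayed >= period;
    }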
Use the correct block time for realtime fuzz on locktime
Use the correct block time to calculate next_difficulty on alt chains (will not work as-is with voting)
Lock unit tests to original block time for now
43bca0d blockchain_utilities: new blockchain_dump diagnostic tool (moneromooo-monero)
5f397e4 Add functions to iterate through blocks, txes, outputs, key images (moneromooo-monero)
0a5a5e8 db_bdb: record numbers for recno databases start at 1 (moneromooo-monero)
50dfdc0 db_bdb: DB_KEYEMPTY is also not found for non-top recon fields (moneromooo-monero)
572780e blockchain_db: use the DNE exceptions where appropriate (moneromooo-monero)
The wallet and the daemon applied different height considerations
when selecting outputs to use. This can leak information on which
input in a ring signature is the real one.
Found and originally fixed by smooth on Aeon.
Using major version would cause older daemons to reject those
blocks as they fail to deserialize blocks with a major version
which is not 1. There is no such restriction on the minor
version, so switching allows older daemons to coexist with
newer ones till the actual fork date, when most will hopefully
have updated already.
Also, for the same reason, we consider a vote for 0 to be a
vote for 1, since older daemons set minor version to 0.
This allows knowing the hard fork a block must obey in order to be
added to the blockchain. The previous semantics would use that new
block's version vote to determine this hard fork, which made it
impossible to use the rules to validate transactions entering the
tx pool (and made it impossible to validate a block before adding
it to the blockchain).
The height function apparently used to return the index of
the last block, rather than the height of the chain. This now
seems to be incorrect, judging by the code, so we remove the
now wrong comment, as well as a couple +/- 1 adjustments
which now cause the median calculation to differ from the
original blockchain_storage version.
It was only used by the older blockchain_storage.
We also move the code to the calling blockchain level, to avoid
replicating the code in every DB implementation. This also makes
the get_random_out method obsolete, and we delete it.
Pros:
- smaller on the blockchain
- shorter integrated addresses
Cons:
- less sparseness
- less ability to embed actual information
The boolean argument to encrypt payment ids is now gone from the
RPC calls, since the decision is made based on the length of the
payment id passed.
A payment ID may be encrypted using the tx secret key and the
receiver's public view key. The receiver can decrypt it with
the tx public key and the receiver's secret view key.
Using integrated addresses now causes the payment IDs to be
encrypted. Payment IDs used manually are not encrypted by default,
but can be encrypted using the new 'encrypt_payment_id' field
in the transfer and transfer_split RPC calls. It is not possible
to use an encrypted payment ID by specifying a manual simplewallet
transfer/transfer_new command, though this is just a limitation
due to input parsing.
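For illustration, the encryption step can be seen as an XOR of the payment ID
with a hash of the ECDH shared secret both sides can derive (the sender from
the tx secret key and the receiver's public view key, the receiver from the
tx public key and the receiver's secret view key); this sketch assumes the
hashed secret is already computed and uses illustrative type names:

    #include <array>
    #include <cstddef>
    #include <cstdint>

    using key8  = std::array<uint8_t, 8>;   // short (encryptable) payment ID
    using key32 = std::array<uint8_t, 32>;  // hashed ECDH shared secret

    // XOR is its own inverse, so the same call both encrypts and decrypts.
    key8 crypt_payment_id(key8 payment_id, const key32 &hashed_shared_secret)
    {
      for (std::size_t i = 0; i < payment_id.size(); ++i)
        payment_id[i] ^= hashed_shared_secret[i];
      return payment_id;
    }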
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads).
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2 as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often however, most reorgs seem to take a few seconds but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery operated devices or STABLE systems with UPS and/or systems with battery backed write cache/solid state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with official git head version.
At the moment, you need a full resync to use this optimized version.
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD+(with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours plus download time, so it usually takes about 2.5 hours to sync the full chain.
2. The current head with memory can process blocks up to 585K in ~4.2 hours plus download time, so it usually takes about 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours plus download time, and usually takes about 36 hours to sync the full chain.
Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437)
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from estimated 3 months(???) into 2.5 days (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
Add public method blockchain_storage::debug_pop_block_from_blockchain()
Ensure blockchain_import calls destructors before exit.
To test:
DATABASE=memory make release
// create blockchain.bin from blockchain.raw if needed
build/release/bin/blockchain_import --block-stop 1000
// try popping a single block
build/release/bin/blockchain_import --pop-blocks 1