Mirror of https://git.wownero.com/wownero/wownero.git, synced 2024-12-25 13:48:51 +00:00
e5d2680094
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks (see the sketch after this list).
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying the database, and multi-thread when possible.
4. Optim: Disable double spend check on block verification; double spends are already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on Ubuntu/VMs; reason TBD).
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations, so that only 2 db reads are needed when a new block arrives (instead of 1470).
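Below is a minimal sketch of the group-hashing idea from item 1. It assumes a hypothetical get_block_longhash() helper (stubbed out here with a dummy body); the daemon's real block type, hashing entry points, and thread handling differ.

    #include <algorithm>
    #include <cstddef>
    #include <cstring>
    #include <thread>
    #include <vector>

    // Stand-ins for illustration only; the real block type and PoW hash live in the daemon.
    struct block { std::size_t height; };
    struct hash256 { unsigned char data[32]; };

    hash256 get_block_longhash(const block& b)
    {
      hash256 h;
      std::memset(h.data, static_cast<int>(b.height & 0xff), sizeof(h.data)); // dummy stand-in
      return h;
    }

    // Compute the PoW long-hash of a group of blocks on up to max_threads threads.
    std::vector<hash256> longhash_group(const std::vector<block>& blocks, std::size_t max_threads)
    {
      std::vector<hash256> out(blocks.size());
      const std::size_t n_threads = std::max<std::size_t>(1, std::min(max_threads, blocks.size()));
      std::vector<std::thread> workers;
      for (std::size_t t = 0; t < n_threads; ++t)
      {
        workers.emplace_back([&, t] {
          // Each worker hashes a strided subset, so no two threads write the same slot.
          for (std::size_t i = t; i < blocks.size(); i += n_threads)
            out[i] = get_block_longhash(blocks[i]);
        });
      }
      for (auto& w : workers)
        w.join();
      return out;
    }
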
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height, so a single lookup replaces three (see the sketch after this list).
8. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
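Items 7-8 describe packing everything input validation needs into a single output_keys record. A rough sketch of such a record, with hypothetical field names (the real table layout is defined in the db backends):

    #include <cstdint>

    // One read from output_keys yields the public key, unlock time, and height,
    // instead of three separate lookups across several tables.
    #pragma pack(push, 1)
    struct output_data_t
    {
      uint8_t  pubkey[32];   // output public key
      uint64_t unlock_time;  // when the output becomes spendable
      uint64_t height;       // block height at which the output was created
    };
    #pragma pack(pop)
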
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up; TBD: measure actual effect) (see the sketch after this list).
2. Optim: Modified output_keys table to store public_key+unlock_time+height, so a single lookup replaces three.
3. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto-resize the map by +1GB increments instead of a 1.5x multiplier.
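Item 1 (the custom comparison for 256-bit key tables) can be pictured as a fixed-width comparator registered with mdb_set_compare; a sketch under that assumption (plain memcmp ordering, which may differ from the ordering the daemon actually installs):

    #include <cstring>
    #include <lmdb.h>

    // Fixed-width comparator for tables whose keys are raw 256-bit hashes;
    // skips LMDB's generic length handling for a minor speed-up.
    static int compare_hash32(const MDB_val* a, const MDB_val* b)
    {
      return std::memcmp(a->mv_data, b->mv_data, 32);
    }

    // Registered once per database handle inside a transaction, e.g.:
    //   mdb_set_compare(txn, dbi_block_heights, compare_hash32);
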
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks (see the sketch below).
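A 32-bit saturation of the kind fixed in item 2 typically appears when the work-times-target product is accumulated in a narrow type. A simplified illustration of doing the multiply/divide in wide arithmetic (unsigned __int128 is a GCC/Clang extension; the daemon uses its own 128-bit helpers):

    #include <cstdint>

    // Simplified next-difficulty step: total work over the window times the target
    // block time, divided by the observed time span. The intermediate product can
    // exceed 2^64, let alone 2^32, so it must not be computed in a narrow type.
    uint64_t next_difficulty_step(uint64_t window_work, uint64_t target_seconds, uint64_t time_span)
    {
      if (time_span == 0)
        time_span = 1; // guard against a degenerate window
      unsigned __int128 product = static_cast<unsigned __int128>(window_work) * target_seconds;
      return static_cast<uint64_t>(product / time_span);
    }
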
[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2, as it sometimes takes > 10 MINUTES to pop a block during reorganization.
   This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000; see the sketch below)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery operated devices or STABLE systems with UPS and/or systems with battery backed write cache/solid state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to their tables and are not compatible with the official git HEAD version.
At the moment, you need a full resync to use this optimized version.
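The nblocks_per_sync part of --db-sync-mode (option 2 above) amounts to counting stored blocks and flushing every N of them, either blocking (sync) or handing the flush to a background task (async). A minimal sketch with hypothetical names:

    #include <cstdint>
    #include <future>

    // Hypothetical database handle; flush() stands in for fdatasync/fsync or equivalent.
    struct BlockchainDB { void flush() { /* fdatasync/fsync the environment */ } };

    struct SyncBatcher
    {
      BlockchainDB& db;
      uint64_t nblocks_per_sync;
      bool async;
      uint64_t since_last_sync = 0;
      std::future<void> pending;  // last asynchronous flush, if any

      // Called after each block is written to the database.
      void on_block_stored()
      {
        if (++since_last_sync < nblocks_per_sync)
          return;
        since_last_sync = 0;
        if (async)
          // Hand the flush to a background task; overwriting `pending` first waits
          // for any previous flush, so at most one flush is ever in flight.
          pending = std::async(std::launch::async, [this] { db.flush(); });
        else
          db.flush(); // block until the data hits disk
      }
    };

    // Usage (C++14 aggregate init): SyncBatcher batcher{db, 1000, /*async=*/true};
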
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K + SSD (with full PoW computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory database can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.
Average processing times (with full PoW computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437)
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months(???) to 2.5 days (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
// Copyright (c) 2014-2015, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
//    conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
//    of conditions and the following disclaimer in the documentation and/or other
//    materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
//    used to endorse or promote products derived from this software without specific
//    prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Parts of this file are originally copyright (c) 2012-2013 The Cryptonote developers

#include <algorithm>
#include <boost/filesystem.hpp>
#include <unordered_set>
#include <vector>

#include "tx_pool.h"
#include "cryptonote_format_utils.h"
#include "cryptonote_boost_serialization.h"
#include "cryptonote_config.h"
#if BLOCKCHAIN_DB == DB_LMDB
#include "blockchain.h"
#else
#include "blockchain_storage.h"
#endif
#include "common/boost_serialization_helper.h"
#include "common/int-util.h"
#include "misc_language.h"
#include "warnings.h"
#include "crypto/hash.h"

DISABLE_VS_WARNINGS(4244 4345 4503) //'boost::foreach_detail_::or_' : decorated name length exceeded, name was truncated

namespace cryptonote
{
  namespace
  {
    size_t const TRANSACTION_SIZE_LIMIT = (((CRYPTONOTE_BLOCK_GRANTED_FULL_REWARD_ZONE * 125) / 100) - CRYPTONOTE_COINBASE_BLOB_RESERVED_SIZE);
  }
  //---------------------------------------------------------------------------------
#if BLOCKCHAIN_DB == DB_LMDB
  //---------------------------------------------------------------------------------
  tx_memory_pool::tx_memory_pool(Blockchain& bchs): m_blockchain(bchs)
  {

  }
#else
  tx_memory_pool::tx_memory_pool(blockchain_storage& bchs): m_blockchain(bchs)
  {

  }
#endif
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::add_tx(const transaction &tx, /*const crypto::hash& tx_prefix_hash,*/ const crypto::hash &id, size_t blob_size, tx_verification_context& tvc, bool kept_by_block)
  {

    if(!check_inputs_types_supported(tx))
    {
      tvc.m_verifivation_failed = true;
      return false;
    }

    uint64_t inputs_amount = 0;
    if(!get_inputs_money_amount(tx, inputs_amount))
    {
      tvc.m_verifivation_failed = true;
      return false;
    }

    uint64_t outputs_amount = get_outs_money_amount(tx);

    if(outputs_amount >= inputs_amount)
    {
      LOG_PRINT_L1("transaction uses more money than it has: uses " << print_money(outputs_amount) << ", has " << print_money(inputs_amount));
      tvc.m_verifivation_failed = true;
      return false;
    }

    uint64_t fee = inputs_amount - outputs_amount;
    // fee is charged per started kB of the transaction blob
    uint64_t needed_fee = blob_size / 1024;
    needed_fee += (blob_size % 1024) ? 1 : 0;
    needed_fee *= FEE_PER_KB;
    if (!kept_by_block && fee < needed_fee /*&& fee < MINING_ALLOWED_LEGACY_FEE*/)
    {
      LOG_PRINT_L1("transaction fee is not enough: " << print_money(fee) << ", minimum fee: " << print_money(needed_fee));
      tvc.m_verifivation_failed = true;
      return false;
    }

    if (!kept_by_block && blob_size >= TRANSACTION_SIZE_LIMIT)
    {
      LOG_PRINT_L1("transaction is too big: " << blob_size << " bytes, maximum size: " << TRANSACTION_SIZE_LIMIT);
      tvc.m_verifivation_failed = true;
      return false;
    }

    //check key images for transaction if it is not kept by block
    if(!kept_by_block)
    {
      if(have_tx_keyimges_as_spent(tx))
      {
        LOG_PRINT_L1("Transaction with id= "<< id << " used already spent key images");
        tvc.m_verifivation_failed = true;
        return false;
      }
    }

    crypto::hash max_used_block_id = null_hash;
    uint64_t max_used_block_height = 0;
#if BLOCKCHAIN_DB == DB_LMDB
    bool ch_inp_res = m_blockchain.check_tx_inputs(tx, max_used_block_height, max_used_block_id, kept_by_block);
#else
    bool ch_inp_res = m_blockchain.check_tx_inputs(tx, max_used_block_height, max_used_block_id);
#endif
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    if(!ch_inp_res)
    {
      if(kept_by_block)
      {
        //add this transaction to the pool anyway, because it is related to a block
        auto txd_p = m_transactions.insert(transactions_container::value_type(id, tx_details()));
        CHECK_AND_ASSERT_MES(txd_p.second, false, "transaction already exists at inserting in memory pool");
        txd_p.first->second.blob_size = blob_size;
        txd_p.first->second.tx = tx;
        txd_p.first->second.fee = inputs_amount - outputs_amount;
        txd_p.first->second.max_used_block_id = null_hash;
        txd_p.first->second.max_used_block_height = 0;
        txd_p.first->second.kept_by_block = kept_by_block;
        txd_p.first->second.receive_time = time(nullptr);
        tvc.m_verifivation_impossible = true;
        tvc.m_added_to_pool = true;
      }else
      {
        LOG_PRINT_L1("tx used wrong inputs, rejected");
        tvc.m_verifivation_failed = true;
        return false;
      }
    }else
    {
      //update transactions container
      auto txd_p = m_transactions.insert(transactions_container::value_type(id, tx_details()));
      CHECK_AND_ASSERT_MES(txd_p.second, false, "internal error: transaction already exists at inserting in memory pool");
      txd_p.first->second.blob_size = blob_size;
      txd_p.first->second.tx = tx;
      txd_p.first->second.kept_by_block = kept_by_block;
      txd_p.first->second.fee = inputs_amount - outputs_amount;
      txd_p.first->second.max_used_block_id = max_used_block_id;
      txd_p.first->second.max_used_block_height = max_used_block_height;
      txd_p.first->second.last_failed_height = 0;
      txd_p.first->second.last_failed_id = null_hash;
      txd_p.first->second.receive_time = time(nullptr);
      tvc.m_added_to_pool = true;

      if(txd_p.first->second.fee > 0)
        tvc.m_should_be_relayed = true;
    }

    tvc.m_verifivation_failed = true;
    //update the spent key images container; everything should go OK here
    BOOST_FOREACH(const auto& in, tx.vin)
    {
      CHECKED_GET_SPECIFIC_VARIANT(in, const txin_to_key, txin, false);
      std::unordered_set<crypto::hash>& kei_image_set = m_spent_key_images[txin.k_image];
      CHECK_AND_ASSERT_MES(kept_by_block || kei_image_set.size() == 0, false, "internal error: kept_by_block=" << kept_by_block
                                          << ", kei_image_set.size()=" << kei_image_set.size() << ENDL << "txin.k_image=" << txin.k_image << ENDL
                                          << "tx_id=" << id );
      auto ins_res = kei_image_set.insert(id);
      CHECK_AND_ASSERT_MES(ins_res.second, false, "internal error: tried to insert a duplicate iterator in key_image set");
    }

    tvc.m_verifivation_failed = false;

    m_txs_by_fee.emplace((double)blob_size / fee, id);
    //succeeded
    return true;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::add_tx(const transaction &tx, tx_verification_context& tvc, bool keeped_by_block)
  {
    crypto::hash h = null_hash;
    size_t blob_size = 0;
    get_transaction_hash(tx, h, blob_size);
    return add_tx(tx, h, blob_size, tvc, keeped_by_block);
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::remove_transaction_keyimages(const transaction& tx)
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    // ND: Speedup
    // 1. Move transaction hash calculation outside of the loop. ._.
    crypto::hash actual_hash = get_transaction_hash(tx);
    BOOST_FOREACH(const txin_v& vi, tx.vin)
    {
      CHECKED_GET_SPECIFIC_VARIANT(vi, const txin_to_key, txin, false);
      auto it = m_spent_key_images.find(txin.k_image);
      CHECK_AND_ASSERT_MES(it != m_spent_key_images.end(), false, "failed to find transaction input in key images. img=" << txin.k_image << ENDL
        << "transaction id = " << actual_hash);
      std::unordered_set<crypto::hash>& key_image_set = it->second;
      CHECK_AND_ASSERT_MES(key_image_set.size(), false, "empty key_image set, img=" << txin.k_image << ENDL
        << "transaction id = " << actual_hash);

      auto it_in_set = key_image_set.find(actual_hash);
      CHECK_AND_ASSERT_MES(it_in_set != key_image_set.end(), false, "transaction id not found in key_image set, img=" << txin.k_image << ENDL
        << "transaction id = " << actual_hash);
      key_image_set.erase(it_in_set);
      if(!key_image_set.size())
      {
        //the hash container for this key_image is now empty, remove it
        m_spent_key_images.erase(it);
      }

    }
    return true;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::take_tx(const crypto::hash &id, transaction &tx, size_t& blob_size, uint64_t& fee)
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    auto it = m_transactions.find(id);
    if(it == m_transactions.end())
      return false;

    auto sorted_it = find_tx_in_sorted_container(id);

    if (sorted_it == m_txs_by_fee.end())
      return false;

    tx = it->second.tx;
    blob_size = it->second.blob_size;
    fee = it->second.fee;
    remove_transaction_keyimages(it->second.tx);
    m_transactions.erase(it);
    m_txs_by_fee.erase(sorted_it);
    return true;
  }
  //---------------------------------------------------------------------------------
  void tx_memory_pool::on_idle()
  {
    m_remove_stuck_tx_interval.do_call([this](){return remove_stuck_transactions();});
  }
  //---------------------------------------------------------------------------------
  sorted_tx_container::iterator tx_memory_pool::find_tx_in_sorted_container(const crypto::hash& id) const
  {
    return std::find_if( m_txs_by_fee.begin(), m_txs_by_fee.end()
                       , [&](const sorted_tx_container::value_type& a){
                           return a.second == id;
                         }
                       );
  }
  //---------------------------------------------------------------------------------
  //proper tx_pool handling courtesy of CryptoZoidberg and Boolberry
  bool tx_memory_pool::remove_stuck_transactions()
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    for(auto it = m_transactions.begin(); it!= m_transactions.end();)
    {
      uint64_t tx_age = time(nullptr) - it->second.receive_time;

      if((tx_age > CRYPTONOTE_MEMPOOL_TX_LIVETIME && !it->second.kept_by_block) ||
         (tx_age > CRYPTONOTE_MEMPOOL_TX_FROM_ALT_BLOCK_LIVETIME && it->second.kept_by_block) )
      {
        LOG_PRINT_L1("Tx " << it->first << " removed from tx pool because it is outdated, age: " << tx_age );
        remove_transaction_keyimages(it->second.tx);
        auto sorted_it = find_tx_in_sorted_container(it->first);
        if (sorted_it == m_txs_by_fee.end())
        {
          LOG_PRINT_L1("Removing tx " << it->first << " from tx pool, but it was not found in the sorted txs container!");
        }
        else
        {
          m_txs_by_fee.erase(sorted_it);
        }
        m_transactions.erase(it++);
      }else
        ++it;
    }
    return true;
  }
  //---------------------------------------------------------------------------------
  size_t tx_memory_pool::get_transactions_count() const
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    return m_transactions.size();
  }
  //---------------------------------------------------------------------------------
  void tx_memory_pool::get_transactions(std::list<transaction>& txs) const
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    BOOST_FOREACH(const auto& tx_vt, m_transactions)
      txs.push_back(tx_vt.second.tx);
  }
  //------------------------------------------------------------------
  bool tx_memory_pool::get_transactions_and_spent_keys_info(std::vector<tx_info>& tx_infos, std::vector<spent_key_image_info>& key_image_infos) const
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    for (const auto& tx_vt : m_transactions)
    {
      tx_info txi;
      const tx_details& txd = tx_vt.second;
      txi.id_hash = epee::string_tools::pod_to_hex(tx_vt.first);
      txi.tx_json = obj_to_json_str(*const_cast<transaction*>(&txd.tx));
      txi.blob_size = txd.blob_size;
      txi.fee = txd.fee;
      txi.kept_by_block = txd.kept_by_block;
      txi.max_used_block_height = txd.max_used_block_height;
      txi.max_used_block_id_hash = epee::string_tools::pod_to_hex(txd.max_used_block_id);
      txi.last_failed_height = txd.last_failed_height;
      txi.last_failed_id_hash = epee::string_tools::pod_to_hex(txd.last_failed_id);
      txi.receive_time = txd.receive_time;
      tx_infos.push_back(txi);
    }

    for (const key_images_container::value_type& kee : m_spent_key_images) {
      const crypto::key_image& k_image = kee.first;
      const std::unordered_set<crypto::hash>& kei_image_set = kee.second;
      spent_key_image_info ki;
      ki.id_hash = epee::string_tools::pod_to_hex(k_image);
      for (const crypto::hash& tx_id_hash : kei_image_set)
      {
        ki.txs_hashes.push_back(epee::string_tools::pod_to_hex(tx_id_hash));
      }
      key_image_infos.push_back(ki);
    }
    return true;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::get_transaction(const crypto::hash& id, transaction& tx) const
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    auto it = m_transactions.find(id);
    if(it == m_transactions.end())
      return false;
    tx = it->second.tx;
    return true;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::on_blockchain_inc(uint64_t new_block_height, const crypto::hash& top_block_id)
  {
    return true;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::on_blockchain_dec(uint64_t new_block_height, const crypto::hash& top_block_id)
  {
    return true;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::have_tx(const crypto::hash &id) const
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    if(m_transactions.count(id))
      return true;
    return false;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::have_tx_keyimges_as_spent(const transaction& tx) const
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    BOOST_FOREACH(const auto& in, tx.vin)
    {
      CHECKED_GET_SPECIFIC_VARIANT(in, const txin_to_key, tokey_in, true);//should never fail
      if(have_tx_keyimg_as_spent(tokey_in.k_image))
        return true;
    }
    return false;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::have_tx_keyimg_as_spent(const crypto::key_image& key_im) const
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    return m_spent_key_images.end() != m_spent_key_images.find(key_im);
  }
  //---------------------------------------------------------------------------------
  void tx_memory_pool::lock() const
  {
    m_transactions_lock.lock();
  }
  //---------------------------------------------------------------------------------
  void tx_memory_pool::unlock() const
  {
    m_transactions_lock.unlock();
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::is_transaction_ready_to_go(tx_details& txd) const
  {
    //not the best implementation at this time, sorry :(
    //check whether the ring signature has already been checked
    if(txd.max_used_block_id == null_hash)
    {//not checked yet, let's try to check

      if(txd.last_failed_id != null_hash && m_blockchain.get_current_blockchain_height() > txd.last_failed_height && txd.last_failed_id == m_blockchain.get_block_id_by_height(txd.last_failed_height))
        return false;//we are already sure that this tx is broken for this height

      if(!m_blockchain.check_tx_inputs(txd.tx, txd.max_used_block_height, txd.max_used_block_id))
      {
        txd.last_failed_height = m_blockchain.get_current_blockchain_height()-1;
        txd.last_failed_id = m_blockchain.get_block_id_by_height(txd.last_failed_height);
        return false;
      }
    }else
    {
      if(txd.max_used_block_height >= m_blockchain.get_current_blockchain_height())
        return false;
      if(m_blockchain.get_block_id_by_height(txd.max_used_block_height) != txd.max_used_block_id)
      {
        //if we already failed on this height and id, skip the actual ring signature check
        if(txd.last_failed_id == m_blockchain.get_block_id_by_height(txd.last_failed_height))
          return false;
        //check the ring signature again; it is possible (with a very small chance) that this transaction has become valid again
        if(!m_blockchain.check_tx_inputs(txd.tx, txd.max_used_block_height, txd.max_used_block_id))
        {
          txd.last_failed_height = m_blockchain.get_current_blockchain_height()-1;
          txd.last_failed_id = m_blockchain.get_block_id_by_height(txd.last_failed_height);
          return false;
        }
      }
    }
    //if we are here, the transaction seems valid, but check for key_image collisions with the blockchain anyway, just to be sure
    if(m_blockchain.have_tx_keyimges_as_spent(txd.tx))
      return false;

    //transaction is ok.
    return true;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::have_key_images(const std::unordered_set<crypto::key_image>& k_images, const transaction& tx)
  {
    for(size_t i = 0; i!= tx.vin.size(); i++)
    {
      CHECKED_GET_SPECIFIC_VARIANT(tx.vin[i], const txin_to_key, itk, false);
      if(k_images.count(itk.k_image))
        return true;
    }
    return false;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::append_key_images(std::unordered_set<crypto::key_image>& k_images, const transaction& tx)
  {
    for(size_t i = 0; i!= tx.vin.size(); i++)
    {
      CHECKED_GET_SPECIFIC_VARIANT(tx.vin[i], const txin_to_key, itk, false);
      auto i_res = k_images.insert(itk.k_image);
      CHECK_AND_ASSERT_MES(i_res.second, false, "internal error: key images pool cache - inserted duplicate image in set: " << itk.k_image);
    }
    return true;
  }
  //---------------------------------------------------------------------------------
  std::string tx_memory_pool::print_pool(bool short_format) const
  {
    std::stringstream ss;
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    for (const transactions_container::value_type& txe : m_transactions) {
      const tx_details& txd = txe.second;
      ss << "id: " << txe.first << std::endl;
      if (!short_format) {
        ss << obj_to_json_str(*const_cast<transaction*>(&txd.tx)) << std::endl;
      }
      ss << "blob_size: " << txd.blob_size << std::endl
         << "fee: " << print_money(txd.fee) << std::endl
         << "kept_by_block: " << (txd.kept_by_block ? 'T' : 'F') << std::endl
         << "max_used_block_height: " << txd.max_used_block_height << std::endl
         << "max_used_block_id: " << txd.max_used_block_id << std::endl
         << "last_failed_height: " << txd.last_failed_height << std::endl
         << "last_failed_id: " << txd.last_failed_id << std::endl;
    }

    return ss.str();
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::fill_block_template(block &bl, size_t median_size, uint64_t already_generated_coins, size_t &total_size, uint64_t &fee)
  {
    // Warning: This function takes already_generated_coins as an argument
    // and appears to do nothing with it.

    CRITICAL_REGION_LOCAL(m_transactions_lock);

    total_size = 0;
    fee = 0;

    // Maximum block size is 130% of the median block size. This gives a
    // little extra headroom for the max size transaction.
    size_t max_total_size = (130 * median_size) / 100 - CRYPTONOTE_COINBASE_BLOB_RESERVED_SIZE;
    std::unordered_set<crypto::key_image> k_images;

    auto sorted_it = m_txs_by_fee.begin();
    while (sorted_it != m_txs_by_fee.end())
    {
      auto tx_it = m_transactions.find(sorted_it->second);

      // Cannot exceed the maximum block size
      if (max_total_size < total_size + tx_it->second.blob_size)
      {
        sorted_it++;
        continue;
      }

      // If adding this tx would make the block size greater than
      // CRYPTONOTE_GETBLOCKTEMPLATE_MAX_BLOCK_SIZE bytes, reject the tx;
      // this keeps block sizes from becoming too unwieldy to propagate
      // at 60s block times.
      if ( (total_size + tx_it->second.blob_size) > CRYPTONOTE_GETBLOCKTEMPLATE_MAX_BLOCK_SIZE )
      {
        sorted_it++;
        continue;
      }

      // If we've exceeded the penalty-free size,
      // stop including more txs
      if (total_size > median_size)
        break;

      // Skip transactions that are not ready to be included into the
      // blockchain or that reuse a key image already spent by a selected tx
      if (!is_transaction_ready_to_go(tx_it->second) || have_key_images(k_images, tx_it->second.tx))
      {
        sorted_it++;
        continue;
      }

      bl.tx_hashes.push_back(tx_it->first);
      total_size += tx_it->second.blob_size;
      fee += tx_it->second.fee;
      append_key_images(k_images, tx_it->second.tx);
      sorted_it++;
    }

    return true;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::init(const std::string& config_folder)
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);

    m_config_folder = config_folder;
    std::string state_file_path = config_folder + "/" + CRYPTONOTE_POOLDATA_FILENAME;
    boost::system::error_code ec;
    if(!boost::filesystem::exists(state_file_path, ec))
      return true;
    bool res = tools::unserialize_obj_from_file(*this, state_file_path);
    if(!res)
    {
      LOG_PRINT_L1("Failed to load memory pool from file " << state_file_path);

      m_transactions.clear();
      m_txs_by_fee.clear();
      m_spent_key_images.clear();
    }

    for (auto it = m_transactions.begin(); it != m_transactions.end(); ) {
      if (it->second.blob_size >= TRANSACTION_SIZE_LIMIT) {
        LOG_PRINT_L1("Transaction " << get_transaction_hash(it->second.tx) << " is too big (" << it->second.blob_size << " bytes), removing it from pool");
        remove_transaction_keyimages(it->second.tx);
        // erase via post-increment so the iterator stays valid after removal
        m_transactions.erase(it++);
      } else {
        it++;
      }
    }

    // no need to store the queue of sorted transactions, as it's easy to regenerate.
    for (const auto& tx : m_transactions)
    {
      m_txs_by_fee.emplace((double)tx.second.blob_size / tx.second.fee, tx.first);
    }

    // Ignore deserialization error
    return true;
  }
  //---------------------------------------------------------------------------------
  bool tx_memory_pool::deinit()
  {
    if (!tools::create_directories_if_necessary(m_config_folder))
    {
      LOG_PRINT_L1("Failed to create data directory: " << m_config_folder);
      return false;
    }

    std::string state_file_path = m_config_folder + "/" + CRYPTONOTE_POOLDATA_FILENAME;
    bool res = tools::serialize_obj_to_file(*this, state_file_path);
    if(!res)
    {
      LOG_PRINT_L1("Failed to serialize memory pool to file " << state_file_path);
    }
    return true;
  }
}