The 10 minute window will never trigger for 0 blocks, since seeing
no blocks in 10 minutes is still fairly likely even without the
actual hash rate changing much, so we add a 20 minute window, where
it will trigger for 0 blocks, and a one hour window.
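To see why, here is a rough back-of-the-envelope check. It is a sketch
only: it assumes blocks arrive as a Poisson process at the 2 minute
target block time, which is an assumption of this illustration and not
something the change itself computes.
```
#include <cmath>
#include <cstdio>
#include <initializer_list>

// Sketch: probability of seeing zero blocks in a window, assuming Poisson
// arrivals at the 2 minute target block time. Window sizes here are for
// illustration only.
int main()
{
    const double target_minutes = 2.0;
    for (double window : {10.0, 20.0, 60.0})
    {
        const double expected = window / target_minutes; // expected blocks in the window
        const double p_zero = std::exp(-expected);       // Poisson P(N = 0)
        std::printf("%.0f min window: expect %.0f blocks, P(0 blocks) = %.3g\n",
                    window, expected, p_zero);
    }
    return 0;
}
```
Under these assumptions, zero blocks in 10 minutes has probability
e^-5, about 0.67% (roughly once every 150 windows), while zero blocks
in 20 minutes is about 0.005% and zero blocks in an hour is vanishingly
unlikely, which matches the reasoning above.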
This curbs runaway growth while still allowing substantial
spikes in block weight.
Original specification from ArticMine:
Here is the scaling proposal:
Define: LongTermBlockWeight
Before fork:
LongTermBlockWeight = BlockWeight
At or after fork:
LongTermBlockWeight = min(BlockWeight, 1.4*LongTermEffectiveMedianBlockWeight)
Note: To avoid possible consensus issues over rounding, the LongTermBlockWeight for a given block should be calculated to the nearest byte and stored as an integer in the block itself. The stored LongTermBlockWeight is then used for future calculations of the LongTermEffectiveMedianBlockWeight and not recalculated each time.
Define: LongTermEffectiveMedianBlockWeight
LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
Change Definition of EffectiveMedianBlockWeight
From (current definition)
EffectiveMedianBlockWeight = max(300000, MedianOverPrevious100Blocks(BlockWeight))
To (proposed definition)
EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)), 50*LongTermEffectiveMedianBlockWeight)
Notes:
1) There are no other changes to the existing penalty formula, median calculation, fees, etc.
2) There is the requirement to store the LongTermBlockWeight of a block unencrypted in the block itself. This is to avoid possible consensus issues over rounding and also to prevent the calculations from becoming unwieldy as we move away from the fork.
3) When the EffectiveMedianBlockWeight cap is reached, it is still possible to mine blocks up to 2x the EffectiveMedianBlockWeight by paying the corresponding penalty.
Note: the long term block weight is stored in the database, but not in the actual block itself,
since it requires recalculating anyway for verification.
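For clarity, here is a minimal sketch of the proposal's arithmetic.
The helper names are hypothetical (not the daemon's actual API), and the
integer rounding of the 1.4x cap is an assumption of the sketch; the real
consensus code fixes its own exact rounding rules.
```
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of the formulas above; names are hypothetical, not the daemon's API.
static constexpr uint64_t MIN_MEDIAN = 300000; // minimum effective median, in bytes

// Median of a non-empty list of weights.
uint64_t median(std::vector<uint64_t> v)
{
    std::sort(v.begin(), v.end());
    const size_t n = v.size();
    return (n % 2) ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2;
}

// LongTermBlockWeight = min(BlockWeight, 1.4 * LongTermEffectiveMedianBlockWeight)
uint64_t long_term_block_weight(uint64_t block_weight, uint64_t long_term_effective_median)
{
    // 1.4x done with integer arithmetic; exact rounding is an assumption here.
    const uint64_t cap = long_term_effective_median * 7 / 5;
    return std::min(block_weight, cap);
}

// LongTermEffectiveMedianBlockWeight = max(300000, median over previous 100000 blocks)
uint64_t long_term_effective_median(const std::vector<uint64_t> &last_100000_long_term_weights)
{
    return std::max(MIN_MEDIAN, median(last_100000_long_term_weights));
}

// EffectiveMedianBlockWeight =
//   min(max(300000, median over previous 100 blocks), 50 * LongTermEffectiveMedianBlockWeight)
uint64_t effective_median_block_weight(const std::vector<uint64_t> &last_100_weights,
                                       uint64_t long_term_effective_median)
{
    const uint64_t short_term = std::max(MIN_MEDIAN, median(last_100_weights));
    return std::min(short_term, 50 * long_term_effective_median);
}
```
As note 3 above says, miners can still exceed the EffectiveMedianBlockWeight
(up to 2x) by paying the penalty; the 50x long-term cap only bounds how far
the short-term median itself can drift.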
Further speedups to the icu compilation: it is faster to run the
pre-generated configure scripts.
Ensure that the native protobuf installation only generates the required
libraries and binaries.
Disable qt compilation when running travis on windows. Qt is used for
lrelease; the travis recipe instead uses a local installation of
lrelease.
Remove various packages and options from the travis recipe.
Update Readline to version 8.0. The previously used URL would sometimes
404, so use the official GNU FTP server instead.
Remove unused cmake config.
Building with docker is arguably easier and more familiar to most people
than either kvm or lxc.
This commit also relaxes the back compat requirement a bit. 32-bit Linux
now uses glibc version 2.0. Also, the docker shell could not handle gcc arguments
containing spaces, so the explicit '-DFELT_TYPE' declaration was dropped.
Lastly, this removes some packages from the osx descriptor.
```
Undefined symbols for architecture x86_64:
  "common_category()", referenced from:
      make_error_code(common_error) in parse.cpp.o
      make_error_code(common_error) in tor_address.cpp.o
  "boost::system::detail::system_category_ncx()", referenced from:
      boost::system::system_category() in parse.cpp.o
      boost::system::system_category() in socks.cpp.o
      boost::system::system_category() in libepee.a(net_utils_base.cpp.o)
  "boost::system::detail::generic_category_ncx()", referenced from:
      boost::system::generic_category() in parse.cpp.o
      boost::system::generic_category() in socks.cpp.o
      boost::system::generic_category() in tor_address.cpp.o
      boost::system::generic_category() in libepee.a(string_tools.cpp.o)
      boost::system::generic_category() in libepee.a(net_utils_base.cpp.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [src/net/libnet.dylib] Error 1
make[2]: *** [src/net/CMakeFiles/net.dir/all] Error 2
```