DeciMath – Efficient-Gas Fixed-Point Mathematics on Ethereum

Trustless computation on Ethereum is set to revolutionize multiple industries. Smart contracts written in Solidity open the doors to decentralized finance, immutable storage and programmable money – it’s an exciting time to build on ETH.

However… challenges abound. Smart contract programming languages must balance expressiveness with security. Ethereum’s smart contract language, Solidity, deliberately omits certain features – and one of these is decimal numbers.

As it turns out, decimal mathematics would be pretty useful in Solidity – it’s important for precise computations like interest payments or game mechanics.

Solidity doesn’t have decimal math, so inspired by Vitalik Buterin’s post, I set about creating DeciMath. I tackled the transcendental functions first – ln(x), exp(x) and pow(b,x) – with a view to getting useful results for 18 decimal places of precision.

Putting Decimals on Ethereum

Solidity only deals with signed and unsigned integers. However, if we specify a fixed-point number of decimal places, it’s easy to represent decimals in terms of uints. Like so:

Number     uint representation at 18 DP
1          1000000000000000000
2.5        2500000000000000000
0.005      5000000000000000

I chose 18 decimal places of precision for DeciMath – conceivably high enough for most computations on the ETH blockchain. It also matches the smallest denomination of ether: 1 wei (1e-18 ether).
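As a sketch of the representation (in Python rather than Solidity, purely for illustration), converting between ordinary decimals and the 18-DP uint form is just scaling by 10^18:

```python
SCALE = 10**18  # 1.0 in 18-decimal-place fixed point

def to_fixed(x):
    """Ordinary number -> 18-DP uint representation."""
    return int(round(x * SCALE))

def from_fixed(u):
    """18-DP uint representation -> ordinary number."""
    return u / SCALE
```

So 2.5 becomes 2500000000000000000, and the smallest representable step corresponds to 1 wei.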

Making Decimal Multiplication and Division Work

For addition and subtraction, decimal arithmetic is the same as normal arithmetic – we just keep the decimal point in the same place – i.e. 18 digits from the right.

Decimal multiplication and division become tricky. Solidity doesn’t know that we’re using uints to actually represent decimals, so it gets the output magnitude wrong. For 2 * 2, it will actually do:

2e18 * 2e18 = 4e36

…out by a factor of a billion billion! Our representation of ‘4’ should be 4e18.

We correct for this with special functions for decimal multiplication and division, with DeciMath uints as input and output. They output both the correct digits and magnitude. For example:

1) Normal multiplication: 2.5 * 0.005

2) DeciMath multiplication: decMul18(2500000000000000000, 5000000000000000)

1) returns 0.0125

2) returns 12500000000000000 – the correct uint representation of 1).
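The rescaling logic can be sketched in a few lines of Python (the names mirror DeciMath's decMul18/decDiv18, but the bodies here are illustrative):

```python
ONE = 10**18  # the fixed-point representation of 1

def dec_mul18(a, b):
    # the raw product carries 36 decimal places; divide one scale factor back out
    return a * b // ONE

def dec_div18(a, b):
    # pre-scale the numerator so the quotient keeps 18 decimal places
    return a * ONE // b
```

Here dec_mul18(2500000000000000000, 5000000000000000) returns 12500000000000000, i.e. 0.0125 – both the right digits and the right magnitude.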

The Transcendental Functions – exp(x), ln(x), pow(b,x)

With basic arithmetic functions in place, we have the building blocks for the transcendentals – the exponential, logarithm, and pow functions.

The most well-known algorithms for computing ln(x) and exp(x) involve Taylor expansions – using known identities that equate these functions to their infinite series expansions. The algorithms sum successive terms in the series until the sum is accurate to a specified tolerance.
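The approach for exp(x) can be sketched like this (plain Python floats, not fixed-point): sum x^n / n! until the next term drops below a tolerance:

```python
import math

def exp_taylor(x, tol=1e-15):
    """Approximate e^x by summing its Taylor series until terms fall below tol."""
    term, total, n = 1.0, 1.0, 1
    while abs(term) > tol:
        term *= x / n   # produces x^n / n! from the previous term
        total += term
        n += 1
    return total
```

Note that the number of terms – and hence multiplications – grows roughly linearly with x.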

Engineering Challenge #1: Performant Algorithms and Gas Costs

Taylor series algorithms are, unfortunately, not performant enough in Solidity.

Even for the fast-converging Taylor expansion of exp(x), gas costs rise quickly – we empirically observe gas costs rising roughly linearly, hitting 100k+ for exponents over 30.

In fact, it’s clear from inspecting the algorithm that the number of multiplications increases linearly with x – i.e. the runtime complexity is O(x).

The Taylor expansion for ln(x) converges more slowly than that of exp(x), and an expression for pow(b,x) would need to perform both ln(x) and exp(x).

A faster approach was clearly needed.

Lookup Table based Algorithms

One approach to reduce computational complexity of an algorithm is to swap certain common operations with reads from an array of values in a lookup table (LUT).

The better LUT-based algos tend to be much more performant – O(log(n)), or even O(1). It’s possible to find LUT-based algorithms where runtime complexity is relatively constant with respect to input magnitude, and varies more with input precision.

Engineering Challenge #2 – Blockchain Data Size Constraints

I considered simply writing log tables to the blockchain, such that ln(x) could be computed with a direct lookup and very little computation.

However, on Ethereum, storage is at a premium. Writing data is very costly, as it must be replicated and stored on every node in the network.

To compute ln(x) for even nine decimal places of precision would require an array of 100 million entries. Millions of lines of array data would make contract deployment costs exorbitant, and just setting the whole array would require thousands of separate transactions.

So what is our max viable LUT size?

I wanted to keep upfront contract costs – deploying and setting LUTs – well under $100. Running the numbers with the 2019 median gas price and current ETH price, upfront contract costs hit ~$80 with 500 LUT entries.

So I sought out LUT-based algorithms for exp(x) and ln(x) that were both accurate and utilized small LUTs – no more than a few hundred entries total.

As it turns out, algorithms for math functions in microcontrollers and embedded hardware have a similar requirement: they need high accuracy, but they’re also very storage-constrained.

Digging through the literature led me to LUT-based algorithms for ln(x) and exp(x) from a 2004 paper by Baumann. These used a couple of nice identities to decompose ln(x) and exp(x) into functions of log2(x) and pow2(x) respectively – and small LUTs to index the terms used in repeated computation. pow(b,x) may then be composed from ln() and exp() functions.
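The decompositions rest on standard base-2 identities – this is the general math the paper exploits, not DeciMath's actual fixed-point code:

```python
import math

# ln(x) reduces to a base-2 logarithm scaled by a constant:
#   ln(x) = log2(x) * ln(2)
def ln_via_log2(x):
    return math.log2(x) * math.log(2)

# exp(x) reduces to a base-2 power:
#   exp(x) = 2 ** (x * log2(e))
def exp_via_pow2(x):
    return 2 ** (x * math.log2(math.e))
```

With log2(x) and pow2(x) computed via small LUTs, ln(x) and exp(x) fall out for the cost of one extra constant multiplication each.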

In practice, only a 100-length LUT and two 37-length LUTs were needed for the algorithms – very acceptable for a smart contract.

Results – Function Performance

Success! ln(x), exp(x) and pow(b,x) have nearly constant gas usage. They are mostly dependent on the precision of the input, and in the case of ln(x), the desired output accuracy.

For 18 DP precision, we have gas costs:

  • ln(x): 60-63k
  • exp(x): 30-34k
  • pow(b,x): 85-94k

(these were reached after further optimizations detailed below).

For comparison, 100k is about the cost of sequentially writing five 256-bit uints to a storage array.

Calculating Errors

We live in an imperfect world. Round-off error is the bane of accurate math libraries. The LUT values are accurate only to a set precision – 38 decimal places – but as most are irrational numbers, the true values go on forever.

Since the LUTs are used repeatedly in the algorithms, we get tiny errors at each step which bubble up and contribute to an overall discrepancy.

I used a JS script to calculate percentage errors, comparing results of my pow(), ln() and exp() to the results of their counterpart functions in the JS Decimal library.

So How Accurate Are DeciMath Functions?

ln(x) is very precise. The max percentage error ranges from 1e-17 to 1e-19, and outputs are always < 100 – thus ln(x) is always accurate to at least 17 decimal places.

exp(x) and exp_taylor(x) also have low max percentage error – 1e-16 to 1e-18. However, their outputs grow exponentially, so the exp() functions are most precise at lower exponents. exp(10) is accurate to at least 10 decimal places, while exp(60) is accurate to the nearest 1e6.

pow(b,x) is most accurate in the middle ranges – e.g. base 0.1–10 with exponent 0–15 – where it is precise to at least several decimal places. It is least accurate at the extremes: at high base, or base ≈ 1 with very high exponent.

Its usefulness depends on the use case – see the GitHub documentation for error estimates at different input ranges.

Computing Actual Gas Costs of DeciMath Functions

Like all internal functions, DeciMath functions only incur gas costs when included as part of a transaction. Local calls to an Ethereum node from a client cost 0 gas.

To see actual gas usage (and not just estimates) I used a wrapper “caller” contract – DeciMathCaller.sol – with public functions. These public functions call DeciMath functions inside a transaction – triggering gas usage. We subtract 21000 (tx fee) from the transaction cost to get true DeciMath gas costs.

Handling Big Numbers

Pure Javascript doesn’t like big numbers. The ‘maximum safe integer’ is around 9e15. So I coded up a little module – makeBN.js – to both handle big numbers, and convert them to and from DeciMath format. It relies on JS ‘Decimal’ and ‘BN’ modules.

Further Gas Optimizations

Seeking improvements in gas costs, I scoured the algorithms for opportunities to refactor and optimize.


The first easy-win optimization was to refactor ln(x) with branches. The ln(x) algorithm repeatedly divides the input by 2 until it reaches 1 ≤ x < 2, counting divisions, then performs log2(x). Since x can be as large as 1.1e41, there can be up to 130 divisions before the loop stops.

I swapped this repeated division with conditional branches that check the magnitude of x and divide by a sufficiently large multiple of two, removing large chunks of operations.

Result: A nice reduction in gas cost of around 10k.
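In Python, the idea looks roughly like this (the chunk sizes are illustrative, not DeciMath's actual branches):

```python
ONE = 10**18  # fixed-point 1.0

def normalize(x):
    """Scale x (18-DP fixed point, x >= ONE) down into [1, 2), counting halvings.
    Large powers of two are divided out in chunks instead of one bit at a time."""
    count = 0
    for k in (64, 32, 16, 8, 4, 2, 1):
        while x >= (1 << k) * ONE:
            x >>= k       # one shift stands in for k repeated halvings
            count += k
    return x, count
```

For huge inputs this takes a handful of iterations instead of up to 130, since each pass through the loop strips out many halvings at once.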


At each step in the pow() algorithm there is a multiplication of the form:

pow_table[i] * d

where d is a digit of the mantissa – the part of the number after the decimal point.

Since a digit can only ever be 0-9, the LUT could simply be increased by a factor of 10 to contain all possible outputs of the above multiplication.

I saw a chance to replace an operation with a lookup at every step of the loop. I hoped this would reduce both gas and percentage error – as we’d have fewer operations, and exact values in the table.

Result: Expanding the LUT to a length of 370 yielded a large improvement in gas cost of pow(b,x) – reducing average usage by 60k! A fantastic result.

Percentage error only improved slightly, and remained the same order of magnitude.
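The trick in miniature (with hypothetical stand-in values – the real pow_table holds 38-DP fixed-point constants):

```python
pow_table = [3, 7, 11]  # stand-in values for the original LUT entries

# Widen the table tenfold: entry i*10 + d holds pow_table[i] * d,
# so a per-step multiplication becomes a single indexed read.
wide_table = [v * d for v in pow_table for d in range(10)]

def lut_mul(i, d):
    assert 0 <= d <= 9  # d is a single mantissa digit
    return wide_table[i * 10 + d]
```

Precomputing the products off-chain also means the stored values are exact, rather than accumulating a tiny round-off at each step.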

Testing The Contract

I wrote an exhaustive set of tests, all in Solidity. Solidity is currently not much fun for testing: it’s somewhat inexpressive, the tests (confusingly) live inside contracts themselves, and it doesn’t play nicely with reversion or external helper files.

For most use cases I prefer testing Truffle projects in Javascript.

However – for a math library, I wanted to test raw function outputs with as little intermediate JS as possible… so .sol test files it was.

Testing contract reversion in Solidity is convoluted – I had to use a proxy contract pattern to execute raw contract calls. These return a boolean – false if the call reverts, which we can check for with our assertions. Check out the TestRawCaller in TestDeciMath_Log2.sol to see that in action.

Sources of Error, and Future Improvements

As mentioned above, errors range from 1e-18% to 1e-19% for ln(x), and from 1e-15% to 1e-20% for pow(b,x).

I’m very happy with ln(x) precision and gas costs. It’s accurate to at least 17DP for all inputs, and is an acceptable, constant 60k-ish gas.

pow(b,x) – despite fairly constant percentage error and decent, constant gas – has high absolute error outside its ‘sweet spot’ ranges.

This is because the output of pow(b,x) can get so large – up to around 1e40.

For example:
A) Decimal.pow(15,20.345)   // Computed with the JS Decimal library

B) DeciMath.pow(15000000000000000000, 20345000000000000000)   // Computed with the DeciMath pow function

A returns: 846401969952137856379629.989537881113927979
B returns: 846401969952137856114431.110646095971178011

Percentage error: 3.13e-17 %
Absolute error: 265198.87 (to 2 DP)

And for larger outputs, we get larger absolute error.

Future Work for Reducing pow(b,x) Error

Let’s take a look at the pow function to see where we might improve it:

function pow(uint base, uint x) public view returns (uint power) {
  if (base >= TEN18) {
    return exp(decMul18(x, ln(base, 70)));
  }
  if (base < TEN18) {
    uint exponent = decMul18(x, ln(decDiv18(TEN18, base), 70));
    return decDiv18(TEN18, exp(exponent));
  }
}
We see that both branches return an expression of the form exp(x * ln(base)).

So percentage error in pow() is constrained by percentage errors in exp() and ln(). Squeezing better performance of pow() necessitates improvement in either exp() or ln().

Accuracy in these two functions comes down to round-off error – the LUTs, used in multiplications, are only accurate to 38 digits.

Trade-offs: Accuracy, Gas Cost and Overflow Protection

Increasing the precision of the lookup tables would increase gas costs, but may also introduce sporadic uint overflow errors – even for inputs within the maximum limits.

I originally produced the LUTs to 38 DP because Solidity uints overflow at 2**256 – around 1.1e77. LUT values are all ≈ 1 or smaller, and only used in simple multiplications – so 38 DP precision guarantees overflow is avoided (since 1e38 * 1e38 = 1e76).
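The headroom arithmetic is easy to check directly:

```python
UINT256_MAX = 2**256 - 1  # ~1.16e77

# Two 38-digit operands multiply safely under the uint256 ceiling...
assert 10**38 * 10**38 < UINT256_MAX   # 1e76 fits
# ...but one extra digit on each operand overflows.
assert 10**39 * 10**39 > UINT256_MAX   # 1e78 does not
```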

However, upon inspecting the algorithms, it looks like there might be room to spare: multiplications involving LUT values do not seem to approach this maximum product (1.1e77). It may be possible to squeeze out 1 or 2 more decimal places of precision without introducing overflows.

Another approach would be to – at each step – correct for the round-off error of the previous step. This is known as Kahan summation. It may be worthwhile, but will certainly increase gas costs.
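For reference, here is the classic floating-point form of Kahan summation (a fixed-point port would instead track the remainder of each truncating division):

```python
def kahan_sum(values):
    """Compensated summation: carry the low-order bits lost at each step."""
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for v in values:
        y = v - c            # re-inject the error from the previous step
        t = total + y
        c = (t - total) - y  # what just got rounded away in t
        total = t
    return total
```

Summing 0.1 ten times naively drifts to 0.9999999999999999; the compensated sum lands on (or extremely close to) 1.0 – but every compensation step is extra arithmetic, hence extra gas on-chain.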

Fixed-Point Mathematics on the Blockchain

The constraints of Ethereum and Solidity can be challenging… what’s easy in centralized applications, often becomes hard on the blockchain. Limited expressiveness, security concerns, high storage and computation costs combine to make blockchain development a formidable task.

It was super fun to tackle Decimal math in Solidity – I’m especially happy to bring logarithms to Ethereum with low error and reasonable gas cost. The transcendental functions are the building blocks of mathematics – if you’d like to see further decimal math built out, feel free to contact me or submit a pull request on the DeciMath GitHub repo.
