## Saturday, March 29, 2014

### Power means

Before we get going, let me mention that my friends over at What does the quant say? have started a blog! This week's post is about, you guessed it, statistics! You can check out "Getting our hands dirty with beta functions" by clicking here.

This post picks up where we left off in "Statistics", looking at different definitions for the mean of a sequence of numbers.

The geometric mean, together with the arithmetic and harmonic means of the last post, makes up the Pythagorean means.

let geometric_mean t =
  (* http://en.wikipedia.org/wiki/Geometric_mean

     The geometric mean is defined as the nth root (where n is the
     count of numbers) of the product of the numbers. *)
  let n = List.length t in
  let prod = List.fold_left (fun acc xi -> acc *. xi) 1.0 t in
  prod ** (1.0 /. float_of_int n)

Note the presence of the power function powf (the ( ** ) operator)! How to compute values of this function was the focus of the earlier post "Powers".
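As a quick sanity check, the geometric mean of 1, 2 and 4 is the cube root of 8, that is 2, and by the AM-GM inequality it can never exceed the arithmetic mean. A self-contained sketch (the definitions are repeated from this and the previous post so the snippet runs on its own):

```ocaml
(* Repeated from the posts so this snippet is self-contained. *)
let arithmetic_mean t =
  (List.fold_left ( +. ) 0.0 t) /. (float_of_int (List.length t))

let geometric_mean t =
  let n = List.length t in
  let prod = List.fold_left (fun acc xi -> acc *. xi) 1.0 t in
  prod ** (1.0 /. float_of_int n)

let () =
  let t = [1.0; 2.0; 4.0] in
  (* (1 * 2 * 4) ** (1/3) = 8 ** (1/3) = 2 *)
  assert (abs_float (geometric_mean t -. 2.0) < 1e-9);
  (* AM-GM: the geometric mean never exceeds the arithmetic mean *)
  assert (geometric_mean t <= arithmetic_mean t)
```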

Here is yet another common mean, the quadratic mean.

let quadratic_mean t =
  (* http://en.wikipedia.org/wiki/Root_mean_square

     In mathematics, the root mean square (abbreviated RMS or rms),
     also known as the quadratic mean, is a statistical measure of the
     magnitude of a varying quantity. It is especially useful when
     variates are positive and negative, e.g., sinusoids. RMS is used
     in various fields, including electrical engineering. It can be
     calculated for a series of discrete values or for a continuously
     varying function. Its name comes from its definition as the square
     root of the mean of the squares of the values. *)
  let squares = List.map (fun xi -> xi *. xi) t in
  sqrt (arithmetic_mean squares)
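The classic illustration: a signal that alternates in sign averages to zero, while its RMS reports the magnitude. A self-contained sketch (arithmetic_mean is repeated from the previous post, and the squares are built with List.map, which is equivalent to a fold that appends):

```ocaml
(* Repeated from the posts so this snippet is self-contained. *)
let arithmetic_mean t =
  (List.fold_left ( +. ) 0.0 t) /. (float_of_int (List.length t))

let quadratic_mean t =
  sqrt (arithmetic_mean (List.map (fun xi -> xi *. xi) t))

let () =
  (* A square wave: the signed values cancel ... *)
  let t = [1.0; -1.0; 1.0; -1.0] in
  assert (abs_float (arithmetic_mean t) < 1e-9);
  (* ... but the quadratic mean reports their common magnitude, 1 *)
  assert (abs_float (quadratic_mean t -. 1.0) < 1e-9)
```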


The generalized mean or "power mean" includes all of the means we have considered to date as special cases.

let power_mean p t =
  (* http://en.wikipedia.org/wiki/Generalized_mean

     In mathematics, generalized means are a family of functions for
     aggregating sets of numbers, that include as special cases the
     arithmetic, geometric, and harmonic means. *)
  let powers = List.map (fun xi -> xi ** p) t in
  (arithmetic_mean powers) ** (1.0 /. p)

Note: Recovering the geometric mean from this definition requires taking the limit of the expression as p -> 0 and applying L'Hôpital's rule (see http://en.wikipedia.org/wiki/Generalized_mean for the details).
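A self-contained check of the special cases on the list [1.; 2.; 4.] (definitions repeated so the snippet runs on its own): p = 1 gives the arithmetic mean, p = -1 the harmonic mean, p = 2 the quadratic mean, and a small p approximates the geometric mean.

```ocaml
(* Repeated from the posts so this snippet is self-contained. *)
let arithmetic_mean t =
  (List.fold_left ( +. ) 0.0 t) /. (float_of_int (List.length t))

let power_mean p t =
  (arithmetic_mean (List.map (fun xi -> xi ** p) t)) ** (1.0 /. p)

let () =
  let t = [1.0; 2.0; 4.0] in
  let close a b = abs_float (a -. b) < 1e-9 in
  (* p = 1: the arithmetic mean, 7/3 *)
  assert (close (power_mean 1.0 t) (7.0 /. 3.0));
  (* p = -1: the harmonic mean, 3 / (1 + 1/2 + 1/4) = 12/7 *)
  assert (close (power_mean (-1.0) t) (12.0 /. 7.0));
  (* p = 2: the quadratic mean, sqrt ((1 + 4 + 16) / 3) = sqrt 7 *)
  assert (close (power_mean 2.0 t) (sqrt 7.0));
  (* p -> 0: approaches the geometric mean, 8 ** (1/3) = 2 *)
  assert (abs_float (power_mean 1e-6 t -. 2.0) < 1e-4)
```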

## Sunday, March 23, 2014

### Statistics

Some more basic functions, this time focusing on sequences of numbers. First, sum.

let sum t =
  (* http://en.wikipedia.org/wiki/Sum

     Summation is the operation of adding a sequence of numbers. *)
  List.fold_left (fun acc x -> acc +. x) 0.0 t

Now, arithmetic_mean follows trivially.

let arithmetic_mean t =
  (* http://en.wikipedia.org/wiki/Arithmetic_mean

     The arithmetic mean, or simply the mean or average when the
     context is clear, is the sum of a collection of numbers divided
     by the number of numbers in the collection. *)
  (sum t) /. (float_of_int (List.length t))

This formulation clearly shows the relationship between the standard deviation and the arithmetic mean.

let standard_deviation t =
  (* http://en.wikipedia.org/wiki/Standard_deviation

     For a finite set of numbers, the standard deviation is found by
     taking the square root of the average of the squared differences
     of the values from their average value. *)
  let av = arithmetic_mean t in
  let squared_diffs = List.fold_left
    (fun acc xi -> ((xi -. av) *. (xi -. av)) :: acc) [] t
  in sqrt (arithmetic_mean squared_diffs)
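A self-contained check against the textbook example (definitions repeated so the snippet runs standalone): the population [2; 4; 4; 4; 5; 5; 7; 9] has mean 5, the squared differences average to 4, and so the standard deviation is 2.

```ocaml
(* Repeated from the post so this snippet is self-contained. *)
let arithmetic_mean t =
  (List.fold_left ( +. ) 0.0 t) /. (float_of_int (List.length t))

let standard_deviation t =
  let av = arithmetic_mean t in
  let squared_diffs =
    List.map (fun xi -> (xi -. av) *. (xi -. av)) t in
  sqrt (arithmetic_mean squared_diffs)

let () =
  let t = [2.0; 4.0; 4.0; 4.0; 5.0; 5.0; 7.0; 9.0] in
  assert (abs_float (arithmetic_mean t -. 5.0) < 1e-9);
  assert (abs_float (standard_deviation t -. 2.0) < 1e-9)
```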

The harmonic_mean function is, I find, a more interesting one.

let harmonic_mean t =
  (* http://en.wikipedia.org/wiki/Harmonic_mean

     The harmonic mean is the preferable method for averaging
     multiples, such as the price/earnings ratio, in which price is in
     the numerator. If these ratios are averaged using an arithmetic
     mean (a common error), high data points are given greater weights
     than low data points. The harmonic mean, on the other hand, gives
     equal weight to each data point. *)
  let n = float_of_int (List.length t) in
  let denom = List.fold_left
    (fun acc xi -> acc +. 1.0 /. xi) 0.0 t
  in n /. denom
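To see the point about averaging ratios, take two hypothetical stocks, equally weighted, with price/earnings ratios of 10 and 40 (numbers invented for illustration). The arithmetic mean, 25, overweights the expensive stock; the harmonic mean gives the ratio of the combined position.

```ocaml
(* Repeated from the post so this snippet is self-contained. *)
let harmonic_mean t =
  let n = float_of_int (List.length t) in
  let denom =
    List.fold_left (fun acc xi -> acc +. 1.0 /. xi) 0.0 t in
  n /. denom

let () =
  (* Hypothetical P/E ratios for two equally weighted positions. *)
  let pe = [10.0; 40.0] in
  (* 2 / (1/10 + 1/40) = 16, versus an arithmetic mean of 25 *)
  assert (abs_float (harmonic_mean pe -. 16.0) < 1e-9)
```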


## Saturday, March 22, 2014

### ppf build status update

Good results obtained on Windows 7 using Microsoft SDK 7.1 (msvc-10.0, x64), Boost-1.55.0, Blitz++-0.10, ActivePython-2.7.6 and Numpy (MKL-1.8).

There were a few 'tweaks' necessary to obtain the build. One thing to watch out for is the absence of these declarations in blitz-0.10/blitz/ms/bzconfig.h:

/* Macro for declaring aligned variables */
#ifndef BZ_ALIGN_VARIABLE
#  define BZ_ALIGN_VARIABLE(vartype,varname,alignment) \
vartype varname; \
/**/
#endif

/* Set SIMD instruction width in bytes */
#ifndef BZ_SIMD_WIDTH
#  define BZ_SIMD_WIDTH 1
#endif

I built ppf_date_time.pyd and ppf_math.pyd with a command along the lines of
bjam toolset=msvc-10.0 link=shared threading=multi variant=release address-model=64

I think the presence of these features confuses the install commands. I had to move the files into their final position by hand. My user-config.jam was this:
import toolset : using ;

using msvc
: 10.0
: "c:/Program Files (x86)/Microsoft Visual Studio 10.0/VC/Bin/amd64/cl.exe"
;

using python
: 2.7
: "c:/python27/python.exe"
;

and my site-config.jam read like this
import project ;
project.initialize $(__name__) ;
project site-config ;

alias boost_headers  : : : :
<include>c:/project/boost_1_55_0
;

alias python_headers : : : :
<include>c:/python27/include
;

alias numpy_headers : : : :
<include>c:/Python27/Lib/site-packages/numpy/core/include
;

alias blitz_headers : : : :
<include>c:/project/blitz-0.10
;

lib python
:
: <name>python27
: <search>c:/python27/libs
;

lib boostpython
:
: <name>boost_python-vc100-mt-1_55
<search>c:/project/boost_1_55_0/stage/lib
<toolset-msvc:version>10.0
;

lib boostunittestframework
:
: <name>boost_unit_test_framework-vc100-mt-1_55
<search>c:/project/boost_1_55_0/stage/lib
<toolset-msvc:version>10.0
;

lib boostdatetime
:
: <name>boost_date_time-vc100-mt-1_55
<search>c:/project/boost_1_55_0/stage/lib
<variant>release
<toolset-msvc:version>10.0
;

Lines 24-29 of ppf/ext/lib/date_time/src/register_date_more.cpp seem to trigger an ambiguous overload error - that needs looking at. I worked around it with the following for now.
std::string to_string() const
{
  //return this->get_override("to_string")();
  return "not implemented - fix me";
}

After build & installation (python setup.py install), as always, don't forget to configure your environment appropriately.
set PATH="c:/project/boost_1_55_0/stage/lib;%PATH%"
set PYTHONPATH="c:/python27/lib/site-packages;%PYTHONPATH%"

(of course make sure that the files c:/python27/lib/site-packages/ppf/date_time/ppf_date_time.pyd, c:/python27/lib/site-packages/ppf/math/ppf_math.pyd and c:/project/boost_1_55_0/stage/lib/boost_python-vc100-mt-1_55.dll exist).

Testing goes like this.

c:\Python27\Lib\site-packages\ppf\test>python test_all.py --verbose
test_call (test_core.black_scholes_tests) ... ok
test_put (test_core.black_scholes_tests) ... ok
test (test_core.libor_rate_tests) ... ok
test (test_core.swap_rate_tests) ... ok
test_nth_imm_of_year (test_date_time.imm_tests) ... ok
test_first_imm_after (test_date_time.imm_tests) ... ok
test_first_imm_before (test_date_time.imm_tests) ... ok
test_discount_factor (test_hull_white.requestor_tests) ... ok
test_term_vol (test_hull_white.requestor_tests) ... ok
test (test_hull_white.state_tests) ... ok
test_numeraire_rebased_bond (test_hull_white.fill_tests) ... ok
test_libor (test_hull_white.fill_tests) ... ok
test_discounted_libor_rollback (test_hull_white.rollback_tests) ... ok
test_bond_option (test_hull_white.rollback_tests) ... ok
test_constant (test_hull_white.rollback_tests) ... ok
test_mean_and_variance (test_hull_white.evolve_tests) ... ok
test_bond (test_hull_white.evolve_tests) ... ok
test_explanatory_variables (test_hull_white.exercise_tests) ... ok
test_value (test_lattice_pricer.swap_tests) ... ok
test_value (test_lattice_pricer.bermudan_tests) ... ok
deep_in_the_money_test (test_lattice_pricer.moneyness_tests) ... ok
deep_out_the_money_test (test_lattice_pricer.moneyness_tests) ... ok
test (test_market.surface_tests) ... ok
test (test_math.linear_interpolation_tests) ... ok
test_list (test_math.cubic_spline_interpolation_tests) ... ok
test_array (test_math.cubic_spline_interpolation_tests) ... ok
test (test_math.solve_tridiagonal_system_tests) ... ok
test (test_math.solve_upper_diagonal_system_tests) ... ok
test (test_math.singular_value_decomposition_back_substitution_tests) ... ok
test (test_math.generalised_least_squares_fit_tests) ... ok
test1 (test_math.bisect_tests) ... ok
test2 (test_math.bisect_tests) ... ok
test (test_math.newton_raphson_tests) ... ok
test (test_math.piecewise_cubic_fitting_tests) ... ok
test1 (test_math.normal_distribution_integral_tests) ... ok
test2 (test_math.normal_distribution_integral_tests) ... ok
lognormal_martingale_test (test_math.integrator_tests) ... ok
tower_law_test (test_math.integrator_tests) ... ok
atm_option_test (test_math.integrator_tests) ... ok
no_trigger_limit_test (test_monte_carlo_pricer.tarn_tests) ... ok
trigger_after_first_flow_test (test_monte_carlo_pricer.tarn_tests) ... ok
monotonic_with_target_test (test_monte_carlo_pricer.tarn_tests) ... ok
test_value (test_monte_carlo_pricer.bermudan_tests) ... ok
deep_in_the_money_test (test_monte_carlo_pricer.moneyness_tests) ... ok
deep_out_the_money_test (test_monte_carlo_pricer.moneyness_tests) ... ok
test_lower_bound (test_utility.bound_tests) ... ok
test_upper_bound (test_utility.bound_tests) ... ok
test_equal_range (test_utility.bound_tests) ... ok
test_bound (test_utility.bound_tests) ... ok
test_bound_ci (test_utility.bound_tests) ... ok

----------------------------------------------------------------------
Ran 51 tests in 8.090s

OK


It seems there are now Python 3.3 builds of Numpy available here. With these, it should be possible to produce a Python 3.3, 64-bit Windows ppf. This upgrade will require some minor changes to the ppf Python code.

## Friday, March 21, 2014

### Powers

b^n is familiar when n is an integer (positive or negative). Here's an algorithm to compute it.
let rec pow (i, u) =
  if i < 0 then 1.0 /. pow (-i, u)
  else if i = 0 then 1.0
  else u *. pow (i - 1, u)
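A couple of sanity checks (the definition is repeated, with OCaml's floating-point operators spelled out, so the snippet runs on its own):

```ocaml
(* Repeated from the post so this snippet is self-contained. *)
let rec pow (i, u) =
  if i < 0 then 1.0 /. pow (-i, u)
  else if i = 0 then 1.0
  else u *. pow (i - 1, u)

let () =
  (* 2 ** 10 = 1024, exact in floating point *)
  assert (pow (10, 2.0) = 1024.0);
  (* a negative exponent gives the reciprocal: 2 ** -2 = 1/4 *)
  assert (abs_float (pow (-2, 2.0) -. 0.25) < 1e-9)
```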

It is also familiar to us when n = c/d, that is, for n a rational number (then the solution is found by taking the dth root of b^c). Less so, though, I think, when n = x is some arbitrary real. That is, generally speaking, how is b^x computed?

Logarithms and exponentiation. Since b = e^(log b), then b^x = (e^(log b))^x = e^(x log b).
let powf (b, x) = exp (x *. log b)

This only works when b is real and positive. The case of negative b leads into the theory of complex powers.
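A self-contained check that exp (x * log b) agrees with OCaml's built-in ( ** ) for positive bases, and an illustration of why a negative base fails (the log of a negative number is nan):

```ocaml
(* Repeated from the post so this snippet is self-contained. *)
let powf (b, x) = exp (x *. log b)

let () =
  let close a b = abs_float (a -. b) < 1e-9 *. (abs_float b +. 1.0) in
  (* agrees with the built-in operator for positive b *)
  assert (close (powf (2.0, 0.5)) (2.0 ** 0.5));
  assert (close (powf (10.0, 3.0)) 1000.0);
  (* for negative b, log b is nan, and nan is not equal to itself *)
  assert (powf (-2.0, 2.0) <> powf (-2.0, 2.0))
```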

“Sometimes,” said Pooh, “the smallest things take up the most room in your heart.”