Optimization

It is often useful to construct a distribution \(d^\prime\) that is consistent with some marginal aspects of \(d\) but otherwise optimizes some information measure. For example, perhaps we are interested in constructing a distribution that matches the pairwise marginals of another, but otherwise has maximum entropy:

In [1]: import dit
   ...: from dit.algorithms.distribution_optimizers import MaxEntOptimizer

In [2]: xor = dit.example_dists.Xor()

In [3]: meo = MaxEntOptimizer(xor, [[0,1], [0,2], [1,2]])

In [4]: meo.optimize()
Out[4]: 
     fun: -3.0000017176631144
     jac: array([-2.99999958, -2.99999982, -3.00000009, -3.0000003 , -3.00000012,
       -3.00000006, -2.99999976, -2.99999973])
 message: 'Optimization terminated successfully.'
    nfev: 938
     nit: 85
    njev: 85
  status: 0
 success: True
       x: array([0.12500005, 0.12500008, 0.12500006, 0.1250001 , 0.12500007,
       0.12500007, 0.12500006, 0.12500008])

In [5]: dp = meo.construct_dist()

In [6]: print(dp)
Class:          Distribution
Alphabet:       ('0', '1') for all rvs
Base:           linear
Outcome Class:  str
Outcome Length: 3
RV Names:       None

x     p(x)
000   1/8
001   2369094/18952753
010   3843589/30748713
011   852370/6818959
100   1924125/15392999
101   2480133/19841065
110   1/8
111   1496478/11971825
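
As a quick sanity check, one can verify that dp attains the maximal entropy of three bits while its pairwise marginals agree with those of xor. The following is a minimal sketch, not part of the original session, which reruns the steps above as a script and assumes the standard dit calls dit.shannon.entropy and Distribution.marginal:

# reproduce the session above as a script and check the result
import dit
from dit.algorithms.distribution_optimizers import MaxEntOptimizer

xor = dit.example_dists.Xor()
meo = MaxEntOptimizer(xor, [[0, 1], [0, 2], [1, 2]])
meo.optimize()
dp = meo.construct_dist()

# xor itself has 2 bits of entropy; the pairwise-marginal maxent
# distribution is (numerically) uniform over all eight outcomes, ~3 bits
print(dit.shannon.entropy(xor))  # 2.0
print(dit.shannon.entropy(dp))   # approximately 3.0

# each pairwise marginal of dp should match the corresponding marginal of xor
for pair in [[0, 1], [0, 2], [1, 2]]:
    print(xor.marginal(pair))
    print(dp.marginal(pair))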

Helper Functions

There are two special functions to handle common optimization problems:

In [7]: from dit.algorithms import maxent_dist, marginal_maxent_dists

The first, maxent_dist, constructs the maximum entropy distribution with specific marginals held fixed. It encapsulates the steps run above:

In [8]: print(maxent_dist(xor, [[0,1], [0,2], [1,2]]))
Class:          Distribution
Alphabet:       ('0', '1') for all rvs
Base:           linear
Outcome Class:  str
Outcome Length: 3
RV Names:       None

x     p(x)
000   1/8
001   5316449/42531591
010   1/8
011   1/8
100   1/8
101   5026722/40213777
110   1/8
111   1/8
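
As a further illustration, fixing only the single-variable marginals of xor yields the uniform distribution, since each \(p(x_i)\) is already uniform. This is a sketch under the same call signature used above, not taken from the original docs:

# fixing only first-order marginals of xor; since each p(x_i) is uniform,
# the maximum entropy distribution is uniform over all eight outcomes
# and has 3 bits of entropy
import dit
from dit.algorithms import maxent_dist

xor = dit.example_dists.Xor()
d1 = maxent_dist(xor, [[0], [1], [2]])
print(dit.shannon.entropy(d1))  # approximately 3.0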

The second, marginal_maxent_dists, constructs a series of maximum entropy distributions, each fixing the marginals of all subsets of variables of a particular size:

In [9]: k0, k1, k2, k3 = marginal_maxent_dists(xor)

where k0 is the maximum entropy distribution over the same alphabet as xor with no marginals fixed; k1 fixes \(p(x_0)\), \(p(x_1)\), and \(p(x_2)\); k2 fixes \(p(x_0, x_1)\), \(p(x_0, x_2)\), and \(p(x_1, x_2)\) (as in the maxent_dist example above); and k3 fixes \(p(x_0, x_1, x_2)\) (i.e., it is the distribution we started with).
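
One way to see the effect of this hierarchy is to compare the entropies of k0 through k3; for xor, the dependence only appears once the full third-order marginal is fixed. The snippet below is a minimal sketch, assuming dit.shannon.entropy as above:

# entropies along the marginal-maxent hierarchy of xor: k0, k1, and k2
# are all (numerically) uniform with 3 bits, while k3 is the original
# xor distribution with 2 bits
import dit
from dit.algorithms import marginal_maxent_dists

xor = dit.example_dists.Xor()
k0, k1, k2, k3 = marginal_maxent_dists(xor)
for k in (k0, k1, k2, k3):
    print(dit.shannon.entropy(k))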