# ---------------------------------------------------------------------
# CONTINGENCY code for Maple (Version of 02-26-07)
# Based on the paper: Counting magic squares in quasi-polynomial time
# by Alexander Barvinok, Alexander Samorodnitsky and Alexander Yong
# Report errors to Alexander Yong (ayong@math.umn.edu).
# Maple compatibility: Tested for Maple 7, 10.
# ---------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# QUICK INSTRUCTIONS:
#
# Purpose: This software provides an implementation of the ``simplified''
#          estimator of the number of _magic squares_ with parameters
#          n and t, i.e., positive integer n x n matrices whose row and
#          column sums equal t. (Those familiar with magic squares from
#          recreational math will be familiar with the extra assumption
#          that the diagonals sum to the ``magic total'' t. Our terminology
#          follows the usage of modern combinatorics, e.g., MacMahon
#          and Stanley.)
#
#          In fact, this software applies the same techniques to
#          contingency tables with more general margins.
#
# Usage: Start Maple and load contingency.txt. Type ``contingency();''
#        to run the user interface. The row and column margins are
#        input as a vector, e.g., [3,5,7,2,10]. In the case of
#        magic squares, use, e.g., ``same_array(15, 10)'' to create
#        a length 10 vector of 15's: [15,15,15,15,...,15].
#
# Example 1: Suppose you want to compute the number of 6x6 matrices
#            with line sum t=36. We do this by expressing the integral
#            as S_1\times\prod_{i=1}^{m-1}S_{i+1}/S_i (see Section 3.6
#            of the paper), where t_1=8. Each S_i, i>1, is computed by
#            hit and run Markov chain Monte Carlo. This is done at
#            each step with three samples, each taken after waiting for
#            the random walk to ``mix'' 7000 steps (thus a total of
#            21000 steps at each stage). The integral S_1 is computed by a
#            sample mean using 100000 samples.
#            Since the double stochasticization used in the code is
#            approximate, we choose an error of 0.001.
#
#            Then, running contingency(); we answer the questions
#            as follows:
#
#            ``Number of rows: 6;''
#            ``Number of columns: 6;''
#            ``Row margins, e.g., [3,3,...,3]: same_array(36,6)''
#            ``Column margins, e.g., [3,3,...,3]: same_array(36,6)''
#            ``Number of telescoping steps: 20''
#            ``Vector of hit and run steps: same_array(21000,20);''
#            ``Vector of hit and run steps to wait between samples: same_array(7000,20);''
#            ``Row margins in the final step: same_array(8,6);''
#            ``Column margins in the final step: same_array(8,6);''
#            ``The error used in the sigma calculation. Example 0.001: 0.001;''
#
#            The code outputs some intermediate data (the margins used in the
#            telescoping steps and the log of the integral
#            $S_i=\int_{\Delta}\Psi_{i}d\mu$). This is useful for parallel
#            processing. The estimated number of tables is given as
#            [NUMBER_OF_TABLES, 0.2205723997 10^28] in the case of one
#            run we did.
#
# Example 2: Alternatively, one can run Example 1 noninteractively via:
#
#            markov_telescope_July06(6,6, same_array(36,6),
#               same_array(36,6), 20, same_array(21000,20),
#               same_array(7000,20), 100000, same_array(8,6), same_array(8,6),
#               0.00001, 0);
#
#            (The last entry can be set to 1 if the user wants certain
#            intermediate computations output.)
#
# Example 3: In the above example, we estimated the number of magic squares.
#            However, the code also gives an answer for contingency tables
#            (although we have no proven results in general, this was
#            used to come up with conjectures in the paper). This explains,
#            e.g., why we use same_array(36,6) to describe the line sum t=36.
#            So, for example, we could compute the number of 4x4
#            contingency tables with row margins R=[220,215,93,64] and
#            column margins C=[108,286,71,127] by
#
#            markov_telescope_July06(4,4, [220,215,93,64], [108,286,71,127], 50,
#               [op(same_array(20000,50))], [op(same_array(20000,50))],
#               100000, same_array(10,4), same_array(10,4), 0.001, 0);
#
#            So here, we only used one sample point in each of the intermediate
#            telescoping steps but sampled 100000 points in the final sample
#            mean. This led to a fairly accurate estimate of the correct
#            answer. Note that we telescope to the 4x4 t=10 magic square
#            case. Our experience is that having the final step be a magic
#            squares case with t moderately larger than n leads to better
#            estimates.
# -----------------------------------------------------------------------------------------------

# Computes contingency tables via the hit and run Markov chain
# Monte Carlo method.
# INPUT: m x n are the dimensions of the matrix; row_marginsin and
#        column_marginsin are self-explanatory; num_steps is the number of
#        telescoping ratios to estimate; num_samples is the total number of
#        sample points per step; num_iterations_before_sample is the length
#        of any random walk before actually taking a point to average with.
#        Note both of these two previous inputs are VECTORS (hence the
#        length and waiting time of walks can vary from step to step).
#        final_steps is the number of naive MC samples to compute the final
#        ratio. rowend_marginsin and colend_marginsin are the margins of the
#        tables that we compute with naive MC at the end. error1 is used in
#        Db_stochize... and plotdata is 0 or 1 depending on whether the user
#        wants to see the shape of f on a chosen line during the hit and run
#        process.
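# The telescoping schedule described above (starting margins interpolated
# linearly down to the final-step margins over num_steps stages) can be
# sketched in Python. This is an illustration, not part of the Maple code;
# the function name margin_schedule is ours, and only the simple case where
# start and end margin vectors have equal length (the m <= n, jj <= m branch
# below) is shown.

```python
# Hypothetical sketch of the telescoping margin schedule: each margin is
# decreased by a fixed "drop" per stage, from its starting value down to
# its final-step value, exactly as the drop vectors in the Maple code do.
from fractions import Fraction

def margin_schedule(start_margins, end_margins, num_steps):
    """Return num_steps+1 margin vectors interpolating start -> end."""
    drops = [Fraction(s - e, num_steps)
             for s, e in zip(start_margins, end_margins)]
    schedule = [[Fraction(s) for s in start_margins]]
    for _ in range(num_steps):
        # subtract one drop from each entry of the previous stage's margins
        schedule.append([x - d for x, d in zip(schedule[-1], drops)])
    return schedule
```

For Example 1 above, `margin_schedule([36]*6, [8]*6, 20)` yields 21 margin vectors, starting at [36,...,36] and ending at the final magic-square margins [8,...,8].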
# RETURNS: [log(integral), integral, #tables] estimations
markov_telescope_July06:=proc(m,n, row_marginsin, column_marginsin, num_steps,
   num_samples, num_iterations_before_sample, final_steps, rowend_marginsin,
   colend_marginsin, error1, plotdata)
local aa,ii,jj, row_descending_marginslist, column_descending_marginslist,
   row_dropvector, column_dropvector, NN, ans, row_margins, column_margins,
   numtables, log_factorial_product, rowend_margins, colend_margins;
row_margins:=[]: column_margins:=[]: rowend_margins:=[]: colend_margins:=[]:
for ii from 1 to nops(row_marginsin) do
   row_margins:=[op(row_margins), op(ii, row_marginsin)]:
od:
for jj from 1 to nops(column_marginsin) do
   column_margins:=[op(column_margins), op(jj, column_marginsin)]:
od:
for ii from 1 to nops(rowend_marginsin) do
   rowend_margins:=[op(rowend_margins), op(ii, rowend_marginsin)]:
od:
for jj from 1 to nops(colend_marginsin) do
   colend_margins:=[op(colend_margins), op(jj, colend_marginsin)]:
od:
row_descending_marginslist:=[row_margins]:
column_descending_marginslist:=[column_margins]:
row_dropvector:=[]: column_dropvector:=[]:
NN:=sumarrayelts(row_margins, nops(row_margins)):
# two symmetric cases, depending on which side of the rectangle is shorter
if(m<=n) then
   for ii from 1 to m do
      row_dropvector:=[op(row_dropvector),
         (op(ii,row_margins)-op(ii,rowend_margins))/num_steps]:
   od:
   for jj from 1 to n do
      if(jj<=m) then
         column_dropvector:=[op(column_dropvector),
            (op(jj, column_margins)-op(jj,colend_margins))/num_steps]:
      else
         column_dropvector:=[op(column_dropvector),
            (op(jj,column_margins))/num_steps]:
      fi:
   od:
   #print(ROW_DROP_VECTOR, row_dropvector, COLUMN_DROP_VECTOR, column_dropvector);
   for aa from 1 to num_steps do
      for ii from 1 to m do
         row_margins[ii]:=row_margins[ii]-op(ii,row_dropvector):
      od:
      row_descending_marginslist:=[op(row_descending_marginslist), row_margins]:
      for jj from 1 to n do
         column_margins[jj]:=column_margins[jj]-op(jj,column_dropvector):
      od:
      column_descending_marginslist:=[op(column_descending_marginslist), column_margins]:
   od:
else
   for jj from 1 to n do
      column_dropvector:=[op(column_dropvector),
         (op(jj,column_margins)-op(jj,colend_margins))/num_steps]:
   od:
   for ii from 1 to m do
      if(ii<=n) then
         row_dropvector:=[op(row_dropvector),
            (op(ii, row_margins)-op(ii,rowend_margins))/num_steps]:
      else
         row_dropvector:=[op(row_dropvector), (op(ii, row_margins))/num_steps]:
      fi:
   od:
   #print(ROW_DROP_VECTOR, row_dropvector, COLUMN_DROP_VECTOR, column_dropvector);
   for aa from 1 to num_steps do
      for jj from 1 to n do
         column_margins[jj]:=column_margins[jj]-op(jj,column_dropvector):
      od:
      column_descending_marginslist:=[op(column_descending_marginslist), column_margins]:
      for ii from 1 to m do
         row_margins[ii]:=row_margins[ii]-op(ii,row_dropvector):
      od:
      row_descending_marginslist:=[op(row_descending_marginslist), row_margins]:
   od:
fi:
print([row_descending_marginslist, column_descending_marginslist]);
ans:=0;
for ii from 1 to nops(row_descending_marginslist)-1 do
   ans:=ans+(hitandrun_July06(m,n, op(ii, row_descending_marginslist),
      op(ii, column_descending_marginslist),
      op(ii+1, row_descending_marginslist),
      op(ii+1, column_descending_marginslist),
      op(ii,num_samples), op(ii,num_iterations_before_sample),
      error1, plotdata));
   print(ii, ans);
od:
ans:=ans+(op(1, genzoom_sigma(m,n,
   op(nops(row_descending_marginslist), row_descending_marginslist),
   op(nops(column_descending_marginslist), column_descending_marginslist),
   final_steps, error1, 1, 0))):
# factors for the number of tables
NN:=sumarrayelts(row_marginsin, m):
log_factorial_product:=0:
for ii from 1 to m do
   log_factorial_product:=log_factorial_product+log(op(ii, row_marginsin)!*1.0):
od:
for ii from 1 to n do
   log_factorial_product:=log_factorial_product+log(op(ii, column_marginsin)!*1.0):
od:
numtables:=evalf(log((NN*1.0)!))-NN*log(NN*1.0)+evalf(log((NN*1.0+m*n-1.0)!))
   -evalf(log((m*n-1.0)!*1.0))-log_factorial_product:
numtables:=numtables+ans;
RETURN([NUMBER_OF_TABLES,
exp(numtables)]);
end:

# hit and run
hitandrun_July06:=proc(m,n, row_margins, column_margins, row_marginsprimed,
   column_marginsprimed, num_samples, num_iterations_before_sample, error1,
   plotdata)
local aa, bb, ii, jj, tt, pos, gamma_matrix, dividing_total, random_list,
   gamma_list, ratioslist, mu_matrix, min_mu, t_plus, t_minus, tau_minus,
   tau_plus, stopflag, max_value, tset, fset, tau_left, tau_right, f_tau_minus,
   f_tau_left, f_tau_right, f_tau_plus, pairsets, new_max_value, p_total, pset,
   ptotal, randnum, currentp, choosej, taufinal, ans, L_value, actualratioslist,
   pick, newstartpoint, pick_forplotdata, ansset, stored_minus, stored_plus,
   f_tau_minus_temp, f_tau_plus_temp, f_tau_left_temp, f_tau_right_temp,
   plotset, vvv;
ratioslist:=[];
newstartpoint:=1;
pick_forplotdata:=1:
pick:=0:
for tt from 1 to num_samples do
   if(newstartpoint=1) then
      # perform one sample of GAMMA: sample initial GAMMA at random from
      # uniform distribution on the simplex, and improve by iterating hit and run
      # sample mn independent random variables
      gamma_matrix:=array(1..m, 1..n):
      random_list:=[stats[random, exponential[1,0]](m*n)]:
      pos:=1:
      for aa from 1 to m do
         for bb from 1 to n do
            gamma_matrix[aa,bb]:=op(pos,random_list):
            pos:=pos+1:
         od:
      od:
      # now warm up,..., essentially, the zoom-in transform with \beta=1
      dividing_total:=0;
      for aa from 1 to m do
         for bb from 1 to n do
            gamma_matrix[aa,bb]:=gamma_matrix[aa,bb]*op(aa,row_margins)*op(bb, column_margins):
            dividing_total:=dividing_total+gamma_matrix[aa,bb]:
         od:
      od:
      for aa from 1 to m do
         for bb from 1 to n do
            gamma_matrix[aa,bb]:=gamma_matrix[aa,bb]/dividing_total:
         od:
      od:
      newstartpoint:=0:
   fi:
   pick:=pick+1: # counter that counts up to num_iterations_before_sample
   # To make one iteration of hit and run, we choose a random line \ell through
   # the current point \gamma\in\Delta, and improve \gamma\to better \gamma
   # mu_matrix defines the direction to hit and run from gamma_matrix
   mu_matrix:=array(1..m, 1..n):
   dividing_total:=0:
   random_list:=[stats[random, normald[0,1]](m*n)]:
   pos:=1:
   for aa from 1 to m do
      for bb from 1 to n do
         mu_matrix[aa,bb]:=op(pos,random_list):
         dividing_total:=dividing_total+mu_matrix[aa,bb]:
         pos:=pos+1:
      od:
   od:
   # balance
   for aa from 1 to m do
      for bb from 1 to n do
         mu_matrix[aa,bb]:=mu_matrix[aa,bb]-(1/(m*n*1.0))*dividing_total:
      od:
   od:
   # now we sample a point from \Delta\cap\ell
   # First, figure out the intersection \Delta\cap\ell
   min_mu:=infinity;
   for aa from 1 to m do
      for bb from 1 to n do
         if(mu_matrix[aa,bb]<0) then
            if((gamma_matrix[aa,bb]/(-mu_matrix[aa,bb]))0) then if(gamma_matrix[aa,bb]/mu_matrix[aa,bb]=f_tau_plus)) then
            tau_minus:=tau_left:
            stored_minus:=f_tau_left_temp:
         else
            if((f_tau_minus<=f_tau_left) and (f_tau_left>=f_tau_right)) then
               tau_plus:=tau_right:
               stored_plus:=f_tau_right_temp:
            else
               if(f_tau_minus>=f_tau_left) then
                  tau_plus:=tau_left:
                  stored_plus:=f_tau_left_temp:
               else
                  print(THISSHOULDNTHAPPEN):
               fi:
            fi:
         fi:
      fi:
      new_max_value:=max(f_tau_minus, f_tau_left, f_tau_right, f_tau_plus):
      if(abs(exp(max_value) - exp(new_max_value))<0.000000001) then
         stopflag:=1;
      fi:
      max_value:=new_max_value:
   od:
   # give data to plot tset and fset
   if(plotdata=1) then
      plotset:=[]:
      for vvv from 1 to nops(tset) do
         plotset:=[op(plotset), [op(vvv,tset), op(vvv,fset)]]:
      od:
      print(PLOTSET, plotset);
   fi:
   # Finally, let us sample a new improved \gamma...
# so at this point tset should give t_minus=t_0op(choosej,fset)) then
   taufinal:=op(choosej,tset)+((op(choosej+1,tset)-op(choosej,tset))/
      (op(choosej+1,fset)-op(choosej,fset)))*
      log(randnum*(exp(op(choosej+1,fset)-op(choosej,fset))-1)+1):
fi:
if(op(choosej+1,fset)=poss_sigma_entry) then
   sigma_list:=[op(sigma_list[1..(ii-1)]), poss_sigma_entry,
      op(sigma_list[ii..nops(sigma_list)])]:
   per_list:=[op(per_list[1..(ii-1)]), poss_per_list_entry,
      op(per_list[ii..nops(per_list)])]:
   RETURN([sigma_list,per_list]):
fi:
od:
else
   # now we have exactly k entries
   if(max(op(sigma_list))=poss_sigma_entry) then
      sigma_list:=[op(sigma_list[2..(ii-1)]), poss_sigma_entry,
         op(sigma_list[ii..nops(sigma_list)])]:
      per_list:=[op(per_list[2..(ii-1)]), poss_per_list_entry,
         op(per_list[ii..nops(per_list)])]:
      RETURN([sigma_list,per_list]):
   fi:
od:
fi:
end:

# Code to compute the Diaconis-Efron heuristic approximation formula
# "which seems to work with m,n small and N large".
Diaconis_Efron:=proc(row_vector,column_vector)
local mm, nn, NN, ww, rr_bar_vector, cc_bar_vector, rr_bar_sq, cc_bar_sq, kk,
   ii, ans, rr_bar_prod, cc_bar_prod;
mm:=nops(row_vector):
nn:=nops(column_vector):
NN:=sumarrayelts(row_vector, mm):
ww:=1/(1+(mm*nn)/(2*NN)):
rr_bar_vector:=[]:
for ii from 1 to mm do
   rr_bar_vector:=[op(rr_bar_vector), ((1-ww)/mm)+(ww*op(ii, row_vector)/NN)]:
od:
cc_bar_vector:=[]:
for ii from 1 to nn do
   cc_bar_vector:=[op(cc_bar_vector), ((1-ww)/nn)+(ww*op(ii, column_vector)/NN)]:
od:
rr_bar_sq:=0:
for ii from 1 to mm do
   rr_bar_sq:=rr_bar_sq+(op(ii,rr_bar_vector)^2):
od:
kk:=((nn+1)/(nn*rr_bar_sq))-(1/nn):
ans:=(((2*NN+mm*nn)/2)^((mm-1)*(nn-1)))*GAMMA(nn*kk)/((GAMMA(nn)^mm)*(GAMMA(kk)^nn)):
rr_bar_prod:=1:
for ii from 1 to mm do
   rr_bar_prod:=rr_bar_prod*op(ii, rr_bar_vector):
od:
cc_bar_prod:=1:
for ii from 1 to nn do
   cc_bar_prod:=cc_bar_prod*op(ii, cc_bar_vector):
od:
ans:=evalf(ans*(rr_bar_prod^(nn-1))*(cc_bar_prod^(kk-1))):
RETURN(ans):
end:

# Compute t=max{min_{i=1..m}r_i, min_{j=1..n}c_j}
t_value:=proc(m,n, row_margins, column_margins)
local ii, r_min, c_min, ans;
r_min:=infinity;
for ii from 1 to m do
   if(op(ii, row_margins) < r_min) then
      r_min:=op(ii, row_margins):
   fi:
od:
c_min:=infinity;
for ii from 1 to n do
   if(op(ii, column_margins) < c_min) then
      c_min:=op(ii, column_margins):
   fi:
od:
ans:=max(r_min, c_min):
RETURN(ans):
end:
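# For sanity-checking the Maple output on small margins, the Diaconis-Efron
# heuristic above can be transcribed into Python. This is a sketch, not part
# of the distributed code: the function name diaconis_efron is ours, and we
# evaluate the Gamma factors in log space via math.lgamma (the Maple code
# multiplies the GAMMA factors directly).

```python
# Hypothetical Python transcription of the Diaconis_Efron procedure.
import math

def diaconis_efron(row_vector, column_vector):
    m, n = len(row_vector), len(column_vector)
    N = sum(row_vector)
    w = 1.0 / (1.0 + (m * n) / (2.0 * N))
    # smoothed margin fractions r-bar and c-bar
    r_bar = [(1 - w) / m + w * r / N for r in row_vector]
    c_bar = [(1 - w) / n + w * c / N for c in column_vector]
    r_bar_sq = sum(x * x for x in r_bar)
    k = (n + 1) / (n * r_bar_sq) - 1.0 / n
    # assemble the approximation in log space to avoid overflow for large N
    log_ans = ((m - 1) * (n - 1) * math.log((2 * N + m * n) / 2.0)
               + math.lgamma(n * k) - m * math.lgamma(n) - n * math.lgamma(k)
               + (n - 1) * sum(math.log(x) for x in r_bar)
               + (k - 1) * sum(math.log(x) for x in c_bar))
    return math.exp(log_ans)
```

For the 3x3, t=3 magic-square margins, `diaconis_efron([3,3,3], [3,3,3])` gives roughly 49, close to the exact count of 55 such tables.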