DIMACS Workshop on Complexity and Inference

June 2 - 5, 2003
DIMACS Center, Rutgers University, Piscataway, NJ

Organizers:
Mark Hansen, Bell Laboratories, cocteau@stat.ucla.edu
Paul Vitanyi, CWI and the University of Amsterdam, Paul.Vitanyi@cwi.nl
Bin Yu, UC Berkeley, binyu@stat.berkeley.edu
Presented under the auspices of the Special Focus on Computational Information Theory and Coding.

Workshop Program:


Monday, June 2, 2003

The Principle of Minimum Description Length

 8:15 -  8:50  Breakfast and Registration

 8:50 -  9:00  Opening Remarks
               Melvin Janowitz, Associate Director of DIMACS

 9:00 -  9:45  The MDL Principle with Distortion
               Jorma Rissanen, Helsinki Institute for Information Technology

 9:45 - 10:30  MDL and classification, revisited
               Peter Grunwald, CWI Amsterdam   

10:30 - 11:00  Break

11:00 - 11:45  Exact minimax estimation and MDL
               Feng Liang, Duke University

11:45 - 12:05  Discussant
               Paul Vitanyi, CWI and the University of Amsterdam 

12:05 -  1:30  Lunch 

Information Theory/Individual Sequences

 1:30 -  2:15  On the lower limits of entropy estimation
               Abraham Wyner and Dean Foster, University of Pennsylvania

 2:15 -  3:00  Descriptions of words over a partially commutative alphabet
               Serap Savari, Bell Laboratories, Lucent Technologies

 3:00 -  3:45  Universal Discrete Denoising: Known Channel
               Marcelo Weinberger, Hewlett-Packard Laboratories

 3:45 -  4:00  Break

Contributed Presentations

 4:00 -  4:20  Redundancy of universal coding, Kolmogorov complexity and Hausdorff dimension
               Hayato Takahashi, Tokyo Institute of Technology

 4:20 -  4:40  A new universal two part code for estimation of string Kolmogorov complexity and algorithmic minimum sufficient statistics
               Scott Evans, General Electric Research/RPI

 4:40 -  5:00  Finite memory universal coding of individual binary sequences
               Eado Meron, Tel Aviv University

 5:00 -  5:20  Complexity preserving functions
               Jan Lemeire, Vrije Universiteit Brussel

 5:20 -  5:40  Some analysis of a predictive lossless coder for audio signals
               Peng Zhao, University of California, Berkeley


Tuesday, June 3, 2003

Statistics and Learning

 8:30 -  9:00  Breakfast and Registration

 9:00 -  9:45  Data density and degrees of freedom in statistical models
               Andrew Gelman, Columbia University, and Mark Hansen, UCLA

 9:45 - 10:30  Conditional Akaike Information for Mixed Effects Models
               Florin Vaida, Harvard University

10:30 - 11:00  Break

11:00 - 11:45  Prequential Statistics and On-Line Learning
               Phil Dawid, University College London

11:45 - 12:30  Hierarchical Designs for Pattern Recognition
               Donald Geman, Johns Hopkins University

12:30 -  2:00  Lunch

 2:00 -  2:45  A General System for Incremental Machine Learning
               Ray Solomonoff, Oxbridge Research

 2:45 -  3:30  Boosting: convergence, consistency, and minimax results
               Bin Yu, UC Berkeley

 3:30 -  4:15  Data-dependent generalization bounds for Bayesian mixture algorithms
               Ron Meir, Technion

 4:15 -  4:40  Break

Contributed Presentations

 4:40 -  5:00  Message length estimators, probabilistic sampling and optimal prediction
               Ian Davidson, SUNY Albany

 5:00 -  5:20  Data compression and learning
               John Langford, IBM T. J. Watson Research Center

 5:20 -  5:40  Classification or regression trees
               Clayton Scott, Rice University

 5:40 -  6:00  Learnability Beyond AC^0
               Adam Klivans and Rocco Servedio, Harvard University


Wednesday, June 4, 2003

Cognitive Science

 8:30 -  9:00  Breakfast and Registration

 9:00 -  9:45  Incomputable randomness or computable regularity?
               Peter A. van der Helm, Nijmegen Institute for Cognition and Information

 9:45 - 10:30  Articulation and Intelligibility
               Jont Allen, UIUC

10:30 - 11:00  Break

11:00 - 11:45  Science, Simplicity and Embodied Cognition
               Nick Chater, University of Warwick

11:45 - 12:30  William Bialek

12:30 -  2:00  Lunch

Applications

 2:00 -  2:45  Individual Sequence Properties of Compounded Wealth, Portfolio Estimation, Option Pricing, and Model Selection
               Andrew Barron, Yale University

 2:45 -  3:30  Similarity metric and algorithmic music clustering
               Paul Vitanyi, CWI and the University of Amsterdam

 3:30 -  4:00  Break

Contributed Presentations

 4:00 -  4:20  Subjective randomness and cognitive complexity
               Tom Griffiths, Stanford University

 4:20 -  4:40  Minimum description length cognitive modeling
               Yong Su, Ohio State University

 4:40 -  5:00  Using polynomial local search and Kolmogorov complexities to better understand evolutionary algorithms
               Natalio Krasnogor, University of Nottingham

 5:00 -  5:20  Condensation of boolean formulas
               Kazuo Iwama, Kyoto University

 5:20 -  5:40  Complexity and vulnerability analysis
               Stephen Bush, General Electric Research


Thursday, June 5, 2003

K-complexity

 8:30 -  9:00  Breakfast and Registration

 9:00 -  9:45  Complexity Distortion Theory
               Alex Eleftheriadis, Columbia University, and Daby Sow, IBM T. J. Watson Research Center

 9:45 - 10:30  Predictive complexity, randomness and information
               Volodya Vovk

10:30 - 11:00  Break

11:00 - 11:45  Computational Depth
               Lance Fortnow, NEC Laboratories America

11:45 - 12:30  Uniform randomness test over a general space
               Peter Gacs, Boston University

12:30 -  2:00  Lunch

 2:00 -  2:45  The Kolmogorov Sampler
               David Donoho

