Description

We study the problems of classification and closeness testing. A classifier associates a test sequence with one of two training sequences that was generated by the same distribution. A closeness test determines whether two sequences were generated by the same or by different distributions. For both problems, all natural algorithms are symmetric -- they make the same decision under all symbol relabelings. With no assumptions on the distributions' support size or relative distance, we construct a classifier and a closeness test that require at most O(n^{3/2}) samples to attain the n-sample accuracy of the best symmetric classifier or closeness test designed with knowledge of the underlying distributions. Both algorithms run in time linear in the number of samples. Conversely, we also show that for any classifier or closeness test, there are distributions that require Ω(n^{7/6}) samples to achieve the n-sample accuracy of the best symmetric algorithm that knows the underlying distributions.
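To make the symmetry requirement concrete, here is a minimal, illustrative sketch of a symmetric closeness test in Python. This is not the algorithm described in the talk; the statistic, the threshold, and the function name are assumptions chosen only to show a decision rule that depends solely on the joint symbol counts and is therefore invariant under relabeling of the alphabet.

```python
from collections import Counter

def symmetric_closeness_test(x, y, threshold=1.0):
    """Toy closeness test: decide whether the two sample sequences x and y
    look like draws from the same distribution.  The statistic depends only
    on the per-symbol count pairs, so relabeling the alphabet cannot change
    the decision -- the test is symmetric in the sense of the abstract."""
    cx, cy = Counter(x), Counter(y)
    n, m = len(x), len(y)
    stat = 0.0
    for s in set(cx) | set(cy):        # every symbol seen in either sample
        a, b = cx[s], cy[s]
        # Chi-square-style discrepancy between the two empirical counts.
        stat += (a * m - b * n) ** 2 / ((a + b) * n * m)
    return "same" if stat <= threshold else "different"

# Usage: identical distributions vs. disjoint supports (threshold is ad hoc).
print(symmetric_closeness_test("abab" * 50, "baba" * 50))   # -> "same"
print(symmetric_closeness_test("aaaa" * 50, "bbbb" * 50))   # -> "different"
```

Because the statistic is a sum over symbols of a function of the count pair (a, b), permuting the symbol labels permutes the summands without changing the total, which is exactly the invariance the abstract attributes to all natural algorithms.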
