Please add your Week 3 comments and questions here. This includes the classes of Thursday July 21st (handouts and demo files now available) and Monday July 25th (coming soon).
Post-class update: The demo files folder has been reposted with the missing Excel file, and the link to Mark Johnson’s invaluable slides on MaxEnt models that I refer to in the handout is now on the relevant “Models” page.
5 replies on “Week 3 Discussion”
Andrew Ng’s widely cited short paper about L1 vs L2 regularization:
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.81.145
It’s also available as slides from the associated talk:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.92.9860
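In case it’s useful, here is a quick scikit-learn sketch of the contrast Ng discusses (my own toy example, not code from the paper; the data and the C value are made up): with many irrelevant features, an L1 penalty drives most weights to exactly zero, while L2 merely shrinks them.

# Toy illustration of L1 vs. L2 regularization: labels depend on only
# 5 of 100 features, and we compare how many weights each penalty keeps.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d_relevant, d_noise = 200, 5, 95
X = rng.normal(size=(n, d_relevant + d_noise))
# Labels depend only on the first d_relevant features.
y = (X[:, :d_relevant].sum(axis=1) > 0).astype(int)

for penalty, solver in [("l1", "liblinear"), ("l2", "lbfgs")]:
    clf = LogisticRegression(penalty=penalty, C=0.1, solver=solver).fit(X, y)
    nonzero = np.sum(np.abs(clf.coef_) > 1e-6)
    print(f"{penalty}: {nonzero} of {X.shape[1]} weights are nonzero")

Ng’s point about sample complexity is that this sparsity is what lets L1 cope when irrelevant features vastly outnumber relevant ones.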
Adaptive Resonance Theory (ART), which I mentioned after class, is a connectionist learning model I find appealing. Its inventor and primary proponent is Stephen Grossberg of Boston University:
http://cns-web.bu.edu/Profiles/Grossberg/
There are a variety of implementations, some of which are listed on his lab’s website:
http://techlab.bu.edu/resources/software/C51
An interesting phonetic model is ARTSTREAM, which deals with acoustic trajectory perception and pitch tracking. I haven’t seen any code for it, but it is described in fairly good detail in the book Musical Networks (which also features some PDP papers).
http://mitpress.mit.edu/catalog/item/default.asp?tid=7483&ttype=2
Ah, also published in Neural Networks:
http://www.sciencedirect.com/science/article/pii/S0893608003002727
Some ART models incorporate Self-Organizing Maps (SOMs), a dimensionality-reduction tool with a pleasant visual interpretation (at least when dealing with just a few dimensions); there’s a toy sketch below.
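For the curious, here is a toy NumPy SOM (my own illustrative sketch, not one of the implementations linked above; the grid size, decay schedules, and data are arbitrary):

# A small 2-D grid of units is pulled toward the data, with a Gaussian
# neighborhood that shrinks over time, so grid position comes to
# reflect the topology of the data.
import numpy as np

rng = np.random.default_rng(1)
grid_w, grid_h, dim = 8, 8, 3          # 8x8 map of 3-D weight vectors
weights = rng.random((grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])
data = rng.random((500, dim))          # e.g. RGB colors to organize

n_steps = 2000
for t in range(n_steps):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best matching unit
    sigma = 3.0 * (0.05 ** (t / n_steps))               # shrinking radius
    lr = 0.5 * (0.05 ** (t / n_steps))                  # decaying step size
    dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-dist2 / (2 * sigma ** 2))               # neighborhood kernel
    weights += lr * h[:, None] * (x - weights)
# Nearby units now hold similar vectors: a 2-D "map" of the 3-D data.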
See reply here.
In our last class, you briefly mentioned the Stanford system or methodology. I was wondering what that is and how it differs from what we are using.
See reply here.
In reply to Jim White.
Thanks for these references! As I mentioned in class, a connectionist learning algorithm that looks useful for hidden structure learning is Hinton et al.’s Wake-Sleep algorithm.
In reply to Sam Perdue.
Right – I referred to the “Stanford school”, which was meant slightly jocularly. This was in the context of a discussion of “modular” theories of grammar: I made the point that there’s no reason one couldn’t incorporate weighted constraints into Kiparsky’s Stratal OT, which has a series of OT grammars acting on the output of different morpho-syntactic levels. I also mentioned that I doubt this will be pursued any time soon, since another strand of work at Stanford is an OT approach to variation developed by Kiparsky and Anttila, which is an alternative to the probabilistic HG approaches we’ve been focusing on.
It occurs to me that it’s not entirely clear why a learner should care about the type of generalization imposed by the M > F (markedness over faithfulness) term. We as analysts care because “that’s the way languages are,” but it seems harder to find a motivation for a learner. Under an ordered-constraint theory of grammar, there is always *some* analysis available for new words; and if the speakers all share a grammar, this analysis should be reliable and general. Perhaps some other pressure that motivates M > F would be desirable?
Another option that occurred to me is that perhaps we *can’t* assume that speakers generally end up with the same ranking. If their available data varies, or if there is noise in the learning system, we can only be certain about their constraint weightings to within some margin of error (there are often multiple weightings that yield one language, and even more for incompletely observed languages). So perhaps we *wouldn’t* in general see learners generalizing in the same way. It could be that an M > F restriction is helpful in ensuring convergent generalization after all; the sketch below tries to make the ambiguity concrete.
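Here is a toy Harmonic Grammar sketch of the point (constraint names, weights, and violation counts are all invented for illustration): two weightings that pick the same winner on an observed form can still disagree on a novel form where two markedness constraints gang up.

# Two HG weightings consistent with the same observed data diverge on
# a novel input, because the data pin the weights down only to within
# a margin.

def winner(weights, candidates):
    """Return the candidate with maximal harmony (= -weighted violations)."""
    def harmony(violations):
        return -sum(weights[c] * v for c, v in violations.items())
    return max(candidates, key=lambda name: harmony(candidates[name]))

# Two grammars consistent with the observed form below.
w1 = {"Faith": 3.0, "M1": 2.0, "M2": 2.0}
w2 = {"Faith": 3.0, "M1": 1.0, "M2": 1.0}

# Observed form: the faithful candidate violates only M1 and wins
# under both weightings, so the data cannot distinguish w1 from w2.
observed = {"faithful": {"M1": 1}, "repaired": {"Faith": 1}}
assert winner(w1, observed) == winner(w2, observed) == "faithful"

# Novel form: the faithful candidate violates M1 *and* M2. The two
# markedness weights now gang up under w1 but not under w2.
novel = {"faithful": {"M1": 1, "M2": 1}, "repaired": {"Faith": 1}}
print(winner(w1, novel))  # "repaired": 2 + 2 > 3, markedness wins
print(winner(w2, novel))  # "faithful": 1 + 1 < 3, faithfulness wins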