Learning Phonotactics from Linguistic Informants
Abstract
We propose an interactive approach to language learning that utilizes linguistic acceptability judgments from an informant (a competent language user) to learn a grammar. Given a grammar formalism and a framework for synthesizing data, our model iteratively selects or synthesizes a data-point according to one of a range of information-theoretic policies, asks the informant for a binary judgment, and updates its own parameters in preparation for the next query. We demonstrate the effectiveness of our model in the domain of phonotactics, the rules governing what kinds of sound-sequences are acceptable in a language, and carry out two experiments, one with typologically-natural linguistic data and another with a range of procedurally-generated languages. We find that the information-theoretic policies that our model uses to select items to query the informant achieve sample efficiency comparable to, and sometimes greater than, fully supervised approaches.
1 Introduction
In recent years, natural language processing has made remarkable progress toward models that can (explicitly or implicitly) predict and use representations of linguistic structure from phonetics to syntax (Mohamed et al., 2022; Hewitt and Manning, 2019). These models play a prominent role in contemporary computational linguistics research. But the data required to train them is of a vastly larger scale, and features less controlled coverage of important phenomena, than data gathered in the course of linguistic research, e.g. during language documentation with native speaker informants. How can we build computational models that learn more like linguists—from targeted inquiry rather than large-scale corpus data?
We describe a paradigm in which language-learning agents interactively select examples to learn from by querying an informant, with the goal of learning about a language as data-efficiently as possible, rather than relying on large-scale corpora to capture attested-but-rare phenomena. This approach has two important features. First, rather than relying on existing data to learn, our model performs data synthesis to explore the space of useful possible data-points. But second, our model can also leverage corpus data as part of its learning procedure by trading off between interactive elicitation and ordinary supervised learning, making it useful both ab initio and in scenarios where seed data is available to bootstrap a full grammar.

We evaluate the capabilities of our methods in two experiments on learning phonotactic grammars, in which the goal is to learn the constraints on sequences of permissible sounds in the words of a language. Applied to the problem of learning a vowel harmony system inspired by natural language typology, we show that our approach succeeds in recovering the generalizations governing the distribution of vowels. Using an additional set of procedurally-generated synthetic languages, our approach also succeeds in recovering relevant phonotactic generalizations, demonstrating that model performance is robust to whether the target pattern is typologically common or not. We find that our approach is more sample-efficient than ordinary supervised learning or random queries to the informant.
Our methods have the potential to be deployed as an aid to learners acquiring a second language or to linguists doing elicitation work with speakers of a language that has not previously been documented. Further, the development of more data-efficient computational models can help redress social inequalities which flow from the asymmetrical distribution of training data types available for present models (Bender et al., 2021).
2 Problem Formulation and Method
Preliminaries
We aim to learn a language $L$ comprising a set of strings, each of which is a concatenation of symbols from some inventory $\Sigma$ (so $L \subseteq \Sigma^*$). (In phonotactics, for example, $\Sigma$ might be the set of phonemes, and $L$ the set of word forms that speakers judge phonotactically acceptable.) A learned model of a language is a discriminative function that maps from elements $w \in \Sigma^*$ to values in $\{0, 1\}$, where 1 indicates that $w \in L$ and 0 indicates that $w \notin L$. In this paper, we will generalize this to graded models of language membership $f : \Sigma^* \to [0, 1]$, in which higher values assigned to strings correspond to greater confidence that $w \in L$ (cf. Albright, 2009, for data and argumentation in favor of a gradient model of phonotactic acceptability in humans).
We may then characterize the language learning problem as one of acquiring a collection of pairs $(w_i, y_i)$, where $w_i \in \Sigma^*$ and the $y_i \in \{0, 1\}$ correspond to acceptability judgments about whether $w_i \in L$. Given this data, a learner's job is to identify a language consistent with these pairs. Importantly, in this setting, learners may have access to both positive and negative evidence.
Approach
In our problem characterization, the data acquisition process takes place over a series of time steps. At each time step $t$, the learner uses a policy $\pi$ according to which a new string $w_t \in \mathcal{W}$ is selected; here $\mathcal{W}$ is some set of possible strings, with $\mathcal{W} \subseteq \Sigma^*$. The chosen string is then passed to an informant that provides the learner a value $y_t$ corresponding to whether $w_t$ is in $L$. The new datum $(w_t, y_t)$ is then appended to a running collection of (string, judgment) pairs $\mathcal{D}_t = \mathcal{D}_{t-1} \cup \{(w_t, y_t)\}$, after which the learning process proceeds to the next time step. This procedure is summarized in Algorithm 1.
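To make the loop concrete, the following Python sketch shows one possible shape for the procedure summarized in Algorithm 1; the `policy`, `informant`, and `update_model` callables are illustrative placeholders rather than our actual implementation.

```python
from typing import Callable, List, Tuple

# A minimal sketch of the interactive loop summarized in Algorithm 1.
# `policy` picks the next string, `informant` returns a binary judgment,
# and `update_model` refits the learner on the growing dataset.
# All three callables are placeholders, not our actual implementation.
def interactive_learning(
    policy: Callable[[List[Tuple[str, int]]], str],
    informant: Callable[[str], int],
    update_model: Callable[[List[Tuple[str, int]]], None],
    num_steps: int = 150,
) -> List[Tuple[str, int]]:
    data: List[Tuple[str, int]] = []          # running collection D_t
    for t in range(num_steps):
        w = policy(data)                      # select or synthesize a candidate string
        y = informant(w)                      # binary acceptability judgment (1 = in L)
        data.append((w, y))                   # D_t = D_{t-1} plus {(w_t, y_t)}
        update_model(data)                    # update the model before the next query
    return data
```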
Conceptually, there are two ways in which a learner might gather information about a new language. One possibility is to gather examples of well-formed strings already produced by users of the language (e.g. by listening to a conversation, or collecting a text corpus), similar to an "immersion" approach when learning a new language. In this case, the learner does not have control over the specific selected string $w_t$, but it is guaranteed that the selected string is part of the language: $w_t \in L$, and thus $y_t = 1$.
The other way of collecting information is to select some string from $\Sigma^*$ and directly elicit a judgment from a knowledgeable informant. This approach is often pursued by linguists working with language informants in a documentation setting, where a query stems from a hypothesis about the structural principles of the language. Here, examples can be chosen to be maximally informative, and negative evidence can be gathered directly. In practice, learners might also use "hybrid" policies that compare which of multiple basic policies (passive observation, active inquiry) is expected to yield a new datum that optimally improves the learner's knowledge state. Each of these strategies is described in more detail below.
Model assumptions
To characterize the learning policies, we make the following assumptions regarding the model trained from available data $\mathcal{D}$. We assume that the function acquired from $\mathcal{D}$ can be interpreted as a conditional probability of the form $p(y = 1 \mid w, \mathcal{D})$. We further assume that this conditional probability is determined by a set of parameters $\theta$ for which a(n approximate) posterior distribution $p(\theta \mid \mathcal{D})$ is maintained, with $p(y = 1 \mid w, \mathcal{D}) = \mathbb{E}_{\theta \sim p(\theta \mid \mathcal{D})}\big[p(y = 1 \mid w, \theta)\big]$.
3 Query policies
In the framework described in Section 2, how should a learner choose which questions to ask the informant? Below, we describe a family of different policies for learning.
3.1 Basic policies
Train
The first basic policy, $\pi_{\mathrm{train}}$, corresponds to observing and recording an utterance by a speaker. For simplicity we model this as uniform sampling (without replacement) over the lexicon $\mathcal{L}$:

$$w_t \sim \mathrm{Unif}(\mathcal{L})$$
Uniform
The second basic policy, $\pi_{\mathrm{unif}}$, samples a string uniformly from the set of possible strings $\mathcal{W}$ and presents it to the informant for an acceptability judgment:

$$w_t \sim \mathrm{Unif}(\mathcal{W})$$
Label Entropy
The $\pi_{\mathrm{entropy}}$ policy selects the string with the maximum entropy over labels under the current model state:

$$w_t = \arg\max_{w \in \mathcal{W}} \; H\big[p(y \mid w, \mathcal{D}_{t-1})\big]$$
Expected Information Gain
The $\pi_{\mathrm{eig}}$ policy selects the candidate that, if observed, would yield the greatest expected reduction in entropy over the posterior distribution of the model parameters $\theta$. This is often called the information gain (MacKay, 1992); we denote the change in entropy as $\mathrm{IG}$:

$$\mathrm{IG}(w, y) = H\big[p(\theta \mid \mathcal{D}_{t-1})\big] - H\big[p(\theta \mid \mathcal{D}_{t-1} \cup \{(w, y)\})\big] \qquad (1)$$

The expected information gain policy selects the $w$ that maximizes the expectation of $\mathrm{IG}$ over labels, i.e.:

$$w_t = \arg\max_{w \in \mathcal{W}} \; \mathbb{E}_{y \sim p(y \mid w, \mathcal{D}_{t-1})}\big[\mathrm{IG}(w, y)\big],$$

where the hypothesized label $y$ is treated as observed when computing the updated posterior in Eq. (1), i.e., the label distribution places all its mass on $y$.
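As an illustration, the sketch below scores a pool of candidates under both objectives for a generic Bayesian binary model. The `predictive_prob` and `posterior_entropy` interfaces are hypothetical stand-ins for the model of §4, and the expected information gain follows Eq. (1).

```python
import math
from typing import Callable, List, Tuple

def label_entropy(p1: float) -> float:
    """Entropy of a Bernoulli label distribution with P(y = 1) = p1."""
    if p1 <= 0.0 or p1 >= 1.0:
        return 0.0
    return -(p1 * math.log(p1) + (1 - p1) * math.log(1 - p1))

# Hypothetical interfaces (placeholders, not our actual implementation):
#   predictive_prob(w, data) -> P(y = 1 | w, D)
#   posterior_entropy(data)  -> H[p(theta | D)]
def score_candidates(
    candidates: List[str],
    data: List[Tuple[str, int]],
    predictive_prob: Callable[[str, list], float],
    posterior_entropy: Callable[[list], float],
) -> Tuple[str, str]:
    """Return the best candidate under the label-entropy and EIG objectives."""
    h_now = posterior_entropy(data)
    best_entropy, best_eig = None, None
    best_entropy_score, best_eig_score = -1.0, -float("inf")
    for w in candidates:
        p1 = predictive_prob(w, data)
        # Label-entropy objective: prefer maximally uncertain candidates.
        h_label = label_entropy(p1)
        # Expected information gain: expected drop in posterior entropy if w were
        # queried, averaging over the two possible labels (each term refits the
        # approximate posterior on the hypothetically augmented dataset).
        eig = sum(
            prob * (h_now - posterior_entropy(data + [(w, y)]))
            for y, prob in ((1, p1), (0, 1 - p1))
        )
        if h_label > best_entropy_score:
            best_entropy_score, best_entropy = h_label, w
        if eig > best_eig_score:
            best_eig_score, best_eig = eig, w
    return best_entropy, best_eig
```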
3.2 Hybrid Policies
Hybrid policies dynamically choose at each time step among a set of basic policies based on some metric $m$. At each step, the hybrid policy estimates the expected value of $m$ for each basic policy $\pi$, chooses the policy $\pi^*$ that has the highest expected value, and then samples $w_t$ according to $\pi^*$. Here, we study one such policy: Info. gain / train, which chooses between $\pi_{\mathrm{eig}}$ and $\pi_{\mathrm{train}}$ with metric $m = \mathrm{IG}$. We refer to the non-train policy as $\pi_{\mathrm{other}}$ and the metric used to select a policy at each step as $m$.
We explore two general methods for estimating the expected value of $m$ for each policy $\pi$: history-based and model-based. We also explore a mixed approach using a history-based method for $\pi_{\mathrm{train}}$ and a model-based method for $\pi_{\mathrm{other}}$.
History
In the history-based approach, the model keeps a running average of empirical values of $m$ for candidates previously selected by $\pi_{\mathrm{train}}$ and by $\pi_{\mathrm{other}}$.
For instance, for the history-based hybrid policy Info. gain / train (history), $m = \mathrm{IG}$ (see Table 1(b)). Suppose at a particular step, the basic policy selected by the hybrid chose query $w_t$, which received label $y_t$ from the informant. Then the history-based approach would store the empirical information gain between the posteriors before and after observing the chosen $(w_t, y_t)$; in future steps, it would then select the basic policy with the highest empirical mean of $m$, in this case the empirical mean information gain, over candidates queried by each basic policy.
More formally, let $\bar{m}_{\pi, t}$ refer to the mean of observed $m$ values for candidates selected by basic policy $\pi$ before step $t$, where $\pi \in \{\pi_{\mathrm{train}}, \pi_{\mathrm{other}}\}$:

$$\bar{m}_{\pi, t} = \frac{1}{|\mathcal{I}_{\pi, t}|} \sum_{i \in \mathcal{I}_{\pi, t}} m_i,$$

where $\mathcal{I}_{\pi, t}$ indexes the data-points selected by $\pi$ before step $t$, and $m_i$ denotes $m$'s score for the $i$'th data-point selected by $\pi$ under a model that has observed data $\mathcal{D}_{i-1}$.
Then at step $t$, the history-based hybrid policies sample $w_t$ according to:

$$\pi^* = \arg\max_{\pi \in \{\pi_{\mathrm{train}},\, \pi_{\mathrm{other}}\}} \bar{m}_{\pi, t}, \qquad w_t \sim \pi^*.$$

For the first two steps, we automatically select $\pi_{\mathrm{train}}$ and $\pi_{\mathrm{other}}$ in a random order, each once, to ensure we have empirical means for both policies.
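A minimal sketch of the history-based bookkeeping, assuming the metric values (e.g., empirical information gains) are recorded once each informant response is known; the class and method names are illustrative only.

```python
import random
from statistics import mean
from typing import Dict, List

# Illustrative sketch of the history-based hybrid policy: keep a running record
# of the metric m (e.g., empirical information gain) for each basic policy, and
# pick the policy with the highest empirical mean so far.
class HistoryHybrid:
    def __init__(self, basic_policies: List[str]):
        self.history: Dict[str, List[float]] = {p: [] for p in basic_policies}
        # For the first steps, try each basic policy once in random order.
        self.warmup = random.sample(basic_policies, len(basic_policies))

    def choose_policy(self) -> str:
        if self.warmup:
            return self.warmup.pop()
        return max(
            self.history,
            key=lambda p: mean(self.history[p]) if self.history[p] else float("-inf"),
        )

    def record(self, policy: str, metric_value: float) -> None:
        # Called once the informant's label is known and m can be measured.
        self.history[policy].append(metric_value)
```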
Model
The model-based approach is prospective and involves using the current posterior distribution over model parameters to compute an expected value for the target metric under the policy. We define two ways of computing these expectations.
$E_{\mathrm{label}}$ computes an expectation over possible labels $y$ for the candidate $w^{\pi}$ that will be chosen by policy $\pi$. We use $E_{\mathrm{label}}$ to score non-train basic policies because they select $w^{\pi}$ deterministically given $\mathcal{D}_{t-1}$, i.e., selecting the inputs that maximize the objectives described in §3.1. More formally:

$$E_{\mathrm{label}}(\pi) = \mathbb{E}_{y \sim p(y \mid w^{\pi}, \mathcal{D}_{t-1})}\big[m(w^{\pi}, y)\big].$$

$E_{\mathrm{input}}$ computes an expectation over possible inputs $w$ and assumes a fixed label $y = 1$. We score the train basic policy with $E_{\mathrm{input}}$ because the randomness for $\pi_{\mathrm{train}}$ is over forms in the lexicon that could be sampled by $\pi_{\mathrm{train}}$, and labels are always 1. More formally:

$$E_{\mathrm{input}}(\pi_{\mathrm{train}}) = \mathbb{E}_{w \sim \mathrm{Unif}(\mathcal{L})}\big[m(w, 1)\big].$$

In practice, however, we approximate this expectation with samples from $\mathcal{W}$, since we do not assume that the model has access to the lexicon used by the informant. In particular, we model the probability that a form $w$ is in the lexicon as $p(y = 1 \mid w, \mathcal{D}_{t-1})$.
Using the policy-specific expectations defined above, the model-based approach selects the policy according to:

$$\pi^* = \arg\max_{\pi} E(\pi),$$

where $E$ is $E_{\mathrm{label}}$ for non-train policies and $E_{\mathrm{input}}$ for $\pi_{\mathrm{train}}$.
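The sketch below illustrates the model-based comparison as reconstructed above: deterministic (non-train) policies are scored by an expectation over labels for their chosen candidate, and the train policy by an expectation over sampled forms with the label fixed to 1, weighted by the model's probability that each form is in the lexicon. Function names and signatures are assumptions of the sketch.

```python
from typing import Callable, List, Tuple

Data = List[Tuple[str, int]]

# Sketch of the model-based policy comparison (names are illustrative).
#   metric(w, y, data)       -> value of m (e.g., information gain) for (w, y)
#   predictive_prob(w, data) -> P(y = 1 | w, D) under the current posterior
def expected_over_labels(w: str, data: Data, metric, predictive_prob) -> float:
    # Non-train policies choose w deterministically, so only the label is random.
    p1 = predictive_prob(w, data)
    return p1 * metric(w, 1, data) + (1 - p1) * metric(w, 0, data)

def expected_over_inputs(samples: List[str], data: Data, metric, predictive_prob) -> float:
    # Train policy: forms are random, labels are always 1; weight each sampled
    # form by the model's probability that it is in the lexicon.
    weights = [predictive_prob(w, data) for w in samples]
    total = sum(weights) or 1.0
    return sum(p * metric(w, 1, data) for w, p in zip(samples, weights)) / total

def choose_policy(candidate_other: str, lexicon_samples: List[str],
                  data: Data, metric, predictive_prob) -> str:
    e_other = expected_over_labels(candidate_other, data, metric, predictive_prob)
    e_train = expected_over_inputs(lexicon_samples, data, metric, predictive_prob)
    return "other" if e_other >= e_train else "train"
```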
Mixed
Finally, the mixed policies combine the retrospective evaluation of the history-based method and the prospective evaluation of the model-based method. In particular, we use the model-based approach for the non-train policy (i.e., scoring $\pi_{\mathrm{other}}$ with $E_{\mathrm{label}}$) and the history-based approach for the train policy (i.e., scoring $\pi_{\mathrm{train}}$ with $\bar{m}_{\pi_{\mathrm{train}}, t}$): $\pi^*$ is $\pi_{\mathrm{other}}$ if $E_{\mathrm{label}}(\pi_{\mathrm{other}}) \ge \bar{m}_{\pi_{\mathrm{train}}, t}$, and $\pi_{\mathrm{train}}$ otherwise.
For the first step, we always select $\pi_{\mathrm{train}}$ to ensure we have an empirical mean for it. Table 1 provides an overview of the query policies described in the preceding sections.
Basic Policy | Quantity Maximized
---|---
Train | –
Uniform | –
Label entropy | $H\big[p(y \mid w, \mathcal{D}_{t-1})\big]$
Expected info gain | $\mathbb{E}_{y}\big[\mathrm{IG}(w, y)\big]$

Table 1: Basic query policies and the quantity each maximizes.
4 A Grammatical Model for Phonotactics
We implement and test our approach for a simple categorical model of phonotactics. The grammar consists of two components. First, a finite set of phonological feature functions $\phi_1, \ldots, \phi_K$, each mapping strings to $\{0, 1\}$; if $\phi_i(w) = 1$ we say that feature $i$ is active for string $w$. This set is taken to be universal and available to the learner before any data are observed. Second, a set of binary values $\gamma = (\gamma_1, \ldots, \gamma_K)$, one for each feature function; if $\gamma_i = 1$ then feature $i$ is penalized. In our simple categorical model, a string is grammatical if and only if no feature active for it is penalized. $\gamma$ thus determines the language: $L = \{w : \phi_i(w)\,\gamma_i = 0 \text{ for all } i\}$. We assume a factorizable prior distribution over which features are penalized: $p(\gamma) = \prod_i p(\gamma_i)$. To enable mathematical tractability, we also incorporate a noise term which causes the learner to perceive a judgment from the informant as noisy (reversed) with a fixed probability.
This model is based on a decades-long research tradition in theoretical and experimental phonology into what determines the range and frequency of possible word forms in a language. A consensus view of the topic is that speakers have fine-grained judgments about the acceptability of nonwords (for example, most speakers judge blick to be more acceptable than bnick; Chomsky and Halle, 1968), and that this knowledge can be decomposed into the independent, additive effects of multiple prohibitions on specific sequences of sounds (in phonological theory, termed Markedness constraints). Further, speakers form these generalizations at the level of the phonological feature, since they exhibit structured judgments that distinguish between different unattested forms: speakers systematically rate bnick as less English-like than bzick, despite no attested words having initial bn- or bz-. We reflect this knowledge in our generative model: to determine the distribution of licit strings in a language, we first sample some parameters which govern subsequences of features which are penalized in the language.
In our model we take the feature functions to be a collection of phonological feature trigrams: each is an ordered triple of phonological feature specifications that picks out some class of trigrams of segments in the language (see §5.1 for more details and examples). Since our phonotactics are variants on vowel harmony, these featural trigrams are henceforth assumed to be relativized to the vowel tier, regulating vowel qualities in three adjacent syllables. In order to capture generalizations that may hold differently in edge-adjacent vs. word-medial position, we pad the representation of each word treated by the model with a boundary symbol "#" (generally omitted in this paper for simplicity), which bears the [word boundary] feature that the trigram constraints can refer to (following the practice of Hayes and Wilson, 2008, inspired by Chomsky and Halle, 1968).
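For illustration, the following sketch checks a word against a set of penalized feature trigrams on the vowel tier, padding with the boundary symbol "#"; the feature table (shown for a subset of vowels) and the constraint encoding are simplified stand-ins for the full model.

```python
from typing import Dict, List, Tuple

# Simplified stand-in for the vowel feature table (values: +1, -1, or 0).
FEATURES: Dict[str, Dict[str, int]] = {
    "i": {"high": +1, "low": -1, "ATR": +1},
    "e": {"high": -1, "low": -1, "ATR": +1},
    "a": {"high": -1, "low": +1, "ATR": 0},
    "#": {"boundary": +1},
}

# A constraint is a feature trigram: one (feature, value) test per position.
Constraint = Tuple[Tuple[str, int], Tuple[str, int], Tuple[str, int]]

def vowel_tier(word: str) -> List[str]:
    """Project the word onto its vowel tier and pad with boundary symbols."""
    return ["#"] + [seg for seg in word if seg in FEATURES and seg != "#"] + ["#"]

def matches(segment: str, test: Tuple[str, int]) -> bool:
    feature, value = test
    return FEATURES[segment].get(feature) == value

def is_grammatical(word: str, penalized: List[Constraint]) -> bool:
    """A word is grammatical iff no penalized feature trigram is active in it."""
    tier = vowel_tier(word)
    for i in range(len(tier) - 2):
        window = tier[i:i + 3]
        for constraint in penalized:
            if all(matches(seg, test) for seg, test in zip(window, constraint)):
                return False
    return True

# Example (illustrative constraint, not one of our languages): penalize a [+ATR]
# vowel followed by [a] at the right word edge.
example_penalized: List[Constraint] = [(("ATR", +1), ("low", +1), ("boundary", +1))]
assert not is_grammatical("katipa", example_penalized)   # i-a-# window matches
assert is_grammatical("kata", example_penalized)         # [a] is not [+ATR]
```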
4.1 Implementation details
Our general approach and specific model create several computational tractability issues that we address here. First, all policies aside from $\pi_{\mathrm{train}}$ and $\pi_{\mathrm{unif}}$ in principle require search for an optimal string within $\Sigma^*$. In practice, we restrict attention to $\mathcal{W}$, the set of strings with 2-5 syllables. This set is still very large, so we approximate the search over $\mathcal{W}$ by uniformly sampling a set of candidates and choosing the best according to the policy's objective. We sample candidates by uniformly sampling a length, then uniformly sampling each syllable from the inventory of possible onset-vowel combinations in the language (with replacement). We then de-duplicate candidates and filter them, excluding previously observed sequences and those that were accidental duplicates of items in the test set.
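A sketch of the candidate-proposal step under these assumptions; the specific inventories, the number of proposals, and the attempt cap are illustrative choices, not the values used in our experiments.

```python
import random
from typing import List, Sequence, Set

def propose_candidates(
    num_candidates: int,
    onsets: Sequence[str] = ("p", "t", "k", "q"),
    vowels: Sequence[str] = ("i", "e", "a"),     # subset of the inventory; illustrative
    seen: Set[str] = frozenset(),
    max_attempts: int = 100_000,
) -> List[str]:
    """Uniformly sample 2-5 syllable CV words, then de-duplicate and filter."""
    proposals: Set[str] = set()
    attempts = 0
    while len(proposals) < num_candidates and attempts < max_attempts:
        attempts += 1
        length = random.randint(2, 5)            # uniform number of syllables
        word = "".join(random.choice(onsets) + random.choice(vowels)
                       for _ in range(length))
        if word not in seen:                     # exclude previously observed strings
            proposals.add(word)
    return list(proposals)
```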
Second, although the model parameters are independent in the prior, conditioning on data renders them conditionally dependent, and computing with the true posterior is in general intractable. To deal with this, we use mean-field variational Bayes to approximate the posterior as a product of independent factors $q(\gamma) = \prod_i q_i(\gamma_i)$. We use this approximation both to estimate the model's posterior (used by the label entropy and expected information gain policies) and to make predictions about individual new examples. See Appendix D for details.
5 Experiments
We now describe our experiments for evaluating the different query policies. We evaluate on two types of languages. We call the first the ATR Vowel Harmony language (§5.1); it has a grammar that regulates the distribution of types of vowels, inspired by grammars found in many languages of the world. The purpose of evaluating on this language is to assess how well our new approach, and specifically the various non-baseline query policies, work on naturalistic data. We also evaluate on a set of procedurally-generated languages (§5.2) that are matched on statistics to ATR Vowel Harmony, i.e., they have the same number of penalized feature trigrams, but differ in which ones are penalized. This second set of evaluations aims to determine how robust our model is to typologically-unusual languages, so we can be confident that any success in learning ATR Vowel Harmony is attributable to our procedure, rather than to a coincidence of the typologically-natural vowel harmony pattern.
These experiments lead to three sets of analyses: in the first (§5.4), we both select hyperparameters and evaluate on procedurally-generated languages through k-fold cross validation. These results can be interpreted as an in-distribution analysis of the query policies. In the second set of results (§5.5), we evaluate the policies out-of-distribution by selecting hyperparameters on the procedurally-generated languages and evaluating on the ATR Vowel Harmony language. In the last analysis (§5.6), we evaluate the upper bound of policy performance by selecting hyperparameters and evaluating on the same language, ATR Vowel Harmony.
5.1 ATR Vowel Harmony
We created a model language whose words are governed by a small set of known phonological principles. Loosely inspired by harmony systems common among Niger-Congo and Nilo-Saharan languages spoken throughout Africa, the vowels in this language can be divided into two classes, defined with respect to the phonological feature Advanced Tongue Root (ATR); for typological data, see Casali (2003, 2008, 2016) and Rose (2018), among others. In this language, vowels that are [+ATR] are {i, e}, and have pronunciations that are more peripheral in the vowel space; those that are [-ATR] are {ɪ, ɛ}, and are more phonetically centralized. For the sake of simplicity, we restrict the simulated language to only have front vowels. A fifth vowel in the system, [a], is not specified for ATR. This language has consonants {p, t, k, q}, which are distributed freely with respect to one another and to vowels, with the exception that syllables must begin with exactly one consonant and must contain exactly one vowel, a typologically common restriction. Since consonants are not regulated by the grammar we are working with, the three binary features (leaving out [word boundary]) create a set of 512 possible feature trigrams which characterize the space of all possible strings in the language. The syllable counts of words follow a Poisson distribution with mean 2.
The single rule active in this language governs the distribution of vowels specified for the feature [ATR]: vowels in adjacent syllables must have the same [ATR] specification. This means that vowel sequences in a word can combine [+ATR] vowels (e.g., [i…e]) or [-ATR] vowels (e.g., [ɪ…ɛ]), but cannot mix the two (e.g., *[e…ɪ]). Since [a] is not specified for ATR, it creates boundaries that allow different ATR values to exist on either side of it: for example, while a sequence that mixes ATR values in adjacent syllables such as *[e…ɪ] is not permitted, the sequence [e…a…ɪ] is allowed, because the ATR-distinct vowels are separated by the unspecified [a]. This yielded sample licit words like [katipe], [tp], and [qekat], and illicit ones [kkiqa], [ttaqik], and [qiqka].
Feature trigrams were composed of triples of the features and specifications shown in Appendix Table 3, any one of which picks out a certain set of vowel trigrams in adjacent syllables.
Data
We sampled 157 unique words as the lexicon $\mathcal{L}$, and a set of 1,010 random words, roughly balanced for length, as a test set. The model was provided with the set of features in Appendix Table 3, and with restrictions on syllable structure for use in the proposal distribution.
Informant
The informant was configured to reject any word that contained vowels in adjacent syllables that differed in ATR specification (like [pekit] or [qetatkipe]), and accept all others.
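A sketch of this decision rule, assuming the [-ATR] counterparts of [i] and [e] are transcribed ɪ and ɛ (a transcription choice of the sketch): a word is rejected exactly when two adjacent vowels both carry ATR specifications that disagree, with the unspecified [a] breaking the adjacency chain.

```python
ATR = {"i": +1, "e": +1, "ɪ": -1, "ɛ": -1, "a": 0}   # 0 = unspecified for ATR
# Note: the ɪ/ɛ transcriptions for the [-ATR] vowels are an assumption of this sketch.

def informant_judgment(word: str) -> int:
    """Return 1 (accept) unless two adjacent syllables differ in ATR specification."""
    vowels = [seg for seg in word if seg in ATR]
    for a, b in zip(vowels, vowels[1:]):
        if ATR[a] != 0 and ATR[b] != 0 and ATR[a] != ATR[b]:
            return 0   # adjacent syllables with conflicting ATR values: reject
    return 1
```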
5.2 Procedurally-Generated Languages
We also experimented with languages that share the same feature space, and thus the same set of 512 feature trigrams, as ATR Vowel Harmony (§5.1), but that were procedurally generated by sampling 16 of the 512 total feature trigrams to be "on" (i.e., penalized) and setting all others to be off, creating languages with different restrictions on licit vowel sequences in adjacent syllables.
Data
For each "language" (i.e., set of sampled feature trigrams to be penalized), we carried out a procedure to sample the lexicon $\mathcal{L}$, as well as evaluation datasets. For each set of 16 values representing penalized phonological feature trigrams, we created random strings as in Experiment 1, filtering them to ensure that the train and test sets are of equal size, and that the test set is balanced for word length and composed of half acceptable and half unacceptable forms.
5.3 Experimental Set-Up
Hyperparameters
The model has several free parameters: a noise parameter that represents the probability that an observed label is correct (versus noisy), and the prior probability of a feature being on (penalized), i.e., $p(\gamma_i = 1)$. There are also hyperparameters governing the optimization of the model: we denote by $s$ the number of optimization steps in the variational update. (These optimization parameters govern both the model's learning and the evaluation of candidate queries for prospective strategies, i.e., label entropy, expected information gain, and the hybrid strategies.) When $s = \infty$, we optimize until the magnitude of the change in each $q_i$ is less than or equal to an error threshold. We also experiment with $s = 1$, in which case we perform a single update.
We ran a grid search over the parameter space of the (transformed) noise parameter log(log(·)) ∈ {0.1, 0.25, 0.5, 1, 2, 4, 8}, the prior ∈ {0.001, 0.025, 0.05, 0.1, 0.2, 0.35}, and $s \in \{1, \infty\}$. We ran 10 random seeds (9 for the procedurally generated languages; for the generated languages, the seed also governed the "language," i.e., which phonological feature trigrams were sampled as "on") and all query policies in Table 1 for each hyperparameter setting. Each experiment was run for 150 steps.
For non-train policies, we generated candidates by sampling from $\mathcal{W}$ as described in §4.1.
Evaluation
At each step, we compute the AUC (area under the ROC curve) on the test set. We then compute the mean AUC value across steps, which we refer to as the mean-AUC; a higher mean-AUC indicates more efficient learning. We report the median of the mean-AUC values over seeds.
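The evaluation can be computed as in the sketch below, here using scikit-learn's ROC-AUC implementation (an implementation choice of this sketch, not necessarily the one used in our experiments).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_auc(per_step_scores, test_labels):
    """per_step_scores: list over steps of model scores for the fixed test set."""
    step_aucs = [roc_auc_score(test_labels, scores) for scores in per_step_scores]
    return float(np.mean(step_aucs))      # mean-AUC for one run (one seed)

def median_mean_auc(runs):
    """runs: list over seeds of (per_step_scores, test_labels) pairs."""
    return float(np.median([mean_auc(scores, labels) for scores, labels in runs]))
```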
5.4 In-Distribution Results
Assessing the in-distribution results, shown in the left column of Figure 2, we see that interactive elicitation is on par with, if not higher than, baseline strategies (top left plot). The difference between the train and uniform baselines was not significant according to a two-sided paired t-test, and the only strategy that performed significantly better than train after correcting for multiple comparisons was Info. gain / train (model). This difference is more visually striking in the plot of average AUC over time (middle left plot), where Info. gain / train (model) both ascends faster, and asymptotes earlier, than train, although with greater variance across runs. In the bottom left plot of Figure 2, we see that the numerically-best-performing Info. gain / train (model) strategy moves rather smoothly from an initial train preference to an Info. gain preference as learning progresses. That is, information in known-good words is initially helpful, but quickly becomes less useful as the model learns more of the language and can generate more targeted queries.
5.5 Out-Of-Distribution Results
The out-of-distribution analysis on the ATR Vowel Harmony language found greater variance of median mean-AUC between strategies, and also greater variance within strategies across seeds (top center plot). We note that this performance is lower than what is found in the upper-bound analysis, since the hyperparameters (listed in Appendix Table 2) were chosen based on the pooled results of the procedurally-generated languages. As in the in-distribution analysis, we found no statistical difference between the two baselines, nor between the Info. gain strategy and uniform, although Info. gain performed numerically better. In terms of average AUC over time (middle center plot), we find again that the top two non-baseline strategies rise faster and peak earlier than uniform, but exhibit greater variance.
5.6 Upper Bound Results
Greedily selecting for the best test performance in a hyperparameter search conducted on ATR Vowel Harmony yields superior performance compared to the out-of-distribution analysis hyperparameters, as seen in the top right plot in Figure 2. Appendix Table 2 lists the hyperparameter values used. However, we found no significant difference between the stronger baseline (uniform) and any other strategy after correcting for multiple comparisons.
6 Related Work
The goal of active learning is to improve learning efficiency by allowing models to choose which data to query an oracle about (Zhang et al., 2022). Uncertainty sampling (Lewis and Gale, 1994) methods select queries for which model uncertainty is highest. Most closely related are uncertainty sampling methods for probabilistic models, including least-confidence (Culotta and McCallum, 2005), margin sampling (Scheffer et al., 2001), and entropy-based methods.
Disagreement-based strategies query instances that maximize disagreement among a group of models (Seung et al., 1992). The distribution over a single model's parameters can also be treated as this ``group'' of distinct models, as has been done for neural models (Gal et al., 2017). Such methods are closely related to the feature entropy querying policy that we explore.
Another class of forward-looking methods incorporates information about how models would change if a given data-point were observed. Previous work includes methods that sample instances based on expected loss reduction (Roy and McCallum, 2001), expected information gain (MacKay, 1992), and expected gradient length (Settles et al., 2007). These methods are closely related to the policies based on information-gain that we explore.
Our hybrid policies are also related to previous work on dynamic selection between multiple active learning policies, such as DUAL (Donmez et al., 2007), which dynamically switches between density and uncertainty-based strategies.
The model we propose is also related to a body of work in computational and theoretical linguistics focused on phonotactic learning. Much of this work, largely inspired by Hayes and Wilson (2008), seeks to discover and/or parameterize models of phonotactic acceptability on the basis of only positive data, in line with common assumptions about infant language acquisition (Albright, 2009; Adriaans and Kager, 2010; Linzen and O’Donnell, 2015; Futrell et al., 2017; Mirea and Bicknell, 2019; Gouskova and Gallagher, 2020; Mayer and Nelson, 2020; Dai et al., 2023; Dai, to appear). Our work differs from these in that we are explicitly not seeking to model phonotactic learning from the infant's point of view, instead drawing inspiration from the strategy of a linguist working with a competent native speaker to discover linguistic structure via iterated querying. Practically, this means that our model can make use of both positive and negative data, and also takes an active role in seeking out the data it will learn from.
7 Conclusion
We have described a method for parameterizing a formal model of language via efficient, iterative querying of a black-box agent. We demonstrated that on an in-distribution set of toy languages, our query policies consistently outperform baselines numerically, including a statistically significant improvement for the most effective policy. The model struggles more on out-of-distribution languages, though in all cases the query policies are numerically comparable to the best baseline. We note that one factor making it difficult for the query policies to achieve consistently significant improvements over the baselines is the small number of seeds, which exhibit nontrivial variance, particularly for the hybrid policies. Future work may address this with more robust experiments.
Acknowledgements
Thanks to members of the audience at Interactions between Formal and Computational Linguistics (IFLG) Seminar hosted by the Linguistique Informatique, Formelle et de Terrain group, as well as two anonymous SCiL reviewers, for helpful questions and comments.
We acknowledge the following funding sources: MIT-IBM Watson AI Lab (CB, RL), NSF GRFP grant number 2023357727 (AR), a MIT Shillman Fellowship (AR), a MIT Dean of Science Fellowship (AM), and NSF IIS-2212310 (JA, AR).
References
- Adriaans and Kager (2010) Frans Adriaans and René Kager. 2010. Adding generalization to statistical learning: The induction of phonotactics from continuous speech. Journal of Memory and Language, 62(3):311–331.
- Albright (2009) Adam Albright. 2009. Feature-based generalisation as a source of gradient acceptability. Phonology, 26(1):9–41.
- Bender et al. (2021) Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 610–623.
- Casali (2003) Roderic F. Casali. 2003. [ATR] value asymmetries and underlying vowel inventory structure in Niger-Congo and Nilo-Saharan.
- Casali (2008) Roderic F. Casali. 2008. ATR harmony in African languages. Language and Linguistics Compass, 2(3):496–549.
- Casali (2016) Roderic F. Casali. 2016. Some inventory-related asymmetries in the patterning of tongue root harmony systems. Studies in African Linguistics, pages 96–99.
- Chomsky and Halle (1968) Noam Chomsky and Morris Halle. 1968. The sound pattern of English. Harper & Row New York.
- Culotta and McCallum (2005) Aron Culotta and Andrew McCallum. 2005. Reducing labeling effort for structured prediction tasks. In Proceedings of the 20th National Conference on Artificial Intelligence - Volume 2, AAAI'05, page 746–751. AAAI Press.
- Dai (to appear) Huteng Dai. to appear. An exception-filtering approach to phonotactic learning. Phonology.
- Dai et al. (2023) Huteng Dai, Connor Mayer, and Richard Futrell. 2023. Rethinking representations: A log-bilinear model of phonotactics. Proceedings of the Society for Computation in Linguistics, 6(1):259–268.
- Donmez et al. (2007) Pinar Donmez, Jaime G. Carbonell, and Paul N. Bennett. 2007. Dual strategy active learning. In Proceedings of the 18th European Conference on Machine Learning, ECML '07, page 116–127, Berlin, Heidelberg. Springer-Verlag.
- Futrell et al. (2017) Richard Futrell, Adam Albright, Peter Graff, and Timothy J O’Donnell. 2017. A generative model of phonotactics. Transactions of the Association for Computational Linguistics, 5:73–86.
- Gal et al. (2017) Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, pages 1183–1192. JMLR.org.
- Gouskova and Gallagher (2020) Maria Gouskova and Gillian Gallagher. 2020. Inducing nonlocal constraints from baseline phonotactics. Natural Language & Linguistic Theory, 38:77–116.
- Hayes and Wilson (2008) Bruce Hayes and Colin Wilson. 2008. A maximum entropy model of phonotactics and phonotactic learning. Linguistic inquiry, 39(3):379–440.
- Hewitt and Manning (2019) John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138.
- Lewis and Gale (1994) David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In SIGIR '94, pages 3–12, London. Springer London.
- Linzen and O’Donnell (2015) Tal Linzen and Timothy O’Donnell. 2015. A model of rapid phonotactic generalization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1126–1131.
- MacKay (1992) David J. C. MacKay. 1992. Information-Based Objective Functions for Active Data Selection. Neural Computation, 4(4):590–604.
- Mayer and Nelson (2020) Connor Mayer and Max Nelson. 2020. Phonotactic learning with neural language models. Society for Computation in Linguistics, 3(1).
- Mirea and Bicknell (2019) Nicole Mirea and Klinton Bicknell. 2019. Using LSTMs to assess the obligatoriness of phonological distinctive features for phonotactic learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1595–1605.
- Mohamed et al. (2022) Abdelrahman Mohamed, Hung-yi Lee, Lasse Borgholt, Jakob D Havtorn, Joakim Edin, Christian Igel, Katrin Kirchhoff, Shang-Wen Li, Karen Livescu, Lars Maaløe, et al. 2022. Self-supervised speech representation learning: A review. IEEE Journal of Selected Topics in Signal Processing.
- Rose (2018) Sharon Rose. 2018. ATR vowel harmony: New patterns and diagnostics. In Proceedings of the Annual Meetings on Phonology, volume 5.
- Roy and McCallum (2001) Nicholas Roy and Andrew McCallum. 2001. Toward optimal active learning through monte carlo estimation of error reduction. In International Conference on Machine Learning.
- Scheffer et al. (2001) Tobias Scheffer, Christian Decomain, and Stefan Wrobel. 2001. Active hidden Markov models for information extraction. In Advances in Intelligent Data Analysis, pages 309–318, Berlin, Heidelberg. Springer Berlin Heidelberg.
- Settles et al. (2007) Burr Settles, Mark Craven, and Soumya Ray. 2007. Multiple-instance active learning. In Advances in Neural Information Processing Systems, volume 20. Curran Associates, Inc.
- Seung et al. (1992) H. S. Seung, M. Opper, and H. Sompolinsky. 1992. Query by committee. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT '92, page 287–294, New York, NY, USA. Association for Computing Machinery.
- Zhang et al. (2022) Zhisong Zhang, Emma Strubell, and Eduard Hovy. 2022. A survey of active learning for natural language processing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6166–6190, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Out-of-distribution analysis

Policy | log(log(·)) | prior | s | Median mean-AUC | Std. err.
---|---|---|---|---|---
Info. gain / train (model) | 0.5 | 0.1 | ∞ | 0.973 | 0.004
Info. gain / train (history) | 1 | 0.1 | ∞ | 0.970 | 0.006
Info. gain / train (mixed) | 2 | 0.2 | ∞ | 0.969 | 0.005
Information gain | 0.25 | 0.025 | ∞ | 0.966 | 0.004
Label entropy | 0.1 | 0.1 | ∞ | 0.964 | 0.009
Train (baseline) | 1 | 0.1 | ∞ | 0.947 | 0.007
Uniform (baseline) | 1 | 0.1 | 1 | 0.940 | 0.008

Upper-bound analysis

Policy | log(log(·)) | prior | s | Median mean-AUC | Std. err.
---|---|---|---|---|---
Info. gain / train (mixed) | 0.25 | 0.1 | ∞ | 0.977 | 0.010
Information gain | 0.1 | 0.025 | ∞ | 0.975 | 0.002
Info. gain / train (history) | 0.1 | 0.05 | ∞ | 0.974 | 0.013
Info. gain / train (model) | 1 | 0.001 | 1 | 0.973 | 0.009
Label entropy | 0.5 | 0.05 | 1 | 0.968 | 0.011
Uniform (baseline) | 0.5 | 0.025 | 1 | 0.958 | 0.010
Train (baseline) | 8 | 0.35 | 1 | 0.932 | 0.003

Table 2: Selected hyperparameter values and median mean-AUC (with standard error) for the out-of-distribution and upper-bound analyses.
Appendix A Phonological features for Toy Languages
As described in §5.1, the ATR Vowel Harmony language is based on the categorization of vowels as [+ATR], [-ATR], or unspecified. The features [high] and [low] also serve to distinguish vowels in the language, but are not governed by a phonotactic. In contrast, any of the 512 logically possible trigrams of specified phonological features may be penalized for the procedurally-generated languages. Table 3 displays the phonological features for each of the vowels in the languages.
Appendix B Hyperparameters for out-of-distribution and upper-bound analyses
In §5.3, we described the hyperparameters of our grammatical model and the process by which values were selected for the out-of-distribution analysis. These selected hyperparameter values are presented in Table 2.
Vowel | [high] | [low] | [ATR]
---|---|---|---
i | + | − | +
ɪ | + | − | −
e | − | − | +
ɛ | − | − | −
a | − | + | 0

Table 3: Phonological feature specifications for the vowels of the toy languages.
Appendix C Query Policy Implementation
We now revisit the query strategies introduced in §3 and describe how they are implemented for the model described in §4. In particular, under the described generative model, a string is judged acceptable (up to noise) exactly when none of the features active for it is penalized, as described above.
Let $\rho(w)$ be the probability of label 1 for input $w$ under the variational posterior; this is equivalent to the probability of all features active in $w$ being "off". Let $q_i$ indicate the probability of parameter $\gamma_i$ being "on" (i.e., penalized) under the current variational posterior $q$, so that $\rho(w) = \prod_{i : \phi_i(w) = 1} (1 - q_i)$. For this model, the quantities used by the query policies in §3 are computed as follows:
Label Entropy
Policy $\pi_{\mathrm{entropy}}$ selects $w_t$ according to:

$$w_t = \arg\max_{w \in \mathcal{W}} \; -\big[\rho(w)\log\rho(w) + (1 - \rho(w))\log(1 - \rho(w))\big].$$
Expected Information Gain
Policy $\pi_{\mathrm{eig}}$ selects $w_t$ according to:

$$w_t = \arg\max_{w \in \mathcal{W}} \; \rho(w)\,\mathrm{IG}(w, 1) + (1 - \rho(w))\,\mathrm{IG}(w, 0),$$

where $\mathrm{IG}(w, y)$ is given by

$$\mathrm{IG}(w, y) = H\big[q(\gamma)\big] - H\big[q'(\gamma)\big],$$

with $q'$ the variational posterior after updating on $(w, y)$, and $H[q(\gamma)]$ is given by

$$H\big[q(\gamma)\big] = -\sum_i \big[q_i \log q_i + (1 - q_i)\log(1 - q_i)\big].$$
Appendix D Derivation of the Update Rule
We want to compute the posterior $p(\gamma \mid \mathcal{D})$, which is intractable. Thus, we approximate it with a variational posterior $q(\gamma) = \prod_i q_i(\gamma_i)$, composed of a Bernoulli distribution for each $\gamma_i$. We further assume that the individual dimensions of the posterior (the individual components of $\gamma$) have values that are not correlated. This allows us to perform coordinate ascent on each dimension of the posterior separately; thus we express the following derivation in terms of $q_i(\gamma_i)$, where $i$ is the index in the feature n-gram vector.
The variational posterior is optimized to minimize the KL divergence between the true posterior and $q$; we do this by maximizing the ELBO.
The coordinate ascent update rule for each dimension of the posterior, that is, for each latent variable, is:

$$\log q_i^*(\gamma_i) = \mathbb{E}_{q_{-i}}\big[\log p(\gamma, \mathcal{D})\big] + \mathrm{const.}$$

Given the generative process, we can rewrite:

$$\log q_i^*(\gamma_i) = \mathbb{E}_{q_{-i}}\Big[\log p(\gamma_i) + \sum_{j \neq i} \log p(\gamma_j) + \sum_{(w, y) \in \mathcal{D}} \log p(y \mid w, \gamma)\Big] + \mathrm{const.}$$

The term $\mathbb{E}_{q_{-i}}\big[\sum_{j \neq i} \log p(\gamma_j)\big]$ is constant across values of $\gamma_i$ (expressing the lack of dependence between parameters), so we can rewrite the update rule as:

$$\log q_i^*(\gamma_i) = \log p(\gamma_i) + \mathbb{E}_{q_{-i}}\Big[\sum_{(w, y) \in \mathcal{D}} \log p(y \mid w, \gamma)\Big] + \mathrm{const.}$$

Further, since $\log p(y \mid w, \gamma)$ is constant across values of $\gamma_i$ for any $w$ in which feature $i$ is not active, we can rewrite it once more:

$$\log q_i^*(\gamma_i) = \log p(\gamma_i) + \mathbb{E}_{q_{-i}}\Big[\sum_{(w, y) : \phi_i(w) = 1} \log p(y \mid w, \gamma)\Big] + \mathrm{const.}$$
Since our approximating distribution is Bernoulli, we describe in turn the treatment of each of the two possible label values. First, we derive the update rule for when the label is acceptable ($y = 1$). In what follows, let $\epsilon$ denote the probability that an observed label is noisy (flipped).
We know that there are two subsets of cases where this can happen. In a $1 - \epsilon$ proportion of them, $y = 1$ is a correct label, which can only happen when $\gamma_j = 0$ for all features $j$ active in $w$. This occurs with probability $\prod_{j : \phi_j(w) = 1} (1 - q_j)$. There is also, then, the $\epsilon$ proportion of cases in which $y = 1$ is an incorrect label, and the true judgement is unacceptable. Under this assumption, at least 1 active feature is on, which occurs with probability $1 - \prod_{j : \phi_j(w) = 1} (1 - q_j)$.
We can rewrite the expectation term to get approximate probabilities for both the $\gamma_i = 0$ and $\gamma_i = 1$ cases when $y = 1$:

$$\mathbb{E}_{q_{-i}}\big[p(y = 1 \mid w, \gamma_i = 0, \gamma_{-i})\big] = (1 - \epsilon) \prod_{j \neq i : \phi_j(w) = 1} (1 - q_j) + \epsilon \Big(1 - \prod_{j \neq i : \phi_j(w) = 1} (1 - q_j)\Big).$$

If $\gamma_i = 1$, then $p(y = 1 \mid w, \gamma) = \epsilon$ for all settings of the remaining parameters, since we know that $y = 1$ must be a noisy label. Thus:

$$\mathbb{E}_{q_{-i}}\big[p(y = 1 \mid w, \gamma_i = 1, \gamma_{-i})\big] = \epsilon.$$

We can normalize these quantities to get a proper probability distribution, i.e., we can set $q_i(\gamma_i = 1)$ to the following quantity:

$$q_i(\gamma_i = 1) = \frac{p(\gamma_i = 1)\,\mathbb{E}_{q_{-i}}\big[p(y = 1 \mid w, \gamma_i = 1, \gamma_{-i})\big]}{\sum_{v \in \{0, 1\}} p(\gamma_i = v)\,\mathbb{E}_{q_{-i}}\big[p(y = 1 \mid w, \gamma_i = v, \gamma_{-i})\big]}.$$

Using the expression $\rho_{-i}(w)$ as shorthand for $\prod_{j \neq i : \phi_j(w) = 1} (1 - q_j)$, this results in the following update rule:

$$q_i(\gamma_i = 1) = \frac{p(\gamma_i = 1)\,\epsilon}{p(\gamma_i = 1)\,\epsilon + p(\gamma_i = 0)\big[(1 - \epsilon)\,\rho_{-i}(w) + \epsilon\,(1 - \rho_{-i}(w))\big]}.$$
In practice, we update over batches of inputs/outputs rather than single datapoints, i.e., the likelihood terms above are accumulated over all $(w, y) \in \mathcal{D}$ in which feature $i$ is active, rather than over a single datapoint.
We update each $q_i$ either for a fixed number of steps $s$, or until convergence, i.e., when:

$$\max_i \big|q_i^{(k)} - q_i^{(k-1)}\big| \le \tau,$$

where $\tau$ is an error threshold.
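Putting the pieces together, the sketch below implements one version of the batch coordinate-ascent update as reconstructed above: each $q_i$ is recomputed from the prior and the (approximate) likelihood of the observed labels under the two settings of $\gamma_i$, repeating until the largest per-coordinate change falls below the threshold. It should be read as an illustration of the reconstruction, not as the exact code used in the experiments.

```python
from typing import List, Tuple

def variational_update(
    observations: List[Tuple[List[int], int]],  # (active feature indices in w, label y)
    q: List[float],          # current q_i = q(gamma_i = 1) for each feature
    alpha: float,            # prior probability of a feature being penalized
    eps: float,              # probability of a flipped (noisy) label -- sketch assumption
    tau: float = 1e-4,       # convergence threshold on the largest coordinate change
    max_steps: int = 1000,
) -> List[float]:
    q = list(q)
    for _ in range(max_steps):
        max_delta = 0.0
        for i in range(len(q)):
            like_on, like_off = 1.0, 1.0
            for active, y in observations:
                if i not in active:
                    continue                      # terms without feature i are constant in gamma_i
                rho = 1.0
                for j in active:
                    if j != i:
                        rho *= 1.0 - q[j]         # prob. all *other* active features are off
                # gamma_i = 1: the word is truly unacceptable, so y = 1 must be noise.
                p1_on = eps
                # gamma_i = 0: acceptability depends on the remaining active features.
                p1_off = (1.0 - eps) * rho + eps * (1.0 - rho)
                like_on *= p1_on if y == 1 else 1.0 - p1_on
                like_off *= p1_off if y == 1 else 1.0 - p1_off
            a_on = alpha * like_on
            a_off = (1.0 - alpha) * like_off
            denom = a_on + a_off
            if denom == 0.0:
                continue                          # degenerate case; leave q_i unchanged
            new_qi = a_on / denom
            max_delta = max(max_delta, abs(new_qi - q[i]))
            q[i] = new_qi
        if max_delta <= tau:                      # stop once all coordinates have converged
            break
    return q
```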