From: http://www.cs.utexas.edu/
Ensemble Learning:
Ensemble Learning combines multiple learned models under the assumption that "two (or more) heads are better than one." The decisions of multiple hypotheses are combined in ensemble learning to produce more accurate results. Boosting and bagging are two popular approaches. Our work focuses on building diverse committees that are more effective than those built by existing methods and, in particular, are useful for active learning.
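As a concrete illustration, bagging can be sketched in a few lines: each committee member is trained on a bootstrap resample of the data, and the committee predicts by majority vote. The 1-nearest-neighbor base learner and the toy 1-D dataset below are hypothetical choices made for this sketch, not part of the methods described above.

```python
import random
from collections import Counter

def train_1nn(sample):
    # Hypothetical base learner for the sketch: a 1-nearest-neighbor
    # classifier over one bootstrap sample of (x, label) pairs.
    def predict(x):
        return min(sample, key=lambda p: abs(p[0] - x))[1]
    return predict

def bagging(data, n_models=11, seed=0):
    # Bagging: train each committee member on a bootstrap resample
    # (sampling with replacement), then combine by majority vote.
    rng = random.Random(seed)
    models = [train_1nn([rng.choice(data) for _ in data])
              for _ in range(n_models)]
    def committee(x):
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]
    return committee

# Toy 1-D task: the label is 1 when x > 5.
data = [(float(x), int(x > 5)) for x in range(11)]
vote = bagging(data)
```

Because the resamples differ, the members disagree on borderline inputs, which is exactly the diversity that committee-based methods exploit.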
Active Learning:
Active learning differs from passive "learning from examples" in that the learning algorithm itself attempts to select the most informative data for training. Since supervised labeling of data is expensive, active learning attempts to reduce the human effort needed to learn an accurate result by selecting only the most informative examples for labeling.
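One simple way to select informative examples is pool-based uncertainty sampling: query the unlabeled example closest to the current decision boundary, then retrain. The 1-D threshold learner and the oracle below are hypothetical stand-ins chosen to keep the sketch self-contained.

```python
def fit_threshold(labeled):
    # Hypothetical learner for the sketch: a 1-D threshold classifier
    # placed midway between the largest x labeled 0 and smallest labeled 1.
    zeros = [x for x, y in labeled if y == 0]
    ones = [x for x, y in labeled if y == 1]
    return (max(zeros) + min(ones)) / 2

def active_learn(pool, oracle, n_queries=4):
    # Seed with the two extreme points, then repeatedly query the
    # unlabeled example nearest the boundary (the most uncertain one).
    pool = sorted(pool)
    labeled = [(pool[0], oracle(pool[0])), (pool[-1], oracle(pool[-1]))]
    unlabeled = pool[1:-1]
    for _ in range(n_queries):
        thresh = fit_threshold(labeled)
        x = min(unlabeled, key=lambda x: abs(x - thresh))
        unlabeled.remove(x)
        labeled.append((x, oracle(x)))
    return fit_threshold(labeled)

# The oracle plays the role of the human labeler; labels are 1 when x >= 13.
boundary = active_learn(range(21), lambda x: int(x >= 13))
```

Here the learner pins down the boundary (between 12 and 13) with only 6 labels, whereas passive learning would label all 21 points.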
Transfer Learning:
Traditional machine learning algorithms operate under the assumption that learning for each new task starts from scratch, thus disregarding any knowledge they may have gained while learning in previous domains. Naturally, if the domains encountered during learning are related, this tabula rasa approach would waste both data and computer time to develop hypotheses that could have been recovered by simply examining and possibly slightly modifying previously acquired knowledge. Moreover, the knowledge learned in earlier domains could capture generally valid rules that are not easily recoverable from small amounts of data, thus allowing the algorithm to achieve even higher levels of accuracy than it would if it starts from scratch.
The field of transfer learning, which has witnessed a great increase in popularity in recent years, addresses the problem of how to leverage previously acquired knowledge in order to improve the efficiency and accuracy of learning in a new domain that is in some way related to the original one. In particular, our current research is focused on developing transfer learning techniques for Markov Logic Networks (MLNs), a recently developed approach to statistical relational learning.
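The contrast with the tabula rasa approach can be illustrated with parameter transfer, one simple form of transfer learning (a deliberately simpler setting than MLN transfer): a model trained on a data-rich source task initializes learning on a related, data-poor target task. The tasks, the logistic-regression learner, and all numbers below are hypothetical choices for this sketch.

```python
import math

def train_logreg(data, w=0.0, b=0.0, lr=0.1, steps=50):
    # Plain gradient descent on the logistic loss for a 1-D classifier.
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b

def logloss(data, w, b):
    eps = 1e-12
    total = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        total -= y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
    return total / len(data)

# Source task: plenty of data for a related boundary (x > 0).
source = [(x / 10, int(x > 0)) for x in range(-50, 51)]
w_src, b_src = train_logreg(source, steps=500)

# Target task: only a handful of examples for a nearby boundary.
target = [(-1.0, 0), (-0.3, 0), (0.5, 1), (1.2, 1)]

# Same training budget, different starting points:
w0, b0 = train_logreg(target, steps=20)                    # tabula rasa
w1, b1 = train_logreg(target, w=w_src, b=b_src, steps=20)  # transfer
```

With the same small budget of updates, starting from the source-task parameters yields a lower loss on the target data than starting from scratch, because the related boundary is "slightly modified" rather than relearned.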
Reinforcement Learning:
Reinforcement learning consists of a set of machine learning methods that address a particular kind of learning task, in which the learner is placed in an unknown environment and is allowed to take actions that bring it rewards and can change its state in the environment. In general terms, the goal of the agent is to develop a policy, a mapping from states to actions, that maximizes the reward it obtains while interacting with the environment.
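The state/action/reward loop can be made concrete with tabular Q-learning on a hypothetical 5-state chain, where the agent moves left or right and reaching the last state yields a reward. For reproducibility this sketch sweeps every state-action pair in order rather than exploring randomly; the update rule itself is the standard Q-learning update.

```python
N_STATES, ACTIONS = 5, (-1, +1)   # actions: step left or step right
GAMMA, ALPHA = 0.9, 0.5           # discount factor and learning rate

def step(s, a):
    # Deterministic toy environment: move within [0, 4]; reaching
    # state 4 (the terminal state) yields reward 1.
    s2 = max(0, min(N_STATES - 1, s + a))
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for sweep in range(100):
    for s in range(N_STATES - 1):          # state 4 is terminal
        for a in ACTIONS:
            s2, r, done = step(s, a)
            target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])

# The learned policy: the greedy action in each non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

The learned policy moves right in every state, and the Q-values decay by the discount factor with each step of distance from the reward.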
Unsupervised and Semi-Supervised Learning and Clustering:
In many learning tasks, there is a large supply of unlabeled data but insufficient labeled data, since labels can be expensive to generate. Semi-supervised learning combines labeled and unlabeled data during training to improve performance, and is applicable to both classification and clustering. In supervised classification, there is a known, fixed set of categories, and category-labeled training data is used to induce a classification function. In semi-supervised classification, training also exploits additional unlabeled data, frequently resulting in a more accurate classification function. In unsupervised clustering, an unlabeled dataset is partitioned into groups of similar examples, typically by optimizing an objective function that characterizes good partitions. In semi-supervised clustering, some labeled data is used along with the unlabeled data to obtain a better clustering.
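Self-training is one common way to combine labeled and unlabeled data (assumed here purely for illustration): fit a classifier on the few labeled points, pseudo-label the unlabeled points it classifies most confidently, add them to the training set, and repeat. The nearest-centroid classifier and the 1-D data below are hypothetical choices for this sketch.

```python
def centroids(labeled):
    # Per-class mean of the labeled (x, y) pairs.
    by_class = {}
    for x, y in labeled:
        by_class.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_class.items()}

def self_train(labeled, unlabeled, per_round=2):
    labeled, unlabeled = list(labeled), list(unlabeled)
    while unlabeled:
        cents = centroids(labeled)
        # Confidence = margin between the distances to the two centroids;
        # pseudo-label the most confident points first.
        def margin(x):
            d = sorted(abs(x - c) for c in cents.values())
            return d[1] - d[0]
        unlabeled.sort(key=margin, reverse=True)
        for x in unlabeled[:per_round]:
            labeled.append((x, min(cents, key=lambda y: abs(x - cents[y]))))
        unlabeled = unlabeled[per_round:]
    return centroids(labeled)

# Two labeled seeds plus unlabeled points drawn from the two clusters.
cents = self_train([(1.0, 0), (9.0, 1)], [0.5, 1.5, 2.0, 8.0, 8.5, 9.5])
```

The unlabeled points pull each centroid toward its cluster's true center, giving a better decision boundary than the two labeled seeds alone.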