Jaime Guillermo Carbonell is an American computer scientist and AI researcher focusing on machine learning and machine translation. He received a Ph.D. in computer science from Yale University in 1979 and is the Allen Newell Professor at Carnegie Mellon University, as well as co-founder and chairman of Carnegie Speech Incorporated[1] and Wisdom Technologies Corporation[2]. He invented multiple well-known algorithms and methods, including: proactive machine learning for multi-oracle cost-sensitive active learning; linked conditional random fields for predicting tertiary and quaternary protein folds; maximal marginal relevance for information novelty, retrieval, and summarization; topic-conditioned modeling for novelty detection; a symmetric optimal phrasal alignment method for trainable example-based and statistical machine translation; series-anomaly modeling for financial fraud detection and syndromic surveillance; knowledge-based interlingual machine translation; transformational analogy for case-based reasoning; derivational analogy for reconstructive justification-based reasoning; robust case-frame parsing; and seeded version-space learning (with polynomial complexity, versus Mitchell's original version-space learning, which exhibits exponential complexity). He also developed improvements to several other machine learning algorithms. His current research foci include robust statistical learning and language models for mapping protein sequences to 3D structure and inferring functional properties, automated transfer-rule learning for machine translation, active and proactive machine learning, and context-based machine translation[3].
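Among the methods listed above, maximal marginal relevance (MMR) can be illustrated concretely: it greedily selects items that are relevant to a query while penalizing redundancy with items already chosen. The following is a minimal sketch, not Carbonell's original implementation; the similarity functions `query_sim` and `pair_sim` are assumed to be supplied by the caller.

```python
def mmr(candidates, query_sim, pair_sim, lam=0.7, k=3):
    """Greedy maximal marginal relevance selection (sketch).

    Scores each remaining candidate d as
        lam * query_sim(d) - (1 - lam) * max_{s in selected} pair_sim(d, s)
    and repeatedly picks the highest-scoring one, trading off
    relevance (first term) against novelty (second term).
    """
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        best = max(
            remaining,
            key=lambda d: lam * query_sim(d)
            - (1 - lam) * max((pair_sim(d, s) for s in selected), default=0.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lam` close to 1 the ranking is driven purely by query relevance; lowering `lam` promotes diverse (novel) items even when they are individually less relevant, which is the behavior MMR was designed to provide for retrieval and summarization.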
Selected Publications
Jaime Carbonell, Ryszard Michalski, Tom Mitchell (1983). An Overview of Machine Learning.
Jaime Carbonell (1983). Learning by Analogy: Formulating and Generalizing Plans from Past Experience.
External Links
References