Meta-learning techniques

What are the meta-learning approaches (methods)?
Are bagging, boosting, and similar techniques examples of meta-learning?
Is there a good reference for meta-learning techniques?
Please include a description in your answer.

Tags: machine-learning data-mining
asked by jimmy
1 Answer
I'm familiar with two meanings of "meta-learning."




1. Learning methods that allow a model to adapt quickly to new tasks from a small amount of data. One example is MAML and related models:

"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" by Chelsea Finn, Pieter Abbeel, and Sergey Levine:

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
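To make the inner/outer loop concrete, here is a minimal first-order MAML sketch on toy linear-regression tasks (the first-order variant drops the second-derivative term of full MAML, and the task distribution, model, and learning rates here are illustrative assumptions, not details from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_task():
        # A toy "task": a random linear function y = a*x + b (illustrative only).
        a, b = rng.uniform(0.5, 2.0, size=2)
        def draw(n):
            x = rng.uniform(-1.0, 1.0, size=n)
            return x, a * x + b
        return draw

    def mse_grad(w, x, y):
        # Gradient of mean-squared error for the linear model w[0]*x + w[1].
        err = w[0] * x + w[1] - y
        return np.array([2.0 * np.mean(err * x), 2.0 * np.mean(err)])

    w = np.zeros(2)                  # meta-learned initialization
    inner_lr, outer_lr = 0.1, 0.01

    for _ in range(5000):
        draw = sample_task()
        x_s, y_s = draw(10)          # support set: adapt to the sampled task
        x_q, y_q = draw(10)          # query set: evaluate the adapted model
        w_task = w - inner_lr * mse_grad(w, x_s, y_s)   # one inner gradient step
        # First-order MAML outer update: move the initialization using the
        # query-set gradient at the adapted parameters, ignoring second
        # derivatives through the inner step.
        w -= outer_lr * mse_grad(w_task, x_q, y_q)

    print("meta-learned initialization:", w)

The outer update pushes w toward an initialization from which a single inner step already fits a new task well, which is exactly the "easy to fine-tune" property the abstract describes.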





2. The second meaning of meta-learning is hyperparameter tuning, such as using LIPO or Bayesian optimization to find the best hyperparameters of a machine-learning model (a neural network, an SVM, a boosted tree ensemble). I don't have a reference at hand for this usage, since I've only seen it used this way informally (comments on stats.SE posts, or threads in r/MachineLearning).
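As a concrete stand-in for this usage, here is a hyperparameter-tuning sketch using random search in scikit-learn (random search rather than LIPO or a Gaussian-process surrogate, to keep dependencies minimal; the model and search ranges are arbitrary illustrative choices):

    from scipy.stats import loguniform
    from sklearn.datasets import load_iris
    from sklearn.model_selection import RandomizedSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Log-uniform distributions are the usual choice for scale-type
    # hyperparameters such as C and gamma; the ranges are illustrative.
    param_distributions = {
        "C": loguniform(1e-3, 1e3),
        "gamma": loguniform(1e-4, 1e1),
    }

    search = RandomizedSearchCV(
        SVC(),              # the base model whose hyperparameters we tune
        param_distributions,
        n_iter=25,          # number of sampled configurations
        cv=5,               # 5-fold cross-validation per configuration
        random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)

Bayesian optimization replaces the independent random draws with a surrogate model that proposes promising configurations, but the workflow is the same: propose hyperparameters, fit, score, repeat.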


I'm not familiar with a usage of "meta-learning" that includes bagging and boosting as examples. Bagging and boosting are usually classified as ensemble methods; they underlie, for example, random forests and gradient-boosted trees.
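For contrast with the two meta-learning senses above, here is a brief sketch of bagging and boosting used as ordinary ensemble methods in scikit-learn (dataset and settings are arbitrary illustrative choices):

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    # Bagging: train many trees on bootstrap resamples and average their votes.
    bagging = BaggingClassifier(n_estimators=50, random_state=0)

    # Boosting (AdaBoost): fit learners sequentially, reweighting the examples
    # that earlier learners misclassified.
    boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

    for name, model in [("bagging", bagging), ("boosting", boosting)]:
        scores = cross_val_score(model, X, y, cv=5)
        print(name, "mean accuracy:", scores.mean().round(3))

Both operate within a single dataset and a single model family, which is why they are usually filed under ensemble methods rather than meta-learning.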
answered by Sycorax