ML Practicum: Image Classification
Leveraging Pretrained Models
Training a convolutional neural network to perform image classification tasks typically requires an extremely large amount of training data, and it can be very time-consuming, taking days or even weeks to complete. But what if you could leverage existing image models trained on enormous datasets, such as those available via TensorFlow-Slim, and adapt them for use in your own classification tasks?
One common technique for leveraging pretrained models is feature extraction: retrieving the intermediate representations produced by the pretrained model and feeding them into a new model as input. For example, if you're training an image-classification model to distinguish different types of vegetables, you could feed training images of carrots, celery, and so on into a pretrained model, and then extract the features from its final convolution layer, which captures everything the model has learned about the images' higher-level attributes: color, texture, shape, and so on. Then, when building your new classification model, instead of starting from raw pixels, you can use these extracted features as input and add your fully connected classification layers on top. To increase performance when using feature extraction with a pretrained model, engineers often fine-tune the weight parameters applied to the extracted features.
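As an illustration only, the following is a minimal sketch of this workflow using tf.keras.applications rather than TensorFlow-Slim. The choice of MobileNetV2, the 160×160 input size, the five-class vegetable example, the number of unfrozen layers, and the learning rates are all assumptions for demonstration, not values from this article.

```python
import tensorflow as tf

# Load a pretrained convolutional base (ImageNet weights) without its
# original classification head. MobileNetV2 is just an example base.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,
    weights="imagenet",
)

# Feature extraction: freeze the pretrained weights so only the new
# classification layers are trained.
base_model.trainable = False

# Add a new classification head on top of the extracted features.
num_classes = 5  # e.g., five vegetable categories (assumption)
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_dataset, epochs=10, validation_data=val_dataset)

# Fine-tuning: after the new head has converged, unfreeze the top of the
# pretrained base and keep training with a much lower learning rate so
# the pretrained weights are only adjusted slightly.
base_model.trainable = True
for layer in base_model.layers[:-20]:  # keep earlier layers frozen
    layer.trainable = False

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_dataset, epochs=5, validation_data=val_dataset)
```

Note that fine-tuning reuses the same model; only the trainable flags and the learning rate change between the two training phases.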
For a more in-depth exploration of feature extraction and fine-tuning when using pretrained models, see the following exercise.

Key terms: feature extraction, fine-tuning