MobileNet Transfer Learning: Train a Cat/Dog Classifier, Generate a kmodel, and Run It on the K210

Posted on 2019-09-05 · Updated on 2019-09-07 · Category: k210

Freeze MobileNet's pretrained weights, train a custom cat/dog classifier on top, convert the result to a kmodel, and run it on the K210.

Code download:

https://codeload.github.com/AIWintermuteAI/transfer_learning_sipeed/zip/master

Test code (downloads the MobileNet model and runs a sanity check on the local environment):

import keras
import numpy as np
from keras.preprocessing import image
from keras.models import Model
from keras.applications import imagenet_utils
from keras.applications import MobileNet
from keras.applications.mobilenet import preprocess_input

# Stock ImageNet-pretrained MobileNet (224x224 input)
mobile = keras.applications.mobilenet.MobileNet()

def prepare_image(file):
    img_path = ''
    img = image.load_img(img_path + file, target_size=(224, 224))
    img_array = image.img_to_array(img)
    image.save_img(img_path + file, img_array)  # note: overwrites the original file with the resized copy
    img_array_expanded_dims = np.expand_dims(img_array, axis=0)
    return keras.applications.mobilenet.preprocess_input(img_array_expanded_dims)

preprocessed_image = prepare_image('German_Shepherd.jpg')
predictions = mobile.predict(preprocessed_image)
results = imagenet_utils.decode_predictions(predictions)  # top-5 (class_id, class_name, probability) tuples
print(results)

preprocessed_image = prepare_image('24.jpg')
predictions = mobile.predict(preprocessed_image)
results = imagenet_utils.decode_predictions(predictions)
print(results)

preprocessed_image = prepare_image('48.jpg')
predictions = mobile.predict(preprocessed_image)
results = imagenet_utils.decode_predictions(predictions)
print(results)

Training code (put the images to classify into an images folder, e.g. create cat and dog subfolders and fill them with the corresponding pictures):

import keras
import numpy as np
from keras import backend as K
from keras.optimizers import Adam
from keras.metrics import categorical_crossentropy
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras.models import Model
from keras.applications import imagenet_utils
from keras.layers import Dense, GlobalAveragePooling2D, Dropout
from mobilenet_sipeed.mobilenet import MobileNet  # Sipeed's K210-compatible MobileNet
from keras.applications.mobilenet import preprocess_input

def prepare_image(file):
    img_path = ''
    img = image.load_img(img_path + file, target_size=(128, 128))
    img_array = image.img_to_array(img)
    img_array_expanded_dims = np.expand_dims(img_array, axis=0)
    return keras.applications.mobilenet.preprocess_input(img_array_expanded_dims)

# 128x128 input, width multiplier alpha=0.75, include_top=False: keep only the
# pretrained convolutional feature extractor
base_model = MobileNet(input_shape=(128, 128, 3), alpha=0.75, depth_multiplier=1,
                       dropout=0.001, include_top=False, weights='imagenet',
                       classes=1000, backend=keras.backend, layers=keras.layers,
                       models=keras.models, utils=keras.utils)

# Custom classification head: dense layers let the model learn more complex
# functions on top of the frozen features; the final softmax covers the 2 classes
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(100, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(50, activation='relu')(x)
preds = Dense(2, activation='softmax')(x)

# Build the full model from the MobileNet input to the new softmax output
model = Model(inputs=base_model.input, outputs=preds)

for i, layer in enumerate(model.layers):
    print(i, layer.name)

# Freeze the first 86 layers (the pretrained backbone); train only the new head
for layer in model.layers[:86]:
    layer.trainable = False
for layer in model.layers[86:]:
    layer.trainable = True

train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

train_generator = train_datagen.flow_from_directory('images',
                                                    target_size=(128, 128),
                                                    color_mode='rgb',
                                                    batch_size=32,
                                                    class_mode='categorical',
                                                    shuffle=True)
model.summary()
# Adam optimizer, categorical cross-entropy loss, accuracy as the metric
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])

step_size_train = train_generator.n // train_generator.batch_size
model.fit_generator(generator=train_generator, steps_per_epoch=step_size_train, epochs=10)

model.save('my_model.h5')
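
The order of the two softmax outputs follows the class indices that flow_from_directory assigns, which is alphabetical by folder name (so cat = 0, dog = 1 here). A quick sanity check you can add to the training script above:

# Print the class-name -> index mapping; the labels.txt written to the
# SD card later must follow this order.
print(train_generator.class_indices)  # e.g. {'cat': 0, 'dog': 1}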

Test code (check the trained model's predictions):

import keras
import numpy as np
from keras.preprocessing import image
from keras.models import Model
from keras.applications import imagenet_utils
from keras.models import load_model

model = load_model('my_model.h5')

def prepare_image(file):
    img_path = ''
    img = image.load_img(img_path + file, target_size=(128, 128))
    img_array = image.img_to_array(img)
    image.save_img(img_path + file, img_array)  # note: overwrites the original file with the resized copy
    img_array_expanded_dims = np.expand_dims(img_array, axis=0)
    return keras.applications.mobilenet.preprocess_input(img_array_expanded_dims)

preprocessed_image = prepare_image('cat.1.jpg')
predictions_cat = model.predict(preprocessed_image)
print('cat', predictions_cat)

preprocessed_image = prepare_image('dog.1.jpg')
predictions_dog = model.predict(preprocessed_image)
print('dog', predictions_dog)

Converting h5 to tflite

tflite_convert --output_file=my_model.tflite --keras_model_file=my_model.h5
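
If you prefer to convert from Python instead of the CLI, the TF 1.x converter API can do the same thing; a minimal sketch, assuming TensorFlow 1.x (1.13 or later; the entry points differ in TF 2.x):

# Convert the Keras .h5 into a .tflite flatbuffer via the TF 1.x Python API
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model_file('my_model.h5')
tflite_model = converter.convert()
with open('my_model.tflite', 'wb') as f:
    f.write(tflite_model)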

Put some test images into the images folder; they serve as the calibration dataset for quantization during the kmodel conversion.
Converting tflite to kmodel

bash tflite2kmodel.sh workspace/my_model.tflite

This produces my_model.kmodel.

Use kflash to burn the kmodel to a fixed flash address, e.g. 0x200000 (it must match the address passed to kpu.load in the MicroPython code below).

Create a labels.txt file on the SD card, one class name per line, in the same order as the class indices above:

cat
dog

The MicroPython code:

import sensor, image, lcd, time
import KPU as kpu

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((128, 128))      # crop the frame to the model's 128x128 input
#sensor.set_windowing((224, 224))
sensor.set_vflip(1)
sensor.run(1)
lcd.clear()
lcd.draw_string(100, 96, "MobileNet Demo")
lcd.draw_string(100, 112, "Loading labels...")
f = open('/sd/labels.txt', 'r')
labels = f.readlines()
f.close()
task = kpu.load(0x200000)             # load the kmodel from the flash address it was burned to
clock = time.clock()
while(True):
    img = sensor.snapshot()
    #img2 = img.resize(128, 128)
    clock.tick()
    fmap = kpu.forward(task, img)     # run inference; fmap holds the softmax outputs
    fps = clock.fps()
    plist = fmap[:]
    pmax = max(plist)                 # highest probability and its class index
    max_index = plist.index(pmax)
    a = lcd.display(img, oft=(0, 0))
    lcd.draw_string(0, 224, "%.2f:%s " % (pmax, labels[max_index].strip()))
    print(plist)
a = kpu.deinit(task)
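
If you would rather not flash the model, kpu.load in MaixPy also accepts a filesystem path, so a copy of the kmodel on the SD card can be loaded directly; a one-line sketch, assuming the file is named my_model.kmodel:

task = kpu.load("/sd/my_model.kmodel")  # load from the SD card instead of a flash address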

Issue: Converting Keras model, Conv2d error

The original reply:
Perhaps you used wrong parameters? https://github.com/kendryte/nncase#supported-layers

When using TensorFlow Conv2d/DepthwiseConv2d kernel=3x3 stride=2 padding=same, you must first use tf.pad([[0,0],[1,1],[1,1],[0,0]]) to pad the input and then use Conv2d/DepthwiseConv2d with valid padding.


Supported layers:
https://github.com/dotnetGame/nncase#supported-layers

Another user reported: "I managed to get it to work by making sure that every Conv2d layer has 'same' padding. Do you have any tips so that the conversion from h5 to kmodel results in a small memory footprint? For now, when I convert, the kmodel I get is as large as the .pb graph."
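
A minimal Keras sketch of the workaround described above: instead of a strided 3x3 Conv2D with 'same' padding, pad the input symmetrically (the equivalent of tf.pad([[0,0],[1,1],[1,1],[0,0]])) and convolve with 'valid' padding. The layer sizes here are illustrative only:

from keras.layers import Input, ZeroPadding2D, Conv2D
from keras.models import Model

inp = Input(shape=(128, 128, 3))
x = ZeroPadding2D(padding=((1, 1), (1, 1)))(inp)            # explicit symmetric padding
x = Conv2D(32, (3, 3), strides=(2, 2), padding='valid')(x)  # then a 'valid' strided conv
model = Model(inputs=inp, outputs=x)
model.summary()  # output spatial size is 64x64, matching 'same' padding with stride 2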

References:

https://www.instructables.com/id/Transfer-Learning-With-Sipeed-MaiX-and-Arduino-IDE/
https://bbs.sipeed.com/t/topic/986
