🚀Top Tutorials for Deploying Custom YOLOv8🔥 on Android⚡️ (2024)


gary.TsAI(Taiwan A.I.)


Feb 19, 2023


In this tutorial, I’ll show you how to deploy YOLOv8 trained on your own custom dataset to an Android device, and I’ll walk you through every step along the way.


Utilizing YOLOv8 object detection on motion footage streamed from a GoPro to a mobile device can provide valuable information about the objects in the scene, including their location and type. This can be particularly useful when capturing footage of a hiking trail, helping to identify potential obstacles, hazards, and objects of interest.


In situations that call for quick and accurate object detection, running YOLOv8 directly on a mobile phone is invaluable. YOLOv8 is a deep learning-based object detection model that can rapidly and accurately detect objects in images or videos, and on a mobile device it can be used anytime and anywhere.


In order to deploy YOLOv8 with a custom dataset on an Android device, you’ll need to train a model, convert it to a format like TensorFlow Lite or ONNX, and include it in your app’s assets folder. Then, use Android Studio to create a project, add dependencies, load and parse the model, and load image data. Execute model inference, parse the output, and draw bounding boxes on the image to display the detected objects. Finally, install and run the app on an Android device. For practical applications, it is also important to optimize the model for mobile devices and address performance issues such as compression and acceleration.
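To make the workflow concrete before diving into the details, here is a minimal sketch of the training-and-export half of the pipeline using the ultralytics Python API. It mirrors the yolo CLI commands used step by step below; the chip.yaml dataset file and the 416 image size are the values used later in this tutorial, and the runs/detect/train/... output path is an assumption that may differ on your machine.

from ultralytics import YOLO

# start from the pretrained nano weights
model = YOLO("yolov8n.pt")

# train on the custom dataset described by chip.yaml (created in Step 1)
model.train(data="chip.yaml", epochs=30, imgsz=416)

# load the best checkpoint from the run folder (assumed path; adjust to your run)
best = YOLO("runs/detect/train/weights/best.pt")

# export to ONNX; the .onnx file is converted to ncnn format in a later step
best.export(format="onnx", opset=13, simplify=True, imgsz=416)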


The above may sound simple, but I ran into several challenges along the way. Don’t worry, though: I will share the pitfalls I encountered and how I overcame them!

Step 0— Ultimate Guide to Understanding ncnn

Step 1— Training YOLOv8 with a Custom Dataset

  • Clone the Git Repository and Install YOLOv8
  • Performing Inference Using Pre-trained Weights
  • Data Preparation and Format Conversion
  • Running the Training Process
  • Converting the Weights to ONNX Format
  • Converting the Weights to NCNN Format

Step 2— Building and running on Android Studio

  • Download ncnn-android-yolov8
  • Download ncnn
  • Download opencv-mobile
  • Opening ncnn-android-yolov8 with Android Studio
  • Placing NCNN Format Weights in Folder
  • Modifying yolo.cpp

ncnn is an open-source, high-performance neural network inference framework optimized for mobile platforms. It was designed from the start with mobile deployment in mind: it has no third-party dependencies, it is cross-platform, and, according to the project, its mobile CPU inference is faster than any other known open-source framework. With ncnn, developers can easily port deep learning algorithms to mobile phones for efficient execution and build AI apps, bringing AI to your fingertips.
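To get a feel for how ncnn is driven at inference time, here is a rough sketch using ncnn’s optional Python bindings (pip install ncnn). It is only an illustration under assumptions: best.param/best.bin are the files produced later in this tutorial, the in0/out0 blob names are placeholders that depend on your exported model (check best.param), and the Android app built in Step 2 uses ncnn’s C++ API instead.

import cv2
import numpy as np
import ncnn  # optional Python bindings: pip install ncnn

net = ncnn.Net()
net.load_param("best.param")  # network structure (generated in Step 1)
net.load_model("best.bin")    # network weights (generated in Step 1)

# load an image and resize it to the network input size (letterboxing omitted for brevity)
img = cv2.imread("XXX.jpg")
mat_in = ncnn.Mat.from_pixels_resize(img, ncnn.Mat.PixelType.PIXEL_BGR2RGB,
                                     img.shape[1], img.shape[0], 416, 416)
mat_in.substract_mean_normalize([], [1 / 255.0, 1 / 255.0, 1 / 255.0])  # scale pixels to [0, 1]

ex = net.create_extractor()
ex.input("in0", mat_in)            # placeholder input blob name
ret, mat_out = ex.extract("out0")  # placeholder output blob name
print(np.array(mat_out).shape)     # raw predictions; decoding/NMS happens in the app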

⭐Clone the Git Repository and Install YOLOv8

YOLOv8 is distributed as a Python package called ultralytics, which can be installed using the commands below.

$ mkdir yolov8
$ cd yolov8
$ git clone https://github.com/ultralytics/ultralytics
$ pip install -qe ultralytics
$ cd ultralytics

⭐Performing Inference Using Pre-trained Weights

To perform object detection on your selected video or image using YOLOv8’s pre-trained weights, you can execute the command provided below in the terminal.

# image
$ yolo task=detect mode=predict model=yolov8m.pt source="XXX.png"

# video
$ yolo task=detect mode=predict model=yolov8m.pt source="XXX.mp4"

If the execution is successful, the results will be stored in the folder YOLOv8/ultralytics/runs/detect/exp/.
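The same inference can also be run from Python, which is convenient when scripting experiments. Here is a small sketch using the ultralytics API, with "XXX.png" standing in for your own image, just as in the commands above.

from ultralytics import YOLO

model = YOLO("yolov8m.pt")  # same pretrained weights as in the commands above
results = model.predict(source="XXX.png", save=True)  # save=True writes the annotated image to disk
print(results[0].boxes)  # detected boxes, confidences and class ids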

⭐Data Preparation and Format Conversion

Visit Kaggle and download the Microcontroller Detection dataset.

Create a text file named chip.yaml, place it in the folder YOLOv8/ultralytics/, and add the following content to it.

train: ../datasets/images/train/
val: ../datasets/images/test/
# number of classes
nc: 4
# class names
names: ['Arduino Nano', 'ESP8266', 'Raspberry Pi 3', 'Heltec ESP32 Lora']

The directory structure used during training is shown below.

YOLOv8/
├── ultralytics/
│   └── chip.yaml
├── XmlToTxt/
└── datasets/
    ├── images/
    │   ├── train/
    │   └── test/
    └── labels/
        ├── train/
        └── test/

Move the .xml files under the folder Microcontroller Detection/images/train/ to the folder Microcontroller Detection/images/train_xml/.

Move the .xml files under the folder Microcontroller Detection/images/test/ to the folder Microcontroller Detection/images/test_xml/.

Upload the folder Microcontroller Detection/images/train/ to the folder YOLOv8/datasets/images/.

Upload the folder Microcontroller Detection/images/test/ to the folder YOLOv8/datasets/images/. (If you are working locally, these moves can also be scripted, as in the sketch below.)
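Here is a rough pathlib/shutil sketch of those moves. It assumes the dataset was unpacked locally into a folder named Microcontroller Detection and that the YOLOv8/ layout shown earlier already exists; adjust the paths to your own setup.

from pathlib import Path
import shutil

dataset = Path("Microcontroller Detection/images")  # assumed local dataset location
yolo_images = Path("YOLOv8/datasets/images")         # destination inside the YOLOv8 project

for split in ("train", "test"):
    # move the Pascal VOC .xml annotations into a separate <split>_xml folder
    xml_dir = dataset / f"{split}_xml"
    xml_dir.mkdir(exist_ok=True)
    for xml_file in (dataset / split).glob("*.xml"):
        shutil.move(str(xml_file), xml_dir / xml_file.name)

    # copy the image folders (now free of .xml files) into YOLOv8/datasets/images/
    shutil.copytree(dataset / split, yolo_images / split, dirs_exist_ok=True)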

To train the YOLOv8 object detection model with this dataset, it is necessary to convert the format from .xml to .txt.

$ cd ..
$ git clone https://github.com/Isabek/XmlToTxt
$ cd XmlToTxt
$ pip install -r requirements.txt

Modify YOLOv8/XmlToTxt/classes.txt according to your custom dataset.

Arduino_Nano
ESP8266
Raspberry_Pi_3
Heltec_ESP32_Lora

Upload the folder Microcontroller Detection/images/train_xml/ to the folder YOLOv8/XmlToTxt/.

Upload the folder Microcontroller Detection/images/test_xml/ to the folder YOLOv8/XmlToTxt/.

To convert a file from .xml format to .txt format, run the following command in your terminal.

# Remember to change the text in classes.txt to your own category
# Put the xml file you want to convert inside the xml folder

$ python xmltotxt.py -xml train_xml -out train
$ python xmltotxt.py -xml test_xml -out test

Move the folder YOLOv8/XmlToTxt/train/ to YOLOv8/datasets/labels/.

Move the folder YOLOv8/XmlToTxt/test/ to YOLOv8/datasets/labels/.
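For reference, the conversion turns each Pascal VOC .xml annotation into a YOLO-format .txt file in which every line reads class_id x_center y_center width height, with all coordinates normalized by the image size. The rough standalone sketch below illustrates that mapping (independent of the XmlToTxt tool); it assumes the object names in the XML match the entries in classes.txt.

import xml.etree.ElementTree as ET
from pathlib import Path

# same order as classes.txt; adjust if your XML uses different names (e.g. spaces)
CLASSES = ["Arduino_Nano", "ESP8266", "Raspberry_Pi_3", "Heltec_ESP32_Lora"]

def voc_xml_to_yolo_txt(xml_path: str, out_dir: str) -> None:
    root = ET.parse(xml_path).getroot()
    w = float(root.find("size/width").text)
    h = float(root.find("size/height").text)
    lines = []
    for obj in root.findall("object"):
        cls_id = CLASSES.index(obj.find("name").text)
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        # YOLO format: class_id x_center y_center width height, normalized to [0, 1]
        lines.append(f"{cls_id} {(xmin + xmax) / 2 / w:.6f} {(ymin + ymax) / 2 / h:.6f} "
                     f"{(xmax - xmin) / w:.6f} {(ymax - ymin) / h:.6f}")
    out_file = Path(out_dir) / (Path(xml_path).stem + ".txt")
    out_file.write_text("\n".join(lines))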

⭐Running the Training Process

Now that everything is set up, it is time to run the training process.

$ yolo task=detect \
mode=train \
model=yolov8n.pt \
data=./chip.yaml \
epochs=30 \
imgsz=416

The duration of the training process may vary depending on the hardware configuration and can take several minutes or even longer. As the training process is running, the output log will display messages similar to the following.

0/9 0G 0.1184 0.0347 0.03127 47 640: 4%|▎ | 3/85 [01:08<30:00, 21.95s/it]

After completing the training process, the resulting model ultralytics/runs/train/exp/weights/best.pt is ready to make predictions!

$ yolo task=detect \
mode=predict \
model=/runs/train/exp/weights/best.pt \
conf=0.25 \
source='XXX.jpg'
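To sanity-check the trained weights from Python and inspect the raw detections (the same information the Android app will later draw as bounding boxes), a short sketch like the one below works. The best.pt path mirrors the command above and may differ for your run.

from ultralytics import YOLO

model = YOLO("runs/train/exp/weights/best.pt")  # adjust to your actual run folder
results = model.predict(source="XXX.jpg", conf=0.25)

# print each detection: class name, confidence, and box corners in pixels
for box in results[0].boxes:
    cls_id = int(box.cls)
    print(results[0].names[cls_id], float(box.conf), box.xyxy.tolist())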

⭐Converting the Weights to ONNX Format

Modify ultralytics/ultralytics/nn/modules.py as follows, replacing the forward methods of the C2f and Detect classes (the original implementations are kept as comments).

class C2f(nn.Module):
    # CSP Bottleneck with 2 convolutions
    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        self.c = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.ModuleList(Bottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))

    def forward(self, x):
        # original implementation, kept for reference:
        # y = list(self.cv1(x).split((self.c, self.c), 1))
        # y.extend(m(y[-1]) for m in self.m)
        # return self.cv2(torch.cat(y, 1))

        # modified version used for ncnn export (avoids the tensor split op)
        print("ook")  # debug print to confirm the modified code path is used
        x = self.cv1(x)
        x = [x, x[:, self.c:, ...]]
        x.extend(m(x[-1]) for m in self.m)
        x.pop(1)
        return self.cv2(torch.cat(x, 1))

# Replace the forward method of the Detect class in the same file with:
    def forward(self, x):
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        # original implementation, kept for reference:
        # box, cls = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2).split((self.reg_max * 4, self.nc), 1)
        # dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0), xywh=True, dim=1) * self.strides
        # y = torch.cat((dbox, cls.sigmoid()), 1)
        # return y if self.export else (y, x)

        # modified version used for ncnn export: return raw head outputs; box decoding happens in the app
        print("ook")  # debug print to confirm the modified code path is used
        return torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2).permute(0, 2, 1)

The following command converts the trained best.pt weights to the ONNX format and saves the resulting file as best.onnx.

$ yolo task=detect mode=export model=runs/detect/train4/weights/best.pt \
format=onnx simplify=True opset=13 imgsz=416
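If you prefer Python, the same export can be done with the ultralytics API; this is a minimal sketch using the same weight path as the command above (the modules.py changes must already be applied).

from ultralytics import YOLO

model = YOLO("runs/detect/train4/weights/best.pt")  # same weights as the CLI command above
onnx_path = model.export(format="onnx", opset=13, simplify=True, imgsz=416)
print(onnx_path)  # path of the generated best.onnx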

⭐Converting the Weights to NCNN Format

Visit the convert weights website. The blue button selects the .onnx file, and the green button starts the conversion. When it finishes, two files, best.bin and best.param, will be generated.


⭐Download ncnn-android-yolov8

Download ncnn-android-yolov8 to your desktop

⭐Download ncnn

Download ncnn-YYYYMMDD-android-vulkan.zip

Extract ncnn-YYYYMMDD-android-vulkan.zip into app/src/main/jni/


Change the ncnn_DIR path to yours in app/src/main/jni/CMakeLists.txt


⭐Download opencv-mobile

Download opencv-mobile-XYZ-android.zip


Extract opencv-mobile-XYZ-android.zip into app/src/main/jni/


Change the OpenCV_DIR path in app/src/main/jni/CMakeLists.txt


⭐Opening ncnn-android-yolov8 with Android Studio


💡 If the build fails, it is most likely a compatibility problem between the NDK and CMake versions in SDK Tools. Fix it as follows:

👉 Press Ctrl+Alt+S to open Settings, then install NDK version 21.3.6528147.

👉 Install CMake version 3.10.2.4988404.


👉 Add CMake path in local.properties.


👉 Press the Sync Project with Gradle Files button in the upper right.


⭐Placing NCNN Format Weights in Folder

Place best.bin and best.param in the folder app\src\main\assets\.

⭐Modifying yolo.cpp

Modify app\src\main\jni\yolo.cpp's num_class according to your custom dataset.


Modify app\src\main\jni\yolo.cpp's class_names according to your custom dataset.


Modify app\src\main\jni\yolo.cpp's layer_name according to your best.param.


Modify the weight file names in app\src\main\jni\yolo.cpp to match best.param and best.bin.


Press the Run button, and the app should build and execute successfully!

🚀About Author

  • Gary Tsai, I have over 2 years of experience in developing AI solutions and integrating AI technologies into foundry operations. My areas of expertise include cross-departmental coordination, independent project design, establishing project development and maintenance processes, and introducing deep learning techniques for foundry wafer image classification and object detection. Throughout my career, I have collaborated with various departments within the foundry, such as Layout, Etch, and Model departments, to successfully complete multiple AI projects including circuit inductance component object detection, GaAs wafer defect image classification, etc.
  • My contributions to these projects have enabled the foundry to streamline operations, improve product quality, and reduce costs through the use of AI technologies.
  • LinkedIn Profile
  • YouTube Channel
  • Author Website

Please feel free to comment if you have any questions 🙂
