INTRODUCTION
OVERVIEW
As our nation undergoes industrialization and modernization, it is crucial for Vietnamese industry to adopt advanced technologies and equipment to compete effectively with regional and global counterparts. Equipping technical personnel with up-to-date knowledge is essential for accelerating the country's development.
Robotic engineering is increasingly applied across various sectors globally, enhancing efficiency in industrial production, national defense, medicine, and space exploration. Despite this potential, the adoption of production robots remains limited, particularly in Vietnam, where industrial robots are still an emerging field. However, the overall trend indicates that research and utilization of robotics in Vietnam are poised for significant growth.
AGVs are a kind of robot widely used by industries abroad for automated conveying. In Vietnam, however, this technology has not yet been applied much in practice.
Many industries in our country are large-scale yet lack high quality, primarily because outdated technology and machinery keep labor productivity at only an average level. The situation is exacerbated by the need to hire a large workforce, which diminishes profits. To address these challenges, companies aim to automate production processes, particularly by replacing manual labor in transportation, thereby enhancing productivity and efficiency.
Our group chose the topic "Design and Implementation of AGV Commodities Transportation" to explore the operational mechanisms of the Raspberry Pi 4. The project aims to enable vehicle control in challenging terrain, highlighting the versatility and extensive applications of AGVs in automotive systems. The technology also facilitates effective journey management and tracking of vehicle movements.
1.1.2 REGARDING RATIONALE OF THE PROJECT
Building a comprehensive system is a complex endeavor that requires expertise and experience to ensure stability and safety. A well-developed system must operate smoothly across various environmental conditions, highlighting the importance of adaptability and reliability in its design and implementation.
Due to the complexity of the problem, the thesis is limited to studying and applying basic algorithms in order to build a complete operational model.
RESEARCH MISSION
In this research project, we focus on unmanned industrial cargo vehicles, known as Automated Guided Vehicles (AGVs), which come in various sizes to transport loads ranging from a few kilograms to several hundred kilograms. Given the constraints of a graduation project, including limited funding and resources, our group has defined specific tasks for the design and manufacture of an AGV with high feasibility.
The mechanical structure of the Automated Guided Vehicle (AGV) features a towing arm designed to keep goods stable during transport, enhancing the vehicle's control and enabling safe, efficient handling during movement.
− Electronic circuit design for stable operation of AGV
− Study and design a DC power supply for the vehicle
OBJECT AND SCOPE OF THE PROJECT
Bring automation into production to replace workers in certain critical stages and dangerous locations.
Design and construct an automated vehicle capable of recognizing warehouse or box codes, navigating independently like a robot, and identifying specific locations for goods pickup.
The system will thus replace workers performing heavy work, and it must meet the following requirements:
− Improve labor productivity, increase shipment throughput, and return profit to the company
− The topic draws on many related specialized subjects, so working through it deepens our grasp of the knowledge we have learned
− Research on the Raspberry Pi 4 board
− Knowledge of the Python programming language
− For factories and companies in need
The Raspberry Pi board powers a simple experimental car model equipped with a camera for road observation. Because control is over Wi-Fi, the operational distance is limited.
− Combine scanning a QR code to determine the warehouse position with the use of an arm to move items
− The goods transported in the project are small in volume
− The AGV system for goods transportation has a low capacity and is implemented in indoor conditions away from direct sunlight
To provide highly accurate results, the system needs good-quality image data, and the machine must process images in real time.
RESEARCH METHODS
− Research through books, newspapers, specialized articles, and the internet to identify the most suitable models
− Learn from teachers experienced in the field of Robocon to reach faster and better solutions
− Solve each small module separately and then assemble them into a complete circuit
− Calculate and run tests to find the best solution
− Set up the car model and run tests in the field to find the best possible solutions.
THE PROJECT’S CONTENT
During the implementation of the Capstone Project on the topic "Design and implementation of AGV commodities transportation", we worked on and accomplished the following contents:
− Content 1: Learn about the technical specifications, guiding principles, and theoretical basis of the circuit components
− Content 2: Learn OpenCV on the Raspberry Pi 4 and the Python programming knowledge needed for image processing
− Contents 3 and 4: Control the robotic arm by integrating an Arduino Uno R3 with the GPIO pins of the Raspberry Pi 4 Model B, and manage the wheel movements using an L298 motor driver
− Content 5: Writing Word and PowerPoint reports.
STRUCTURE OF THE PROJECT
This chapter presents the reason for choosing the topic, along with the objectives, object, and research scope of the project.
LITERATURE REVIEW
SOFTWARE
2.1.1 OVERVIEW OF THE LIBRARY OPEN CV
OpenCV is engineered for computational efficiency and real-time applications, utilizing optimized C to leverage multicore processors. For enhanced automatic optimization on Intel architectures, users can purchase Intel's Integrated Performance Primitives (IPP) libraries, which provide low-level optimized routines across various algorithm domains. When the IPP library is installed, OpenCV automatically utilizes it at runtime to improve performance.
OpenCV aims to provide an easy-to-use computer vision infrastructure that enables the rapid development of sophisticated vision applications. With over 500 functions, the OpenCV library covers domains such as factory product inspection, medical imaging, security, user interfaces, camera calibration, stereo vision, and robotics. Recognizing the synergy between computer vision and machine learning, OpenCV also includes a comprehensive Machine Learning Library (MLL), which emphasizes statistical pattern recognition and clustering. This sub-library is essential for core vision tasks while being versatile enough to address a wide range of machine learning challenges.
Figure 2.1 illustrates the timeline of OpenCV, which was originally designed to advance the field of computer vision by making it more accessible and identifying innovative applications for the growing MIPS available in the market.
Figure 2.1 The Timeline of OpenCV
OpenCV is being widely used in applications including:
− Robots and self-driving cars
− Search and retrieval of photos/videos
OpenCV has a modular structure: the package includes a set of shared or static libraries. The main modules are listed below; a short usage sketch follows the list.
− Core – a compact module that defines the basic data structures, including the multidimensional array Mat, and the basic functions used by all other modules
− Imgproc – a comprehensive image processing module that offers linear and non-linear filtering, geometric transformations such as resizing, affine and perspective warping, generic table-based remapping, and color space conversion, making it a versatile tool for image manipulation tasks
− Video – a video analysis module that includes motion estimation, background subtraction, and object tracking algorithms
− Calib3d – basic multiple-view geometry algorithms, single and stereo camera calibration, object pose estimation, stereo correspondence algorithms, and elements of 3D reconstruction
− Features2d – salient feature detectors, descriptors, and descriptor matchers
− Objdetect – detection of objects and instances of predefined classes (e.g., faces, eyes, mugs, people, cars)
− Highgui – an easy-to-use interface for video capturing, image and video codecs, and simple user interface capabilities
− Gpu – GPU-accelerated algorithms from the different OpenCV modules
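To illustrate how these modules fit together, here is a minimal Python sketch (assuming OpenCV's Python bindings are installed as cv2; the file name is illustrative): it reads an image via highgui functions, inspects the core array type, and converts the color space with imgproc.

import cv2

# highgui: read an image from disk (file name is illustrative)
img = cv2.imread("sample.jpg")
if img is None:
    raise SystemExit("Could not read sample.jpg")

# core: in Python, images are NumPy arrays (the counterpart of Mat)
print("shape:", img.shape, "dtype:", img.dtype)

# imgproc: convert the color space from BGR to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# highgui: display the result until a key is pressed
cv2.imshow("gray", gray)
cv2.waitKey(0)
cv2.destroyAllWindows()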
QR CODE
A QR code is a two-dimensional matrix barcode designed for quick scanning by smartphones, with "QR" standing for "Quick Response." This technology allows for the rapid decoding of stored data, making it an efficient tool for information retrieval.
QR codes consist of black modules arranged in a square pattern on a white background, encoding information such as text, URLs, or other data. Designed for high-speed decoding, QR codes are rapidly gaining popularity worldwide. Today, most mobile phones equipped with built-in cameras can easily recognize these codes, facilitating their widespread use.
Originally designed for tracking parts in vehicle manufacturing, QR codes have expanded their applications across various fields, including commercial tracking, entertainment, and in-store product labeling, particularly for smartphone users. When scanned, QR codes allow users to access URLs or receive text information. Users can easily create and print their own QR codes using QR code generating websites or apps. The QR code system comprises an encoder, which encodes data and generates the QR code, and a decoder, which interprets the data from the QR code.
Figure 2.2 Working of QR code
Figure 2.2 illustrates how a QR code functions, beginning with a QR code encoder that transforms plain text, URLs, or other data into a corresponding QR code. To access the information contained within the QR code, a QR code decoder or scanner is used to decode and extract the data.
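As a sketch of this encode/decode cycle in Python (assuming the third-party qrcode and pyzbar packages, which are not part of the toolchain stated in this thesis):

import qrcode                      # encoder: text -> QR image
from PIL import Image
from pyzbar.pyzbar import decode   # decoder: QR image -> text

# Encode: generate a QR code for an illustrative warehouse label
qrcode.make("Box2").save("box2_qr.png")

# Decode: read the data back from the saved image
for result in decode(Image.open("box2_qr.png")):
    print(result.type, result.data.decode("utf-8"))   # QRCODE Box2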
QR codes provide convenient access to essential product information and facilitate online payments Commonly found on various products, these codes allow users to scan and instantly retrieve details such as the product's origin, type, composition, and related categories Additionally, QR codes streamline the payment process, making online transactions quick and efficient.
ALGORITHM
The Gaussian filter is a popular tool in image processing, primarily utilized for blurring images and minimizing noise. It employs the Gaussian function to create a weighted kernel that blurs unwanted details while maintaining edge clarity. The fundamental form of the Gaussian function is as follows:
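G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))

where σ is the standard deviation of the distribution; a larger σ produces a wider kernel and a stronger blur.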
The Gaussian filter is primarily used to blur or smooth images by convolving them with a Gaussian kernel. This process modifies each pixel's value based on the surrounding pixels, resulting in an averaging effect that reduces high-frequency components and creates a smoother appearance. For a clearer understanding, refer to the example comparing the original image with the blurred image.
Figure 2.3 Convert to Blur Image
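A minimal OpenCV sketch of this blurring step (the kernel size is an illustrative choice, not a value fixed by the project):

import cv2

img = cv2.imread("frame.jpg")            # illustrative input frame
# 5x5 Gaussian kernel; sigmaX=0 lets OpenCV derive sigma from the kernel size
blur = cv2.GaussianBlur(img, (5, 5), 0)
cv2.imwrite("frame_blur.jpg", blur)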
The Gaussian filter emphasizes central pixels more than those farther away, functioning as a low-pass filter that diminishes high frequencies. This makes it a popular preprocessing step for edge detection in image processing.
Converting an image from RGB to HSV enhances the understanding of color information and facilitates advanced color-based image processing This transformation supports tasks like color segmentation, object detection, color filtering, and image enhancement based on color attributes.
In the RGB color space, color information is represented through individual channels for Blue, Green, and Red, each indicating the intensity of its respective color. However, for certain applications, it is more intuitive to utilize a model that separates color information into three distinct components: Hue, Saturation, and Value (HSV). This separation allows easier interpretation and manipulation of colors. For example, Figure 2.4 illustrates the transformation of a blurred image into an HSV representation, effectively distinguishing the two lanes from the surrounding areas.
Figure 2.4 Convert Blur Image to HSV Image for yellow color
In this project, the implementation team utilizes three distinct colors for various detection purposes: yellow for lane detection, blue for identifying original positions, and red to indicate vehicle stops, as illustrated in Figures 2.5 and 2.6.
Figure 2.5 The result of red when converting RGB to HSV
Figure 2.6 The result of blue when converting RGB to HSV
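A sketch of this color segmentation in OpenCV (the HSV bounds below are illustrative and would need tuning to the camera and lighting):

import cv2
import numpy as np

frame = cv2.imread("frame.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Illustrative HSV ranges; hue spans 0-179 in OpenCV
yellow = cv2.inRange(hsv, np.array([20, 100, 100]), np.array([30, 255, 255]))
blue = cv2.inRange(hsv, np.array([100, 100, 100]), np.array([130, 255, 255]))

# Red wraps around hue 0, so two ranges are combined
red = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255])) | \
      cv2.inRange(hsv, np.array([170, 100, 100]), np.array([179, 255, 255]))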
The Canny algorithm is a multi-stage process that involves the following steps:
The algorithm begins by utilizing a Gaussian blur to effectively reduce noise and smooth out irregularities in the image, which helps prevent the detection of false edges caused by noise.
Gradient calculation involves processing a blurred image to determine the intensity changes within it This is commonly achieved through the Sobel operator, which assesses both the gradient magnitude and direction at each pixel.
Non-maximum suppression is a crucial algorithmic step that refines detected edges by scanning the gradient image It suppresses pixels that are not local maxima in their surrounding area along the gradient direction, ensuring that only thin and well-defined edges are preserved This process enhances the clarity and accuracy of edge detection in image processing.
Thresholding involves applying two values—a low threshold and a high threshold—to a gradient image Strong edges are identified by pixels with gradient magnitudes exceeding the high threshold, while pixels falling between the low and high thresholds are classified as weak edges Pixels below the low threshold are discarded, effectively retaining only the most significant edges and minimizing weaker edges and noise.
Edge tracking by hysteresis is a crucial process that enhances edge continuity in image processing In this method, strong edges are designated as true edges, while weak edges are only classified as true if they are connected to strong edges This approach effectively fills gaps in edges, leading to improved overall edge detection and continuity.
As shown in Figure 2.7, the Canny detector has three adjustable parameters: the width of the Gaussian and the low and high thresholds for hysteresis thresholding.
Figure 2.7 The results of convert Original Image to Edge Image
The Canny method generates a binary image that highlights edge pixels in white while marking all other pixels in black. This binary output is valuable for various image processing applications, including region segmentation, feature recognition, and feature extraction.
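OpenCV exposes the whole multi-stage pipeline through a single call; a minimal sketch (the threshold values are illustrative):

import cv2

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(gray, (5, 5), 0)   # noise-reduction stage
edges = cv2.Canny(blur, 50, 150)           # low and high hysteresis thresholds
cv2.imwrite("edges.jpg", edges)            # binary image: edge pixels white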
The Hough transform is a powerful feature extraction technique used for detecting specific shapes such as straight lines, circles, and ellipses. The method maps the original image space into a parameter space, where voting generates the desired graphical representation. In this project, lane line detection uses the statistical approach of Hough line detection, which transforms points into curves by converting the image's Cartesian coordinate system into the polar-coordinate Hough space. Each straight line can be represented by the equation y = mx + b, where (x, y) denotes an image point.
In the Hough transform, a straight line is represented by its parameters, the slope (m) and the intercept (b), rather than by specific points such as (x1, y1) or (x2, y2). This allows a line to be depicted as a single point in the parameter space (b, m). However, vertical lines pose a problem, as their slope is undefined.
Incorporating polar coordinates, we introduce parameters r and θ, where r is the perpendicular distance from the line to the origin and θ is the angle of that perpendicular. This parameterization allows us to express a line as:
y = −(cos θ / sin θ) · x + r / sin θ   (2.3)

This relation can be rearranged into (2.4):

r = x · cos θ + y · sin θ   (2.4)

The result is the transformation of each pixel coordinate P(x, y) in image space into a curve of points (r, θ), as shown in Figure 2.8.
Figure 2.8 The principle of Hough transformation
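OpenCV implements this voting scheme in cv2.HoughLinesP, a probabilistic variant that returns line segments directly; a sketch with illustrative parameters:

import cv2
import numpy as np

edges = cv2.imread("edges.jpg", cv2.IMREAD_GRAYSCALE)

# rho = 1 pixel and theta = 1 degree set the resolution of the (r, theta) grid;
# threshold is the minimum number of votes a line must collect
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                        minLineLength=40, maxLineGap=10)

overlay = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(overlay, (x1, y1), (x2, y2), (0, 255, 0), 2)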
DESIGN OF THE SYSTEM
REQUIREMENTS OF THE SYSTEM
The self-propelled vehicle system, designed as an automatic pick-up truck for freight forwarding companies and small warehouses, must satisfy several technical requirements. It must be compact and flexible, and dissipate low power so that it can run for at least one hour on a battery or accumulator. The system should self-correct deviations within a 2 cm orbit width and feature a road-aware sensor that functions under varying light conditions. Given the lack of standardization for such supporting devices, the project implementation group focuses on selecting relevant and suitable parameters for this design.
HARDWARE CALCULATION AND DESIGN
The system consists of blocks communicating with each other through the Raspberry Pi 4 as the control unit, as shown in Figure 3.1.
Figure 3.1 Block diagram of AGV Commodities Transportation
− Input block: captures real-world images and converts them from analog to digital signals, typically via a camera; the digital signals are then transmitted to the central processing unit.
− Central processing block: uses the Raspberry Pi 4 Model B as its core unit to receive, process, and analyze images captured by the camera, executing algorithms to control the power circuit based on the processed data.
− Executive block: includes the L298 motor control module and 5V DC motors. The L298 module interfaces with the Raspberry Pi's GPIO pins to amplify current, since the Pi's pins can supply at most 40 mA while the motors require at least 500 mA; this protects the Raspberry Pi from damage while delivering sufficient current to the motors.
− Arm robot module: receives signals from the central control unit to control the gripping arm.
− Power supply: a 5V source supplies the central processing unit, and a 12V source supplies the peripheral control unit.
The camera serves as the "eye" of the model, capturing images from the real world and transmitting data to the central processing unit. To ensure image quality adequate for processing, a 4-megapixel camera is used; because the quality of the input module strongly influences both image clarity and QR code recognition, the project team selected the Black 4MP Web Camera. Megapixels (MP) measure the resolution of optical devices, calculated by multiplying pixel width by height, with one MP equal to one million pixels (1,000 x 1,000); thus a 4 MP camera captures images with a resolution of 4 million pixels.
With a compact size of 61.5 x 90 x 45.6mm, this camera offers 4 million pixels, making it suitable for experiments in small environments. It connects directly to the Raspberry Pi embedded computer through a USB peripheral interface.
Figure 3.2 The connection diagram of the input block
The central processing unit (CPU) is responsible for receiving, processing, and analyzing images from the input module before sending the results to the motor control module. Given the high processing speed and resource demands, a conventional microcontroller is inadequate for this task. Consequently, the project implementation group has opted for the Raspberry Pi 4 Model B embedded system as the CPU, as illustrated in Figure 3.3.
Figure 3.3 The connection diagram of Central Processing Block
The Raspberry Pi 4 embedded computer features a powerful quad-core CPU clocked at up to 1.5 GHz, making it suitable for various applications. It offers robust support for communication with external devices and models. While the Raspberry Pi includes multiple interface ports, this model uses only a select few.
The Raspberry Pi kit features four USB ports, with one dedicated to connecting the webcam. Utilizing the USB 2.0 high-speed standard, it offers a maximum transmission speed of 480 Mbps. The USB cable is composed of power wires (+5V and GND) along with a twisted pair for data transmission.
The storage memory card for an embedded computer must hold a pre-installed operating system. Given that the Raspbian operating system requires 4GB of storage, a minimum of 8GB is recommended to accommodate additional data and programs. For better performance, we chose a 32GB microSD card with a read speed of 48MB/s, since read speed significantly influences the data processing capabilities of programs.
Figure 3.4 The connection diagram of memory card and Raspberry Pi
The L298 receives DC motor control signals from four GPIO pins of the Raspberry Pi 4 and amplifies the output current, as the current from the Raspberry Pi is not sufficient to power the four motors, as shown in Figure 3.5.
Figure 3.5 The connection diagram of motor control
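A sketch of this control scheme using the RPi.GPIO library (pin numbers and duty cycle are illustrative, not the project's actual wiring):

import RPi.GPIO as GPIO

IN1, IN2, ENA = 17, 27, 22            # illustrative BCM pin assignments
GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, ENA], GPIO.OUT)

# The two direction pins select forward/backward on one L298 channel
GPIO.output(IN1, GPIO.HIGH)
GPIO.output(IN2, GPIO.LOW)

# PWM on the enable pin sets the motor speed
pwm = GPIO.PWM(ENA, 1000)             # 1 kHz switching frequency
pwm.start(60)                         # 60% duty cycle ~ 60% of full speed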
The robot arm consists of four joints: base, shoulder, elbow, and gripper. These joints are driven by four MG996R servo motors that change the joint angles. The control signals for the servo motors are connected directly to four GPIO pins of the Raspberry Pi and driven using PWM signals, as shown in Figures 3.6 and 3.7.
Figure 3.6 The model of the four-degree-of-freedom robotic arm
Figure 3.7 The connection diagram of Arm robot
The Raspberry Pi operates within a current range of 500-1000mA, while the USB 3.0 web camera demands 150-900mA. The current needed to drive the four logic GPIO pins and four PWM GPIO pins is minimal. Overall, total consumption is around 1900mA. Since the Raspberry Pi runs at 5V, a 5V - 2A power bank is used for the project. Refer to Table 3.1 for the calculated current consumption of the Raspberry Pi 4.
Table 3.1 Calculate consumption current of Raspberry Pi 4
The motor control module operates at 12V, while the robot arm functions at 6V. To power the project, three parallel-connected 18650 batteries, each rated at 3.7V and 2000mAh, are used, providing a combined 3.7V and 6000mAh. These batteries feed an 88%-efficient DC-DC boost circuit that raises the voltage to 6V with a capacity of 5280mAh; this configuration allows the motors and robot arm to operate for approximately 1.3 hours. Table 3.2 presents the calculated current consumption of the motor control.
Table 3.2 Calculate consumption current of motor control
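As a cross-check under the idealized assumption that the boost converter conserves energy (capacity in mAh is not preserved across a voltage conversion), the equivalent capacity at the 6V output can be estimated as

Q_out = Q_in × (V_in / V_out) × η = 6000 mAh × (3.7 / 6) × 0.88 ≈ 3256 mAh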
The schematic of the system
The schematic diagram of the system is illustrated in Figure 3.8 below; the blocks are connected together so that the function of each implementation block can be seen clearly.
Figure 3.8 The schematic of system
SOFTWARE DESIGN
The AGV commodities transportation system is implemented in Python and runs on the Raspbian OS. A camera captures and analyzes image data from QR codes, allowing scanning and comparison. An arm robot is then employed to pick up the commodities, and the vehicle detects the lane and identifies the warehouse position by a red stop line. The QR code at the warehouse is scanned to confirm the location where the arm robot deposits the commodities, while the original position is determined by detecting a blue color.
If the camera detects a QR code in the image, it sends a signal to the Raspberry Pi 4, which signals the control arm to retrieve the goods and communicates with the motor control. Subsequently, the camera captures image data from the path, applying techniques such as grayscale conversion, noise reduction, and contrast enhancement, and finally the Canny edge detection algorithm to identify lane markings.
To effectively identify lane markings, a specific region of interest (ROI) is selected, as lane markings are typically found within this area. The edges detected within the ROI are then processed using the Hough transform, a technique that identifies lines in the edge-detected image, highlighting potential lane markings.
Once the vehicle's position within the lane is established, the project implementation group calculates the lateral offset, i.e., the perpendicular distance from the center of the vehicle to a reference point on the lane model.
To navigate effectively, the vehicle calculates the distance from its center to the nearest point on the lane model. By assessing the compatibility gap, the navigation system activates, allowing the vehicle to proceed. In the project, QR code "Box2" is designated red, QR code "Box3" blue, and the original position is marked white. Upon receiving the QR code input, the vehicle identifies the lane and, while in motion, scans for the red and blue lines to pinpoint the warehouse for dropping off purchases before returning to the original position. The overall process is illustrated in the flowchart shown in Figure 3.9.
Figure 3.9 Flowchart of AGV Commodities Transportation
The algorithm takes a 480x640 input image captured by the vehicle-mounted camera and identifies the lane center. Cropping the region of interest from the image minimizes noise and improves processing efficiency.
Image processing involves operations such as resizing, color space conversion, and image filtering, followed by edge detection algorithms such as the Canny edge detector and Sobel operator to identify edges in the preprocessed images. The process focuses on a defined region of interest (ROI), typically the road area where lane markings are expected, improving efficiency by minimizing processing in irrelevant areas. The Hough transform algorithm is then used to identify straight lines representing lane markings, which are drawn on the original image. Finally, the steering angle is computed to control the motor.
Figure 3.10 Flowchart of lane detection
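A condensed sketch of this pipeline (ROI bounds, thresholds, and the steering-angle heuristic are illustrative rather than the project's tuned values; 90 degrees denotes straight ahead, matching the convention used in Chapter 4):

import cv2
import numpy as np

def detect_lane(frame):
    # Region of interest: keep the lower half, where lane markings appear
    h, w = frame.shape[:2]
    roi = frame[h // 2:, :]

    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)

    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return None

    # Estimate the lane centre from segment midpoints, then derive a
    # steering angle from its lateral offset to the frame centre
    mid_x = np.mean([(x1 + x2) / 2 for x1, y1, x2, y2 in lines[:, 0]])
    offset = mid_x - w / 2
    return 90 + np.degrees(np.arctan2(offset, h / 2))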
The process captures image frames from the camera mounted on the vehicle. These frames are converted from RGB to HSV color space, allowing contour areas to be identified and calculated. If the calculated area exceeds 500, the motor control module stops, as it cannot accurately determine the size of the detected object, and then resumes lane detection. In contrast, if the area is less than 500, the motor control module detects the red line and halts, as shown in Figure 3.11.
Figure 3.11 Flowchart of stop-line detection
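A sketch of the area test described above (a simplified reading of the flowchart; the HSV bounds are illustrative, while the 500-pixel threshold follows the text):

import cv2
import numpy as np

def red_stop_line_detected(frame, min_area=500):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in OpenCV, hence two ranges are combined
    mask = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255])) | \
           cv2.inRange(hsv, np.array([170, 100, 100]), np.array([179, 255, 255]))
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Halt the motors only when a sufficiently large red region is visible
    return any(cv2.contourArea(c) > min_area for c in contours)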
The robot arm employs four MG996R servos with 5V input and a pulling force of 3.5 kg/cm, allowing the arm to move smoothly. The MCU uses GPIO pins to produce PWM pulses for the servos, and four additional GPIO pins to drive the L298N DC motor driver; the outputs are alternating high and low pulse signals, as illustrated in Figure 3.12.
Figure 3.12 Flowchart of the arm robot
After warming up, the arm joints rotate to a default position, ensuring the arm is always ready to pick up objects when the vehicle arrives at the designated location. Once the selected object is reached, the MCU outputs control pulses to the four servo motors so that the arm can do its job, after which it returns to the default state.
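A sketch of driving one MG996R joint with software PWM on a GPIO pin (the pin number and angles are illustrative; hobby servos expect a 50 Hz signal whose pulse width encodes the target angle):

import time
import RPi.GPIO as GPIO

SERVO_PIN = 18                   # illustrative BCM pin
GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)    # 50 Hz servo frame rate
pwm.start(0)

def set_angle(angle):
    # Map 0-180 degrees onto roughly 2.5-12.5% duty (0.5-2.5 ms pulses)
    pwm.ChangeDutyCycle(2.5 + (angle / 180.0) * 10.0)
    time.sleep(0.5)              # give the joint time to travel

set_angle(90)                    # rotate the joint to its default position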
EXPERIMENT AND DISCUSSION
The results of the system
The system operates through three main components: first, the user scans a QR code to identify the location of a specific commodity, with different QR codes representing different warehouses. Once scanned, the system's camera decodes the QR code, and the vehicle detects the lane and navigates accordingly. As the vehicle moves, it scans the road to match the input QR code with the one already scanned. Upon identifying the commodity's location, the robotic arm retrieves the item, and the vehicle proceeds to the designated position for initialization.
Figure 4.1 The result of the system
Following Figure 4.1, the vehicle scans the QR code through the camera. When it receives the frame from the QR code input, the arm robot is notified to pick up the purchased item, as shown in Figure 4.2.
Figure 4.2 The result of the arm robot picking up goods
The camera begins by scanning the QR code mounted on the vehicle. After that, the system receives a signal from the arm robot, and the vehicle begins to detect lanes.
Case 1: The vehicle runs forward, as shown in Figures 4.3 and 4.4. The vehicle maintains an established angle of 90 degrees, with PID control correcting the vehicle so it stays in the lane without deviation.
Figure 4.3 The result of the system detecting the lane going forward
Figure 4.4 The model experiment of detecting the lane going forward
Case 2: The vehicle turns left, as shown in Figures 4.5 and 4.6. The angle is less than 90 degrees when the tracking vehicle turns to the left. Because the camera angle is limited, only one side of the left lane can be recognized when the vehicle turns left.
Figure 4.5 The result of the system detecting the lane turning left
Figure 4.6 The model experiment of detecting the lane turning left
Case 3: The vehicle turns right, as shown in Figures 4.7 and 4.8. The angle is greater than 90 degrees when the tracking vehicle turns to the right. Because the camera angle is limited, only one side of the right lane can be recognized when the vehicle turns right.
Figure 4.7 The result of the system detecting the lane turning right
Figure 4.8 The model experiment of detecting the lane turning right
Each color corresponds to a unique QR code, with "Box2" designated as red and "Box3" as blue. When the camera captures a QR code, the vehicle identifies its lane and uses the road's color to determine the warehouse's location. If the color is recognized at the same time as the QR code, the vehicle halts and completes the delivery.
Figure 4.9 The model experiment of stopping at warehouse 1
Figure 4.10 The model experiment of stopping at warehouse 2
The project implementation group employs color detection methods to ascertain the vehicle's initial position, utilizing the color white to identify the return to the original position, as illustrated in Figure 4.11.
Figure 4.11 The model experiment of determining the original position
System evaluation
The system demonstrates smooth and stable operation, as highlighted in Table 4.1, which summarizes its recognition results, including commands such as turn left, go straight, detect QR codes, pick up the box, drop the box down, and stop.
The camera quickly scans QR codes in standard environments during system testing, as indicated by the input images and frames. The results table details the system's performance, showcasing the number of test cases, correct attempts, incorrect attempts, and overall accuracy.
The recognition efficiency of QR codes is consistently 100%, influenced significantly by the user's selection of the QR code; careful selection leads to improved recognition rates. Additionally, the project implementation group employs color detection by converting RGB to HSV to identify the original position and the stop line. After extensive testing, the overall efficiency of color detection and classification is found to be around 82%, and the results for stopping and determining the original position are stable.
Table 4.1 Performance results of the system
(Columns: case, number of test cases, number of correct attempts, number of incorrect attempts, accuracy)
The lane detection system focuses on two primary objectives: turning left and proceeding straight, with left-turn detection accuracy at approximately 57%. The project implementation team has observed unstable results, highlighting the need for further investigation. Two key characteristics are evaluated: vehicle mass and brightness. The vehicle's weight significantly impacts the performance of the DC motors; when the vehicle operates without the arm robot, lane detection accuracy reaches 73%, whereas equipping the vehicle with the arm reduces accuracy to only 30%, indicating that the increased mass slows the vehicle during left turns and can cause the motors to stall or run at reduced speed. Consequently, the team concluded that a stronger motor is needed to drive the wheels actively. Additionally, effective image processing requires strong lighting, as low light introduces noise that impairs detection.
Table 4.2 Performance results when the vehicle detect lane without and with arm robot
(Columns: case, number of test cases, number of correct attempts, number of incorrect attempts, accuracy)
The arm robot, as detailed in Table 4.1, achieves a box pickup success rate of approximately 82%. This percentage reflects the robot's significant role in accurately delivering packages. The effectiveness of the arm robot in dropping the box is contingent upon the stop line, as it will only lower the box precisely when the vehicle halts at the designated location within the warehouse.
CONCLUSION AND FUTURE WORK
CONCLUSIONS
The "Design and Implementation of AGV Commodities Transportation" model has been thoroughly researched, developed, and refined by the project implementation group, resulting in a system that operates accurately and meets operational requirements Notably, when the camera successfully recognizes the QR code, the system's vehicle effectively detects the lane, demonstrating a high level of stability.
To achieve optimal results, existing limitations in knowledge and equipment must be addressed. The accuracy of the camera when scanning QR codes is inconsistent under varying lighting conditions, particularly in low-light and high-intensity environments. The performance of the system is also compromised if lighting is insufficient.
FUTURE WORK
Following the completion of the system and the implementation results detailed in Chapter 4, the project implementation group identified opportunities for future development. Consequently, they proposed expanding the functions and applications of the project, focusing on enhancing its capabilities and usability.
− Add an HMI and a mobile app for convenient management of purchases and warehouse positions
− Design an AGV system that uses AGV forklifts to unload finished products at the warehouse, enabling seamless integration from the initial stages of production through to storage
− Add a built-in network module with separate Wi-Fi for the vehicle, allowing it to move farther
− Improve accuracy with deep learning