
ASSESSMENT SHEET
RESULT OF ASSESSMENT BY PEERS IN THE SAME FIELD (PEER REVIEW)

SCIENTIFIC WORK : SCIENTIFIC JOURNAL

Title of the scientific article : The Application Poly Aluminum Chloride as an Anionic Trash Catcher to Enhance Medium Paper Properties
Name of the proposer : Ni Njoman Manik Susantini
Number of authors : 2
Status of the proposer (author number) : First author
Identity of the scientific journal
a. Journal name : International Journal of Scientific Research in Science and Technology
b. ISSN number : 2395-6011
c. Vol./No./Month/Year : Vol. 10, No. 1, January 2023
d. Publisher : TechnoScience Academy
e. Number of pages :

Journal publication category (tick the appropriate category):
[ ] Reputable International Scientific Journal
[ ] International Scientific Journal
[ ] Accredited National Scientific Journal
[ ] Non-Accredited National Scientific Journal
[ ] Scientific Journal Indexed in DOAJ/others

I. Validation Assessment Results

No. | Aspect | Assessment remarks/comments
1 | Indication of plagiarism |
2 | Linearity |

II. Peer Review Assessment Results

Maximum score of the scientific journal (fill in the applicable column: Reputable International / International / Nationally Accredited / Nationally Non-Accredited / Nationally Indexed in DOAJ etc.) and final score obtained:

Component assessed | Final score obtained
Completeness and suitability of the journal content elements (10%) |
Scope and depth of the discussion (30%) |
Adequacy and currency of the data/information and methodology (30%) |
Completeness of elements and quality of the publisher (30%) |
Total = (100%) |

Proposer's contribution (example: peer final score x first author = final score x 60% = final score obtained by the proposer) :

Peer review comments/remarks:
Completeness and suitability of elements :
Scope and depth of the discussion :
Adequacy and currency of the data/information and methodology :
Completeness of elements and quality of the publisher :

Review date :

Work unit :
Field of study :
Academic rank (KUM) :
Highest education :
ASSESSMENT SHEET
RESULT OF ASSESSMENT BY PEERS IN THE SAME FIELD (PEER REVIEW)

SCIENTIFIC WORK : SCIENTIFIC JOURNAL

Title of the scientific article : The Application Poly Aluminum Chloride as an Anionic Trash Catcher to Enhance Medium Paper Properties
Name of the proposer : Ni Njoman Manik Susantini
Number of authors : 2
Status of the proposer (author number) : First author
Identity of the scientific journal
a. Journal name : International Journal of Scientific Research in Science and Technology
b. ISSN number : 2395-6011
c. Vol./No./Month/Year : Vol. 10, No. 1, January 2023
d. Publisher : TechnoScience Academy
e. Number of pages :

Journal publication category (tick the appropriate category):
[ ] Reputable International Scientific Journal
[ ] International Scientific Journal
[ ] Accredited National Scientific Journal
[ ] Non-Accredited National Scientific Journal
[ ] Scientific Journal Indexed in DOAJ/others

I. Validation Assessment Results

No. | Aspect | Assessment remarks/comments
1 | Indication of plagiarism |
2 | Linearity |

II. Peer Review Assessment Results

Maximum score of the scientific journal (fill in the applicable column: Reputable International / International / Nationally Accredited / Nationally Non-Accredited / Nationally Indexed in DOAJ etc.) and final score obtained:

Component assessed | Final score obtained
Completeness and suitability of the journal content elements (10%) |
Scope and depth of the discussion (30%) |
Adequacy and currency of the data/information and methodology (30%) |
Completeness of elements and quality of the publisher (30%) |
Total = (100%) |

Proposer's contribution (example: peer final score x first author = final score x 60% = final score obtained by the proposer) :

Peer review comments/remarks:
Completeness and suitability of elements :
Scope and depth of the discussion :
Adequacy and currency of the data/information and methodology :
Completeness of elements and quality of the publisher :

Review date :

Reviewer II
NIDN :
Work unit :
Field of study :
Academic rank (KUM) :
Highest education :
190123
by Savior Task

Submission date: 18-Jan-2023 11:14PM (UTC-0500)


Submission ID: 1986519368
File name: IJSRST_Rizky_Darwis.doc (1.27M)
Word count: 2959
Character count: 14494
190123
ORIGINALITY REPORT

SIMILARITY INDEX: 9%
INTERNET SOURCES: 8%
PUBLICATIONS: 6%
STUDENT PAPERS: 6%

PRIMARY SOURCES

1. Submitted to Higher Education Commission Pakistan (Student Paper): 3%
2. ijsrst.com (Internet Source): 2%
3. www.neliti.com (Internet Source): 1%
4. www.tandfonline.com (Internet Source): 1%
5. Gennaro Bufalo, Claudia Florio, Giuseppe Cinelli, Francesco Lopez, Francesca Cuomo, Luigi Ambrosone. "Principles of minimal wrecking and maximum separation of solid waste to innovate tanning industries and reduce their environmental impact: The case of paperboard manufacture", Journal of Cleaner Production, 2018 (Publication): 1%
6. www.x-mol.com (Internet Source): 1%
7. Submitted to The Heritage School CN-173885 (Student Paper): <1%
8. www.iiste.org (Internet Source): <1%
9. "International Conference on Communication, Computing and Electronics Systems", Springer Science and Business Media LLC, 2021 (Publication): <1%

Exclude quotes: Off
Exclude matches: Off
Exclude bibliography: Off
International Journal of Scientific Research in Science and Technology
Print ISSN: 2395-6011 | Online ISSN: 2395-602X (www.ijsrst.com)
doi : https://doi.org/10.32628/IJSRST2310115

Car Dirtiness and Damage Detection For Automatic Service Recommendation Using Machine Learning Techniques

Mohammed Abdullah Khan, Gundlapally Siri Reddy, Ramavath Tarun
CSE Department, Sreyas Institute of Engineering and Technology, Hyderabad, Telangana, India

ABSTRACT

Article Info
Publication Issue: Volume 10, Issue 1, January-February-2023
Page Number: 144-150
Article History: Accepted: 10 Jan 2023; Published: 25 Jan 2023

The automobile industry has been growing at a huge rate over the past few decades and contributes about 7.5% to India's total GDP. As the number of vehicle owners increases, the demand and need for automobile service also rise, but people are busy with their routines and therefore often fail to maintain their vehicles properly. In this paper, using machine learning algorithms and object detection, we develop a web application that can suggest offers and timings for car maintenance to users by analyzing a car with computer vision, without the owner's involvement. The primary aim of this project is to maintain a vehicle without disturbing the owner's day-to-day routine. The project is built using recent technologies from some of the most active domains in the industry. We use the YOLOv5 (You Only Look Once) object detection model and the VGG16 architecture to analyze the car images, and the Flask framework to create a responsive interface for the web application. We often do not realize that multiple tasks can be done at the same time, and as a result many tasks remain incomplete, one of which is vehicle maintenance. This project aims at both the owner's convenience and the growth of the service provider's business.

In this paper we present a machine-learning-based automated car maintenance system with effective time utilization. It is an IoT device that can be installed at the main parking gate of places where people tend to spend many hours, such as offices or malls. The device consists of a camera that is responsible for detecting a car image from the live video. These images are sent for further processing, where the device detects whether there is any damage or dirtiness on the car using pre-trained models.

Keywords: Car Damage Detection, Dirtiness Detection, Feature Extraction, Custom Object Detection

Copyright: © the author(s), publisher and licensee Technoscience Academy. This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
I. INTRODUCTION

Today, the automobile industry is one of the growing sectors. The increasing demand for cars results in a high demand for car detailing services. Car detailing is a million-dollar industry in which service providers run advertisements that cost a pretty penny. In the automobile industry, artificial intelligence based on machine learning can assist with challenges such as efficient use of time and proper maintenance of the car. However, developing applications that address such issues remains difficult, particularly when using machine learning models like YOLOv5 and the VGG16 architecture. YOLOv5 is an effective, recent version of YOLO (You Only Look Once), an object detection model that can be trained when good enough resources are available. Object detection models demand large datasets in order to reach the desired accuracy. VGG16, in contrast, takes an image as input, processes it in several stages, and returns the result as a string. This is suitable not only for automatic service recommendation but can also be used in AI-powered service stations to scan thoroughly for minute damage and detailing.

For now, there are no publicly available datasets of car damage or dirt photos, so we train our model on images that are available to the public. In this instance we use up to 400 images to train our model, although, as a rule, the more images, the better the training.

II. LITERATURE REVIEW

One of the key research topics in computer vision is object detection. At the instance level, it determines the category and position of the object of interest in an image. Object detection can help solve many real-time problems that we face in our day-to-day life. YOLO is one of the trusted algorithms for custom object detection and has been in use for years. Researchers regularly contribute changes and additional features to make the model more accurate and efficient. YOLOv5 demands a good amount of data to train the model. VGG16, on the other hand, is one of the most widely favored classification algorithms; it scans every corner of the image to correctly predict the category to which the image belongs.

Because of these diverse advantages of object detection, the authors of [9] proposed a way of recognizing and classifying damage to buildings using high-resolution satellite images. This can be very useful for keeping records of untouched buildings that have remained empty for decades. To build this, images of both intact and damaged buildings, together with annotations, are provided as input to train the model.

Papers [1, 2] are similar in that both used deep learning techniques to detect damage on a car's body. Due to limited dataset availability, however, the accuracy of almost all such models lies on the same scale.

As the demand for cars increases day by day, the number of insurance companies increases too, and claiming insurance manually is a long and hectic task. A solution to this problem is proposed by the authors of [6], where a model was trained using deep learning techniques to identify damaged cars and claim the insurance automatically, without the involvement of the person, saving both time and energy.

Several other damage detection approaches have been proposed and applied to car body damage detection; Srimal et al. [9] propose the use of 3D CAD models to handle automatic vehicle damage detection via photography. An anti-fraud system for car insurance claims based on visual evidence has also been proposed: its authors generate robust deep features by locating the damages
accurately and by using YOLO to locate the damaged regions. Gontscharov et al. try to solve vehicle body damage detection by using multi-sensor data fusion.

Attempts are also being made to build AI-powered service stations, where the entire process starts with inspecting the car for damage or minute scratches on the body and then fixing it with little or no human involvement. This revolutionary change will save not only money but also time.

III. METHODS AND MATERIAL

This section discusses the dataset used for training, validating, and testing the model, and explains the basic structure of custom object detection datasets and files. The dataset used to train both models was downloaded from Kaggle.

We maintain two different datasets for training the two models we use. Each dataset contains about 190 images (which is low for the desired accuracy). Of these 190 images, 170 are set aside for training (the train dataset) and 20 for validation (the validation dataset). For the damage dataset, each image is labeled using labelImg.

labelImg is a free, open-source tool for graphically labelling images. It is written in Python and uses Qt for its graphical interface. It is an easy, free way to label a few hundred images to try out a custom object detection model.

A. Dataset-1 for damage detection

The folder structure of dataset-1 contains three folders, namely test, train, and validation. Train and validation each have two subfolders, images and labels. The images folder contains the images used to train the model, whereas labels holds the text files produced by drawing bounding boxes (annotating) on each image with its respective class name.

After annotating the images using the labelImg Python module, a text file from the labels folder looks as shown in Figure 1.

Figure 1: Random file from labels

Each line holds five values: the first value represents the class (here 0 means Scratch) and the other four denote the coordinates of the rectangle, i.e. the bounding box. After annotating all the images, we move on to the main step, which is training the model.

We use a file with the .yaml extension in the training of our custom object detection model. This file specifies the classes (the types of damage), as shown in Figure 2, and helps the model decode the annotated files; a minimal Python sketch of generating such a file is given after the list below.

Figure 2: Custom-data.yaml file

• Train: train dataset path
• Val: validation dataset path
• Test: test dataset path (optional)
• NC: the number of classes
• Names: the names of the classes (the types of damage); the four class names are denoted by the numbers 0, 1, 2, and 3 (0 means Scratch, 1 means Broken Glass, 2 means Deformation, and 3 means Broken)
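The paper does not reproduce the contents of its Custom-data.yaml, so the snippet below is only a minimal sketch of what such a config and a label line typically look like under the standard YOLOv5 layout; the directory paths and file names are placeholders, and only the four class names come from the text above.

```python
# Sketch: generate a YOLOv5-style data config for the damage dataset described above.
# Paths are assumptions; the class list follows the Names bullet above.
import yaml  # requires the PyYAML package

data_config = {
    "train": "dataset1/train/images",       # assumed location of the 170 training images
    "val": "dataset1/validation/images",    # assumed location of the 20 validation images
    "test": "dataset1/test/images",         # optional test split
    "nc": 4,                                # number of classes
    "names": ["Scratch", "Broken Glass", "Deformation", "Broken"],
}

with open("custom-data.yaml", "w") as f:
    yaml.safe_dump(data_config, f, sort_keys=False)

# Each labelImg/YOLO label file then holds one line per bounding box:
#   <class-id> <x-center> <y-center> <width> <height>   (all normalized to 0..1)
# e.g. "0 0.512 0.430 0.210 0.095" marks a Scratch region, matching the format in Figure 1.
```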


B. Dataset-2 for dirtiness detection

The folder structure of dataset-2 is quite similar to that of dataset-1. The only difference is that there are no separate labels folders, because here we use the VGG16 architecture, which does not require the images to be annotated for training. Instead, we maintain two separate folders, clean and dirty, and later provide all the images as folders at once while training the model. To train the model to decide whether a car is clean or messy, we only need to provide enough images of both clean and dirty cars; from these the model automatically learns the relevant features, so that when a new image is entered it looks for the same features on which it was trained.

Training the model with VGG16 is considerably easier than training the YOLOv5 model. We use two models to tackle two different problems. VGG16 needs a smaller dataset than YOLOv5, because with YOLOv5 we must annotate the images and categorize them into different class names, and once the model is trained we expect output with a bounding box and class name, whereas VGG16 outputs one of the provided strings once the necessary conditions are fulfilled.

IV. IMPLEMENTATIONS

The machine learning algorithms YOLOv5 and VGG16 are used in the implementation of the model in this paper. The model is implemented using supervised learning. Figure 3 depicts the working process of our system: it recognizes damage or dirt and gives the user an option to get it fixed while they are busy with their routine work.

Figure 3: Architecture Diagram

A. YOLOv5 algorithm

YOLOv5 is among the most commonly used object detection algorithms. It first scans the image and then, using the trained weights, tries to decide in which class the detected damage should be placed.

The algorithm flow is as follows (a brief inference sketch is given after the list):

1. The architecture mainly consists of three parts, namely the backbone, the neck, and the head.
2. The model backbone is mostly used to extract key features from an input image. CSP (Cross Stage Partial) networks are used as the backbone in YOLOv5 to extract rich, useful characteristics from an input image.
3. The neck is mostly used to create feature pyramids. Feature pyramids help models generalize well with respect to object scaling and aid in identifying the same object at various sizes and scales.
4. The model head is responsible for the final detection step. It uses anchor boxes to construct the final output vectors with class probabilities, objectness scores, and bounding boxes.
5. The final output, displayed after processing through all layers, is the image with the damaged area and its category.
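The paper does not show how the trained detector is invoked, so the following is only a minimal sketch, assuming the custom model was trained with the ultralytics/yolov5 repository and exported as a weights file; the weights file name, image name, and confidence threshold are placeholders, not values from the paper.

```python
# Sketch: run the custom-trained YOLOv5 damage detector on one captured frame.
# "best.pt" and "car_at_gate.jpg" are placeholder names, not taken from the paper.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.25  # assumed confidence threshold for reporting a damage region

results = model("car_at_gate.jpg")        # run detection on the captured car image
detections = results.pandas().xyxy[0]     # one row per box: coordinates, confidence, class name

for _, det in detections.iterrows():
    # Each row corresponds to one detected damage region (Scratch, Broken Glass, ...).
    print(f"{det['name']}: confidence {det['confidence']:.2f} at "
          f"({det['xmin']:.0f}, {det['ymin']:.0f})-({det['xmax']:.0f}, {det['ymax']:.0f})")

results.save()  # writes the annotated image (boxes and scores) under runs/detect/
```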


When a car is detected, the system captures the image and sends it for further processing; the final output image is displayed as shown in Figure 4.

Figure 4: Working flow in YOLOv5

B. VGG16 Architecture

VGG16 proved to be a significant milestone in the quest to make computers "see" the world. A lot of effort has gone into improving the features and abilities of computer vision (CV) over several decades. VGG16 is a convolutional neural network model proposed by Karen Simonyan and Andrew Zisserman at the University of Oxford; VGG is named after the university's Visual Geometry Group.

The input to any of the network configurations is a fixed-size 224 x 224 image with three channels: R, G, and B. Our dataset consisted of more than a thousand images collected from different sources and by different methods, so the images are not all of the size and format required by the VGG16 architecture. In the first step we therefore convert all the images to .jpg or .jpeg format, and then use a Python module that performs basic image operations to resize the images given as input during training or testing of our model.

The VGG16 architecture can also be treated as a feature extraction model, because here we provide different datasets of different categories without any specific instructions, leaving our model to learn on its own. Once the model is ready, we test images by simply passing them as input; the size adjustment is done and a string output is displayed. In our case we get the output either as clean or messy, where messy means dirty and clean indicates that the car is perfectly fine.

The algorithm flow is as follows:

1. The architecture mainly consists of two stacks of layers.
2. The initial size is 224 x 224 x 3, and at every stage the spatial size is halved.
3. The size of the activations at the end of the first stack is 112 x 112 x 64.
4. The output at the end of both stacks is 7 x 7 x 512.
5. The stacks of convolutional layers are followed by three fully connected layers, with a flattening layer in between.
6. The first two fully connected layers have 4096 neurons each, and the last one serves as the output layer with 1000 neurons corresponding to the 1000 possible classes of the ImageNet dataset.

V. RESULTS AND DISCUSSION

The results were quite accurate even though a small dataset was used. We first process the image through the VGG16 architecture to find out whether the car is clean or dirty: the car image is detected by the system and a single string output is displayed, either "Your car is clean" or "Your car is messy". Once the car has been checked for dirtiness, we run our second model, the custom YOLOv5 object detector, to detect any scratches, broken glass, deformation, etc. The output of the second model is the image of the car with bounding boxes and a confidence score. If any damage or dirtiness is detected, the system displays a QR code; by scanning it the user lands on the main
page of the website, where they can avail of the service at their convenience.

For the purpose of testing and validation, we took a random image from the test split of both datasets to check damage detection and dirtiness detection. The test image can be seen in Figure 5.

Figure 5: Random image from the test dataset

We can observe that the car is not particularly dirty, but there are a few damages that need to be fixed. When this image is processed for dirtiness detection we get the output "Your car is clean"; we then send the image for damage detection, where we get the output shown in Figure 6.

Figure 6: Output after damage detection

The results were very good and quite accurate. We can observe that even the small scratches or dents on the car are noticed and detected by our model. Those scratches are not easily visible to the naked eye, but the model still counts them as damaged parts, along with the deformation. The confidence score for the Scratch region is very high. This is surprising considering that a small dataset (of 190 images) was used to train our model; it is because the annotation was clear and helped the model train faster and more accurately. There were also some exceptional cases where our model failed to detect the damage or dirtiness because of unclear visibility, which might be due to rainy weather or a car that has stickers on its body. To overcome this we need more images, or a dataset that covers varied and diverse conditions. If we work more on our dataset and provide more images (in the thousands), the model could achieve a high confidence score and the total loss would become almost negligible.

As observed above, the car has damage that was detected by our model; in the next step the system displays a QR code. When the user scans the QR code they are redirected to our website, where they can fill out the form and verify their credentials to avail of the service if they are going to stay there for a long duration. If the user chooses to avail of the service, a nearby agent comes to the location to pick up the car and takes it to the service center. When all the work is done, the agent parks the car at the exact same place or hands it over to the user at their desired location; this way the user's work schedule is not disturbed.

VI. CONCLUSION

To deal with the problem of regular car maintenance, of allocating separate time for maintenance, and of not being able to use the car for long periods, the idea proposed here is a good alternative to the traditional methods currently followed. The suggested approach of using the two algorithms YOLOv5 and VGG16 not only helps in showing accurate results; the two algorithms can also run in parallel to display results faster, because the system is going to be installed at the parking's main
gate, where a user generally spends an average of 10 to 30 seconds.

In the future the dataset can be extended by gathering additional automobile damage images under various degrees of illumination and weather conditions. More images could be used to train the model, which will help in detecting minute scratches. The same project could be expanded to include other vehicles such as two-wheelers, trucks, and other large vehicles. The same approach could then cover all other vehicles, helping everyone maintain their vehicle efficiently and regularly.

VII. REFERENCES

[1] Mahavir Dwivedi, Malik Hashmat Shadab, S. N. Omkar, Edgar Bosco Monis, Bharat Khanna, and Satya Ranjan. Deep Learning Based Car Damage Classification and Detection. [n.d.].
[2] Kalpesh Patil, Mandar Kulkarni, Anand Sriraman, and Shirish Karande. Deep learning-based car damage classification. In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), 2017.
[3] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137-1149, Jun. 2017. DOI: 10.1109/tpami.2016.2577031.
[4] Jeffrey de Design. Automatic Car Damage Recognition using Convolutional Neural Networks. 2018.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
[6] Ranjodh Singh, Meghna P. Ayyar, Tata Sri Pavan, Sandeep Gosain, and Rajiv Ratn Shah. Automating Car Insurance Claims Using Deep Learning Techniques. In 2019 IEEE Fifth International Conference on Multimedia Big Data (BigMM), IEEE, pp. 199-207, 2019.
[7] Najmeddine Dhieb, Hakim Ghazzai, Hichem Besbes, and Yehia Massoud. A very deep transfer learning model for vehicle damage detection and localization. In 2019 31st International Conference on Microelectronics (ICM), IEEE, pp. 158-161, 2019.
[8] W. A. Rukshala Harshani and Kaneeka Vidanage. Image processing based severity and cost prediction of damages in the vehicle body: A computational intelligence approach. In 2017 National Information Technology Conference (NITC), IEEE, pp. 18-21, 2017.
[9] F. Samadzadegan and H. Rastiveis. Automatic detection and classification of damaged buildings, using high resolution satellite imagery and vector data. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 37, pp. 415-420, 2008.

Cite this article as:

Mohammed Abdullah Khan, Gundlapally Siri Reddy, Ramavath Tarun, "Car Dirtiness and Damage Detection For Automatic Service Recommendation Using Machine Learning Techniques", International Journal of Scientific Research in Science and Technology (IJSRST), Online ISSN: 2395-602X, Print ISSN: 2395-6011, Volume 10, Issue 1, pp. 144-150, January-February 2023. DOI: https://doi.org/10.32628/IJSRST2310115
Journal URL: https://ijsrst.com/IJSRST2310115

