In recent years, automotive manufacturers have equipped their vehicles with innovative Advanced Driver Assistance Systems (ADAS) to ease driving and avoid dangerous situations, such as unintended lane departures or collisions with other road users, like vehicles and pedestrians. To this end, cutting-edge ADAS are equipped with cameras to sense the vehicle's surroundings. This research work investigates techniques for monocular vision-based vehicle detection: a system that can robustly detect and track vehicles in images. The system consists of three major modules: shape analysis based on the histogram of oriented gradients (HOG), used as the main feature descriptor; a machine learning stage based on a support vector machine (SVM) for vehicle verification; and, lastly, texture analysis applying the concept of the gray-level co-occurrence matrix (GLCM). More specifically, we are interested in the detection of cars from different camera viewpoints and under diverse lighting conditions, chiefly images in sunlight, at night, in rain, in normal daylight and in low light, as well as in handling occlusion. The images are pre-processed in a first step to obtain optimal results under all conditions. Experiments have been conducted on a large number of car images taken from different angles. For the car images, the classifier covers four classes combining positive and negative images, split into test and training segments. Because of the length of the feature vector, we have reduced it using different cell sizes for greater accuracy and efficiency. Results will be presented and future work will be discussed.
Considering that the result data of the license plate detection system is also used in the license plate recognition system, the mobile-based motor vehicle license plate detection system is provided with a process to deliver the image of the plate to a personal computer (PC). Windows Phone does not provide a file transfer protocol facility, so data delivery to the computer uses the socket facility provided by Microsoft. The detected image of the vehicle license plate is processed by cropping, gray-scaling, storing the height and width of the plate, and storing all the pixel values of the plate image. The obtained data is converted to text and sent to the computer using a socket. The data received by the computer is rearranged into a digital image and stored; this constitutes the receiving process on the personal computer. The recognition system then processes the image to perform character recognition, whose outcome is stored as a text file. Meanwhile, the mobile device waits for the text file to be downloaded.
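The text-encoding step described above can be sketched as follows: the plate image's height, width and pixel values are flattened into a text payload suitable for a socket transfer, and the PC side rebuilds the image from it. The payload format (semicolon-separated header, comma-separated pixels) is an assumption for illustration, not the format used in the paper.

```python
# Sketch: grayscale plate image -> text payload -> image again.

def encode_plate(pixels):
    """Flatten a grayscale image (list of rows of 0-255 values) into text."""
    height, width = len(pixels), len(pixels[0])
    flat = ",".join(str(v) for row in pixels for v in row)
    return f"{height};{width};{flat}"

def decode_plate(payload):
    """Rebuild the image (list of rows) from the text payload."""
    h, w, flat = payload.split(";")
    values = [int(v) for v in flat.split(",")]
    h, w = int(h), int(w)
    return [values[r * w:(r + 1) * w] for r in range(h)]

plate = [[0, 128, 255], [34, 56, 78]]
payload = encode_plate(plate)
assert decode_plate(payload) == plate
```

On the phone side, such a payload string would then be written to a TCP socket, and the PC reverses the process before saving the image for character recognition.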
In the measurement system, the BMU (Basic Measurement Unit) is a basic measurement module based on stereo vision, which detects the wheel center and calculates its 3D coordinates. As shown in Figure 2, BMU_L1 is composed of Camera C1 and Camera C2; BMU_L2 of Camera C3 and Camera C4; BMU_R1 of Camera C5 and Camera C6; and BMU_R2 of Camera C7 and Camera C8. BMU_L1, BMU_L2, BMU_R1 and BMU_R2 detect and reconstruct the wheel centers of the left front, left back, right front and right back wheels, respectively. The four BMUs should be calibrated with a 3D target beforehand to obtain the extrinsic parameter matrices among them and the equation of the vehicle supporting plane. Once the 3D coordinates of the four wheel centers are obtained, the wheelbase, the wheelbase difference and the static wheel radius can be computed according to the corresponding mathematical model.
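Once the four wheel centers are expressed in a common 3D frame, the wheelbase quantities in the mathematical model reduce to Euclidean distances, as in this sketch; the coordinates below are illustrative placeholders, not measured values.

```python
# Sketch: wheelbase and wheelbase difference from reconstructed
# 3D wheel centers in a common coordinate frame (units: mm).
import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Placeholder wheel-center coordinates (x, y, z)
left_front  = (0.0,    0.0, 300.0)
left_back   = (0.0, 2700.0, 300.0)
right_front = (1550.0,    0.0, 300.0)
right_back  = (1550.0, 2700.0, 300.0)

wheelbase_left  = dist(left_front, left_back)
wheelbase_right = dist(right_front, right_back)
wheelbase_diff  = abs(wheelbase_left - wheelbase_right)
print(wheelbase_left, wheelbase_right, wheelbase_diff)  # 2700.0 2700.0 0.0
```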
This paper proposes automatic surveillance of a vehicle. The system can be installed in private automobiles so that negligent driving and accidents can be monitored at all times, enabling safer oversight with an immediate response to road calamities. The process relies on the internet, which is widely available and used by everyone.
Kiran Sawant et al. created an accident alert system using a GSM and GPS modem and a Raspberry Pi. A piezoelectric sensor first senses the occurrence of an accident and gives its output to the microcontroller. The GPS module determines the latitude and longitude of the vehicle, which are sent as a message through the GSM modem. The static IP address of the central emergency dispatch server is pre-saved in the EEPROM. Whenever an accident occurs, the position is detected and a message is sent to the pre-saved static IP address.
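The trigger-and-report logic of such a system can be sketched as below: a piezoelectric reading above a threshold produces an alert message carrying the GPS fix, addressed to the pre-saved server IP. The threshold value, the IP address and the coordinates are placeholders for illustration only.

```python
# Sketch of the accident-alert decision: piezo impact -> GPS message.
SERVER_IP = "203.0.113.10"   # stands in for the pre-saved static IP
IMPACT_THRESHOLD = 2.5       # piezo voltage threshold (assumed value)

def build_alert(piezo_voltage, latitude, longitude):
    """Return the alert message if the impact exceeds the threshold, else None."""
    if piezo_voltage < IMPACT_THRESHOLD:
        return None
    return f"ACCIDENT at lat={latitude:.5f}, lon={longitude:.5f} -> {SERVER_IP}"

print(build_alert(3.1, 18.52043, 73.85674))  # alert message
print(build_alert(0.4, 18.52043, 73.85674))  # None: no accident detected
```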
This is the most important level: in it, the control references, which depend on the mode of operation and the state of the UR and its environment, are derived, and it commands the low-level controller to activate the propellers of the UR. The parameters defining the state of the UR and the environment are obtained from the onboard sensors. The proposed system uses these parameters to generate the navigational commands.
The high demand for automobiles has also increased traffic hazards and road accidents, putting people's lives at high risk. This is because of the lack of adequate emergency facilities in our country. An automatic alarm device for vehicle accidents is introduced here. The proposed design is a system that can detect accidents in significantly less time and send the basic information to a first-aid centre within a few seconds, covering the geographical coordinates and the time and angle at which the accident occurred. This alert message is sent to the rescue team in a short time, which helps in saving valuable lives. A switch is also provided to cancel the sending of the message in the rare case where there is no casualty, saving the precious time of the medical rescue team.
Noisy and tilted license plates are serious problems arising in the development of such systems. Existing segmentation methods include mathematical morphology, border selection, the Hough transformation, horizontal and vertical projection, the AdaBoost algorithm, and convolutional neural networks (CNNs). To solve the recognition problem, decision trees, hidden Markov models, support vector machines, pattern matching, and various algorithms based on artificial intelligence are commonly used: multilayer perceptrons, neural networks [4, 5], CNNs and others.
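The vertical-projection segmentation mentioned above can be sketched in a few lines: summing the binary plate image column by column produces valleys (zero columns) between characters, which mark the segmentation cuts. The tiny binary "plate" below is synthetic.

```python
# Sketch: character segmentation of a binarized plate by vertical projection.

def vertical_projection(binary):
    """Sum of foreground (1) pixels in each column."""
    return [sum(col) for col in zip(*binary)]

def segment_columns(projection):
    """Return (start, end) column spans of nonzero runs, i.e. characters."""
    spans, start = [], None
    for i, v in enumerate(projection):
        if v > 0 and start is None:
            start = i
        elif v == 0 and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(projection)))
    return spans

plate = [
    [1, 1, 0, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 0, 1, 1],
]
print(segment_columns(vertical_projection(plate)))  # [(0, 2), (3, 4), (5, 7)]
```

Real plates need extra care, since tilt and noise blur the valleys; that is exactly why the more robust methods listed above (Hough-based deskewing, AdaBoost, CNNs) are used in practice.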
According to the World Health Organization (WHO), India is the leading country in road accident deaths. In India, 13 million people were reported dead in road accidents in the year 2014-15. These statistics reflect reported accident records, but many accidents go unreported; hence the actual number of accidents is higher than the WHO statistics suggest. According to the survey in the Global Status Report on Road Safety, the causes of road accidents include speeding, drunken driving, and minimal use of safety appliances like helmets and seat belts. Existing systems mostly focus on the safety of the passenger, not on immediate help after an accident.
IJEDRCP1502001 International Journal of Engineering Development and Research (www.ijedr.org) The traffic Accident Recording and Reporting System (ARRS) is an image-actuated moving-picture recording and reporting system used to analyze and evaluate the occurrence of traffic crashes at intersections. The system consists of a charge-coupled device (CCD) camera located at the corner of the intersection to obtain a view of incidents, an image processing unit that detects images that could be related to a traffic crash, a digital video recorder (DVR) that records all the situations at the intersection for the previous two weeks, and a communication unit that sends the Accident Moving Pictures (AMPs) to the Traffic Monitoring Centre (TMC). When the ARRS detects an event that could be a collision, it captures the AMP, consisting of the five seconds before and the five seconds after the event, from the DVR, and the AMP is sent to the TMC over a Virtual Private Network (VPN). The signal phases are then encoded onto the recorded AMP.
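The "five seconds before and after" capture implies a ring buffer of recent frames: the buffer contents plus the frames that follow the detected event form the AMP. The sketch below uses integers as stand-in frames and scales the 5 s window down to five frames for clarity.

```python
# Sketch of AMP capture: keep the last N frames in a ring buffer;
# on a detected collision, emit N frames before and N frames after.
from collections import deque

FRAMES_PER_SIDE = 5  # stands in for 5 seconds' worth of frames

def capture_amp(frame_stream, event_index):
    """Return the frames spanning the event: N before, the event, N after."""
    before = deque(maxlen=FRAMES_PER_SIDE)  # ring buffer of recent frames
    amp, after_needed = None, 0
    for i, frame in enumerate(frame_stream):
        if amp is not None:
            amp.append(frame)
            after_needed -= 1
            if after_needed == 0:
                return amp
        elif i == event_index:  # collision detected on this frame
            amp = list(before) + [frame]
            after_needed = FRAMES_PER_SIDE
        else:
            before.append(frame)
    return amp

print(capture_amp(range(100), 50))  # frames 45..55
```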
Many different types of ADAS have been developed, especially for distance detection between vehicles, as shown in [5-7]. The distance between vehicles is desired because a vehicle can then keep track of its location with respect to its surrounding environment and avoid collisions. It is especially desired for adaptive cruise control, because the vehicle can track how far it is from the vehicle in front of it and match speed once it is at a safe speed and distance. There are many different methods of realizing an ADAS, as shown in [7-12]. This thesis focuses on obtaining an ADAS through monocular vision (MV) and machine training (MT). Attaining an ADAS was found to involve multiple steps: the choice of MV, image augmentation techniques, NN architecture and training, construction of the license plate localization system, and vehicle distance detection; these are covered below.
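A common monocular distance cue that fits the license-plate-based pipeline above is the pinhole model: if the physical width of a license plate is known, its pixel width in the image gives the range to the vehicle ahead. The focal length and plate width below are illustrative values, not figures from the thesis.

```python
# Sketch: monocular range from the known physical width of a plate.
PLATE_WIDTH_M = 0.52      # EU-style plate width in metres (assumed)
FOCAL_LENGTH_PX = 1000.0  # camera focal length in pixels (assumed)

def distance_from_plate(pixel_width):
    """Pinhole model: Z = f * W / w (f in px, W in m, w in px)."""
    return FOCAL_LENGTH_PX * PLATE_WIDTH_M / pixel_width

print(distance_from_plate(52.0))   # 10.0 m
print(distance_from_plate(104.0))  # 5.0 m: plate twice as wide, half the range
```

Note the inverse relation: as the lead vehicle approaches, the plate grows in the image and the estimated distance shrinks, which is exactly the quantity adaptive cruise control needs.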
To calculate the fitness of the driver, we first enter several fields used to assess the accuracy of the driver's fitness: age, driving experience in kilometres, health condition (whether the driver has a minor problem, a major problem, or is perfectly fine), and vision (whether the driver has perfect vision or some vision problem). After all the fields in this module are entered, a driving score is calculated based on these fields, and the output is generated as one of three results: Completely Fit for Driving, Partially Fit for Driving, or Not Fit for Driving.
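One possible scoring scheme for this module is sketched below: each field contributes points and the total maps to one of the three verdicts. The weights and thresholds are assumptions chosen for illustration, not values from the text.

```python
# Sketch: driver-fitness score from age, experience, health and vision.

def driving_score(age, experience_km, health, vision):
    """health and vision: 'fine', 'minor' or 'major' problem."""
    score = 0
    score += 30 if 25 <= age <= 60 else 15          # age band (assumed)
    score += min(experience_km // 10000, 30)        # capped experience points
    score += {"fine": 25, "minor": 10, "major": 0}[health]
    score += {"fine": 15, "minor": 5, "major": 0}[vision]
    return score

def verdict(score):
    if score >= 80:
        return "Completely Fit for Driving"
    if score >= 50:
        return "Partially Fit for Driving"
    return "Not Fit for Driving"

s = driving_score(age=35, experience_km=200000, health="fine", vision="fine")
print(s, verdict(s))  # 90 Completely Fit for Driving
```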
Discovering an available parking spot in the most efficient way, so as to avoid traffic congestion in a parking area, is becoming a necessity in car park management. An ADAS framework assists the driver and improves the driving infrastructure. Current car park management depends on either human personnel monitoring the available parking spaces or a sensor-based system; the installation and maintenance cost of a sensor-based system depends on the number of sensors used in a car park. An image segmentation algorithm is used for free-space detection. A vision-based system can detect and indicate the available parking spots using an in-vehicle camera.
A real-time computer vision system for vehicle tracking and traffic surveillance was proposed as a method for detecting vehicles in complex environments using feature-based techniques. Many earlier vision-based vehicle detection systems were developed using MATLAB, but MATLAB is slow at processing videos; another drawback noted is that it works only with recorded videos.
ABSTRACT: The objective of this project is to develop a low-cost vision-based driver assistance system on an FPGA that provides a solution for detecting and avoiding potholes in the path of a vehicle on the road. We propose a conceptual framework in which a centralized system detects potholes and assists the driver in avoiding them. The system also identifies potholes that must be repaired immediately. The project uses an FPGA model to deploy image processing algorithms efficiently so that the output can be achieved in real time. Pothole avoidance may be considered similar to other obstacle avoidance, except that potholes are depressions rather than extrusions from a surface. A vision approach was used since potholes differ visually from the background surface. The basic idea of the system is to detect a pothole at a distance from the moving vehicle, to alert the driver when a pothole is approaching so that the vehicle's speed can be reduced or another path taken, and to alert the local road development authorities for immediate repair. A detailed description is given of the image-processing-based system developed to process and analyze the dataset captured by the camera, which gives high efficiency and accuracy compared to conventional methods of pothole detection.
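Since potholes differ visually from the road surface, typically appearing as darker regions, a crude form of the detection idea can be sketched as thresholding a road patch and checking the dark-pixel fraction. The threshold values and the tiny synthetic patches below are assumptions for illustration; the FPGA pipeline itself is not modeled here.

```python
# Sketch: flag a road patch as containing a pothole when enough of
# its pixels fall below a darkness threshold.
DARK_LEVEL = 60         # gray levels below this count as "dark" (assumed)
POTHOLE_FRACTION = 0.2  # alert if this share of the patch is dark (assumed)

def pothole_alert(roi):
    """roi: grayscale road patch as a list of rows of 0-255 values."""
    pixels = [v for row in roi for v in row]
    dark = sum(1 for v in pixels if v < DARK_LEVEL)
    return dark / len(pixels) >= POTHOLE_FRACTION

smooth_road = [[120, 130, 125], [118, 122, 127]]
with_hole   = [[120, 30, 25], [118, 40, 127]]
print(pothole_alert(smooth_road), pothole_alert(with_hole))  # False True
```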
The process of falling asleep at the wheel can be characterized by a gradual decline in alertness from a normal state due to monotonous driving conditions or other environmental factors; this diminished alertness leads to a state of fuzzy consciousness followed by the onset of sleep. The critical issue a fatigue detection system must address is how to detect fatigue accurately and early, at its initial stage (Qiong et al., 2006). Techniques for detecting driver fatigue can be broadly divided into three major categories: sensor-based methods, computer-vision-based methods, and methods in which fatigue is measured verbally or by questionnaire.
Pre-processing of video frames: Using a GUI tool developed as part of our vehicle detection system, the user can select the region of interest on the captured video frame. The detection and tracking algorithms are only performed on this cropped image region to reduce the processing time of the system. The user is required to specify detection and speed zones using horizontal virtual reference lines. The detection zones are areas where the interest points are evaluated, vehicles are detected and vehicle counts are incremented. The speed zones are adjacent to the detection zones; there, the interest points are re-evaluated and vehicles are detected again. As a rule of thumb, the detection zone length should be less than the vehicle length as seen in the video feed, and the speed zone length should be just greater than the vehicle length as seen in the video feed. The user specifies the virtual vertical lane reference lines that segment the lanes on the video frame; these vertical lines are used to determine vehicle counts by lane. The user also specifies the direction of vehicle motion or traffic flow, the calibration reference line and the corresponding physical distance. This reference distance is used to evaluate the speed of the vehicle.
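The speed evaluation just described reduces to the traversal time between the calibrated reference lines, scaled by the known physical distance. The frame rate and reference distance below are illustrative values, not the system's calibration.

```python
# Sketch: vehicle speed from the frame indices at which it crosses
# the entry and exit reference lines of the calibrated zone.
FPS = 25.0            # video frame rate (assumed)
REF_DISTANCE_M = 8.0  # physical length of the calibration reference (assumed)

def vehicle_speed_kmh(entry_frame, exit_frame):
    """Speed in km/h from zone-crossing frame indices."""
    elapsed_s = (exit_frame - entry_frame) / FPS
    return REF_DISTANCE_M / elapsed_s * 3.6

print(vehicle_speed_kmh(100, 120))  # 36.0 km/h (8 m in 0.8 s)
```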
Canny edge detection is applied and the white pixels are counted; accordingly, the percentage matching and the time allocation are done. These steps are implemented in hardware, for which a four-way traffic intersection model is designed. The four-way intersection model consists of four arrays of LEDs, with each array having a red and a green light. The Python programming language is used for image processing, and an Arduino development board is used for controlling the LEDs. In , the image is captured using a camera and converted into a grayscale image; the grayscale image is converted into a threshold image; edges are detected using the Canny edge detector; and contours are drawn on the result in order to calculate the vehicle count. The vehicles are boxed to find the count, and the output screen in the command prompt displays the vehicle count. Density measurement is implemented using the OpenCV software for image processing by displaying the various conversions of the image on the screen and finally drawing a box around each vehicle in the given image. The number of vehicles and the vehicle density are counted using MATLAB. In , the vehicle density count is done using both video and images. The overall vehicle detection and counting system consists of the input frame, segmentation, detection, tracking, and counting. The vehicle detection by image processing used here consists of the input image, converting RGB to gray, converting to binary, edge detection, image enhancement, labeling the detected region, vehicle tracking, and vehicle counting. 3. PROPOSED SYSTEM
Intelligent vehicle detection and counting are becoming increasingly important in the field of highway management. However, due to the different sizes of vehicles, their detection remains a challenge that directly affects the accuracy of vehicle counts. To address this issue, this paper proposes a vision-based vehicle detection and counting system. A new high-definition highway vehicle dataset with a total of 57,290 annotated instances in 11,129 images is published in this study. Compared with the existing public datasets, the proposed dataset contains annotated tiny objects in the image, which provides a complete data foundation for vehicle detection based on deep learning. In the proposed vehicle detection and counting system, the highway road surface in the image is first extracted and divided into a remote area and a proximal area by a newly proposed segmentation method; this method is crucial for improving vehicle detection. Then, the two areas are fed into the YOLOv3 network to detect the type and location of each vehicle. Finally, the vehicle trajectories are obtained by the ORB algorithm, which can be used to judge the driving direction of a vehicle and to count the different vehicles. Several highway surveillance videos based on different scenes are used to verify the proposed methods. The experimental results verify that the proposed segmentation method provides higher detection accuracy, especially for small vehicle objects. Moreover, the novel strategy described in this article performs notably well in judging driving direction and counting vehicles. This paper has general practical significance for the management and control of highway scenes.
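In its simplest form, the direction judgment can be sketched as comparing the first and last points of a vehicle's centroid trajectory, standing in here for the ORB-tracked track. The axis convention (larger image y meaning closer to the camera) is an assumption for illustration.

```python
# Sketch: judge driving direction from a centroid trajectory.

def driving_direction(trajectory):
    """trajectory: list of (x, y) centroids in image coordinates."""
    (_, y0), (_, y1) = trajectory[0], trajectory[-1]
    if y1 > y0:
        return "toward camera"
    if y1 < y0:
        return "away from camera"
    return "unknown"

track = [(320, 40), (322, 90), (325, 150), (330, 230)]
print(driving_direction(track))  # toward camera
```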
Based on the results of the cascaded system, it can be seen that the average time is approximately equal to the sum of the processing times of the two individual systems; this is due to the alternate execution of each algorithm. For pothole detection, the accuracy decreased by 5.35%, while for speed hump detection the accuracy increased by 0.10%; however, the increase in accuracy was due to the longer route taken. Looking at the number of false negatives for speed hump detection on the cascaded system, the number of missed speed humps is almost half the number of speed humps to be detected; again, this is due to the alternate execution of each algorithm. Hence, it can be said that when the two systems are cascaded, the results are still reliable and accurate, but the number of potholes and speed humps missed increases. This behavior becomes problematic as the number of potholes and speed humps to be detected increases; thus, the systems can be cascaded as long as the potholes and speed humps are few.