
Multiple face detection based on machine learning

LIIAN, Department of Computer Science, Faculty of Sciences Dhar-Mahraz, Sidi Mohamed Ben Abdellah University, Fez, Morocco.

Hajar Filali

Jamal Riffi

Adnane Mohamed Mahraz

Hamid Tairi

Abstract—Facial detection has recently attracted increasing interest due to the multitude of applications that result from it. In this context, we have used methods based on machine learning, which allows a machine to evolve through a learning process and to perform tasks that are difficult or impossible to fulfil by more conventional algorithmic means. In this context, we have established a comparative study between four methods (Haar-AdaBoost, LBP-AdaBoost, GF-SVM, GF-NN). These techniques differ in the way they extract the data and in the learning algorithms they adopt. The first two methods, Haar-AdaBoost and LBP-AdaBoost, are based on the Boosting algorithm, which is used both for feature selection and for learning a strong classifier organised as a cascade, while the last two methods, GF-SVM and GF-NN, use the Gabor filter to extract the characteristics. From this study, we found that the detection time varies from one method to another: the LBP-AdaBoost and Haar-AdaBoost methods are the fastest. In terms of detection rate and false detection rate, the Haar-AdaBoost method remains the best of the four.

Keywords: Machine Learning, Haar-AdaBoost, LBP-AdaBoost, GF-SVM, GF-NN, Boosting, Cascade, Gabor.

I. INTRODUCTION

With rapidly increasing computing capacity and the availability of recent detection, investigation and representation equipment and technologies, computers are becoming more and more intelligent. Many research projects and commercial products have demonstrated the ability of a computer to interact with humans in a natural way, by looking at people through cameras, listening to them through microphones, etc. One of the fundamental techniques that enables such human-computer interaction (HCI) is face detection. Face detection determines the locations and sizes of the human faces present in arbitrary (digital) images. It detects facial features and ignores everything else, such as buildings, trees and bodies. Although this appears to be an insignificant task for human beings, it is an extremely difficult task for computers, and it has been one of the most studied research topics in recent decades. The difficulty associated with face detection can be attributed to the many variations in scale, location, orientation (rotation in the plane), pose (rotation out of the plane), facial expression, occlusion and lighting conditions. This technology is indispensable in many applications in image indexing and information retrieval, where it can be used to search images

containing people, to automatically associate a face with a name in a web page [1], or to identify the main people in a video by clustering [2]. It can also be used to determine a user's attention, for example when facing a screen in a public space; once the face is detected, the sex and age of the person can also be estimated in order to provide targeted advertising [3]. It can also be used to check whether a person is present in front of a television that is switched on and, if not, to put the unit in standby mode or reduce the luminosity to save energy [4]. In addition, face detection is the first step towards more advanced applications that require facial localization, such as facial recognition, facial expression recognition, the estimation of the age or sex of a person [5], face tracking, or the estimation of the direction of gaze and visual attention [6]. Many face detection methods have appeared in the last two decades. While the classical methods have been very successful, their major disadvantage is that they use only 2D facial photos, and such a representation is sensitive to changes in expression, illumination and pose. In addition, the computational and storage complexities, as well as the procedure for generating the examples, are high. The requirements of these systems may include the addition of a machine learning capability, enabling a machine to evolve through a learning process and to perform tasks that are difficult or impossible to complete by more conventional algorithmic means. These methods are essentially based on the notion of learning, which has been at the heart of artificial intelligence research for many years, and face detection can be understood as a two-class pattern recognition problem (face or non-face). In this paper, after this introduction, we present in the first section the detection methods based on the Haar, LBP and Gabor extraction techniques. In the second section we describe the machine learning approaches Boosting, SVM and Neural Networks. We then present the results of an experiment comparing the four methods (Haar-AdaBoost, LBP-AdaBoost, GF-SVM, GF-NN) according to the processing time of the test images, the face detection rate and the false detection rate. Finally, we conclude.


II. FEATURE EXTRACTION

A. The pseudo-Haar features

The Viola and Jones method [7] is one of the first methods able to detect objects in an image efficiently and in real time; it was originally invented to detect faces. It consists of scanning an image with an initial 24 px by 24 px detection window (in the original algorithm) and deciding whether a face is present. Once the image has been scanned entirely, the size of the window is increased and the scanning starts again, until the window reaches the size of the image. This is an appearance-based approach, which traverses the entire image while computing a number of features in overlapping rectangular areas. Its distinctive trait is the use of very simple but very numerous characteristics.

Fig. 1. Pseudo-Haar features with only two features.

There are other methods, but the Viola and Jones method is currently among the best performers. What sets it apart from the others is the following (a usage sketch is given after this list):

The use of integral images, which allow the characteristics to be computed more quickly.

The selection of characteristics by boosting.

The cascade combination of boosted classifiers, bringing a net gain of execution time.
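As an illustration of how such a boosted cascade of pseudo-Haar features is typically applied, the following minimal sketch uses OpenCV's pre-trained frontal-face Haar cascade; the image file name and the parameter values are assumptions made for the example, not the configuration used in this paper.

```python
# Minimal sketch: applying a pre-trained Haar cascade with OpenCV.
# The bundled frontal-face model and the test image are illustrative choices.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")            # hypothetical test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor enlarges the search window between passes (cf. the 24x24 start),
# minNeighbors discards isolated detections.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(24, 24))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"{len(faces)} face(s) detected")
```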

1) The integral image:

To compute these features quickly and efficiently on an image, the authors also propose a new representation, which they call the integral image. It has the same size as the original image and contains, at each of its points, the sum of the pixels located above and to the left of the current pixel. More formally, the integral image ii at the point (x, y) is defined from the image i by:

$ii(x, y) = \sum_{x' \le x,\; y' \le y} i(x', y')$  (1)
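The integral image of equation (1) can be obtained with two cumulative sums, after which the sum of the pixels of any rectangle requires only four look-ups. The following sketch is a standard construction given for illustration; the array sizes are arbitrary.

```python
import numpy as np

def integral_image(i):
    """ii(x, y) = sum of i over all pixels above and to the left (eq. 1)."""
    return i.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of the original image over the rectangle [x0..x1] x [y0..y1]."""
    total = ii[y1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

img = np.random.randint(0, 256, (24, 24))
ii = integral_image(img)
assert rect_sum(ii, 2, 3, 10, 12) == img[3:13, 2:11].sum()
```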

2) Advantages:

They are among the most popular algorithms for real-time face detection.
The main advantage of this approach is its unrivalled detection speed, while allowing a relatively high detection accuracy, comparable to that of slower algorithms.
High accuracy: Viola-Jones gives an accurate face detection.

3) Disadvantages:

Limited head pose: only near-frontal faces are detected well.
Does not reliably detect dark or poorly contrasted faces.

B. Local Binary Patterns (LBP)

Local Binary Patterns (LBP) is a texture descriptor that can also be used to represent faces, because a face image can be thought of as a composition of micro-texture-patterns. The procedure involves dividing a facial image into multiple regions where the LBP features are extracted and concatenated into a feature vector that will later be used as a facial descriptor.

1) The principle of local binary patterns:

The original LBP operator was introduced by Ojala et al. [8]. It works with the eight neighbors of a pixel in a 3 × 3 neighborhood, using the value of the central pixel as a threshold. If a neighboring pixel has a gray value higher than (or equal to) the central pixel, the value 1 is assigned to this pixel; otherwise it gets 0. The LBP code of the central pixel is then produced by concatenating the eight 1s and 0s, clockwise or counterclockwise, into a binary code.

Fig. 2. Labeling LBP.

Let $(x_c, y_c)$ be the position of the central pixel. The resulting decimal value is an 8-bit word that can be expressed as follows:

$LBP(x_c, y_c) = \sum_{n=0}^{7} s(I_n - I_c)\, 2^n$  (2)

where $I_c$ is the gray value of the central pixel $(x_c, y_c)$, $I_n$ are the gray values of the 8 neighboring pixels, and the function $s(k)$ is defined as follows:

$s(k) = \begin{cases} 1 & \text{if } k \ge 0 \\ 0 & \text{if } k < 0 \end{cases}$  (3)
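As an illustration of equations (2) and (3), the following sketch computes the 8-bit LBP code of a single pixel over its 3 × 3 neighbourhood; the clockwise neighbour ordering is one common convention, not necessarily the exact one used by the authors.

```python
import numpy as np

def lbp_code(img, x, y):
    """8-bit LBP code of the pixel at (y, x) over its 3x3 neighbourhood (eqs. 2-3)."""
    center = img[y, x]
    # 8 neighbours taken clockwise starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for n, (dy, dx) in enumerate(offsets):
        s = 1 if img[y + dy, x + dx] >= center else 0   # s(I_n - I_c)
        code += s * (2 ** n)
    return code

img = np.random.randint(0, 256, (5, 5))
print(lbp_code(img, 2, 2))   # value in [0, 255]
```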

2) LBP facial representation:

Each image of the face can be considered as a composition of micro-patterns that can be effectively detected by the LBP operator. Ahonen et al. [9] introduced an LBP-based face representation for facial recognition. To capture the shape information of the faces, they divide the face images into M small non-overlapping regions $R_0, R_1, ..., R_M$. The LBP histograms extracted from each subregion are then concatenated into a single spatially enhanced feature histogram defined as follows:

$H_{i,j} = \sum_{x,y} I\{f_l(x, y) = i\}\; I\{(x, y) \in R_j\}$  (4)

where $f_l(x, y)$ is the LBP code at pixel $(x, y)$ and $I\{\cdot\}$ equals 1 when its argument is true and 0 otherwise.
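A minimal sketch of this spatially enhanced histogram, assuming the LBP codes have already been computed for every pixel and using a hypothetical 7 × 7 grid of regions:

```python
import numpy as np

def lbp_face_descriptor(lbp_img, grid=(7, 7)):
    """Concatenate per-region 256-bin LBP histograms into one feature vector (eq. 4)."""
    h, w = lbp_img.shape          # lbp_img holds integer LBP codes in [0, 255]
    rows, cols = grid
    hists = []
    for r in range(rows):
        for c in range(cols):
            region = lbp_img[r * h // rows:(r + 1) * h // rows,
                             c * w // cols:(c + 1) * w // cols]
            hists.append(np.bincount(region.ravel(), minlength=256))
    return np.concatenate(hists)  # length = rows * cols * 256
```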

3) Advantages:

Effective for describing image texture features.
Used in texture analysis, image retrieval, face recognition and image segmentation.
Used for moving object detection by background subtraction.
The most important properties of LBP features are tolerance to monotonic lighting changes and simplicity of calculation.

4) Disadvantages:

The method is sensitive to small changes in the localization of the face.
The heavy use of local regions increases the error.
Not precise.

Used only for binary and gray-scale images.

C. Gabor filter

In image processing, the Gabor filter is named after Dennis Gabor. It is a linear filter used for edge detection. Gabor filters have frequency and orientation representations similar to those of the human visual system, and are considered particularly appropriate for texture representation and discrimination. Shiguang et al. [11] define the 2D Gabor filter in the space and spatial-frequency domain as follows:

$\Phi_{u,v}(z) = \frac{\|k_{u,v}\|^2}{\delta^2}\, e^{-\frac{\|k_{u,v}\|^2 \|z\|^2}{2\delta^2}} \left[ e^{i k_{u,v} z} - e^{-\frac{\delta^2}{2}} \right]$  (5)

In equation (5):

$k_{u,v} = k_v e^{i \phi_u}$  (6)

$k_v = \frac{k_{max}}{f^v}$  (7)

$\phi_u = \frac{\pi u}{8}$  (8)

Equation (7) gives the frequency, (8) gives the orientation, and $e^{i k_{u,v} z}$ in (6) is the oscillatory wave function, whose real part is a cosine function and whose imaginary part is a sine function.

In equation (7), v controls the scale of the Gabor filter, that is, its frequency, and u controls the orientation.

1) Extraction of characteristics:

Shiguang et al. [11] use the Gabor filter with five frequencies and eight orientations, which can be written as follows:

$v \in \{0, 1, 2, 3, 4\}$ and $u \in \{0, 1, 2, 3, 4, 5, 6, 7\}$

In addition, the other parameters required by the Gabor filter are: $\delta = 2\pi$, $k_{max} = \frac{\pi}{2}$, $f = \sqrt{2}$.

It is found that Gabor filters have strong spatial locality and orientation selectivity. Fig. 3 illustrates a face and its 40 Gabor-filtered images.
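As an illustration, the following sketch builds a 5-scale, 8-orientation bank of 40 Gabor kernels with OpenCV and stacks the filter responses into a feature vector. The mapping from $k_v$ to OpenCV's wavelength parameter, the kernel size and the image file name are assumptions made for the example, not the authors' exact kernel of equation (5).

```python
import cv2
import numpy as np

def gabor_bank(ksize=31, sigma=2 * np.pi, kmax=np.pi / 2, f=np.sqrt(2)):
    """5 scales x 8 orientations = 40 Gabor kernels (cf. Fig. 3)."""
    filters = []
    for v in range(5):                        # scales: k_v = kmax / f**v  (eq. 7)
        lambd = 2 * np.pi / (kmax / f ** v)   # wavelength derived from the radial frequency
        for u in range(8):                    # orientations: phi_u = pi * u / 8  (eq. 8)
            theta = np.pi * u / 8
            # positional args: ksize, sigma, theta, lambd, gamma, psi
            filters.append(cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                              lambd, 1.0, 0))
    return filters

face = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical face crop
responses = [cv2.filter2D(face, cv2.CV_32F, k) for k in gabor_bank()]
features = np.concatenate([r.ravel() for r in responses])
```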

Fig. 3. Example of a face image and its forty images filtered by Gabor.

III. CLASSIFICATION

A. AdaBoost

1) Learning algorithm based on Adaboost:

We consider a set of n images $(x_1, ..., x_n)$ and their associated labels $(y_1, ..., y_n)$, such that $y_i = 0$ if $x_i$ is a negative example and $y_i = 1$ if $x_i$ is an example of the object to detect. The boosting algorithm consists of a number T of iterations; at each iteration t and for each characteristic j, a weak classifier $h_j$ is constructed. Ideally, the goal is to obtain a classifier h that exactly predicts the labels of all the samples, that is, $y_i = h(x_i)\ \forall i \in \{1, ..., n\}$. In practice the classifier is not perfect, and the error it generates is given by:

$\epsilon_j = \sum_{i=1}^{n} \omega_i |h_j(x_i) - y_i|$  (9)

where the $\omega_i$ are the weights associated with each example, updated at each iteration according to the error obtained at the previous iteration. We then select the classifier $h_t$ with the lowest error at iteration t: $\epsilon_t = \min_j \epsilon_j$. The final strong classifier h(x) is constructed by thresholding the weighted sum of the selected weak classifiers:

$h(x) = \begin{cases} 1 & \text{if } \sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t \\ 0 & \text{otherwise} \end{cases}$  (10)

The coefficients $\alpha_t$ are calculated from the errors $\epsilon_t$:

$\alpha_t = \frac{1}{2} \ln \frac{1 - \epsilon_t}{\epsilon_t}$  (11)
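The following toy sketch follows equations (9)-(11) with one-dimensional threshold stumps as weak classifiers; it illustrates the boosting loop itself, not the attentional cascade or the Haar feature selection of Viola and Jones.

```python
import numpy as np

def train_adaboost(X, y, T=50):
    """Toy boosting loop following eqs. (9)-(11); weak learners are threshold stumps."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(T):
        best = None
        for j in range(d):                     # pick the stump with the lowest weighted error
            for thr in np.unique(X[:, j]):
                for polarity in (1, -1):
                    pred = (polarity * (X[:, j] - thr) >= 0).astype(int)
                    err = np.sum(w * np.abs(pred - y))            # eq. (9)
                    if best is None or err < best[0]:
                        best = (err, j, thr, polarity)
        err, j, thr, polarity = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)                      # eq. (11)
        pred = (polarity * (X[:, j] - thr) >= 0).astype(int)
        w *= np.exp(-alpha * np.where(pred == y, 1, -1))           # increase weight of mistakes
        w /= w.sum()
        stumps.append((j, thr, polarity))
        alphas.append(alpha)
    return stumps, np.array(alphas)

def predict(stumps, alphas, X):
    """Strong classifier of eq. (10): weighted vote thresholded at half the alpha mass."""
    votes = sum(a * (p * (X[:, j] - t) >= 0).astype(int)
                for (j, t, p), a in zip(stumps, alphas))
    return (votes >= 0.5 * alphas.sum()).astype(int)
```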

2) Cascading classifiers:

The selected boosted classifiers are combined in a cascade: each detection window is evaluated by the classifiers one after the other, and it is rejected as soon as one of them gives a negative answer, so that only the most promising windows reach the later stages (see Fig. 4).

Fig. 4. Illustration of the architecture of the cascade: the windows are processed sequentially by the classifiers, and rejected immediately if the answer is negative (F).
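A minimal sketch of the cascade evaluation shown in Fig. 4, assuming each stage is available as a hypothetical boolean classifier:

```python
def cascade_classify(window, stages):
    """Evaluate a window through a list of stage classifiers (cf. Fig. 4).

    Each stage is a callable returning True (pass) or False (reject);
    the window is rejected as soon as one stage answers negatively.
    """
    for stage in stages:
        if not stage(window):
            return False        # rejected immediately (the F branch in Fig. 4)
    return True                 # accepted by every stage: face candidate
```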

B. Support Vector Machine (SVM)

Support vector machines, also known as wide-margin separators, are supervised learning techniques for solving classification problems. An SVM is a two-class classification method that attempts to separate the positive examples from the negative examples in the set of examples. The method looks for the hyperplane that separates the positive examples from the negative ones while guaranteeing that the margin between the closest positive and negative examples is maximal. The interest of this method lies in the selection of the support vectors, the discriminant vectors by which the hyperplane is determined. The examples used during the search for the hyperplane are then no longer needed, and only the support vectors are used to classify a new case.

1) Linear classifier:

A classifier is called linear when it is possible to express its decision function by a linear function:

$h(x) = \langle \omega, x \rangle + b = \sum_{i=1}^{n} \omega_i x_i + b$  (12)

To decide which category an estimated example x belongs to, it suffices to take the sign of the decision function: $y = sign(h(x))$. The function sign() is called the classifier.

2) Maximum margin hyperplane:

The margin is the distance between the separation boundary and the closest samples. In an SVM, the separation boundary is chosen as the one that maximizes the margin. The geometric margin represents the Euclidean distance, taken perpendicularly, between the hyperplane and the example $x_i$. Taking any point $x_p$ on the hyperplane, the geometric margin can be expressed by:

$\frac{\omega}{\|\omega\|} (x_i - x_p)$  (13)

The maximum margin hyperplane is the model used in support vector machines. The parameters $(w^*, b^*)$ of the hyperplane that maximizes the margin are estimated by solving the following optimization problem:

$(w^*, b^*) = \arg\max_{(w,b)} \{ \min_i (y_i (\omega x_i + b)),\ \|\omega\| = 1 \}$  (14)

Saying that the two classes of the learning sample S are linearly separable is equivalent to saying that there exist parameters $(w^*, b^*)$ such that, for all $i = 1, 2, ..., n$:

$\omega^* x_i + b^* > 0 \text{ if } y_i = 1$  (15)

$\omega^* x_i + b^* < 0 \text{ if } y_i = -1$  (16)

which is equivalent to:

$y_i (\omega^* x_i + b^*) > 0,\ \forall i = 1, 2, ..., n$  (17)

This definition says that there must be a hyperplane leaving all the positive data on one side and all the negative data on the other. Figure 5 illustrates this situation. With our definition of the hyperplane, different equations may correspond to the same geometrical plane:

$a(\langle \omega, x \rangle + b) = 0$  (18)

It is therefore possible to rescale $(w^*, b^*)$ so that the two parallel planes have the respective equations:

$\omega^* x_i + b^* = 1$  (19)

$\omega^* x_i + b^* = -1$  (20)

These two hyperplanes are called canonical hyperplanes. The margin $\gamma$ between these two planes is thus equal to:

$\gamma = \frac{2}{\|\omega^*\|}$  (21)

Fig. 5. Canonical hyperplanes and maximum margin.

3) Quadratic minimization under constraints:

We can now formulate a mathematical optimization problem whose solution provides the optimal hyperplane that maximizes the margin:

Minimize $\frac{1}{2} \|\omega\|^2$  (22)

subject to $y_i (\langle w, x_i \rangle + b) \ge 1$  (23)

In this formulation, the variables to be determined are the components $\omega_i$ and b. The vector $\omega$ has as many components as the dimension of the input space. Generally, in this type of problem, the dual form is solved. We must form what is called the Lagrangian, bringing the constraints into the objective function and weighting each of them by a dual variable:

$L(\omega, b, \alpha) = \frac{1}{2} \|\omega\|^2 - \sum_{i=1}^{n} \alpha_i \left[ y_i (\langle \omega, x_i \rangle + b) - 1 \right]$  (24)

The Lagrangian must be minimized with respect to the primal variables $\omega$ and b, and maximized with respect to the dual variables $\alpha_i$.

The saddle point must therefore satisfy the necessary stationarity conditions, which correspond to the Karush-Kuhn-Tucker (KKT) conditions; we find:

$\frac{\partial L(\omega, b, \alpha)}{\partial \omega} = 0$  (25)

$\frac{\partial L(\omega, b, \alpha)}{\partial b} = 0$  (26)

This allows us to obtain:

$\omega = \sum_{i=1}^{n} \alpha_i y_i x_i$  (27)

$\sum_{i=1}^{n} \alpha_i y_i = 0$  (28)

Note that with this formulation we can compute $\omega$ by setting only n parameters. The idea is to formulate a dual problem in which $\omega$ is replaced by this new expression. In this way, the number of parameters to be determined is relative to the number of examples in the training sample and no longer to the dimension of the input space. To do so, we substitute (27) and (28) into the Lagrangian and obtain the following equivalent dual problem:

Maximize $W(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} y_i y_j \alpha_i \alpha_j \langle x_i, x_j \rangle$  (29)

subject to

$\sum_{i=1}^{n} \alpha_i y_i = 0$  (30)

$\alpha_i \ge 0$  (31)

This last problem can be solved using standard quadratic programming methods. Once the optimal solution $\alpha^* = (\alpha_1^*, \alpha_2^*, ..., \alpha_n^*)$ of problem (29) is obtained, the weight vector of the sought maximum margin hyperplane is written:

$\omega^* = \sum_{i=1}^{n} \alpha_i^* y_i x_i$  (32)

Since the parameter b is not included in the dual problem, its optimal value $b^*$ can be derived from the primal constraints, i.e.:

$b^* = -\frac{\max_{y_i = -1} (\langle w^*, x_i \rangle) + \min_{y_i = 1} (\langle w^*, x_i \rangle)}{2}$  (33)

After computing the parameters $\alpha^*$ and $b^*$, the classification rule for a new observation, based on the maximum margin hyperplane, is given by:

$h(x) = sign\left( \sum_{i=1}^{n} y_i \alpha_i^* \langle x_i, x \rangle + b^* \right)$  (34)

Note that a large number of the terms of this sum are zero. Indeed, only the $\alpha_i^*$ corresponding to the examples lying on the canonical hyperplanes (on the constraint) are non-zero. These examples are called support vectors (SV). They can be seen as representatives of their categories: if the learning sample consisted only of the support vectors, the optimal hyperplane found would be identical.
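In practice, the quadratic program (29)-(31) and the decision rule (34) are handled by standard libraries. The sketch below uses scikit-learn on hypothetical random feature vectors; a large C is used to approximate the hard margin described here.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: rows are feature vectors (e.g. Gabor responses),
# labels are +1 (face) / -1 (non-face).
X = np.random.randn(200, 40)
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

clf = SVC(kernel="linear", C=1e3)     # large C approximates the hard margin
clf.fit(X, y)

# Only the support vectors (the alpha_i* > 0 of eq. (32)) define the hyperplane.
print("support vectors:", clf.support_vectors_.shape[0])

# Decision rule of eq. (34): sign of the decision function.
x_new = np.random.randn(1, 40)
print("class:", np.sign(clf.decision_function(x_new)))
```

Replacing kernel="linear" by, for example, kernel="rbf" applies the kernel substitution of equation (35) discussed in the next subsection.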

4) Non-linear SVM:

The previous paragraph describes the principle of SVM in the case where the data are linearly separable. However, in most real problems, this is not always the case and it is therefore necessary to work around this problem.

For this purpose, the idea is to project the learning points $x_i$ into a space T of dimension q, higher than n, through a nonlinear function $\Phi$ chosen a priori, and to apply the same margin-optimization method in the space T. The space T thus obtained is called the feature space, or transformed space. All we have to do is solve problem (29) in the space T, replacing $\langle x_i, x_j \rangle$ by $\langle \Phi(x_i), \Phi(x_j) \rangle$. The separating hyperplane obtained in the space T is called the generalized optimal hyperplane. The scalar product $\langle \Phi(x_i), \Phi(x_j) \rangle$ can be easily calculated using a symmetric function K, called the kernel, defined by:

$k(x_i, x_j) = \langle \Phi(x_i), \Phi(x_j) \rangle$  (35)

5) Advantages:

SVMs have solid mathematical foundations.

The test examples are compared only with the support vectors and not with all the training examples.

Quick decision: the classification of a new example consists of looking at the sign of the decision function f(x).

6) Disadvantages:

Binary classification, hence the need to use a one-versus-one approach for multi-class problems.

A large quantity of input examples implies a large matrix calculation.

High computation time when tuning the parameters of the kernel function.

C. Artificial Neural Network

1) The formal model:

Formal neurons (fig. 6) are endowed with characteristics inspired by those of biological neurons:

Fig. 6. Schema of a formal neuron.

The axon: like its biological counterpart, a formal neuron can only transmit a single value, which corresponds to its activation state.

The dendrites: they allow biological neurons to receive different signals from the outside. In the same way, a formal neuron can receive signals $e_i$ from several neurons. These signals are combined into a single input signal E:

$E = \sum_i \omega_i e_i$  (36)

where the $\omega_i$ are the weights assigned to the external signals.
Synapses: the numbers $\omega_i$ weight the signals emitted by the different neurons located upstream. We find here the analogue of the synapses, which, let us remember, can be inhibitory ($\omega_i < 0$) or excitatory ($\omega_i > 0$).

Activation threshold: an activation function A manages the state of the formal neuron. Usually, if $A \approx 1$ the neuron is excited, and it is at rest if $A \approx -1$ or $A \approx 0$, depending on the case. The shape of this function is generally such that there is an activation threshold for the neuron: the neuron is excited only if it receives an input signal E higher than this threshold s.
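A minimal sketch of such a formal neuron, combining its inputs as in equation (36) and comparing the result with a threshold s; the numerical values are arbitrary examples:

```python
import numpy as np

def formal_neuron(e, w, s=0.0):
    """Formal neuron: combined input E = sum(w_i * e_i) (eq. 36), threshold s."""
    E = np.dot(w, e)
    return 1 if E >= s else 0      # excited (1) above the threshold, at rest (0) otherwise

# Hypothetical example: two excitatory inputs and one inhibitory input.
print(formal_neuron(e=np.array([1.0, 0.5, 1.0]),
                    w=np.array([0.8, 0.4, -0.6]),   # negative weight = inhibitory synapse
                    s=0.5))
```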

2) Learning:

Learning, for a formal neural network, consists of determining the optimal weights $\omega_i$ according to the problem to be solved. There are two types of neural networks depending on the type of learning to which they are subjected:

Networks with supervised learning, which are generally intended to reproduce a process of which only a few variables and the corresponding results are known;
Networks with unsupervised learning, used for example in classification when the classes to which the data must belong are not known a priori.

The data used for learning are therefore different in the two cases: they consist of a series of pairs (input, corresponding desired output) for supervised learning, while for unsupervised learning the data are inputs only.

3) Some famous networks:

a) The perceptron:

This is one of the first neural networks. It is linear and single-layer, and it is inspired by the visual system. The first layer represents the retina, the neurons of the next layer are the association cells, and the final layer contains the decision cells. The outputs of the neurons can only take two states (-1 and 1, or 0 and 1). The weight modification rule used is the Widrow-Hoff rule, $w \leftarrow w + k(d - s)$, where s is the obtained output, d the desired output, and k a positive constant.
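The following toy sketch applies this update to a single-layer perceptron on a hypothetical logical-OR problem; as is usual for the Widrow-Hoff/perceptron rule, the correction k(d − s) is applied along the input vector, which is an assumption of this sketch.

```python
import numpy as np

def train_perceptron(X, d, k=0.1, epochs=20):
    """Single-layer perceptron with the Widrow-Hoff style update w <- w + k(d - s)x."""
    w = np.zeros(X.shape[1] + 1)                 # last component acts as the bias
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    for _ in range(epochs):
        for x, target in zip(Xb, d):
            s = 1 if np.dot(w, x) >= 0 else 0    # output of the decision cell
            w += k * (target - s) * x            # weights change only on mistakes
    return w

# Hypothetical linearly separable toy problem (logical OR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
d = np.array([0, 1, 1, 1])
print(train_perceptron(X, d))
```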

b) The multilayer Perceptron MLP:

They are an improvement of the perceptron comprising one or more intermediate layers, called hidden layers. Each neuron is connected only to the neurons of the immediately preceding and following layers, but to all the neurons of those layers. MLPs use a learning algorithm to modify their weights; many exist, but the most popular is gradient backpropagation, which is a generalization of the Widrow-Hoff rule. It is still a matter of minimizing the quadratic error, propagating the modification of the weights from the output layer back to the input layer, so the algorithm goes through two phases (a training sketch is given after the list):

Inputs are propagated from layer to layer to the output layer.

If the output of the MLP is different from the desired output, the error is propagated from the output layer back to the input layer, and the weights are modified during this propagation.
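In practice these two phases are implemented by standard libraries. A minimal sketch with scikit-learn's MLPClassifier on hypothetical feature vectors (e.g. Gabor features, as in GF-NN) might look as follows; the layer size and learning rate are arbitrary choices for the example:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical face / non-face feature vectors with synthetic labels.
X = np.random.randn(300, 40)
y = (X[:, 0] - X[:, 1] > 0).astype(int)

# One hidden layer; with solver="sgd" each iteration performs the forward pass
# and then backpropagates the error from the output layer to the input layer.
mlp = MLPClassifier(hidden_layer_sizes=(32,), activation="logistic",
                    solver="sgd", learning_rate_init=0.01, max_iter=500)
mlp.fit(X, y)
print("training accuracy:", mlp.score(X, y))
```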

4) Advantages:

Very precise classifier (if well configured).
Automatic learning of the weights.
Ability to exploit parallelism (the elements of each layer can work in parallel).
Fault tolerance (if a neuron no longer works, the network is not disturbed).

5) Disadvantages:

Determining the network architecture is complex.
Parameters are difficult to interpret (black box).
Difficult parametrization, especially for the number of neurons in the hidden layer.

IV. RESULTS AND EXPERIMENTATION

To test the performance of the different methods studied, we used test images whose characteristics are detailed in the table below:

TABLE I. CHARACTERISTICS OF THE TEST IMAGES.

Resolution (px)   Size (px)   Number of faces
150 × 65          9750        7
181 × 180         32580       12
181 × 181         32761       12
130 × 96          12480       15
245 × 245         60025       25
405 × 198         80190       32
170 × 170         28900       36
350 × 217         75950       40
224 × 240         53760       43
190 × 190         36100       49
600 × 254         152400      56
1012 × 369        373428      95


Fig. 7. Test image (56 faces, 600 × 254 px).

As a first step, we present the face detection results on the image given in the example above.

TABLE II. FACE DETECTION RESULTS ON THE IMAGE GIVEN IN THE PREVIOUS EXAMPLE (rows: Haar-AdaBoost, LBP-AdaBoost, GF-SVM, GF-NN).

To generalize the study of the chosen methods, we present in the following subsections the results obtained in the experiment carried out on the 12 test images. To compare the four methods, we rely on the following metrics: the processing time of the test images; the face detection rate; and the false detection rate.
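As an illustration of how these three metrics can be obtained for a single test image, the following sketch times a detector and derives the two rates. The detect and count_hits helpers are hypothetical, and the false detection rate is computed here as the fraction of returned detections that are not faces, which is an assumption about the definition used in the paper.

```python
import time

def evaluate_detector(detect, image, n_true_faces, count_hits):
    """Time one detector on one image and derive the detection and false detection rates.

    `detect` returns candidate boxes; `count_hits` is a hypothetical helper that
    counts how many candidates correspond to ground-truth faces.
    """
    start = time.perf_counter()
    boxes = detect(image)
    elapsed = time.perf_counter() - start          # processing time

    hits = count_hits(boxes)                       # correctly detected faces
    detection_rate = hits / n_true_faces
    false_detection_rate = (len(boxes) - hits) / max(len(boxes), 1)
    return elapsed, detection_rate, false_detection_rate
```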

1) Processing time:

This figure illustrates the variation of the detection time for the four methods over the 12 test images presented previously. It shows that the LBP-AdaBoost method is the fastest in terms of processing time, followed by the Haar-AdaBoost method, which also gave good results. For the GF-SVM and GF-NN methods, the detection time becomes significant when the size of the image is large.

Fig. 8. Processing time

2) Detection rate:

The figure below illustrates the detection rate of the four methods on the 12 test images presented previously. Note that the Haar-AdaBoost method gives a more accurate face detection than the others, unlike the LBP-AdaBoost method, which has a low detection rate on the test images; the GF-SVM and GF-NN methods give results similar to the Haar-AdaBoost method. When the size of the image is small (less than 10 kilopixels), a high detection accuracy is observed, and a rate greater than 60% is obtained for large images.

Fig. 9. Detection rate.

3) False detection rate:

The figure below illustrates the false detection rate. It is noticed that false detections occur in the majority of the test images, with a rate varying between 0% and 58%.

Fig. 10. False detection rate.

V. CONCLUSION

Face detection is currently a very active area of research. Recent years have shown great advances in algorithms dealing with complex environments. Some of the best algorithms are still too expensive computationally to be applicable in real time, but this is likely to change with upcoming improvements in hardware. In this paper, we presented an extensive study of face detection methods; all methods have their own advantages and disadvantages. For example, the Haar characteristics used by Viola and Jones are very simple and effective for frontal face detection, but they are less suitable for arbitrary face poses. The most direct direction for future work is to further improve the learning algorithms and features. It is interesting to see face detection techniques increasingly used in real applications. For example, most digital cameras today have built-in face detectors, which help the camera to better manage autofocus and auto-exposure. Face detection is also an important technique for human-machine interfaces, allowing a more natural interaction between a human and a computer.

REFERENCES

[1] Zhongfei Zhang, R. K. Srihari, A. Rao. Face detection and its applications in intelligent and focused image retrieval. IEEE International Conference on Tools with Artificial Intelligence, 1999.

[2] Ji Tao, Yap-Peng Tan. Face clustering in videos using constraint propagation. IEEE International Symposium on Circuits and Systems, pp. 3246-3249, 2008.

[3] Visual Focus of Attention demo and application.

[4] Ryo Ariizumi, Shigeo Kaneda, Hirohide Haga. Energy saving of TV by face detection. International Conference on PErvasive Technologies Related to Assistive Environments, 2008.

[5] Cha Zhang, Zhengyou Zhang. A Survey of Recent Advances in Face Detection. Microsoft Research, 2010.

[6] Kevin Smith, Sileye O. Ba, Daniel Gatica-Perez, Jean-Marc Odobez. Tracking the multi-person wandering visual focus of attention. International Conference on Multimodal Interfaces, 2006.

[7] Paul Viola, Michael Jones. Robust real-time object detection. Second International Workshop on Statistical and Computational Theories of Vision, Vancouver, Canada, July 13, 2001.

[8] T. Ojala et al. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. PAMI, 2002.

[9] T. Ahonen, A. Hadid, M. Pietikäinen. Face recognition with local binary patterns. Proc. European Conference on Computer Vision (ECCV), pp. 469-481, 2004.

[10] P. Viola, M. Jones. Rapid object detection using a boosted cascade of simple features. Proc. of CVPR, pp. 511-518, 2001.
