The Devolution Act and its Implementation Guidelines place great emphasis on the monitoring and evaluation of county-funded education projects. In counties, responsibility for monitoring rests with the various stakeholders responsible for implementing each project. For monitoring and evaluation to be effective, one must ask the right questions, investigate the real issues and generate relevant information that enables those monitoring and evaluating the projects to make an accurate assessment. Project managers always want public projects to perform well: finishing on time, within budget, meeting end-product specifications, meeting customer needs and requirements, and meeting management objectives. Despite this quest for success, many county-funded education projects in Makueni County have repeatedly experienced time overruns, budget overruns, unmet end-product specifications, unmet customer needs and requirements, and unmet management objectives. Monitoring and evaluation (M&E) practices are vital for tracking and measuring results and for illuminating the causes of the challenges faced in managing county-funded education projects. This project sought to investigate the effect of M&E practices on the performance of county-funded education projects in Makueni County. The objectives of the study were to establish the extent to which training of M&E staff, stakeholder involvement, M&E planning and the use of baseline surveys influenced the performance of county-funded education projects in Makueni County. The study adopted a descriptive survey design and targeted 31 county-funded education projects in Makueni County. A stratified random sampling technique was used
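The stratified random sampling mentioned above can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' sampling frame: the grouping of the 31 projects by project type, the type labels, and the 50% sampling fraction are all hypothetical assumptions.

```python
import random

def stratified_sample(population, stratum_key, fraction, seed=0):
    """Draw a simple random sample of `fraction` from each stratum,
    keeping at least one unit per stratum."""
    rng = random.Random(seed)
    strata = {}
    for item in population:
        strata.setdefault(stratum_key(item), []).append(item)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical illustration: 31 projects grouped by (assumed) project type.
projects = [{"id": i, "type": t}
            for i, t in enumerate(["classroom"] * 16 +
                                  ["laboratory"] * 9 +
                                  ["dormitory"] * 6)]
chosen = stratified_sample(projects, lambda p: p["type"], fraction=0.5)
```

Sampling within each stratum ensures every project type appears in the sample in proportion to its share of the population, which is the defining property of the technique.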
ABSTRACT The article focuses on how the present vocational education and training (VET) system in Australia might be modified to better accommodate possible VET futures change. It begins with the premise that VET’s role is to contribute to skills acquisition through formal education and training. The authors propose a simple VET futures role and purpose statement and outline a possible futures public policy environment in which its actualisation might need to be achieved. They continue, first by developing a policy intervention framework and a monitoring and evaluation framework germane to that futures purpose and policy mix, and second, by employing those frameworks to explain how a futures VET system might function. They discuss the present VET system in the context of the constructed futures VET system and draw conclusions from comparisons made. They find (a) that skills policy should be redefined to accommodate broader economic and social policy contexts in general, and sustainable industry policy in particular; and (b) that a more sophisticated policy mix, consisting of unified and complementary supply-side and demand-side interventions, should replace the VET sector’s reliance on simplistic supply-side policy responses alone. They outline an incremental approach for transforming the present VET system into the envisioned futures VET system and check and balance their findings through international comparisons.
The key shortcomings of modern control systems are: teachers are subjective when evaluating work; teachers check overly large sections of the material; control measures are only sporadic; didactic and organizational techniques for using computer tools to implement control are insufficiently developed; and the characteristics of those being assessed cannot currently be taken fully into account. These problems in the system of control and appraisal activities can be resolved on the basis of the following principles of the competence approach: the principle of open and accessible assessment criteria; the principle of objectivity (evaluation criteria are known in advance); the principle of purposeful control of the implementation of the goal; the principle of regular checks and the evolution of control methods; the principle of reverse-engineering educational programs (from the stated educational goals); and the principle of student-centeredness (an assessment of what the student can do now, not an estimate of the amount of learning material covered). The quality management system for students at Minin University is based on the Regulation on the system of internal independent assessment of the quality of education at Minin University. The quality of the development of educational programs is assessed through ongoing monitoring of progress and the intermediate and state final certification of students. The quality of students' training is evaluated against two criteria: the degree to which students have mastered educational courses and academic disciplines, and the degree to which students' competencies have been formed. In addition to the mandatory criteria for assessing the performance
institutions that contribute to the concepts, national curricula and professional standards; the external evaluation of VET schools through professional and administrative inspections by the inspection department of MEST; the Quality Assurance Center enabling professional LSW and mobile centers through the professional division of LSW; internal evaluation of schools through the recently introduced practice of self-assessment; tests and external exams, especially the secondary school graduation exam and standardized tests after 9th grade; the role of the quality assurance Standards and Assessment Office in MPMS; approval of VET qualifications and accreditation of institutions that develop, evaluate and provide training; the development of the National Qualifications Framework (NQF); the collection of statistical data from the Education Management Information System (EMIS) units; the role of municipalities, under the law, in supervising and inspecting the education process according to the guidelines established by MEST, and other areas that affect the quality of VET providers; and the inclusion of the participating parties through QNA's.
The above-mentioned scenarios prompted the Commission on Higher Education to adopt the recent thrust by implementing OBME in all MHEIs, in conformance with the mandates and nature of STCW '78 as amended in 2010. This has turned the academic discipline toward the internationalization of standards under this convention as a mechanism for ensuring the uniform implementation of maritime education and training. With these raised standards, maritime education programs are now considered special courses. The ratification of the STCW in the Philippines means that seafaring is governed by global, not merely national or regional, standards. The Convention became part of the law of the land, and MHEIs are now compelled to meet the certification system it establishes. The critical findings of the European Maritime Safety Agency (EMSA) threatened the country's position on the International Maritime Organization (IMO) White List, and Naval State University was among the institutions cited for shortcomings in meeting the global standards for maritime education. Tracing its roots, on 10 June 2013 Dr. Patricia B. Licuanan, Chairperson of the Commission on Higher Education, issued a "Closure Order" to the NSU College of Maritime Education for its operation of the BS in Marine Transportation and BS in Marine Engineering programs. NSU-CME has since undergone a series of contingency measures to save these offerings; it underwent CHED-MARINA OBME compliance, which continues to the present, and is still working on the procurement of the prescribed simulator and other required facilities and equipment. The university has risen to meet all the adverse findings, observations and non-conformances identified during the series of stringent audits and monitoring by MARINA and CHED.
Moreover, the university has established an Enhanced Support Level Program for deck and engine, aligned its curriculum with outcomes-based education, achieved more than 30% shipboard training over the last three years, increased the number of licensure passers (including topnotchers), and procured state-of-the-art facilities and equipment.
Mapping a conceptual framework for GHE requires critical reflection on the definitional, translational and practical aspects of global health, both in general and in the field of education. The definitional problems surrounding the descriptor global health are discussed in depth elsewhere, and it has been shown that the object of global health depends mainly on how the term 'global' is conceptualised. The diversity of what is understood to be 'global' obviously entails evaluation challenges; however, it is crucial that an analytical framework minimises redundancy and provides clarity about the object of the assessment. No such framework has existed to date, owing to the absence of a commonly used or even agreed definition [29,30].
of education and the age of the farmers and the number of family members are among the factors affecting efficiency. Covaci (2006) studied the growth and productivity of wheat in Slovakia. Applying the Malmquist method, he concluded that the quality of outputs is closely related to the quality of inputs, that productivity grew over the period, and that climate conditions were the most influential factor in wheat production. Coelli (2006) studied total factor productivity growth in the agriculture of Belgium using the Malmquist index, covering 1,728 farms (more than 100 farms per year); the inputs were land, capital, labor and so on, and the outputs were corn and other products. He concluded that average productivity growth was about one percent per year and that small farms had lower productivity growth. Adtola and Sam'on (2013), in an economic study of broiler chicken breeds, showed that feed and labor were the most important inputs in the production of each of the three breeds Marshal, Hubbard and Aryour. Furthermore, the average productivity of the feed input in these three breeds was 39%, 38% and 37%, respectively, and the average productivity of labor was 3.9, 3.62 and 3.62, respectively.
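The Malmquist index used in the studies above has a standard form. As a sketch (not reproduced from these papers), the output-oriented index between periods $t$ and $t+1$, built from distance functions $D$ evaluated at input-output bundles $(x,y)$, is:

```latex
M_{t,t+1}\bigl(x^{t},y^{t},x^{t+1},y^{t+1}\bigr)
  = \left[
      \frac{D^{t}\bigl(x^{t+1},y^{t+1}\bigr)}{D^{t}\bigl(x^{t},y^{t}\bigr)}
      \cdot
      \frac{D^{t+1}\bigl(x^{t+1},y^{t+1}\bigr)}{D^{t+1}\bigl(x^{t},y^{t}\bigr)}
    \right]^{1/2}
```

A value above 1 indicates productivity growth between the two periods; the index is conventionally decomposed into an efficiency-change term and a technical-change term, which is what allows studies like those cited to attribute growth to input quality or technology.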
In the remainder of this section we discuss the key findings of the scoping review and their implications for MHPSS monitoring and evaluation practice. First, we noted a lack of specificity in how goals and outcomes were phrased. A strong overlap in goals and outcomes, or a lack of differentiation between these two levels, may suggest challenges in conceptualizing a common overall goal for MHPSS practitioners. According to logical frameworks, the goal is intended to be a high-level achievement that is not necessarily attained by one program, but by a portfolio of programs. Each outcome is defined as an achievement set for a specific program. Within the field of project management, the inconsistent use of terminology in logframes has been one of the chief criticisms of this approach [21–23]. In the MHPSS field this challenge may be related to the fact that improvements in mental health and well-being are often conceptualized as the key focus of MHPSS activities (hence included as an outcome), yet because mental health has many influences, mental health and well-being may also be perceived as higher order constructs that require a portfolio of program activities to achieve (hence they can also be included as a goal). For example, water and sanitation, protection, education, as well as health programs may all be conceptualized to contribute to overall well-being. Within monitoring and evaluation more broadly, however, attention has been drawn to the use of unclear terminology at the goal and outcome level, indicating that this issue is not unique to MHPSS. A lack of differentiation between terms may also point to a lack of training. Training in monitoring and evaluation is needed so that those implementing programs understand how logframes should be developed, specifically how terminology for the individual components of logframes (i.e. 
goals, outcomes, indicators) should be used and how both quantitative and qualitative means of verification should be selected and administered.
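The goal/outcome/indicator hierarchy described above can be made concrete with a small data-structure sketch. This is an illustrative model only, not a logframe from the review; the MHPSS example text, field names and classes are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    description: str             # what is measured
    means_of_verification: str   # how the evidence is collected

@dataclass
class Outcome:
    statement: str               # achievement owned by ONE specific program
    indicators: list = field(default_factory=list)

@dataclass
class Logframe:
    goal: str                    # high-level achievement of a PORTFOLIO of programs
    outcomes: list = field(default_factory=list)

# Hypothetical MHPSS example: the goal sits above any single program, while
# each outcome belongs to one program and carries measurable indicators.
frame = Logframe(
    goal="Improved mental health and psychosocial well-being of the affected population",
    outcomes=[
        Outcome(
            statement="Caregivers in the parenting program report reduced distress",
            indicators=[Indicator("Mean caregiver distress score", "Pre/post survey")],
        )
    ],
)
```

Keeping the three levels as distinct types makes the terminological distinction the review calls for structurally explicit: a goal statement cannot be placed where an outcome belongs, and every indicator must attach to exactly one outcome with a stated means of verification.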
ratings of the other aspects have a verbal interpretation of "Mostly Complied". The data indicate that most aspects of the Physical Facilities still lack specific provisions; in other words, the concerned SUCs should comply with the remaining requirements of each "Mostly Complied" aspect within a given period of time. Among local universities and colleges (LUCs), the aspect of Physical Facilities with respect to safety facilities (fire hydrant, building permit and occupancy), as per the monitoring and evaluation of the Commission on Higher Education Region IV, has the highest mean rating, 3.5577. This aspect, together with others such as the medical/dental clinic and the waste management and disposal system, has a verbal interpretation of "Complied Completely". On the other hand, the aspect of Physical Facilities with respect to the provision of a canteen has the lowest mean rating, 3.0962; this and the remaining aspects have a verbal interpretation of "Mostly Complied". The result shows that most aspects of Physical Facilities have inadequate provisions of some items to reach complete compliance. For private higher education institutions (PHEIs), the aspect of Physical Facilities with respect to classrooms (size, indoor P.E./NSTP, whiteboards) has the highest mean rating, 3.6667, and the waste management and disposal system aspect has the lowest, 3.4722. The ratings of all aspects of Physical Facilities have a verbal interpretation of "Complied Completely". The result emphasizes that the PHEIs in CALABARZON "Complied Completely" with the different aspects of Physical Facilities.
External quality assurance in higher education is an important and indispensable component of the Bologna process. In accordance with the priorities and policies of the Bologna process, independent (often more than one) quality assurance agencies in higher education operate in almost all European countries (e.g. Bulgaria - NEAA; UK - QAA, ODLQC; etc.). Today, 51 of these national quality assurance organizations (from 30 countries) are full members of the European Association for Quality Assurance in Higher Education (ENQA). Their systems for quality assurance follow the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG). The Higher Education Law in Bulgaria (Article 11) defines the legal framework for the existence of the Bulgarian National Evaluation and Accreditation Agency (NEAA). Quality evaluation, accreditation and control in HE (after 2016) is performed by NEAA on the basis of criteria systems, depending on the type of procedure and in accordance with the ESG developed by ENQA. The quality evaluation of higher education is based on a large number of criteria covering a variety of objects and processes. This requires the processing of huge amounts of data for objective evaluation. Another important point is the requirement that the evaluation be carried out periodically and reflect the results of processes and the states of objects in different time periods. Dynamic monitoring of the procedures and activities related to quality evaluation in higher education involves the collection (through automated accumulation, aggregation, analysis and interpretation) of a huge amount of data generated by or stored in the institutional information and management systems of the higher education institution (for the learning process, academic staff development, etc.), learning management systems, digital repositories, etc.
Concrete results have been achieved in the field of automation of procedures for (self-)assessment and quality management in higher education.
The two words are interrelated and are typically adopted together as a management task (Peter T. Schumi, 1991). Monitoring is the continuous collection of information to understand the prevailing system and to take steps to change it. Evaluation, on the other hand, examines the effectiveness of the system and informs decisions about progress, improvement and impact (Rengasamy, 2010). The two are therefore used in sequence: one provides data while the other turns that data into information and assigns it value. Although the two words differ from one another, they are collectively used and adopted as a management tool, so the distinction between them tends to disappear (Bartle, 2010). Because monitoring provides the data, it is a sensitive step: if the collected data is mismanaged, the evaluation proves worthless and fruitless, so the collected data must be accurate, meaningful and authentic in order to assign it value and improve the system properly.
An evaluation of the implementation of the School Monitoring, Evaluation and Adjustment (SMEA) System of the Department of Education (DepEd) was conducted to aid in the improvement and upgrading of the system. A researcher-made instrument was used to quantitatively analyze the implementation, and interviews were conducted to identify the challenges and struggles of implementation. A close look was also taken at the SMEA questionnaires to determine the level of compliance with established standards. The analysis showed high levels of implementation; however, the qualitative analysis revealed gaps in the validity and reliability of the results. The burden of too many indicators, and the confusion caused by questions that are hard to quantify in a single questionnaire, were identified as the major problems. The primary need identified was for indicators that consider the setting and context of each school. A sense of ownership of the indicators and commitment building were among the recommendations, as was a systems approach that integrates established school systems such as School Improvement Planning, the Results-Based Performance Management System and the Monitoring and Evaluation System. The results of the study could contribute to the management of schools, the enhancement of policies and the improvement of DepEd systems.
To strengthen the development and management of ECDE, there is a need to develop and enforce a relevant policy framework, which the Ministry of Education has done. However, there is evidence, as stated earlier, that this policy framework is not effectively implemented by ECDCs. As a practitioner, and from interactions with other practitioners, the author points to clear evidence of a problem in policy implementation. As documented by the Kafu Committee (1998), the recommendations were not clearly followed, and there was thus a need to carry out this research. This study was therefore designed to find out the level and effectiveness of implementation of this policy framework by ECDCs in Bungoma South District, and to establish the practices carried out at the ECDCs and their relationship with the ECDE policy framework in Kenya. Curriculum implementation is the backbone of an education system and has a direct bearing on the quality of education, which is why most governments are very keen on it. For this to be achieved, proper practices must be put in place to create an enabling environment for implementation and for evaluation of curriculum implementation (Shiundu and Omulando, 1992).
Higher education activities provide educational services. Higher education products, in the form of science and education, are used by students; students are therefore customers. Science and education are intangible, so universities can also be seen as a service industry, and the service industry achieves success and service quality by being customer-focused. In higher education, meeting the needs of students must therefore be the main focus in managing the quality of educational services. Students are important not only as significant customers but also because the success of the learning process depends heavily on student participation, no matter how capable the lecturers are. Under the concept of university quality management, which requires that the implementation of education be monitored and evaluated, it is necessary to know how satisfied students are with the services provided, as this inevitably affects the quality of the educational services delivered.
The teacher's role is crucial to effective and efficient learning; the teacher is expected to provide essential inputs such as adequate planning of lesson notes, effective delivery of lessons, proper monitoring and evaluation of students' performance, regular feedback on students' performance, improvisation of instructional materials, adequate record keeping and appropriate discipline of students to produce and enhance the expected learning achievement in secondary schools (Ayeni, 2010). The purpose of any teacher in the classroom is to help learners learn, inquire, solve problems, and cope with their own emotional needs and tensions. The teacher promotes quality education in the domain of teaching and learning through creative ideas, participation and cooperative learning, research, analysis and critical thinking, problem solving, innovation, and the encouragement of creative and divergent thinking. These lead to the proper development of the knowledge, skills, attitudes and values that enable students to function effectively, live as responsible citizens and make useful contributions to society.
trends, one of the policies offered worldwide is to shift away from lecture-based education and assign part of the learning responsibility to students. Thus, with the guidance of the instructor, students can learn part of the theoretical courses remotely. In face-to-face classes, more time is then left for presenting material that requires clarification and justification by the instructor, and for addressing some topics in the format of educational workshops.
Surveillance for biosecurity hazards is conducted by the New Zealand Competent Authority, the Ministry for Primary Industries (MPI), to support New Zealand's biosecurity system. Surveillance evaluation should be an integral part of the surveillance life cycle, as it provides a means to identify and correct problems and to sustain and enhance the existing strengths of a surveillance system. The surveillance evaluation Framework (SurF) presented here was developed to provide a generic framework within which the MPI biosecurity surveillance portfolio, and all of its components, can be consistently assessed. SurF is an innovative, cross-sectoral effort that aims to provide a common umbrella for surveillance evaluation in the animal, plant, environment and aquatic sectors. It supports the conduct of the following four distinct components of an evaluation project: (i) motivation for the evaluation, (ii) scope of the evaluation, (iii) evaluation design and implementation and (iv) reporting and communication of evaluation outputs. Case studies, prepared by MPI subject matter experts, are included in the framework to guide users in their assessment. Three case studies were used in the development of SurF in order to assure practical utility and to confirm the usability of SurF across all included sectors. It is anticipated that the structured approach and information provided by SurF will be of benefit not only to MPI but also to other New Zealand stakeholders. Although SurF was developed for internal use by MPI, it could be applied to any surveillance system in New Zealand or elsewhere.