EXPLICIT DOMAIN KNOWLEDGE REPRESENTATION: AOSD APPROACH

The domain of a first application contains, for example, the obvious concepts customer, shopping cart, product (of which Book and CD are specializations), and customer profile, and some obvious relationships between them. Constraints on this static domain model are, for example, “a customer can buy at most 10 products at the same time” or “if the purchased products are shipped, the order cannot be cancelled”. Related to calculating the price of an order there are a number of rules, such as “if a customer has previously bought 15 products, he or she is entitled to a 10% discount on the next order”, “if it is Holi, everybody gets a 5% discount”, and “if a customer’s last purchase was a CD in the category of classical music, then he or she gets a discount of 15% on the next classical music CD”. It becomes interesting when one thinks of the possible interferences of these rules and constraints and how to deal with them. What happens when a customer who has already purchased more than 10 products orders something during Holi?
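
As an illustration of how such rules and constraints can be made explicit, here is a minimal sketch in Python. All names, the rule set, and the "take the largest discount" interference policy are illustrative assumptions, not the paper's design; the point is only that explicit rule objects make interferences visible and resolvable.

```python
from dataclasses import dataclass

@dataclass
class Order:
    product_count: int        # products in the shopping cart now
    previously_bought: int    # products bought in earlier orders
    is_holi: bool             # festival flag
    base_price: float

MAX_PRODUCTS = 10  # constraint: at most 10 products at the same time

def check_constraints(order: Order) -> None:
    if order.product_count > MAX_PRODUCTS:
        raise ValueError("a customer can buy at most 10 products at the same time")

# Each rule returns a discount fraction, or None when it does not apply.
def loyalty_rule(order: Order):
    return 0.10 if order.previously_bought >= 15 else None

def holi_rule(order: Order):
    return 0.05 if order.is_holi else None

RULES = [loyalty_rule, holi_rule]

def price(order: Order) -> float:
    check_constraints(order)
    discounts = [d for rule in RULES if (d := rule(order)) is not None]
    # Interference policy (an explicit choice): apply only the largest
    # discount; summing or chaining them would be alternative policies.
    return order.base_price * (1.0 - max(discounts, default=0.0))

# A loyal customer ordering during Holi gets 10%, not 15%, under this policy.
print(price(Order(3, 15, True, 100.0)))  # -> 90.0
```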

A measurable approach for risk justification of explicit and tacit knowledge assessment

Knowledge has become a central organizing principle in society to the extent that knowledge management has become a mainstream activity in organizations. Nevertheless, knowledge-related risks remain relatively neglected in the risk management domain. Whilst knowledge reduces uncertainty and the associated risks, the increased knowledge intensity in organizations also represents a risk factor that has to be assessed. The paper describes and validates an organizational risk assessment approach that considers knowledge-related and knowledge management risks in an integrated manner. The approach makes it possible to calculate risk ratings in terms of vulnerability and likelihood for 50 threats to all activities and phases of the knowledge life cycle. These risk ratings are plotted against 24 potential risks in the human, organizational, and technical domains. To impress on management the significance of these knowledge-related risks, the risk ratings are transformed into approximated financial figures. The approach is applied to 10 Slovenian organizations, two of which are discussed in detail in the paper, to demonstrate that it can be successfully used in a wide variety of organizations. It is concluded that the approach offers a way to assess both knowledge-related and knowledge-management-related risks, that the costs that individual risks potentially hold can be approximated, and that mitigation strategies can be suggested for the identified risks across a diversity of organizations.
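
The rating-and-costing step lends itself to a small worked sketch. This assumes a common risk-matrix formulation (vulnerability times likelihood on ordinal scales) and an illustrative cost transformation; the paper's actual scales and financial mapping may differ.

```python
def risk_rating(vulnerability: int, likelihood: int, scale: int = 5) -> int:
    """Both inputs on a 1..scale ordinal scale; higher is worse."""
    assert 1 <= vulnerability <= scale and 1 <= likelihood <= scale
    return vulnerability * likelihood              # 1 .. scale**2

def approx_cost(rating: int, exposure_eur: float, scale: int = 5) -> float:
    """Turn an ordinal rating into an approximated financial figure by
    treating rating/scale**2 as the fraction of exposure at risk."""
    return exposure_eur * rating / scale**2

# Two illustrative threats rated (vulnerability, likelihood).
threats = {"key-person loss": (4, 3), "knowledge hoarding": (3, 4)}
for name, (v, l) in threats.items():
    r = risk_rating(v, l)
    print(f"{name}: rating {r}, approx. EUR {approx_cost(r, 100_000):,.0f}")
```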

A knowledge based approach to information extraction for semantic interoperability in the archaeology domain

another event connects the same ‘coin’ with the ‘Roman period’, with events having ‘coin’ as a common argument. The events are implicitly defined in this example; there is no explicit mention of the event that deposited the coin in the hearth nor how the coin was originally produced. However, it can be assumed that since the coin has been found in the hearth, it must have been deposited in that place and since the coin is described as Roman, it has probably been produced during the Roman period (modelling the full complexity of spatio-temporal style periods is outside the scope of this paper). This modelling of events differs from the ML technique followed by Byrne and Ewan (2010), which detects events as mentions of verb phrases carrying a single event type (which might contain several arguments). The OPTIMA pipeline is driven by the CRM-EH ontological structure and generates event types defined by standard ontological definitions, which can be exploited by retrieval applications or used for information integration.
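
A minimal sketch of this event modelling (a toy structure, not the OPTIMA pipeline or the actual CRM-EH classes): two implicit events share 'coin' as a common argument, one linking the find to a place and one to a style period.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    event_type: str   # illustrative labels standing in for CRM-EH event classes
    subject: str
    object: str

events = [
    Event("deposition", subject="coin", object="hearth"),        # found in
    Event("production", subject="coin", object="Roman period"),  # dated to
]

def events_sharing(argument: str, evs: list[Event]) -> list[Event]:
    """All events in which the given entity occurs as an argument."""
    return [e for e in evs if argument in (e.subject, e.object)]

# Both implicit events are recovered through their common argument.
for e in events_sharing("coin", events):
    print(e)
```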

Knowledge Creation and Knowledge Management

'Knowledge creation is a spiraling process of interactions between explicit and tacit knowledge.' The interactions between these kinds of knowledge lead to the creation of new knowledge. The SECI model serves only as an outline for knowledge creation, and the idea of self-transcendence is quite abstract.

jamesgeer.pdf

A lack of proper information might contribute to some of these issues. Knowledge management is still a relatively new discipline, especially in small organizations, and without empirical data supporting its benefit, organizations might be hesitant to direct resources towards its optimization and implementation. The designers interviewed indicated a lack of belief about the tangible benefits and viewed the passive transfer of information, as opposed to the active transfer, as "good enough." Several indicated they believed that if information should be transferred it would be, as job performance was motivation enough to seek out and distribute professional knowledge. To some extent, users supported this belief. However, information flow generally consisted of face-to-face meetings in offices or email exchanges, which employees indicated could be difficult because they required the presence and availability of other employees. For time-sensitive material, this potentially presented a hindrance to optimizing time management and handling sensitive material.

Effect of Knowledge Conversion and Knowledge Application on Performance of Commercial Banks in Kenya

This study examines the effect of knowledge conversion and knowledge application on the performance of commercial banks in Kenya. The four modes of the knowledge conversion process, comprising socialization, externalization, combination, and internalization, are utilized in this study. Knowledge application was measured using indicators comprising problem solving, elaboration, efficient processes, IT support, and infusion. In addition, performance was measured using non-financial indicators comprising new products, speed of response to market crises, product improvement, customer retention, and new processes. The study adopted an explanatory and cross-sectional survey design. The target population comprised all 43 commercial banks in Kenya. The unit of observation was the functional area in each bank; five areas were identified in each bank, comprising human resources, finance, marketing, information communication technology, and operations. This study used primary and secondary data. Primary data was collected using a semi-structured questionnaire administered using the drop-and-pick-later method. Secondary data was collected through document review and was used to validate information collected from the questionnaire. The response rate was approximately seventy-three percent, which was considered sufficient for making inferences and drawing conclusions. Quantitative data was analysed using descriptive and inferential statistics: descriptive statistics included percentages, frequencies, means, and standard deviations, while inferential statistics involved regression analysis. Results from quantitative data analysis were presented using figures and tables. Qualitative data was analysed on the basis of common themes and presented in narrative form. The findings established that knowledge conversion and knowledge application positively influence performance. Management of commercial banks should encourage interaction between employees and customers. Moreover, banks' processes should be used to enhance understanding and translation of explicit knowledge into application (tacit knowledge). Keywords: Knowledge Management, Knowledge Conversion, Knowledge Application, Organizational Performance
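
For the inferential step, a minimal sketch of the kind of regression the study reports, using synthetic Likert-style scores (the variable names, coefficients, and data are illustrative assumptions, not the study's data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 150                                    # synthetic respondents
conversion = rng.normal(3.5, 0.6, n)       # mean knowledge-conversion scores
application = rng.normal(3.7, 0.5, n)      # mean knowledge-application scores
performance = 0.8 + 0.4*conversion + 0.5*application + rng.normal(0, 0.3, n)

X = sm.add_constant(np.column_stack([conversion, application]))
model = sm.OLS(performance, X).fit()
print(model.summary())   # both predictors come out positive and significant
```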

Knowledge Creation in Constructivist Learning

Scardamalia et al. [4] suggest that 'there needs to be a shift in locus of constructing knowledge from the individual to collective construction. They argue that education needs to be refashioned in a fundamental way so that students are initiated into a knowledge creating culture and see themselves as part of a global effort to advance knowledge.' According to McLoughlin et al. [5], Web technologies can play a crucial role in fostering knowledge building in communities or networks and aid in the pre-eminence of content creation over content consumption and the collaborative production of knowledge. Grant [6] is of the opinion that these occur through a shared goal of developing and sharing ideas publicly with peers, offering critiques and alternative explanations. If our task is to transform educational practices, we have to provide some new ideas about how students' active engagement, meaningful learning and knowledge advancement could be facilitated.

An Automatic Algorithm Selection Approach for Planning

In order to understand the accuracy of the algorithm selection, we compared the performance of the three best algorithms. In Table III we present the results of this comparison, in terms of time/quality IPC score, on the benchmark problems of the selected domains. The performances are shown in terms of IPC score, average CPU time (quality), and solved problems. The * indicates the algorithm selected by ASAP. The mean CPU time/quality are calculated on instances solved by all three best algorithms of the given domain. We remark that ASAP selects the most promising algorithm on the basis of the results achieved on the learning problems, while in Table III the comparison is made by ordering the algorithms on the results they achieved on the testing instances. Concerning the planners included in ASAP, all of them appear at least once. We can then derive that all the planners are able to efficiently exploit, at least on one domain, the knowledge extracted in the form of different encodings. Considering the runtime optimization, the planner that appears most frequently in Table III is LPG, followed by Metric-FF and Probe. We can derive that LPG and Metric-FF are the planners that best exploit macro-operators and entanglements for improving runtime. This is quite surprising if we consider that LPG and Metric-FF are the oldest planners included in ASAP, and that they appeared more rarely in Table II. One could argue that, since the plans found by Metric-FF were used for reformulating the domains, the fact that Metric-FF performs well while exploiting entanglements or macro-operators is not surprising. From this perspective, it is worth noting that LPG is the planner which is able to better exploit this additional knowledge, and that the plans found by Metric-FF were used due to their good quality and the relatively low CPU time required for finding them. If we focus on the best algorithms for optimising the quality of the solutions, the planner which appears most frequently in Table III is again LPG, but in this case it is followed by Lama-11. While LPG is often the best basic solver, as shown in Table II, Lama is the best one only in TPP. It seems then reasonable to deduce that
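
The selection step itself reduces to a small, checkable rule. A minimal sketch (domain names and scores are invented for illustration): for each domain, pick the algorithm with the best total IPC score on the learning problems, then run it on the testing instances.

```python
# ipc[domain][algorithm] = total IPC score on the learning problems
ipc = {
    "TPP":       {"LPG": 14.2, "Metric-FF": 12.9, "Lama-11": 15.1},
    "Logistics": {"LPG": 18.0, "Metric-FF": 16.4, "Lama-11": 13.7},
}

def select(scores_by_domain: dict) -> dict:
    """Most promising algorithm per domain (ties broken arbitrarily)."""
    return {dom: max(algos, key=algos.get)
            for dom, algos in scores_by_domain.items()}

print(select(ipc))   # {'TPP': 'Lama-11', 'Logistics': 'LPG'}
```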

Searching and Researching for Located Narratives as Data of the ‘Other’

Epistemology deals with the ways of knowing and also with discerning the limits and validity of knowledge. At the epistemological level, the nature of traditional social science research has always been masculine. The influence of Western-colonial knowledge has always held that components like gender and politics tend to pollute the purity of scientific data. Therefore, the definition of data needs to be reinvestigated. The main features of scientific knowledge revolve around empiricism, experimentation, generalisation and prediction. Scientists had developed faith in the positivist method. The principle of objectivity and neutrality, which determines the relationship between the researcher and the subject, has remained very significant (Somekh and Lewin, 2012, 3-6). The positivists believed that faith in methods was the only path to actual knowledge. However, Lyotard (1984) made it clear that knowledge cannot be reduced to a science, as some knowledge comes from experiences, human values and social interactions. Therefore, knowledge can be based on both empiricism and narratives (Mcnabb, 2004, 7-10). In this context, the definition of methodology revolves around the ways of teaching and studying an area of concern. The process of decolonisation is also crucial to questioning the validity of Eurocentric knowledge, and it asks for culturally acceptable approaches to study the local and the grounded. It always highlights the need for an indigenous methodology that keeps a distance from a Western epistemology that exercises power and control over knowledge production; such an epistemology also leads to the disempowerment of the marginalised by ignoring their voices, histories and agency (Porsanger, 2004).

Using Formal Concept Analysis with a Push based Web Document Management System

As stated in section 3.1, the approach adopted for assessing the feasibility of utilising MCRDR heuristic classification knowledge for browsing documents of a domain involves performing a statistical comparative analysis between the generated concept lattice structure and the storage folder structure. To fulfil this, a sub-domain of the eHealth domain was first selected as the source of data for generating the concept lattice. Only a sub-domain was selected because the limited system resources available meant it would take a significant amount of time to generate a single complete concept lattice for the entire eHealth domain. Also, since the storage folder structure could be distinctly divided into the various sub-domains of eHealth (as is done on the iWeb Web portal site), it was much simpler to deal with a small portion of the overall structure for the purpose of analysing it. Consequently, the sub-domain of 'Diseases' was selected for the analysis. It contained the most information of all the sub-domains and also had the largest storage folder structure. To enable a concept lattice to be generated from the Diseases sub-domain data, iWeb FCA was used to reduce the number of documents in any folder to no more than 32. This figure was chosen through trial and error based on the amount of time it took to generate a concept lattice with the available system resources. It resulted in a total of 1063 classified documents making up the reduced data set.
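
To make the lattice-generation step concrete, a minimal sketch of Formal Concept Analysis on a toy context (three documents and a few attributes; the real iWeb data set had 1063 documents). Formal concepts are (extent, intent) pairs, computed here by a naive closure enumeration that is only feasible for tiny contexts:

```python
from itertools import combinations

context = {                 # document -> attribute set (illustrative)
    "doc1": {"disease", "heart"},
    "doc2": {"disease", "cancer"},
    "doc3": {"disease", "heart", "treatment"},
}
attributes = set().union(*context.values())

def extent(intent_set):
    """Documents having all attributes of intent_set."""
    return {g for g, attrs in context.items() if intent_set <= attrs}

def intent(extent_set):
    """Attributes shared by all documents of extent_set."""
    if not extent_set:
        return set(attributes)
    return attributes.intersection(*(context[g] for g in extent_set))

# Enumerate concepts as closures of attribute subsets.
concepts = set()
for r in range(len(attributes) + 1):
    for subset in combinations(sorted(attributes), r):
        ext = frozenset(extent(set(subset)))
        concepts.add((ext, frozenset(intent(ext))))

for ext, itt in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(ext), "<->", sorted(itt))
```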

A MODEL FOR MEASURING ARTICLES KNOWLEDGEABILITY LEVELS

According to Al-Oqaily et al. [3, 7, 8] and Olsen [31], there are three main processes for successful explicit knowledge measurement: (1) knowledge acquisition, to collect explicit knowledge based on the real needs of the working environment; (2) knowledge conversion, to design and retrieve the knowledge usefully, based on a clear structure and format; and (3) knowledge sharing with employees in the context of the working environment in real time. Olsen [31] mentioned that the most important success factor in these processes is knowledge content preparation and collection; the collected knowledge should agree with the organization's strategies and activities and the users' skills in the working environment. Gold et al. [32] focused on knowledge acquisition as the foundation of successful knowledge implementations. Gold et al. [32] and Lee & Kang [33] mentioned that knowledge measurement factors such as contextuality, essentiality, and performance are important to convert useful explicit knowledge based on the employees' need for knowledge. Karaszewski [34] and Gold et al. [32] explained that the knowledge measurement factor plays an important role in the value of the shared knowledge.

EDUCATIONAL MODELLING IN CLOUD COMPUTING USING IMS LEARNING DESIGN

Knowledge management is an important field for an organizational learning process. The existence of knowledge repositories within an organization is an important element of the knowledge management activities that must be managed to support the ongoing learning process itself. PT. XYZ is an IT solution provider that has served many clients. One IT solution offered by this company is the implementation of an ERP system based on SAP Business One. The major problem of the SAP Operations Division at PT. XYZ is the difficulty of tracking problems: there is no specific standard, so existing documents are not structured and stored properly. In providing its services, XYZ requires a knowledge repository that helps in the management of information and knowledge documentation. The development of a knowledge repository can be used by organizations as a solution to capture and empower the knowledge of its members (knowledge workers). Knowledge workers can also improve their knowledge by exploring the

Volume 42: Multi-Paradigm Modeling 2010

Another observation is that there is no systematic approach to defining the semantics of domain-specific models. All approaches are ad hoc and not defined within a general framework or using a methodology. In this paper, we propose to adopt a methodology to define the semantics of domain-specific models through the definition of properties. These properties describe certain characteristics or qualities of the models and need to be verified over the given set of models. Using the proposed approach, domain experts are not only able to capture their knowledge in models but also to specify the meaning of their models by defining properties over the models in a domain-specific language. As argued in [PS07], the description of the semantics of DSMLs depends on the concepts in the application domain, the choices of the language designer, the requirements of a particular application area, and the fact that semantics can be used for various purposes in design or analysis. This requires flexibility in describing semantics. Our proposed approach addresses this flexibility requirement because it does not demand specifying the exact meaning of each language or model element.
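
A minimal sketch of the property-based idea using a toy model (our own example, not the paper's framework): a domain-specific model, here a small state machine, is given part of its meaning by properties that every well-formed model must satisfy.

```python
# A toy domain-specific model: a state machine as plain data.
model = {
    "states": {"idle", "busy"},
    "initial": "idle",
    "transitions": {("idle", "start"): "busy", ("busy", "stop"): "idle"},
}

def prop_initial_is_state(m) -> bool:
    """Property: the initial state must be a declared state."""
    return m["initial"] in m["states"]

def prop_transitions_closed(m) -> bool:
    """Property: transitions may only mention declared states."""
    return all(src in m["states"] and dst in m["states"]
               for (src, _label), dst in m["transitions"].items())

properties = [prop_initial_is_state, prop_transitions_closed]
assert all(p(model) for p in properties)   # the model satisfies its semantics
```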

A Movie Recommender System using Ontology Based Semantic Similarity Measure

Abstract: Continuous growth in the information available on the Internet overwhelms users during navigation. This information overload may result in user dissatisfaction, which is undesirable. User satisfaction is a very important aspect in every domain. Recommender systems play a vital role in dealing with information overload problems: they filter the huge amount of information on the Internet to generate limited and personalized information for users. This helps increase user satisfaction by retaining their interests during navigation. Recommender systems based purely on Web usage data have been used for the last few years. However, they fall short of precise recommendations because of the absence of domain knowledge. Further, similarity measures play a vital role in the recommendation process and hence affect the performance of recommender systems. The performance of recommender systems can be enhanced through the integration of domain knowledge with usage data. This paper presents an approach to a movie recommender system that integrates domain knowledge with usage data. An ontology is used to represent the domain knowledge. The proposed approach is based on a new ontology-based semantic similarity measure. The experimental results show that recommendation quality and prediction accuracy can be enhanced through the integration of ontological domain knowledge with Web usage data.
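
As one concrete instance of an ontology-based semantic similarity measure (the paper proposes its own variant; this sketch uses the classic Wu-Palmer formulation over an invented genre taxonomy):

```python
# Child -> parent links of a tiny genre taxonomy (root has parent None).
parent = {"action": "movie", "thriller": "movie", "spy": "action",
          "heist": "action", "movie": None}

def path_to_root(node):
    path = []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path                               # node .. root

def wu_palmer(a: str, b: str) -> float:
    pa, pb = path_to_root(a), path_to_root(b)
    lcs = next(n for n in pa if n in pb)      # least common subsumer
    depth = {n: len(path_to_root(n)) for n in (a, b, lcs)}
    return 2 * depth[lcs] / (depth[a] + depth[b])

print(wu_palmer("spy", "heist"))     # siblings under 'action' -> 0.667
print(wu_palmer("spy", "thriller"))  # LCS is the root 'movie' -> 0.4
```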

Learn More about Your Data: A Symbolic Regression Knowledge Representation Framework

In this paper, we propose a flexible knowledge representation framework which utilizes symbolic regression to learn, and mathematical expressions to represent, the knowledge to be captured from data. In this approach, learning algorithms are used to generate new insights which can be added to domain knowledge bases supporting, again, symbolic regression. This is used to generalize well-known regression analysis to supervised classification. The approach aims to produce a learning model which best separates the class members of a labeled training set. The class boundaries are given by a separation surface which is represented by the level set of a model function; the separation boundary is defined by the respective equation. In our symbolic approach, the learned knowledge model is represented by mathematical formulas and is composed of an optimal set of expressions from a given superset. We show that this property gives human experts options to gain additional insight into the application domain. Furthermore, the representation in terms of mathematical formulas (e.g., the analytical model and its first and second derivatives) adds value to the classifier and makes it possible to answer questions which sub-symbolic classifier approaches cannot. The symbolic representation of the models enables interpretation by human experts. Existing and previously known expert knowledge can be added to the developed knowledge representation framework or used as constraints. Additionally, the knowledge acquisition process can be repeated several times; in each step, new insights from the search process can be added to the knowledge base to improve the overall performance of the proposed learning algorithms.
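
A minimal sketch of classification via the level set of a model function, as described above. The expression here is hand-written to stand in for one found by symbolic regression; the separation surface is f(x) = 0 and class membership is the sign of f.

```python
import math

def f(x1: float, x2: float) -> float:
    """Stand-in learned symbolic model; the boundary is the level set f = 0."""
    return x2 - math.sin(x1)       # separates points above/below a sine curve

def classify(x1: float, x2: float) -> int:
    return 1 if f(x1, x2) > 0 else 0

# Because f is an explicit formula, its derivatives are available in closed
# form (e.g. df/dx1 = -cos(x1)), which sub-symbolic classifiers do not offer.
print(classify(0.0, 0.5), classify(math.pi / 2, 0.5))   # -> 1 0
```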

The Role of Information Technology to Support Knowledge Management Processes in Higher Education of Malaysian Private Universities

Knowledge acquisition is a fundamental process in KM implementation. It is the process that enables an organization to obtain knowledge from external sources. External knowledge sources are important, and one should therefore take a holistic view of the value chain [17]. Sources include suppliers, competitors, partners/alliances, customers, and external experts, and can extend well outside the firm. Knowledge acquisition capabilities consist of processes and techniques for collecting information and creating knowledge from internal and external sources. Acquisition of external knowledge relies on the identification function, which represents the "generator" of intelligence for the organization. External environmental signals are identified, and information on those signals is gathered and transmitted across the organizational boundary. The more knowledge that can be collected in the firm, the better the acquisition capability works. Information and knowledge may be acquired through several processes from various sources: by learning, by observing other organizations, by implementing knowledge-possessing components, and by intentional search and monitoring. The speed of a firm's efforts to identify and collect knowledge can determine the quality of its acquisition capabilities: the greater the effort, the more quickly the firm will build the requisite capabilities [18].

Automatic extraction of semantic relations between medical entities: a rule based approach

between medical entities. Their first method could extract 68% of the semantic relations in their test corpus, but if many relations were possible between the relation arguments, no disambiguation was performed. Their second method [16] targeted the precise extraction of "treatment" relations between drugs and diseases. Manually written linguistic patterns were constructed from medical abstracts about cancer. Their system reached 84% recall but an overall 48.14% precision. Embarek and Ferret [17] proposed an approach to extract four kinds of relations (Detect, Treat, Sign and Cure) between five kinds of medical entities. The patterns used were constructed automatically using an alignment algorithm which maps sentence parts using an edit distance (defined between two sentences) and different word-level clues. SemRep [18], a natural language processing application, targeted the extraction of semantic relationships in biomedical text through a rule-based approach. SemRep [19] obtained 53% recall and 67% precision in identifying risk factors and biomarkers for diseases asserted in MEDLINE citations. An enhanced version of SemRep [20] was proposed to identify core assertions on pharmacogenomics and obtained an overall 55% recall and 73% precision. Domain-independent relation extraction methods are not directly applicable to the medical domain due to the lack of domain-independent markers that may help to recognise medical entities (e.g. capital letters, regular grammatical structure) and to the variety in the expression of domain concepts (e.g. Amoxicillin = amoxycillin = AMOX). To bypass these problems, medical relation extraction approaches often rely on domain knowledge such as the UMLS Metathesaurus and Semantic Network. But the intended use of the extracted relations is not always taken into account in the extraction procedure. For instance, if the extracted relations are to be used in keyword querying systems, we should either give priority to recall or give the same priority to recall and precision, while, if the final application is a question answering system for practitioners, priority should be given to the precision of extraction. Medical relation extraction approaches sometimes also do not extract the arguments of a relation (e.g. [16]), or evaluate their approaches by counting relations extracted with only one argument as correct (e.g. [21]), considering that recall is the most important measure. In our context we are interested in medical question answering systems as the back-end and give priority to precision, considering the correct extraction of arguments as mandatory to validate the identified relations.
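
A minimal sketch of the pattern-based, precision-first style of extraction discussed here (the pattern, entity lists, and term variants are illustrative, not from any of the cited systems): a relation is emitted only when both arguments are recognised.

```python
import re

DRUGS = {"amoxicillin", "amoxycillin", "amox"}      # term variants unified
DISEASES = {"otitis media", "pneumonia"}

# Manually written pattern: <drug> (is used to treat | treats) <disease>
PATTERN = re.compile(
    r"\b(?P<drug>\w+)\s+(?:is used to treat|treats)\s+"
    r"(?P<disease>[\w ]+?)\b(?=[.,;]|$)",
    re.IGNORECASE,
)

def extract(sentence: str):
    for m in PATTERN.finditer(sentence):
        drug, disease = m["drug"].lower(), m["disease"].lower()
        if drug in DRUGS and disease in DISEASES:   # both arguments required
            yield ("treats", drug, disease)

print(list(extract("Amoxicillin is used to treat otitis media.")))
# -> [('treats', 'amoxicillin', 'otitis media')]
```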

Measuring Explicit and Implicit Knowledge: a Psychometric Study in SLA

Building on the study of Han and Ellis (1998), Ellis (2005) sought to develop a battery of instruments that would provide moderately distinct measurements of explicit and implicit knowledge and incorporate a measure of target structures in natural, unplanned language use. Ellis first hypothesized behavioral measures differentiating the two knowledge types. Three criteria were hypothesized to translate into how the tests could be created so as to probabilistically obtain indications of the degree of each knowledge type: the amount of time available, with time pressure (implicit) vs. no pressure (explicit); the focus of attention, with primary focus on meaning (implicit) vs. primary focus on form (explicit); and the utility of metalanguage, not required (implicit) vs. encouraged (explicit). Additional conditions were hypothesized to provide supporting evidence that the test was in fact measuring what it purported to measure. These were: the degree of awareness, responses by feel (implicit) vs. responses by rule (explicit); systematicity, consistent responses (implicit) vs. variable responses (explicit); and the degree of certainty in response, high (implicit) vs. low (explicit). Learnability, related to the notion of a maturational factor in SLA that is age dependent (Long, 2007; Singleton & Ryan, 2004), was also cited as an observed tendency, with early learning favored (implicit) vs. later form-focused instruction favored (explicit). These criteria are summarized in Table 1.

184202.pdf

Given the solution with four groups, an external cluster validation approach is employed to compare the GMM results with previous classification schemes based on domain-expert knowledge.
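
A minimal sketch of that validation step on synthetic data (the data, group count, and expert labels are illustrative): fit a four-component Gaussian mixture and compare its grouping with an expert-based scheme using an external index such as the adjusted Rand score.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Four well-separated synthetic groups standing in for the domain data.
X = np.vstack([rng.normal(loc, 0.5, size=(50, 2)) for loc in (0, 3, 6, 9)])
expert_labels = np.repeat([0, 1, 2, 3], 50)        # prior expert scheme

gmm_labels = GaussianMixture(n_components=4, random_state=0).fit_predict(X)
# 1.0 means perfect agreement with the expert scheme, ~0.0 is chance level.
print(adjusted_rand_score(expert_labels, gmm_labels))
```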

Impact of explicit instruction on EFL learners’ implicit and explicit knowledge: A case of English relative clauses

The present study sought to investigate the effect of explicit instruction (direct proactive explicit instruction) on the acquisition of English passive objective relative clauses. Two groups of participants were involved in the study: a group of advanced EFL learners (n = 16) and a group of intermediate EFL learners (n = 37), who were randomly divided into two groups, experimental (n = 22) and control (n = 15). The experimental group received 4 sessions of explicit instruction on the target structure. The control group, however, did their routine activities in a writing class. There were three test times, namely a pre-test, a post-test, and a delayed post-test. Two separate measures of explicit and implicit knowledge were applied: an offline test of metalinguistic knowledge (an error correction task) and two online speeded tests of implicit knowledge (a self-paced-reading task and a stop-making-sense task). The findings revealed a positive effect of explicit instruction on both implicit and explicit knowledge for the treatment group. Durable effects of explicit instruction were found based on the results obtained from the delayed post-test. The advanced group performed very closely to the treatment group, indicating the effect of explicit instruction in accelerating language learning, as well as the necessity of explicit instruction for some language forms to be acquired in EFL contexts.
