
Towards A Business Intelligence Framework For Healthcare Safety

Dominique Ferrand, Ph.D.
Associate Professor and Director Master & MSc. in E-Business Technologies, Telfer School of Management, University of Ottawa
Postal Address: Desmarais Building, 55 Laurier Avenue East, Ottawa, ON, Canada, K1N 6N5
Author's Personal/Organizational Website: http://www.grad.uottawa.ca
Email: dferrand@uottawa.ca
Dr. Ferrand is Associate Professor of Information Technologies at the School of Management of the University of Ottawa, where he is director of the Master and Master of Science in E-Business Technologies, and of Graduate Certificates in E-Commerce, E-Business and Internet Technologies. His research focuses on electronic business strategies, global supply chains and international trade. He studies in particular the links between managerial culture, organizational innovation and organizational performance. He also has interests in research methodology. He has published in journals such as Mathematics and Social Sciences, Philosophy of Social Science, INFOR, Économies et Sociétés, IEEE Transactions on Engineering Management, Medical Care Review, Health Care Management Review, and the Journal of Medical Systems.
Daniel Amyot, Ph.D.
Associate Professor, SITE, University of Ottawa
Postal Address: 800 King Edward Avenue, Ottawa, ON, Canada, K1N 6N5
Author's Personal/Organizational Website: http://www.site.uottawa.ca/~damyot/
Email: damyot@site.uottawa.ca (please use this address to correspond with the authors)
Dr. Amyot is the Software Engineering program coordinator at the University of Ottawa, which he joined in 2002 after working for Mitel Networks as a senior researcher. His research interests include scenario-based software engineering, requirements engineering, business process modeling, aspect-oriented modeling, and healthcare informatics. He is also Associate Rapporteur for requirements languages at the International Telecommunication Union, where he leads the evolution of the User Requirements Notation standard. He has published in journals such as Requirements Engineering, Int. J. of Electronic Business, Transactions on Aspect-Oriented Software Development, Electronic Commerce Research, Journal of the American Medical Informatics Association, Int. J. of Intelligent Systems, and Computer Networks.
Carlos Villar Corrales
School of Information Technology and Engineering (SITE), University of Ottawa
Postal Address: 800 King Edward Avenue, Ottawa, ON, Canada, K1N 6N5
Author's Personal/Organizational Website: http://www.site.uottawa.ca/
Email: odiseocu@yahoo.com
Carlos Villar Corrales is a Master’s student in the e-Business Technology program of the University of Ottawa. He has worked for Bell Canada as a Software Developer and has international experience working for the Cuban healthcare system. His areas of interest include software engineering design and architecture, project management and healthcare informatics.


Abstract

The literature estimates that 70,000 preventable adverse events occur in Canadian hospitals annually. As a first step towards controlling and reducing such undesirable safety outcomes, it is necessary to quantify and understand their causes. In this paper, we present a Business Intelligence framework to support the definition and reporting of metrics in healthcare. We tailor the Goal-Question-Metric framework to the specifics of adverse event monitoring in a teaching hospital, and prototype a solution using the IBM Cognos 8 tool.

Keywords

adverse events; business intelligence; goals; healthcare; metrics.

INTRODUCTION

One of the top priorities of Canadian healthcare organizations is to provide “high-quality” care. This objective, however, is hard to achieve, since thousands of patients experience the consequences of so-called “adverse events” (AE), which are undesirable outcomes from a safety point of view (Baker et al., 2004).
Many methodologies have been developed to identify adverse events, such as mandatory and voluntary reporting systems, chart reviews, and prospective clinical surveillance (Michel et al., 2004). One commonality of these approaches is that they aim to create new knowledge and assess processes to discover the “why”, “when” and “how” of adverse events, because without proper quantification of the collected data, assessing any stage of progress is unreliable. “Quality-of-care literature is full of discussions about performance measurement” (IOM, 1998), and many quality models have been implemented for this purpose. Despite the many models developed elsewhere, many issues still surround adverse event measurement, including:
• Healthcare providers fear for their reputations, or potential lawsuits (Thomas & Petersen, 2003);
• Physicians may lack motivation in face of repeated adverse events reports (Walsh, 2000);
• Reliable definitions of adverse events are hard to find; some are discovered through methods that are not always reliable or valid themselves (Walsh, 2000);
• Measurements lose focus and objectivity, e.g., when organizations merely follow guidelines instead of developing their own goals;
• Measurements and systems may experience increased complexity that affects performance and quality goals, thereby undermining the measurement exercise.
These difficulties indicate that there is still much work to be done in this field. In this paper, we first introduce our objectives and background research. We then propose a new methodology that tailors an existing quality framework to healthcare safety, while integrating Business Intelligence concepts, with application to adverse event monitoring.

Research Objectives

Many existing adverse event measurement frameworks adopt guidelines from organizations like the World Health Organization (WHO) or the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) as their measurement driver. What motivates this research is the discovery of methodologies that provide guidance by developing measurement goals. It is also promising to find a way to apply measurement methodologies originally developed for the software industry to the benefit of the healthcare system.
This research targets several objectives:
1. The creation of a framework that allows the development, collection and analysis of measures that expose, in a quantifiable way, progress toward well-defined measurement goals;
2. The development of a dimensional model for storing the collected data; and
3. An assessment of the use of dimensional modeling and Business Intelligence (BI) tools for the analysis and reporting of generated measures.
The result will enable organizations to focus on the development of measurement goals in order to ultimately find well-validated metrics.

Background Research

Our literature review enabled us to gather an understanding of methodologies that help with the development and creation of measurements. It started with a review of healthcare frameworks. Over the past several years, many approaches have been developed to measure performance, as well as the quality of care, in hospitals. These tend to specialize in particular clinical areas or diseases. An example is the methodology developed by Spertus et al. (2005) to select performance measures geared towards quantifying the quality of cardiovascular care. Similarly, Greenberg et al. (2005) developed a methodology to create strategy-based system-level cancer care performance indicators.
Conversely, other methodologies such as IQIP and PATH are more general and widely used across the healthcare system. IQIP stands for the International Quality Improvement Program, established in 1985. Its main objective is to provide defined sets of indicators capable of giving insights on quality and performance. IQIP engages in activities like searching for “the most valid indicators, the most reliable methods of data gathering and the optimal clarity of analysis presentation” (Kazandjian et al., 1995), which makes the reliability and relevance of the proposed indicators quite high. Today this methodology “serves the performance measurement and safety improvement needs of healthcare organizations worldwide” (IQIP, 2010), and it is considered the largest dataset of quality indicators (Thomson et al., 2004). What makes IQIP interesting is the support and training provided to its users, such as hospital coordinators, on how to use, evaluate and understand the outcomes of the indicators (Thomson et al., 2004). Although widely used, IQIP has drawn criticism. Concerns include the validity of indicators in certain circumstances and the fact that not all actions yield the expected results (Kazandjian et al., 1995).
PATH stands for the Performance Assessment Tool for quality improvement in Hospitals. The objective of this project is to provide a tool to assess hospitals’ performance by analyzing their results and using this information for actionable improvement. The PATH conceptual model is based on six inter-related dimensions that together describe performance and therefore offer a good guide to measuring it: “clinical effectiveness, efficiency, staff orientation, responsive governance, safety and patient centeredness” (Veillard et al., 2005; Groene et al., 2008). Positive outcomes reported by PATH include the identification of specific dimensions of performance and their relationships, the development of indicator selection criteria, the development of a set of indicators and their relationship to the dimensions of performance, and the development of a strategy to benchmark the results of the project against other hospitals (Veillard et al., 2005). One important limitation of PATH is that the indicators created with this approach are tightly coupled to their respective dimensions, leaving researchers with little room for new developments.
The biggest issue found with IQIP and PATH is the lack of guidance in obtaining very concrete indicators capable of addressing particular situations. A more extensive literature review revealed that many measurement methodologies created for the software development industry specialize in providing tools that offer the guidance many healthcare approaches are missing. Examples are the Goal-Question-Metric (GQM) approach and the framework created by Fenton & Pfleeger.
GQM was created by Basili and Weiss to measure software development processes (Berander & Jönsson, 2006). This methodology focuses on the development, collection and analysis of a set of variables (or indicators) capable of tracking, in a quantifiable way, progress toward defined measurement goals. GQM is based on a top-down hierarchical structure formed by three levels: Conceptual (Goals), Operational (Questions), and Quantitative (Metrics). Within this structure, some metrics can be used to answer different questions under the same model, and different models can have some questions and metrics in common. Specifically, business goals and associated measurement goals are developed first to provide guidance and structure for the project; then questions are posed to define the goals in a qualitative way. Finally, measures are specified to answer the questions in a quantifiable manner (Basili et al., 2009).
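To make this structure concrete, the following minimal Python sketch (our own illustration; the goal, questions, and metric are hypothetical healthcare examples, not prescribed by GQM) shows the three levels and how a single metric can answer more than one question:

    # Minimal sketch of the GQM hierarchy: Goals -> Questions -> Metrics.
    # All names are hypothetical illustrations for adverse event monitoring.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Metric:              # Quantitative level
        name: str
        unit: str

    @dataclass
    class Question:            # Operational level
        text: str
        metrics: List[Metric] = field(default_factory=list)

    @dataclass
    class Goal:                # Conceptual level
        purpose: str
        questions: List[Question] = field(default_factory=list)

    # The same metric object answers two different questions.
    ae_rate = Metric("adverse event rate", "events per 1,000 patient-days")
    goal = Goal(
        purpose="Reduce preventable adverse events in the unit under study",
        questions=[
            Question("How frequent are adverse events?", [ae_rate]),
            Question("Is the frequency decreasing over time?", [ae_rate]),
        ],
    )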
It has been suggested that the implementation of the GQM approach does not have to be as sequential as stated by Basili. Instead, the inputs and outputs of each step should vary depending on the context of the implementation and the scope of the project (Van Solingen & Berghout, 1999). GQM adapts to different organizations and environments and is applicable to all life-cycle products, processes or resources (Van Solingen et al., 2002). It is now considered a “de facto standard for the definition of measurement frameworks” (Berander & Jönsson, 2006). This ‘standard’ can also be translated and used in a different sector, such as the healthcare industry. Implementing it in hospitals may allow for the discovery of new metrics that track processes as they occur.
Some weaknesses have been reported, however. The most notable is the risk of identifying more measurements than can feasibly be collected or analyzed (Berander & Jönsson, 2006). This can also translate into “a top down approach [that] ignores what is possible to measure at the bottom” (Bache & Neil, 1995). To address this difficulty, Berander & Jönsson (2006) propose an extension of the GQM approach in which prioritization tools limit the number of measurements identified and categorization tools balance the different dimensions.
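As a rough sketch of the kind of prioritization such an extension calls for (the scoring scheme and budget below are our own assumptions, not the tools defined by Berander & Jönsson), candidate metrics could be scored by stakeholders and trimmed to a collection budget:

    # Hypothetical prioritization: stakeholders score candidate metrics,
    # and only the top scorers within the collection budget are kept.
    candidates = {
        "adverse events per 1,000 patient-days": 9,
        "events per severity category": 8,
        "mean time from event to report": 7,
        "events per day of week": 2,
    }
    BUDGET = 3  # maximum number of metrics the team can collect and analyze
    selected = sorted(candidates, key=candidates.get, reverse=True)[:BUDGET]
    print(selected)  # the lowest-priority candidate is dropped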
The other example of a software measurement methodology with potential for the healthcare area is the framework created by Fenton & Pfleeger (1997). This framework shows how measurement requires the definition of entities and their attributes, as well as the mapping of these attributes to values, units and scale types (Oman & Pfleeger, 1997). The approach makes use of GQM to keep the methodology goal-driven (Fenton, 1994). To address the large number of metrics that can be found using GQM, a process maturity framework is added. This framework elaborates on the structure of the Capability Maturity Model (CMM) created by the Software Engineering Institute, and takes into account not only the maturity of the process but also the maturity of its outcomes (Curley, 2006). In this way, it can provide information about the availability of metrics depending on the project’s state. By combining GQM with CMM, the methodology addresses some of GQM’s reported issues, such as the excessive number of measures created.
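A minimal sketch of such a measurement definition, assuming an illustrative entity and attribute of our own choosing (not an example taken from Fenton & Pfleeger), might look as follows:

    # Sketch of a measurement definition in the spirit of Fenton & Pfleeger:
    # an entity, one of its attributes, and its unit and scale type.
    from dataclasses import dataclass
    from enum import Enum

    class ScaleType(Enum):
        NOMINAL = "nominal"    # e.g., event type
        ORDINAL = "ordinal"    # e.g., severity grade
        INTERVAL = "interval"
        RATIO = "ratio"        # e.g., length of stay in days

    @dataclass
    class Measure:
        entity: str            # what is measured (process, product, resource)
        attribute: str         # the property of interest
        unit: str
        scale: ScaleType

    # Illustrative definition: the length of stay of a hospital admission.
    length_of_stay = Measure("hospital admission", "length of stay",
                             "days", ScaleType.RATIO)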

Tailoring GQM to the Healthcare Sector

A teaching hospital in Ontario, Canada, is currently implementing a prospective clinical surveillance methodology to support the discovery of adverse events (Behnam et al., 2009). An Adverse Event Management System (AEMS) is used to collect the events that occur and to categorize them by type, location, severity, error type, and patient demographics. In 2008-2009, this AEMS was used in several pilot projects at that hospital. One of the major problems with these earlier initiatives was the lack of reporting capabilities and of clear ideas on what exactly was to be reported upon. As it turned out, in the final phases of the project, stakeholders still had no automated reports capable of showing the results of the investigation, nor were there templates specifying what should be reported, when, or to whom. The process clearly needed further structure and automation.
This scenario provided the perfect grounds to conduct a case study and tailor GQM to the healthcare sector. This approach was selected because of its simplicity and ease of adaptation to other types of industries. Moreover, its first three steps provided the guidance and structure necessary to develop the measures needed to report on the data collected for the adverse events project.
A new pilot implementation was conducted in a specific clinical unit of that hospital from December 2009 to February 2010, and we took a closer look at the process used. The Adverse Event Management System served as the main source of data, and we created a dimensional model of the data that was used for report generation with IBM’s Cognos 8 Business Intelligence tool (Volitich, 2008).
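The actual model and reports were built in Cognos 8, but the general shape of such a dimensional model, and the kind of roll-up a report performs over it, can be sketched in Python with pandas (the table, column, and member names below are simplified stand-ins, not the hospital’s real schema):

    # Simplified star schema for adverse events: a fact table keyed to
    # dimension tables, and an aggregation of the kind a report would show.
    import pandas as pd

    dim_type = pd.DataFrame({"type_id": [1, 2],
                             "type": ["medication", "fall"]})
    dim_unit = pd.DataFrame({"unit_id": [1, 2],
                             "unit": ["surgery", "internal medicine"]})
    fact_events = pd.DataFrame({"type_id": [1, 1, 2, 1],
                                "unit_id": [1, 2, 1, 1],
                                "severity": [2, 3, 1, 2]})  # ordinal grade

    # Roll up the fact table along the unit and type dimensions.
    report = (fact_events
              .merge(dim_type, on="type_id")
              .merge(dim_unit, on="unit_id")
              .groupby(["unit", "type"])
              .size()
              .rename("event_count")
              .reset_index())
    print(report)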
After the case study ended, it was possible to pinpoint issues believed to have negatively influenced the outcomes of the measurement exercise. They fell into three main categories: data source issues, methodological issues, and people issues. Regarding the methodological issues, it was found that:
• The stakeholder team selected was inadequate for the task at hand. Given that clinical personnel were seldom available, only two perspectives were used to describe goals, questions and measures.
• No action plan was conceived. Stakeholders did not have a guiding document with deadlines and steps, nor were they asked to formally present each step’s outputs, which resulted in disagreements and unnecessary delays.
• Goals, questions and measures were developed without considering information hosted in the data sources; therefore not all measures could be collected.
• Users lacked incentives and a visual aid showing the exercise’s results. Reports were created after all measures had been developed, instead of being laid out using GQM.
• Poor performance during the identification of questions negatively influenced the generation of measures.

An Improved Methodology

The problems mentioned above led to modifications of the methodology so that GQM could still be used while meeting the requirements and constraints specific to healthcare.
Figure 1: The improved measurement methodology and its three concurrent phases: Metrics Development, Project Planning, and Report Generation.
This new approach includes three phases: Metrics Development, Project Planning and Report Generation, which are executed concurrently to provide more flexibility, agility, and openness to change.
As seen in Figure 1, the methodology starts by carefully choosing the team members. This step includes personnel with different roles in order to represent the different responsibilities in the measurement exercise. A tentative measurement plan, including deadlines and deliverables, starts to be developed at this point; this plan is allowed to change and evolve depending on the circumstances.
The selection of entities then takes place in order to focus the research on specific areas, processes, or situations; this is necessary to delimit the project’s boundaries. Business and measurement goals are then defined to state concretely what needs to be measured and what is expected to be accomplished by the end of the project. Questions should be posed at this point to describe the goals as precisely as possible. This process might lead to the discovery of new goals or the refinement of existing ones, and can therefore change the implementation plan previously drawn up.
Report mock-ups can be started at this stage. They help with understanding the questions and goals, besides providing a visual prototype of what is to be obtained. Metrics are then developed to provide the quantitative information needed to answer the stated questions satisfactorily. A literature review can help at this point; it should not be performed earlier, to avoid biasing the definition of goals. Measures should be defined by the team’s biostatisticians or other experts in the measured field.
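To illustrate how a metric defined at this step answers one of the posed questions quantitatively, consider the following sketch (the question, formula, and figures are invented for illustration):

    # Hypothetical metric answering "How frequent are adverse events?":
    # the event count normalized by exposure, in events per 1,000 patient-days.
    def ae_rate_per_1000_patient_days(event_count: int,
                                      patient_days: int) -> float:
        return 1000.0 * event_count / patient_days

    # e.g., 12 events over 2,400 patient-days -> 5.0 events/1,000 patient-days
    print(ae_rate_per_1000_patient_days(12, 2400))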
At this point in time, we are about to validate this improved methodology with two new pilot studies at the teaching hospital. Our hope is to determine whether these refined steps are sufficient to obtain valuable metrics and reports for healthcare stakeholders.

Conclusion

This research draws on the applicability of GQM, a framework traditionally found in the software sector, to the healthcare industry. It was introduced because of an evident lack of guidance in the measurement methodologies commonly found in the healthcare system, such as IQIP and PATH. These methodologies tend to steer the measurement process through a set of restrictive ‘best practices’ or standard guidelines, rather than allowing these to change with objectives that emerge during the process.
Tailoring GQM to healthcare is an idea that resulted from several earlier pilot attempts to generate useful metrics systematically, none of which had yielded satisfactory results. Introducing GQM to this scenario may help healthcare organizations follow a precise guideline, where individual steps are laid out to lead the user through the process. The result is that a manageable set of useful metrics is discovered by developing specific goals, which provide scope and context.
This is a goal-driven approach, in which the goals can differ from one scenario to the next; objectivity and room for diversity are thereby much expanded. Furthermore, this methodology may also encourage wider reporting of adverse events, thereby providing a way to pinpoint adverse events in Canadian institutions, across clinical specialties and provincial jurisdictions. The combination of measurement methodologies from the software and healthcare industries also shows how each can be greatly improved by the lessons learned in the other.

References

1. Baker, G.R., Norton, P.G., et al. (2004) The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. CMAJ, 170(11):1678-1686.
2. Basili, V.R., Heidrich, J., Lindvall, M., Münch, J., Seaman, C., Regardie, M., & Trendowicz, A. (2009) Determining the Impact of Business Strategies Using Principles from Goal-oriented Measurement. Wirtschaftsinformatik (1), 545-554.
3. Behnam, S., Amyot, D., Forster, A., Peyton, L., & Shamsaei, A. (2009) Goal-Driven Development of a Patient Surveillance Application for Improving Patient Safety. 4th Int. MCeTech Conf. on eTechnologies, LNBIP 26, Springer, 65-76.
4. Berander, P., & Jönsson, P. (2006) A goal question metric based approach for efficient measurement framework definition. ACM/IEEE Int. Symp. on Empirical Software Engineering, ACM, New York, USA, 316-325.
5. Curley, M. (2006) An IT value based capability maturity framework. Centre for Information Systems Research, Sloan School of Management, MIT, USA.
6. Fenton, N.E. (1994) Software Measurement: A Necessary Scientific Basis. IEEE Transactions on Software Engineering, 20(3):199-206.
7. Fenton, N.E., & Pfleeger, S.L. (1997) Software Metrics: A Rigorous & Practical Approach, 2nd ed. PWS Publishing Company.
8. Greenberg, A., Angus, H., Sullivan, T., & Brown, A.D. (2005) Development of a set of strategy-based system-level cancer care performance indicators in Ontario, Canada. Int. J. for Quality in Health Care, 17(2):107-114.
9. Groene, O., Klazinga, N., Kazandjian, V., Lombrail, P., & Bartels, P. (2008) The World Health Organization Performance Assessment Tool for Quality Improvement in Hospitals (PATH): An Analysis of the Pilot Implementation in 37 Hospitals. Int. J. for Quality in Health Care, 20(3):155-161.
10. IOM—Institute of Medicine Board on Health Care Services (1998) Measuring the Quality of Care. National Academies Press, Washington, DC.
11. Kazandjian, V.A., Wood, P., & Lawthers, J. (1995) Balancing Science and Practice in Indicator Development: The Maryland Hospital Association Quality Indicator (QI) Project. Int. J. for Quality in Health Care, 7:39-46.
12. Michel, P., Quenon, J.L., de Sarasqueta, A.M., & Scemama, O. (2004) Comparison of three methods for estimating rates of adverse events and rates of preventable adverse events in acute care hospitals. British Medical J., 328(7433):199-203.
13. Oman, P., & Pfleeger, S.L. (1997) Applying Software Metrics. IEEE CS Press.
14. Spertus, J.A., Eagle, K.A., Krumholz, H.M., Mitchell, K.R., & Normand, S.T. (2005) ACC/AHA methodology for the selection and creation of performance measures for quantifying the quality of cardiovascular care: a report of the ACC/AHA Task Force on Performance Measures. J. Am. Coll. Cardiol., 45:1147-1156.
15. Thomas, E.J., & Petersen, L.A. (2003) Measuring Errors and Adverse Events in Health Care. Journal of General Internal Medicine, 18:61-67.
16. Thomson, R., Taber, S., Lally, J., & Kazandjian, V. (2004) UK Quality Indicator Project® (UK QIP) and the UK independent health care sector: a new development. Int. J. for Quality in Health Care, 16(Supp. 1):i51-i56.
17. Van Solingen, R., & Berghout, E. (1999) The Goal/Question/Metric Method: A Practical Guide for Quality Improvement of Software Development. McGraw-Hill.
18. Van Solingen, R., Basili, V.R., Caldiera, G., & Rombach, D.H. (2002) Goal Question Metric (GQM) Approach. In: Marciniak, J.J. (ed.), Encyclopedia of Software Engineering, Wiley Interscience.
19. Veillard, J., Champagne, F., Klazinga, N., Kazandjian, V., Arah, O.A., & Guisset, A.-L. (2005) A performance assessment framework for hospitals: the WHO regional office for Europe PATH project. Int. J. for Quality in Health Care, 17(6):487-496.
20. Volitich, D. (2008) IBM Cognos 8 Business Intelligence: The Official Guide. McGraw-Hill.
21. Walsh, K. (2000) Adverse events in health care: issues in measurement. Quality and Safety in Health Care, 9:47-52.
