Guest Column | May 8, 2024

A Critical Analysis Of FDA Human Factors IFU Guidance For Drug Delivery Devices

By Charles L. Mauro, CHFP, and Christopher Morley, MSc., Mauro Usability Science


The medical drug delivery device landscape is changing at a rapid rate: Drug administration by lay users (e.g., patients or caregivers) is becoming more common. Patients who self-administer may have clinical manifestations that impact their ability to utilize a drug delivery device. Our population is aging, leading to decreased physical and cognitive capabilities for lay users and healthcare professionals (HCPs). Pharmaceutical companies are developing larger molecule, higher viscosity, and higher volume drugs that are more difficult to inject based on the force required for full dose delivery. We have been analyzing the progression of these factors, as well as the increase in complaints filed with the FDA about drug delivery devices that received approval but exhibit failures in the marketplace. We have determined that solely following FDA guidance may be insufficient for user success in the marketplace given the changing drug delivery device landscape. In this article, we discuss current shortcomings of the general FDA human factors (HF) guidance and FDA guidance specifically for instructions for use (IFU). We provide high-level recommendations to help resolve the problems with current FDA guidance. We understand that some proposed resolutions may increase the cost to bring a drug delivery device to market; however, such costs pale in comparison to the costs and consequences of a drug delivery device failing once in the marketplace.

The design of products that deliver high levels of HF performance is a challenging task often not well understood by product development teams, including drug delivery device development teams. By drug delivery device we mean a medical device used in combination with a drug, also commonly referred to as a combination product. Hereafter, we will simply refer to a drug delivery device as a “device”. One primary factor is that development teams have historically relied on industry standards and governmental regulations for demonstrating HF performance. This is a complex problem in the medical device industry, where various associations and governmental agencies put forth directives for establishing HF performance. The primary resource in this domain is the FDA's guidance document "Applying Human Factors and Usability Engineering to Medical Devices",1 which references AAMI/ANSI HE75,2 ISO/IEC 62366-1:2015,3 and ISO 14971:2019.4 The FDA’s “Human Factors Studies and Related Clinical Study Considerations in Combination Product Design and Development”5 is also often followed by device developers when navigating the FDA approval processes. Such standards offer recommendations and specifications for the design of medical devices, but those recommendations sometimes conflict with one another and lack conceptual depth.

Ultimately, most decision-making relies on guidance provided by the FDA. However, relying entirely on FDA HF guidance may be insufficient for user success in the marketplace. This conclusion comes from understanding the relationship between FDA guidance and best practices in the field of HF science: it is possible to meet FDA HF guidance and still deliver a device with serious usability defects. How is this possible?

It is often surprising to development teams that HF guidance may not provide sufficient direction for ensuring usability of drug delivery devices, and we have written about this topic elsewhere.6

What Happened To Usability Performance Between FDA Approval And The Marketplace?

Over the past decade, we have tested devices that passed FDA HF review but exhibited serious usability problems when used in the marketplace. Importantly, these studies were stimulated by complaints filed with the FDA by users of the devices in clinical practice. Once a device is approved and in the marketplace, the FDA requires a formal response to usability complaints. These studies are costly and time-consuming, demanding in-field investigation of complaints and execution of additional, unplanned HF studies to identify and mitigate the observed HF performance problems.

The message is simple: Development teams that push through devices with suboptimal HF performance risk serious problems when the device reaches the marketplace. Development teams should understand that meeting FDA HF guidance is not a final measure of overall usability and compliance with industry best practices. This is critical when defending against FDA challenges, recalls, product market removals, and litigation related to product design defects, including usability problems. The current market and legal system hold devices to a higher standard than simply meeting FDA guidance. In an environment focused on cost reduction, we are seeing device usability defects tied to drug administration failure, patient safety, or drug overdose and underdose problems. These defects can lead to life-threatening drug loss and/or the need for second dosing procedures. Take a new on-body injector that fails to deliver the dose of a costly new drug due to usability problems. Is the patient expected to pay for the second dose? How does the physician guide the patient through partial dose management? With new large molecule, high-viscosity drugs, this becomes a liability for the drug delivery device developer. This is but one of many examples. Why would devices approved by the FDA exhibit serious problems in the marketplace?

Somewhere between FDA HF guidance fulfillment, device approval, and use of device in real-life scenarios, usability performance can be seriously compromised. How? Is FDA guidance viable for demonstrating safe and effective use and user acceptance for critical devices and systems? The surprising answer is no, and data supporting this view has been in plain sight for more than a decade. The problem is becoming more apparent with the previously mentioned changes in the drug delivery device landscape.

Examples of how FDA guidance is not sufficient for producing drug delivery devices that are safe and effective to use in the marketplace are found in the FDA HF guidance published on the design of IFU.7 This guidance is supposed to provide teams with direction for design and testing of labeling for devices that meet state-of-the-art HF performance requirements.

Historically, IFU have been an afterthought for most device teams. This is common not only in the drug delivery device space but in other product verticals, including consumer products. As drug delivery devices have become more complex and are increasingly used by patients themselves, the IFU has become a critical component of mitigating use errors. We understand that designing usable IFU documentation for drug delivery devices can be a vexing HF science problem. Our team has confirmed the extent of this problem via hundreds of usability studies over the past three decades. FDA IFU guidance shows how complex this problem is: not because of what the guidance includes, but because of what it leaves out.

Defects In Current FDA HF Guidance For IFU Systems

The following is a summary of 10 major failures of FDA IFU guidance based on our team's extensive experience with the design and testing of drug delivery device product labeling content. Based on this analysis, one can see that relying only on FDA HF guidance is not workable in today's world of increasingly complex devices, legal liability, and marketplace competition.

1. Failure to Provide Guidance on How to Validate HF Performance of IFU Systems

The guidance makes no mention of how to validate an IFU or execute user testing on representative user profile(s) while accounting for relevant physical/cognitive ability distributions. The lack of objective study design and statistical criteria for establishing HF IFU performance is a fundamental failure. Without formal statistical criteria for determining acceptable performance, the current FDA-accepted procedure allows the use of methods that do not meet industry best practices in terms of study design and scientific rigor. The FDA-suggested sample size of 15 respondents per user group may not allow proper statistical analysis due to low statistical power. This sample size guidance may be influenced by cost efficiency; however, if these small-sample studies do not capture critical user errors that are observed once devices enter the market, are such studies effective? Is this really validation?
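To make the power concern concrete, the following is a minimal sketch, our own illustration rather than anything drawn from FDA guidance, of how likely a 15-participant user group is to surface a critical use error at an assumed true occurrence rate, along with the familiar "rule of three" upper bound when no errors are observed. The error rates used are assumptions chosen only for illustration.

```python
# Minimal sketch (our illustration, not from any FDA guidance): how likely a
# 15-participant user group is to observe a critical use error at least once,
# given an assumed true per-participant error rate p.

def p_detect(p: float, n: int = 15) -> float:
    """Probability that an error with per-participant rate p appears at least once in n sessions."""
    return 1.0 - (1.0 - p) ** n

for p in (0.01, 0.05, 0.10, 0.20):
    print(f"true error rate {p:.0%}: chance of observing it at least once with n=15 is {p_detect(p):.0%}")

# "Rule of three": if zero errors are observed in n participants, an approximate
# one-sided 95% upper bound on the true error rate is 3/n (20% when n = 15).
n = 15
print(f"zero errors in {n} participants -> ~95% upper bound on the true error rate: {3 / n:.0%}")
```

Under these assumptions, an error affecting 5% of users would go unobserved in roughly half of 15-participant studies, and a "clean" study still leaves the true error rate bounded only at about 20%.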

2. Failure to Address Impact of IFU Configuration on Usability

Our team has conducted large-sample studies examining how configuration of IFU content impacts user engagement, comprehension, and error rates. Interestingly, whether IFU content is presented on a single sheet, two-sided sheet, short-form booklet, long-form booklet, or combinations of these has a statistically significant impact on information transfer. The same content presented in different IFU configurations has a dramatic impact on information uptake, processing, decision-making, and error mitigation. Despite this, FDA guidance fails to include any discussion of this factor.
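As an illustration of the kind of analysis behind such configuration comparisons, the sketch below runs a chi-square test of independence on critical-error counts across four IFU configurations. The counts are invented for this sketch, not data from the studies described above, and the widely used scipy library is assumed to be available.

```python
# Hypothetical illustration: testing whether critical-error rates depend on
# IFU configuration. Counts are invented for this sketch only.

from scipy.stats import chi2_contingency

# rows: IFU configuration; columns: [no critical error, critical error]
observed = [
    [52,  8],   # single sheet
    [55,  5],   # two-sided sheet
    [58,  2],   # short-form booklet
    [47, 13],   # long-form booklet
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value would suggest that error rate varies with configuration; a
# real study would also report effect sizes and ensure adequate per-group samples.
```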

3. Failure to Require Development Teams to Validate All Respondent Data-Scrubbing During Data Analysis

A well-kept secret in the filing of HF research data with the FDA is the use of overly aggressive risk mitigation, often in the form of removing respondents who failed the study objectives from the data analysis. Some development teams, supported by internal compliance staff, routinely scrub respondents from summative usability studies to improve the overall acceptability of HF performance data prior to submission to the FDA. There are devices that are simply unusable by an acceptable range of user profiles, even with the most extensive IFU documentation. Yet somehow such devices are approved by the FDA based on submitted summative testing data. We have tested devices that clearly failed baseline usability criteria. We reported such findings to the device development and compliance teams and, some months later, saw our same study data submitted to the FDA; for the most part, we did not recognize the data in terms of usability performance ratings. To be clear, there are legitimate reasons for scrubbing respondents from a study sample, but the FDA should require all raw video recordings from summative studies, the rationale for any respondents removed, and a description of how removal impacted the overall HF performance of the device.
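A simple, hypothetical calculation shows how much removing failed respondents can shift the headline numbers; the counts below are invented for illustration and do not come from any real submission.

```python
# Hypothetical illustration: how removing "failed" respondents inflates the
# reported task success rate of a 15-participant summative study.
# The counts are invented and do not come from any real submission.

enrolled = 15
critical_failures = 3  # respondents who failed the critical task

raw_rate = (enrolled - critical_failures) / enrolled
scrubbed_rate = (enrolled - critical_failures) / (enrolled - critical_failures)

print(f"success rate with all respondents retained: {raw_rate:.0%}")       # 80%
print(f"success rate after scrubbing the failures:  {scrubbed_rate:.0%}")  # 100%
```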

4. Failure to Identify the Proper Skill Set for the Design and Testing of IFU Content

It has long been assumed that IFU design is a graphic design problem. This is fundamentally incorrect. Modern neuroscience indicates that how we process information is based on our prior experience, training, and current mental state, all of which sum to a staggering number of perceptual biases. The role of graphic design in the creation and validation of an effective IFU is one of visual integration grounded in cognitive task analysis, cognitive modeling, and function allocation analysis. The personnel required for development and validation of an effective IFU include HF scientists, usability engineers, compliance experts, product design experts, graphic designers and illustrators, and, of course, representative users of the device. Yet in the recent FDA guidance for IFU development, the focus is overwhelmingly on graphic design. Physical and cognitive limitations of the intended user (e.g., due to advanced age or disease state) must also be considered when designing an IFU. This is a major oversight.

5. Failure to Identify How Instructional and Warning Content Should be Distributed Across the Device, IFU, Product Package Content Design, Quick Reference Cards, and Even Online and Call Center Support Systems

Drug delivery device design and IFU development is a systems design problem that must account for all user interaction points and relevant risk-based use cases. Yet current FDA guidance fails to include any direction on how to approach development of IFU from a systems perspective. This results in device development teams working in siloed domains that fail to consider the total user experience and how supporting content integrated into IFU leaflets, package content design, etc., should be distributed, designed, and validated. This is a major shortcoming.

6. Failure to Identify How IFU Documentation Design Requirements Differ for HCPs Compared to Patients

Our research team has conducted hundreds of formative and summative usability studies on drug delivery devices and related IFU content. We often observed that core usability problems for patients are rarely the same for HCPs. Whereas patient interactions with devices often result in commission errors, HCPs (like most experts) routinely exhibit omission errors. IFU design for these two groups presents different problems, even though many development teams still believe that one IFU will serve both groups. Understanding the relationship between domain expertise and errors is critical for creating and validating IFU systems. The FDA makes no mention of this variable.

7. Failure to Address Core Function Allocation Design Performance

Based on formal HF science, the design of IFU should always start with detailed task analysis of the device user interface to understand which use cases and interaction modes lead to common user errors. Then, the team should determine which types of use errors are best mitigated by updating the physical device design versus IFU documentation. This process is known as function allocation analysis. Current HF best practices call for this approach based on a long history of mitigating risk during device and system development. Despite this, FDA IFU guidance does not mention function allocation as a means of determining how to mitigate risk via device or IFU optimization.

8. Failure to Address How to Define the User Population for IFU Design and Validation

Whether a given IFU is intelligible and reduces critical errors is a complex usability problem. However, at the very core of the problem is the need to define and recruit the proper respondent testing profiles that reflect the intended user population. Population profile development is a complex aspect of professional usability testing, often poorly understood by device teams. FDA IFU guidance makes no meaningful attempt to define how to identify and screen for representative respondents. For example, should a device team design to accommodate three standard deviations or two when setting criteria for language, physical coordination, cognitive capability, reading level, prior experience, and knowledge transfer? There is no mention of any of this in the FDA guidance.
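To show what hinges on that choice, here is a minimal sketch, assuming for illustration a normally distributed capability such as grip strength or reading speed, of the share of the intended population accommodated when a design criterion is set at one, two, or three standard deviations.

```python
# Minimal sketch: share of the intended population accommodated when a design
# criterion is set at +/-1, +/-2, or +/-3 standard deviations of a normally
# distributed capability (e.g., grip strength or reading speed). The normality
# assumption is for illustration only.

from statistics import NormalDist

standard_normal = NormalDist()

for k in (1, 2, 3):
    two_sided = standard_normal.cdf(k) - standard_normal.cdf(-k)  # users within +/- k SD
    one_sided = standard_normal.cdf(k)                            # users above a lower cutoff at -k SD
    print(f"+/-{k} SD: {two_sided:.1%} of users; single lower cutoff at -{k} SD: {one_sided:.1%} of users")
```

Under this assumption, moving from a two-SD to a three-SD criterion is the difference between excluding roughly 1 in 20 users and roughly 1 in 370, a distinction the guidance leaves entirely to the device team.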

9. Failure to Define How Physical Package and Content Design Impacts Utilization of IFU

Based on our execution of dozens of complex IFU/package design usability studies, we have found that the physical design of the device package has a profound impact on how the IFU is utilized. This is often surprising to development teams, who may silo package, IFU, and drug delivery device design within their corporate structure. We have learned from professional usability testing that the most effective approach for developing functionally excellent IFU systems is to view the device + package + IFU + insert card as a unified system of content display and communication for improving patient outcomes. The formal concept behind this idea is known as designing for the total user experience or TUX.8 Yet, again, the FDA guidance makes no mention of this critical aspect of IFU development and validation.

10. Failure to Provide Guidance Related to Design of Package Opening Mechanics and Forces for Patients and HCPs

Patients and HCPs often cannot physically open the packages in which devices and drugs are provided. Research on package opening mechanics and the related biomechanical forces has been an essential aspect of our advanced drug device usability testing methodology. Objective measurement of the Newtonian forces required to open packages, combined with analysis of the physical design of opening tabs, security labels, tear strips, and other closure technology, has demonstrated that many patients cannot open their device packages. In usability studies utilizing 3D spatial tracking, we found that patients and HCPs engage with product packaging in complex and highly unpredictable ways that have a major impact on whether they can open a given package and how they engage with the IFU. Again, the FDA guidance fails to mention this critical component.
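A minimal sketch of the underlying comparison: given a measured peak opening force and an assumed, normally distributed pinch-strength capability for the intended population, one can estimate the share of users who cannot open the package. All numbers below are hypothetical and chosen only for illustration.

```python
# Minimal sketch: estimating the share of an intended user population unable to
# open a package, given a measured peak opening force and an assumed, normally
# distributed pinch-strength capability. All numbers are hypothetical.

from statistics import NormalDist

required_opening_force_N = 28.0                    # assumed measured peak opening force (newtons)
pinch_strength = NormalDist(mu=45.0, sigma=12.0)   # assumed population capability (newtons)

share_unable = pinch_strength.cdf(required_opening_force_N)
print(f"estimated share of users unable to open the package: {share_unable:.1%}")
```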

Previous 10 Points Summarized

We make the case that relying on even the most recent FDA guidance may be insufficient to create IFU and package design solutions yielding high levels of professional HF performance, especially given the changing landscape of drug delivery devices. Adhering to best practices in the field of HF science is likely to produce better IFU content that will improve clinical outcomes and reduce the possibility of complaints after FDA approval. Fixing a usability problem during early development is 10X less costly than fixing it after the device has been committed to production, and fixing it once the device is in the marketplace is 100X more expensive than fixing it during early formative development. Realize that the FDA will respond aggressively to complaints from real users. Any complaints that impact drug delivery will lead to audits involving validation of all original submission materials, methods, risk audits, and usability testing to date. This process is expensive and time-consuming. In the worst-case scenario, the FDA will issue a warning letter directed at the usability of the device. This is a device executive's nightmare. This all leads to the question: What should industry and the FDA do to improve the usability of drug delivery devices that reach the marketplace after obtaining approval?

What Needs To Be Done To Fix The Problem?

These are some high-level recommendations that would significantly help resolve core problems with current FDA HF guidance:

  1. Device teams should never rely entirely on FDA HF guidance when designing and testing devices; they should also employ best-practice professional HF science, including robust testing methods and statistical analysis.
  2. Do not assume that because your device has passed FDA approval, your team will be free of usability complaints from the marketplace. Be prepared to answer tough questions from the FDA regarding why data reported during the application process does not match use in the marketplace.
  3. The FDA must rethink how the primary committee for drafting HF guidance is structured, so that it is not controlled by industry associations whose membership is based on membership fees/political impact.
  4. The FDA HF approval process must require professional HF research and validation. In the future, the FDA may require summative usability study submissions to include unedited video recordings of all respondent sessions. Such videos could be time-stamped at use-error events and annotated with root-cause information to increase the efficiency of review for FDA staff. The rationale for eliminating or modifying any observed errors, confusions, or failures seen in the video recordings should be provided. The FDA should not approve any device without internal validation of all observed user behaviors, as captured in summative user testing.
  5. When the FDA receives complaints from users in the marketplace, it should open a structured investigation and audit all submission materials provided by the device team in seeking approval. If fault is found with the usability of the device, the FDA should require a complete audit of all user testing data, validated by independent outside experts, based on provision of all raw video testing sessions from the summative study.

Conflict Of Interest

The authors have no conflicts of interest with regard to funding or sponsorship of the research reported in this paper. This paper was not funded or sponsored by any pharmaceutical or medical device company or government agency.

References

  1. U.S. Department of Health and Human Services Food and Drug Administration Center for Devices and Radiological Health Office of Device Evaluation. (2016). Applying Human Factors and Usability Engineering to Medical Devices - Guidance for Industry and Food and Drug Administration Staff. https://www.fda.gov/media/80481/download
  2. AAMI. (2009). ANSI/AAMI HE75:2009 Human Factors Engineering - Design Of Medical Devices. https://webstore.ansi.org/standards/aami/ansiaamihe752009r2018
  3. ISO. (2015). IEC 62366-1:2015 Medical Devices - Part 1: Application Of Usability Engineering To Medical Devices. https://www.iso.org/standard/63179.html
  4. ISO. (2019). ISO 14971:2019 Medical Devices - Application Of Risk Management To Medical Devices. https://webstore.ansi.org/standards/iso/iso149712019
  5. U.S. Department of Health and Human Services Food and Drug Administration Center for Devices and Radiological Health, Center for Drug Evaluation and Research, Center for Biologics Evaluation and Research, and Office of Combination Products in the Office of the Commissioner. (2016). Human Factors Studies and Related Clinical Study Considerations in Combination Product Design and Development - Draft Guidance for Industry and FDA Staff. https://www.fda.gov/media/80481/download
  6. Mauro, C., Pirolli, P., & Morley, C. (2019). A Critical Analysis of FDA Guidance for User Percentile Device Design Criteria versus Currently Available Human Factors Engineering Data Sources and Industry Best Practices. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3408117
  7. U.S. Department of Health and Human Services Food and Drug Administration Office of Combination Products (OCP), Center for Drug Evaluation and Research (CDER), and Center for Biologics Evaluation and Research (CBER). (2022). Instructions for Use — Patient Labeling for Human Prescription Drug and Biological Products — Content and Format — Guidance for Industry. https://www.fda.gov/media/128446/download
  8. Xu, W. (2012). User Experience Design: Beyond User Interface Design and Usability. In Ergonomics - A Systems Approach (p. 172). IntechOpen.

About The Authors:

Charles L. Mauro, CHFP, is president and founder of Mauro Usability Science (MUS). He holds a BS in industrial design and an MS in ergonomics. He is a Certified Human Factors Engineering Professional (CHFP) and has managed over 4,000 major research projects over his 40-year career. He has received numerous high-profile awards for research, including citations from the Human Factors and Ergonomics Society (honored as Titan of HFE in 2024), NASA, the Association for Computing Machinery, and the Industrial Designers Society of America. Mauro has been accepted at the federal court level as an expert in product design, product design methodology, and human factors research. He has testified in over 75 major intellectual property cases on such matters. He can be reached at cmauro@maurousabilityscience.com.

Christopher Morley, MSc., is director of research at Mauro Usability Science, where he has managed numerous complex usability and UX optimization research programs examining a variety of product categories and user profiles. This includes many high-criticality drug delivery device research programs involving varying levels of device, methodology, and user group complexity, ranging from traditional usability testing to advanced research methods. He obtained his master’s degree in experimental psychology from Old Dominion University, where he honed his background in experimental design, advanced statistical analysis, human factors psychology, and human-computer interaction. He can be reached at cmorley@maurousabilityscience.com.