With more sophisticated therapeutics come larger and more complex data sets. The complexity is heightened by advancements in personalized medicine, which introduce new data points at apheresis (the extraction and infusion of patient blood, cells, tissue and/or regenerative medicinal compounds) and during the enhancement of cells and genes.
While data integrity is critical to building confidence in the supply chain and product quality, as well as to meeting FDA compliance obligations, many organizations remain heavily reliant on spreadsheets, manual data entry, paper records and email. This creates numerous opportunities for error and can result in FDA warning letters, fines or recalls. While data “capture” may start early in biopharmaceutical R&D, a variety of disparate IT systems is often installed without a view toward data coherence across process development and clinical and commercial manufacturing.
These challenges are compounded by the universal reliance on external partners for significant process development and manufacturing operations.
To mitigate the risks of delayed, incomplete and inconsistent data, biopharma companies must establish a solid data management approach early in product development. This can be daunting, especially for startups with limited IT experience or staff.
The following items should be prioritized to address and mitigate enterprise risks around data integrity and reliability:
- Creation of a digital data backbone throughout the product and process lifecycle and across internal and external teams, sites and partners
- Interdepartmental review of Quality and Supply Agreements with CDMOs [contract development and manufacturing organizations] to ensure data visibility, IP ownership and process oversight
Establish a single digital data backbone early
There are new business demands for information to be processed faster. Building a digital data backbone early supports key activities further downstream – late-stage process development, scale-up and tech transfer, and manufacturing where quality assurance and compliance requirements come into the picture.
New digital data systems retain or establish the context and relative importance of data collected from the IT infrastructure. By implementing a cloud-based data backbone, data can be gathered and organized in a central platform without compromising context. Such a backbone can scale as the product and IT infrastructure mature, and it remains relevant as it integrates with systems like LIMS [laboratory information management systems], historians, MES [manufacturing execution systems] and eBRs [electronic batch record software], serving as your single verifiable source of truth for data critical to monitoring process control and conducting analysis and reporting.
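To make the idea concrete, the sketch below shows one minimal way a central backbone might store records from disparate systems while preserving provenance and tamper-evidence. The `ProcessRecord` structure, the system names and the use of a SHA-256 fingerprint are illustrative assumptions, not a description of any particular platform.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ProcessRecord:
    """One data point pulled into the central backbone (illustrative schema)."""
    source_system: str   # provenance, e.g. "LIMS", "MES", "historian"
    batch_id: str
    parameter: str
    value: float
    timestamp: str       # as reported by the source system

def fingerprint(record: ProcessRecord) -> str:
    """Hash a canonical JSON form so any later alteration is detectable."""
    canonical = json.dumps(asdict(record), sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Records from different systems land in one store, each keeping
# its source context plus a verifiable fingerprint.
backbone = []
for rec in [
    ProcessRecord("LIMS", "B-1042", "pH", 7.1, "2024-03-01T08:00:00Z"),
    ProcessRecord("MES", "B-1042", "agitation_rpm", 120.0, "2024-03-01T08:05:00Z"),
]:
    backbone.append({"record": rec, "sha256": fingerprint(rec)})

# Re-computing the hash confirms the stored value is unchanged.
assert fingerprint(backbone[0]["record"]) == backbone[0]["sha256"]
```

The point of the fingerprint is that a silently edited value no longer matches its stored hash, which is one simple way a platform can make data integrity checkable rather than assumed.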
With the increasing demand for accelerated tech transfer, FDA filings and commercialization, creating a data backbone early generates significant time and cost benefits: fewer PPQ [process performance qualification] runs, right-first-time tech transfer, streamlined investigations and production, and earlier batch release.
Though a cloud-based data management solution is the first step, companies must also be vigilant when partnering with manufacturers.
Data visibility in quality and supply agreements
With the acceleration of new drug and therapy development, complex manufacturing requirements, and associated capital investments, the growth in outsourcing is predicted to continue for the foreseeable future.
Despite outsourcing manufacturing, the drug owner (sponsor) remains liable for meeting the FDA’s standards for product quality, demonstrating control over the contract manufacturer and the drug manufacturing process, and establishing an auditable, high-integrity process, product and quality data set. The near-universal reliance on contract manufacturers, and the FDA’s focus on data integrity issues in drug manufacturing, have generated unprecedented scrutiny of manufacturing operations by the FDA, strategic acquirers and the SEC. As the supply chain continues to grow in complexity, data management across process development and manufacturing demands new approaches and innovation.
While data integrity challenges can lead to quality and operational issues, they can also create legal risks, such as loss of manufacturing intellectual property and failure to demonstrate control over the CDMO, which can affect the company’s enterprise value.
Although these challenges affect large and small companies alike, data visibility is a key pain point for small biopharma companies, as most are 100% reliant on CDMOs yet often lack the expertise and/or negotiating leverage to press well-established CDMOs for it.
Despite FDA mandates that drug owners manage their CDMOs and the associated manufacturing processes, owners struggle to meet these requirements: they are physically remote and often lack IT systems designed for data sharing between owner and contract partners. Failure to meet this requirement can result in the issuance of FDA warning letters. In fact, approximately 50% of all FDA warning letters in 2019 were related to data integrity issues.
Supply agreements must anticipate data needs and emphasize data visibility and ownership of critical information, including process control parameters.
Fortunately, an increasing number of CDMOs realize the compliance burden on their drug sponsors and recognize that the future of biopharma depends on collaboration and visibility in their manufacturing workflows. With state-of-the-art data management solutions and collaboration with CDMOs, biopharma companies can become more confident in their product quality and better prepared to satisfy strict compliance requirements.
Cloud-based data management solutions help the industry meet its business and compliance challenges. These platforms need to replace traditional data management methods and workflows for biopharma companies and CDMOs that seek competitive advantages.