Life Sciences CIO Weekly Digest – Week of Jan 19-25

(Tight 6–8 minute read, with source links for deeper exploration.)

Research conducted with Perplexity AI | Content compiled with Claude AI | All sources cited and verified

📊 Executive Summary

FDA and EMA issued the first transatlantic AI governance framework for drug development with 10 guiding principles covering lifecycle management, data governance, and traceability—setting concrete expectations for CIO architecture and cross-functional oversight.

Lilly–NVIDIA's $1B co-innovation lab and parallel infrastructure deals (Servier, GSK, Pfizer) validate AI platforms and foundation models as core enterprise infrastructure, forcing build-vs-buy and OT/IT convergence decisions.

Health-ISAC documented 455 ransomware incidents against health organizations in 2025, with supply-chain compromise as the primary attack vector and AI-enabled threats as the top concern for 2026; meanwhile, Oracle/Cerner and vendor breaches exposed gaps in third-party monitoring.

FDA expanded enforcement discretion for AI-enabled clinical decision support and broadened general wellness policy to include certain AI-interpreted physiologic measurements, opening new integration paths if governance separates wellness from diagnostic functions.

New operating-model research shows AI scaling requires persistent cross-functional pods, embedded liaisons, and behavioral capabilities—not just technical readiness—shifting the CIO mandate toward orchestration platforms and context-graph architectures that move AI outputs into governed workflows.

🤖 AI & Data

FDA–EMA Publish 10 Guiding Principles for Good AI in Drug Development

On January 14–16, FDA and EMA released their first joint statement on AI in drug development, articulating 10 principles that cover human-centric design, risk-based approaches, adherence to standards, clear context of use, multidisciplinary expertise, strong data governance, robust model design, lifecycle monitoring (including data drift), and transparent communication of performance and limitations. The agencies framed AI broadly—spanning non-clinical research, clinical studies, manufacturing, and post-market surveillance—and emphasized that AI must support, not weaken, patient protections while drugs still meet core quality, efficacy, and safety requirements.

Why it matters for CIOs: This is the first transatlantic regulatory framework that explicitly sets expectations for how life sciences CIOs must architect AI lifecycle management, from design through continuous monitoring. It elevates data governance, model traceability, and documentation from IT hygiene to regulatory compliance, and assumes cross-functional governance spanning R&D, manufacturing, and quality.

Moves to consider:

  • Map your current AI inventory (discovery, clinical, manufacturing, pharmacovigilance) against the 10 principles to identify gaps in data governance, lifecycle monitoring, or documentation

  • Stand up or formalize a multidisciplinary AI governance committee with representation from R&D, regulatory affairs, quality, IT, and data science to interpret and operationalize the principles

  • Pilot automated data-drift detection and model-performance tracking for at least one production AI system in the next quarter, with audit-ready logs
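
For the drift-detection pilot in the last bullet, a minimal sketch of what automated data-drift detection with audit-ready logs can look like, assuming a batch-scored model with a stored reference (training) sample. The PSI thresholds, feature name, and log format are illustrative assumptions, not a validated monitoring design.

```python
import json
import datetime
import numpy as np

def population_stability_index(reference, current, bins=10):
    """PSI between a reference (training) sample and a current batch.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the percentages to avoid division by zero / log(0)
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

def log_drift_check(feature, psi, threshold=0.25, path="drift_audit.jsonl"):
    """Append a timestamped, audit-ready record of every drift check."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "feature": feature,
        "psi": round(psi, 4),
        "threshold": threshold,
        "drift_flagged": psi > threshold,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: compare training-era data against this week's scoring batch
reference = np.random.default_rng(0).normal(50, 10, 5000)  # stand-in for training data
current = np.random.default_rng(1).normal(55, 12, 1000)    # stand-in for new batch
psi = population_stability_index(reference, current)
print(log_drift_check("patient_age", psi))
```

The same pattern extends to model-performance tracking: compute a metric per scoring batch, compare it to a threshold, and append an immutable log entry per check.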

Lilly–NVIDIA $1B AI Co-Innovation Lab and Wave of AI Infrastructure Deals

Eli Lilly and NVIDIA announced a $1 billion, five-year co-innovation lab in South San Francisco at JPM 2026, combining Lilly's drug discovery expertise with NVIDIA's AI infrastructure—including a pharma-scale supercomputer, multimodal foundation models, and "agentic" wet-lab integration—to build closed-loop discovery and clinical trial models. In the same week, Servier signed large AI discovery deals with Insilico (up to $888M) and Iktos (potentially >€1B), GSK licensed Noetik's oncology foundation models for $50M, and Pfizer partnered with Boltz on biomolecular foundation models. Lilly CEO David Ricks framed the goal as transforming small-molecule discovery "from artisanal craft to scalable engineering."

Why it matters for CIOs: These moves set a new order-of-magnitude benchmark for AI infrastructure investment and validate foundation models and AI platforms as core enterprise infrastructure, not side projects. They force CIOs to confront strategic questions about build vs. buy vs. co-build, multimodal data unification (text, image, genomic, chemical), and OT/IT convergence as AI increasingly touches lab robotics, manufacturing execution systems, and clinical workflows.

Moves to consider:

  • Conduct a strategic AI architecture review in Q1 to determine where your organization needs proprietary models, where licensed foundation models suffice, and where co-innovation partnerships make sense

  • Inventory data silos across R&D, manufacturing, and clinical that would need unification to support multimodal AI, and prioritize integration work accordingly

  • Engage facilities, lab automation, and MES teams to assess readiness for "agentic" AI that can issue commands to physical systems under governance—define control boundaries and traceability requirements now
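
On the control-boundaries point in the last bullet, a minimal sketch of one common pattern: a command gateway between an AI agent and physical lab/MES systems that enforces an allowlist and writes an audit trail. The command names, value limits, and approval rule are hypothetical placeholders.

```python
import json
import datetime

# Hypothetical allowlist: which commands an AI agent may issue autonomously,
# and which require a human approver before reaching the physical system.
ALLOWLIST = {
    "liquid_handler.run_protocol": {"autonomous": True},
    "incubator.set_temperature": {"autonomous": True, "max_value": 42.0},
    "mes.release_batch": {"autonomous": False},  # always needs human sign-off
}

def dispatch(command, value=None, approver=None, audit_path="agent_audit.jsonl"):
    """Gate an agent-issued command: enforce the allowlist, then log the outcome."""
    policy = ALLOWLIST.get(command)
    if policy is None:
        outcome = "rejected: command not on allowlist"
    elif not policy["autonomous"] and approver is None:
        outcome = "held: human approval required"
    elif "max_value" in policy and value is not None and value > policy["max_value"]:
        outcome = f"rejected: value {value} exceeds limit {policy['max_value']}"
    else:
        outcome = "executed"  # in practice: forward to the device/MES API here
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "command": command, "value": value, "approver": approver, "outcome": outcome,
    }
    with open(audit_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return outcome

print(dispatch("incubator.set_temperature", value=37.0))        # executed
print(dispatch("mes.release_batch"))                            # held for approval
print(dispatch("mes.release_batch", approver="qa_lead_jsmith")) # executed
```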

🔒 Cybersecurity & Risk

Health-ISAC Quantifies 455 Ransomware Incidents and Flags AI-Enhanced Attacks

Health-ISAC released its 2026 Global Health Sector Threat Landscape report on January 21–25, documenting 455 ransomware incidents targeting health organizations in 2025, with a small set of dominant groups (Qilin, INC Ransom, SAFEPAY) and supply-chain/third-party compromise as the primary attack vectors. The report, based on the Health-ISAC Ransomware Events Database and more than 1,200 targeted alerts distributed in 2025, also surveyed nearly 250 health executives and cybersecurity experts, who identified AI-enabled attacks as their top concern for 2026. Health-ISAC CSO Errol Weiss emphasized that healthcare is targeted "not due to ease of attack, but because the repercussions of disruption can be catastrophic."

Why it matters for CIOs: The report provides quantitative backing for viewing ransomware and vendor compromise as persistent, systemic risks—not outlier events—and explicitly flags AI-enhanced attacks as a near-term reality. It supports board-level conversations about sustained cyber investment, vendor risk programs, and threat-intelligence alignment specific to life sciences and healthcare ecosystems.

Moves to consider:

  • Brief your board or audit committee on the 455-incident baseline and Health-ISAC findings to calibrate risk appetite and investment in vendor risk management, threat intelligence, and incident-response readiness

  • Join or deepen engagement with Health-ISAC (or a similar ISAC) to access real-time threat intelligence, indicators of compromise, and peer benchmarks

  • Commission a tabletop exercise focused on AI-enhanced social engineering (e.g., deepfake vishing, automated spear-phishing) to test detection and response readiness

Oracle Health/Cerner and VillageCareMAX Breaches Expose Third-Party and Legacy-System Risk

Munson Healthcare notified approximately 100,000 patients in mid-January about a data breach stemming from legacy Cerner (now Oracle Health) servers that were awaiting cloud migration; an unauthorized third party accessed the systems in January 2025, and Oracle Health attorneys indicated up to 80 hospitals nationally may be affected. Separately, VillageCareMAX reported a breach via TMG Health in which attackers maintained unauthorized access to a vendor system for approximately 10 months, exposing extensive PHI while internal systems remained secure. Michigan Attorney General Dana Nessel paired her consumer alert with a call to strengthen breach-notification laws.

Why it matters for CIOs: These incidents illustrate how deeply third-party and legacy-system risk can impact organizations even when core internal systems are hardened. They highlight operational gaps in vendor monitoring, post-M&A system migrations, breach-notification coordination, and business associate agreement (BAA) enforcement—all areas life sciences CIOs must tighten for their own ecosystems of CROs, CMOs, cloud vendors, and EHR/EDC providers.

Moves to consider:

  • Audit all legacy systems (especially those in migration queues) for access controls, monitoring, and decommissioning timelines; accelerate cloud migration or implement compensating controls

  • Require quarterly attestations and evidence of security-control effectiveness from all business associates and critical vendors; escalate BAA enforcement for non-compliance

  • Map your extended ecosystem (CROs, CMOs, logistics, cloud, SaaS) and run a supply-chain risk scoring exercise to identify concentration risk and monitoring gaps
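
For the risk-scoring exercise in the last bullet, a minimal sketch of a weighted vendor risk score; the factors, weights, and vendor entries are illustrative assumptions to replace with your own criteria.

```python
# Illustrative risk factors (0 = low risk, 1 = high risk) and weights;
# both are assumptions to adapt to your own vendor-risk framework.
WEIGHTS = {
    "handles_phi": 0.30,             # vendor processes PHI/PII
    "legacy_systems": 0.20,          # known legacy or pending-migration systems
    "no_continuous_monitoring": 0.25,
    "concentration": 0.15,           # single point of failure for a critical process
    "past_incidents": 0.10,
}

vendors = {
    "cro_alpha":  {"handles_phi": 1, "legacy_systems": 0, "no_continuous_monitoring": 1,
                   "concentration": 1, "past_incidents": 0},
    "cloud_beta": {"handles_phi": 1, "legacy_systems": 0, "no_continuous_monitoring": 0,
                   "concentration": 1, "past_incidents": 0},
    "ehr_gamma":  {"handles_phi": 1, "legacy_systems": 1, "no_continuous_monitoring": 1,
                   "concentration": 0, "past_incidents": 1},
}

def risk_score(factors):
    """Weighted sum of risk factors, normalized to 0-100."""
    return round(100 * sum(WEIGHTS[k] * v for k, v in factors.items()), 1)

# Rank vendors so monitoring and BAA-enforcement effort goes to the riskiest first
for name, factors in sorted(vendors.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{name}: {risk_score(factors)}")
```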

⚖️ Regulatory & Compliance

FDA Revises Clinical Decision Support Guidance to Expand AI Enforcement Discretion

On January 6, FDA updated its Clinical Decision Support guidance to expand enforcement discretion to certain AI-enabled CDS tools, including those that output a single clinically appropriate recommendation, provided they support (not replace) clinician judgment and maintain transparency regarding data inputs, underlying logic, and how recommendations are generated. The 2026 guidance places heightened emphasis on explainability and independent review to mitigate automation bias, and clarifies that software identifying patients within an indicated population for a chemotherapeutic agent (previously considered a device function) is now treated as a non-device CDS function.

Why it matters for CIOs: This revision opens new possibilities for deploying AI-driven clinical decision support without triggering full medical-device regulation—if intent, UI, and messaging stay within decision-support boundaries and tools maintain explainability. CIOs must ensure data flows, user-experience copy, and governance clearly separate decision support from diagnostic or autonomous therapeutic functions, and that CDS tools enable independent clinician review rather than black-box outputs.

Moves to consider:

  • Review all AI-enabled CDS tools (in production or development) against the updated criteria, focusing on whether recommendations are explainable, support independent review, and avoid automation bias

  • Work with product, legal, and regulatory teams to audit UI copy, user workflows, and training materials to ensure they frame AI outputs as decision support, not autonomous diagnosis or treatment

  • Establish a pre-deployment checklist for CDS tools that includes explainability testing, clinician-review workflows, and documentation of data inputs and logic for regulatory inspection
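
A minimal sketch of how the pre-deployment checklist in the last bullet could be made machine-enforceable, so nothing ships without documented evidence on file. The field names and gate rule are assumptions, not FDA-prescribed items.

```python
from dataclasses import dataclass, fields

@dataclass
class CDSReleaseChecklist:
    """Each field holds a link/path to evidence; any None blocks deployment."""
    explainability_test_report: str | None = None   # how recommendations are generated
    data_inputs_documented: str | None = None       # sources and provenance of inputs
    clinician_review_workflow: str | None = None    # independent-review path in the UI
    automation_bias_assessment: str | None = None   # mitigation per the updated guidance
    ui_copy_regulatory_review: str | None = None    # framed as support, not diagnosis

def deployment_gate(checklist):
    """Block release until every checklist item has evidence attached."""
    missing = [f.name for f in fields(checklist) if getattr(checklist, f.name) is None]
    if missing:
        raise RuntimeError(f"CDS deployment blocked; missing evidence: {missing}")
    return "cleared for deployment"

chk = CDSReleaseChecklist(
    explainability_test_report="docs/explainability_v3.pdf",
    data_inputs_documented="docs/data_lineage.md",
)
try:
    print(deployment_gate(chk))
except RuntimeError as e:
    print(e)  # three evidence items still missing
```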

FDA Broadens General Wellness Policy to Include Certain AI-Interpreted Physiologic Data

In parallel with the CDS update, FDA revised its General Wellness policy on January 6 to include some non-invasive, AI-interpreted physiologic measurements—such as optical blood pressure, oxygen saturation, heart-rate variability, and glucose-related markers—as wellness products when framed strictly for wellness purposes, not diagnosis or treatment decisions. Experts noted the guidance was issued without the usual public comment period and leaves open questions about how patients and clinicians will navigate a growing pool of unregulated wearable data.

Why it matters for CIOs: This policy change creates new opportunities for integrating wearable and consumer-health data into clinical trials, real-world evidence programs, and patient-engagement platforms without full device oversight—provided governance, data flows, and user-facing messaging remain firmly in wellness territory. Blurring the line between wellness and diagnostic use carries regulatory and liability risk.

Moves to consider:

  • Define clear data-governance policies for wellness vs. diagnostic/therapeutic data, including distinct data lakes, consent workflows, and use restrictions (see the sketch after this list)

  • If integrating wearable or AI-interpreted wellness data into trials or RWE studies, ensure protocol language, informed consent, and analysis plans explicitly frame the data as wellness/exploratory and not a diagnostic endpoint

  • Partner with legal and clinical teams to establish bright-line UI and messaging standards that prevent wellness tools from being perceived or used as diagnostic or treatment-decision aids
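
For the wellness-vs.-diagnostic separation in the first bullet, a minimal sketch of intended-use tagging at ingestion, so wellness-tagged data can never be queried for diagnostic purposes. The store names, tags, and consent field are hypothetical.

```python
WELLNESS_STORE, CLINICAL_STORE = [], []  # stand-ins for separate data lakes

ALLOWED_USES = {
    "wellness": {"engagement", "exploratory_rwe"},
    "clinical": {"engagement", "exploratory_rwe", "diagnostic", "trial_endpoint"},
}

def ingest(record):
    """Route a record by its intended-use tag, set at collection time and
    immutable afterward; consent scope must match the tag."""
    tag = record["intended_use"]
    if tag not in ALLOWED_USES:
        raise ValueError(f"unknown intended-use tag: {tag}")
    if record["consent_scope"] != tag:
        raise ValueError("consent scope does not match intended use")
    (WELLNESS_STORE if tag == "wellness" else CLINICAL_STORE).append(record)

def authorize_query(tag, purpose):
    """Block diagnostic/endpoint use of wellness-tagged data."""
    if purpose not in ALLOWED_USES[tag]:
        raise PermissionError(f"'{purpose}' not permitted on {tag}-tagged data")
    return True

ingest({"patient": "p001", "metric": "hrv", "value": 62,
        "intended_use": "wellness", "consent_scope": "wellness"})
print(authorize_query("wellness", "exploratory_rwe"))  # True
# authorize_query("wellness", "diagnostic")  -> PermissionError by design
```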

FDA Issues Draft Guidance on Bayesian Methodology for Clinical Trials

On January 12, FDA issued draft guidance on Bayesian methodology for primary inference in drug and biologics clinical trials, fulfilling a PDUFA VII commitment. The guidance allows Bayesian designs to combine trial data with prior information (prior clinical studies, real-world evidence, external or nonconcurrent controls) when scientifically justified, with applications including determining futility/success earlier in adaptive trials, informing dose selection, and supporting subgroup analyses. Designs must be justified through explicit success criteria, thoughtful priors, prospective operating-characteristic evaluation (often via simulation), and computational transparency suitable for regulatory review.

Why it matters for CIOs: This raises the bar for clinical data platforms and analytics environments: they must integrate external/RWD sources in a validated way, support large-scale simulation workloads, and produce transparent, reproducible statistical workflows suitable for regulatory review. It also reinforces that data governance and architecture are now directly linked to trial design flexibility and timeline reduction.

Moves to consider:

  • Assess whether your clinical trial management systems and statistical computing environments can integrate external data sources (RWD, prior trials) with proper validation and auditability

  • Provision computational resources (cloud or HPC) capable of running large-scale trial simulations for operating-characteristic evaluation

  • Partner with biostatistics and regulatory affairs to establish standards for documenting Bayesian priors, borrowing frameworks, and simulation code in a regulatory-ready format
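
For the simulation work in the bullets above, a minimal sketch of operating-characteristic evaluation for a hypothetical single-arm Beta-Binomial design; the prior, decision threshold, and sample size are illustrative assumptions, not a validated trial design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical design: single-arm trial, n=60; declare success if
# P(response rate > 0.30 | data) > 0.975 under a Beta(2, 4) prior.
N, P_NULL, A, B, DECISION_THRESHOLD = 60, 0.30, 2, 4, 0.975

def trial_succeeds(true_rate):
    """Simulate one trial and apply the Bayesian success criterion."""
    responders = rng.binomial(N, true_rate)
    posterior = stats.beta(A + responders, B + N - responders)
    return posterior.sf(P_NULL) > DECISION_THRESHOLD  # P(rate > 0.30 | data)

def operating_characteristic(true_rate, n_sims=10_000):
    """Probability of declaring success at a given true response rate."""
    return np.mean([trial_succeeds(true_rate) for _ in range(n_sims)])

print(f"Type I error at rate=0.30: {operating_characteristic(0.30):.3f}")
print(f"Power at rate=0.50:       {operating_characteristic(0.50):.3f}")
```

Real designs multiply this across candidate priors, borrowing assumptions, and scenario grids, which is where the HPC/cloud provisioning in the second bullet comes in.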

🧭 Leadership & Operating Model

From AI-Ready to AI-Actionable: The Execution-Layer Gap

New framework research published January 22–25 distinguishes "AI-ready" organizations (clean data, working models) from "AI-actionable" ones (AI embedded into governed, GxP-compliant workflows). The analysis argues that many life sciences organizations have solid data platforms and trained models but lack an execution layer providing shared operational ontologies, workflow state management, context graphs, and controlled execution—so AI outputs stall at dashboards instead of driving controlled actions in LIMS, MES, QMS, or clinical systems.

Why it matters for CIOs: This explains why AI pilots succeed in labs but fail to scale: the architecture between data platforms and frontline workflows is missing. CIOs need to invest in orchestration platforms and context-graph architectures that allow AI recommendations to move into operational systems under governance, with full traceability of "what happened, when, under which constraints, and why."

Moves to consider:

  • Map your AI pilots to identify where outputs are consumed: if they end in dashboards or manual handoffs, prioritize building execution-layer APIs and workflow integrations

  • Define a reference architecture for "AI-actionable" workflows that includes operational ontologies (shared vocabularies across systems), state management, and audit trails compatible with GxP requirements

  • Pilot one closed-loop AI workflow (e.g., AI-recommended protocol amendment automatically routed for review in CTMS, or AI quality alert triggering investigation workflow in QMS) to learn orchestration and traceability patterns
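
For the closed-loop pilot in the last bullet, a minimal sketch of the execution-layer pattern described above: an AI quality alert enters an explicit, governed state machine with human review gates and a full audit trail of what happened, when, by whom, and why. The states, model name, and actors are illustrative.

```python
import datetime

# Explicit workflow states: the AI output never acts directly on QMS/MES;
# it enters a governed state machine with human review gates.
TRANSITIONS = {
    "ai_alert_raised":      ["triage"],
    "triage":               ["investigation_opened", "dismissed"],
    "investigation_opened": ["capa_proposed"],
    "capa_proposed":        ["approved", "rejected"],
}

class GovernedWorkflow:
    def __init__(self, source_model, context):
        self.state = "ai_alert_raised"
        self.audit = [self._entry("created", actor=source_model, why=context)]

    def _entry(self, action, actor, why):
        return {"ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "action": action, "actor": actor, "state": self.state, "why": why}

    def advance(self, new_state, actor, why):
        """Enforce legal transitions only, and record who moved the state and why."""
        if new_state not in TRANSITIONS.get(self.state, []):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.audit.append(self._entry(f"moved to {new_state}", actor, why))

wf = GovernedWorkflow("batch_quality_model_v2", "OOS trend detected on line 3")
wf.advance("triage", actor="qa_analyst_1", why="confirmed signal against raw LIMS data")
wf.advance("investigation_opened", actor="qa_lead", why="trend persists across 3 lots")
for entry in wf.audit:
    print(entry)
```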

Bain Research: AI Pods, Embedded Liaisons, and Behavioral Capabilities Drive Scale

Bain analysis published January 11 and January 25 (https://www.bain.com) finds that scaling AI depends on persistent cross-functional pods, embedded liaisons, and behavioral capabilities, not just technical readiness.
