
Does HealthTech have an “AI washing” problem in 2025?

  • Writer: Lloyd Price
  • Apr 30
  • 9 min read




Yes, in 2025, healthtech has an "AI-washing" problem. This refers to the practice of companies exaggerating or falsely claiming that their products or services utilise artificial intelligence when the actual AI component is minimal, non-existent, or not meaningfully contributing to the claimed benefits.


Several factors contribute to this phenomenon in the HealthTech sector:


Drivers of AI-Washing in HealthTech:


  • Investor Hype and Funding: AI remains a highly attractive buzzword for investors. Companies may overstate their AI capabilities to attract funding, even if their technology relies more on traditional software or basic automation. The significant venture capital investment in digital health over the past five years, particularly in AI-related areas, creates an incentive to label products as "AI-powered."


  • Marketing and Competitive Advantage: Claiming to use AI can provide a significant marketing advantage, positioning a company as innovative and cutting-edge compared to competitors. This can be particularly tempting in a crowded digital health market.


  • Lack of Clear Definitions and Understanding: The term "AI" encompasses a broad range of technologies, from simple algorithms to complex machine learning models. This lack of precise understanding can allow companies to loosely apply the "AI" label to features that don't genuinely involve advanced AI.


  • Pressure to Innovate: There's considerable pressure within the healthtech industry to demonstrate innovation and adopt new technologies. AI is often seen as the pinnacle of this, leading some companies to prematurely label their offerings as such.


  • Difficulty in Verification: For non-technical users, including healthcare professionals and patients, it can be challenging to discern the actual level and sophistication of AI within a healthtech product. This creates an environment where unsubstantiated claims can persist.


Evidence and Concerns in 2025:


  • Continued Emphasis on AI in Marketing Materials: Many healthtech companies heavily feature "AI-powered" claims in their marketing, often without providing clear explanations of how AI is being used or the specific benefits it delivers.


  • Focus on Basic Automation as "AI": Some solutions that primarily automate simple tasks or use basic rule-based systems are being marketed as AI. For example, a chatbot with pre-programmed responses might be labelled as an "AI-powered virtual assistant."


  • Lack of Transparency in Algorithms: Many companies do not provide sufficient transparency about the algorithms they use, making it difficult to assess the validity and impact of their "AI." This lack of explainability is a broader concern in AI in healthcare.


  • Overstated Capabilities and Benefits: Claims about AI's ability to diagnose, predict, or treat conditions may be exaggerated, lacking robust clinical validation or evidence.


  • Focus on "AI" as a Feature, Not a Core Function: In some cases, AI might be a minor feature within a broader product, yet it's disproportionately highlighted in marketing.


Why AI-Washing is Problematic in HealthTech:


  • Misleading Healthcare Professionals: Overstated AI capabilities can lead clinicians to have unrealistic expectations or to trust systems that are not as sophisticated or reliable as claimed, potentially impacting patient care.


  • Deceiving Patients: Patients may be drawn to "AI-powered" solutions based on perceived advanced capabilities that don't actually exist.


  • Hindering Genuine Innovation: The noise created by AI-washing can make it harder to identify and invest in truly innovative and impactful AI-driven healthtech solutions.


  • Erosion of Trust: If users discover that a product's AI claims are exaggerated, it can erode trust in the company and potentially in the broader adoption of AI in healthcare.


  • Regulatory Scrutiny: As AI-washing becomes more prevalent, regulatory bodies may increase scrutiny and potentially implement stricter guidelines around AI claims in healthtech marketing.


In 2025, the HealthTech sector faces a significant risk of "AI-washing." The strong incentives for companies to capitalise on the AI hype, coupled with the complexity of the technology and the difficulty in verifying claims, create an environment where exaggerated or misleading statements about AI capabilities can thrive. This trend poses risks to clinicians, patients, and the overall credibility and progress of genuine AI innovation in healthcare.

Nelson Advisors > HealthTech M&A


Nelson Advisors specialise in mergers, acquisitions and partnerships for Digital Health, HealthTech, Health IT, Healthcare Cybersecurity, Healthcare AI companies based in the UK, Europe and North America. www.nelsonadvisors.co.uk

 

We work with our clients to assess whether they should 'Build, Buy, Partner or Sell' in order to maximise shareholder value and investment returns. Email lloyd@nelsonadvisors.co.uk


Nelson Advisors regularly publish Healthcare Technology thought leadership articles covering market insights, trends, analysis & predictions @ https://www.healthcare.digital 

 

We share our views on the latest Healthcare Technology mergers, acquisitions and partnerships with insights, analysis and predictions in our LinkedIn Newsletter every week, subscribe today! https://lnkd.in/e5hTp_xb 

 


Nelson Advisors

 

Hale House, 76-78 Portland Place, Marylebone, London, W1B 1NT

 

Contact Us

 

 

Meet Us

 

Digital Health Rewired > 18-19th March 2025 

 

NHS ConfedExpo  > 11-12th June 2025

 

HLTH Europe > 16-19th June 2025

 

HIMSS AI in Healthcare > 10-11th July 2025


Examples of AI-Washing in HealthTech


“AI-washing” in healthtech refers to companies exaggerating or misrepresenting their use of artificial intelligence to attract funding, partnerships, or market traction. In 2025, this issue persists in the healthtech sector, driven by the hype around AI and competitive pressures.


Below are specific examples of AI-washing in healthtech, drawn from recent trends, regulatory actions, and industry analysis. These examples illustrate how companies overstate AI capabilities, the consequences, and the broader context.


1. The "AI-Powered" Chatbot for Basic Appointment Scheduling


The Claim: A clinic advertises its new "AI-powered virtual assistant" that can handle appointment bookings, answer FAQs, and provide pre-appointment instructions.


The Reality: The chatbot primarily uses pre-programmed responses and keyword recognition. While it might handle simple queries, it lacks the natural language understanding and reasoning capabilities of true AI. It essentially functions as a more sophisticated interactive voice response (IVR) system or a decision tree. The "AI" label is used to create an impression of advanced functionality and efficiency.
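To make the distinction concrete, here is a minimal sketch of how such a keyword-driven "assistant" typically works. The keywords and canned replies are invented for illustration; the point is that the logic is a fixed lookup table, with no language understanding or learning involved.

```python
# Hypothetical sketch of a keyword-matching "virtual assistant".
# Every behaviour is hand-written: there is no model, no training,
# and no understanding of language beyond substring matching.

CANNED_RESPONSES = {
    "book": "To book an appointment, please pick a date and time.",
    "cancel": "Appointments can be cancelled up to 24 hours in advance.",
    "hours": "The clinic is open Monday to Friday, 8am to 6pm.",
}

FALLBACK = "Sorry, I didn't understand that. Please call reception."


def respond(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    lowered = message.lower()
    for keyword, reply in CANNED_RESPONSES.items():
        if keyword in lowered:
            return reply
    return FALLBACK
```

Relabelling this as "AI-powered" changes nothing about its behaviour: any phrasing outside the hand-written keyword list falls straight through to the fallback.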


2. The "AI-Driven" Wellness App with Personalised Recommendations


The Claim: A wellness app boasts "AI algorithms" that analyse user data (sleep patterns, activity levels, mood entries) to provide highly personalised recommendations for diet, exercise, and mindfulness practices.  


The Reality: The app's recommendations are primarily based on simple correlations and rule-based logic. For example, if a user logs less than 7 hours of sleep, the app suggests an earlier bedtime. While data is being used, the "AI" doesn't involve complex machine learning models that learn and adapt to individual nuances in a meaningful way. The personalisation is superficial and doesn't go beyond basic pattern matching.
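The article's own example (under seven hours of sleep triggers an earlier-bedtime tip) can be sketched as a handful of fixed thresholds. The step-count rule below is an invented addition for illustration; the point is that nothing here learns from, or adapts to, the individual user.

```python
# Hypothetical sketch of rule-based "personalisation": fixed, hand-chosen
# thresholds applied to logged data. No model is fitted and no rule ever
# changes in response to the user's history.

SLEEP_THRESHOLD_HOURS = 7  # from the article's example
STEP_GOAL = 8000           # invented threshold, for illustration only


def recommend(sleep_hours: float, daily_steps: int) -> list[str]:
    """Apply each fixed rule in turn and collect the matching tips."""
    tips = []
    if sleep_hours < SLEEP_THRESHOLD_HOURS:
        tips.append("Try an earlier bedtime tonight.")
    if daily_steps < STEP_GOAL:
        tips.append("A short walk would help you reach your step goal.")
    return tips
```

Two users with identical logs always get identical tips, which is pattern matching, not the adaptive personalisation the marketing implies.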


3. The "AI-Enhanced" Medical Imaging Software for Anomaly Detection


The Claim: A medical imaging software company markets its product as having "AI-enhanced" capabilities that help radiologists detect subtle anomalies in X-rays and MRIs with greater accuracy.  


The Reality: The software might have some basic image processing filters or pre-set thresholds for highlighting potential areas of interest. However, it lacks the sophisticated deep learning models trained on vast datasets that can truly identify complex and subtle pathological patterns with high sensitivity and specificity. The "AI" claim exaggerates the software's analytical capabilities.
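A fixed-threshold highlighter of the kind described can be sketched in a few lines. The cutoff value and the toy "scan" below are invented for illustration; a genuine deep-learning detector would instead apply a model trained on large annotated datasets.

```python
# Hypothetical sketch of "anomaly highlighting" via a fixed intensity
# threshold -- classic image processing, not a trained model.

def highlight_regions(
    pixels: list[list[int]], threshold: int = 200
) -> list[tuple[int, int]]:
    """Return (row, col) coordinates of pixels brighter than a fixed cutoff."""
    return [
        (r, c)
        for r, row in enumerate(pixels)
        for c, value in enumerate(row)
        if value > threshold
    ]


# Toy 3x3 "scan" with one artificially bright pixel.
scan = [
    [10, 12, 11],
    [13, 250, 12],
    [11, 12, 10],
]
```

The cutoff never moves: run this on images from a scanner with a brighter baseline and everything gets flagged, which is precisely the brittleness that trained models with measured sensitivity and specificity are meant to avoid.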


4. The "AI-Optimised" Electronic Health Record (EHR) System for Clinical Decision Support


The Claim: An EHR vendor advertises its system as using "AI" to provide clinicians with real-time, evidence-based clinical decision support, including suggesting diagnoses and treatment plans.


The Reality: The system's decision support primarily relies on static rule sets and alerts triggered by specific data entries. While these can be helpful, they don't involve AI that can dynamically analyse complex patient data, learn from new information, and provide nuanced, personalised recommendations. The "AI" label implies a level of intelligence and adaptability that the system doesn't possess.


5. The "AI-Powered" Drug Discovery Platform


The Claim: A biotech startup claims to have an "AI-powered platform" that can rapidly identify novel drug candidates by analysing vast amounts of biological and chemical data.  

The Reality: While the platform might use computational tools for data analysis and simulation, the core of its discovery process might still rely heavily on traditional methods and human expertise. The "AI" component might be limited to basic data filtering or visualization, without the advanced machine learning models needed for truly novel and efficient drug design.


Key Indicators of Potential AI-Washing:


  • Vague Language: Marketing materials use terms like "AI-powered" or "AI-driven" without explaining the specific AI techniques used or the tangible benefits they provide.


  • Lack of Transparency: The company doesn't provide details about the algorithms, data used for training, or validation processes.


  • Focus on Features, Not Outcomes: The emphasis is on having "AI" as a feature rather than demonstrating significant improvements in accuracy, efficiency, or patient outcomes directly attributable to AI.


  • Unrealistic Claims: Promises of AI solving complex medical problems with seemingly simple solutions.


  • Absence of Technical Expertise: The company's leadership or advisory board lacks individuals with significant expertise in AI and machine learning.


AI-washing in HealthTech can mislead healthcare professionals, patients, and investors, potentially leading to the adoption of ineffective or overhyped solutions. It also dilutes the credibility of genuinely innovative AI applications in the field. As AI continues to evolve, it will be crucial for stakeholders to critically evaluate claims and demand transparency to distinguish true AI-driven advancements from marketing hyperbole.


AI-Washing in UK HealthTech


While the UK healthtech sector and the NHS are making genuine strides in adopting AI for various beneficial applications, the risk of "AI-washing" is significant in 2025.


The pressure to innovate, attract funding, and align with national agendas could lead to the overstatement of AI capabilities in products and services. Critical evaluation, transparency regarding AI algorithms, and a focus on demonstrable improvements in patient care and efficiency will be crucial to differentiate true AI innovation from misleading marketing in the UK healthcare landscape.


Organisations like the Health Foundation and UCLPartners are actively researching and reporting on the realities of AI adoption in the NHS, which will be vital in identifying and mitigating the risks of "AI-washing."


Drivers of AI-Washing in the UK and NHS


  1. NHS AI Investment and Policy Push


    • The NHS’s ambition to be a “world leader in AI” (NHS Long Term Plan, 2019) and investments like £300 million for the NHS AI Lab (2019–2025) create high expectations. The 2025 AI Opportunities Action Plan, including a National Data Library, further incentivises startups to label products as “AI-driven” to secure NHS contracts or funding.


    • The government’s £600 million Health Data Research Service and tools like “Humphrey” amplify pressure to align with AI-driven transformation goals.


  2. Competitive Funding Landscape


    • The UK healthtech sector attracted £1.5 billion in VC funding in 2024, with AI-focused startups commanding 40% of deals. Investors prioritise “AI-first” companies, pushing startups to exaggerate capabilities to compete.


    • UK healthtech founders note the pressure to use “AI” buzzwords to attract Seed or Series A rounds, even for basic software.


  3. Regulatory and Validation Gaps


    • The MHRA’s AI Airlock pilot scheme (2024–2025) is still refining standards for AI medical devices, leaving room for unvalidated claims. The EU AI Act, applicable in Northern Ireland, imposes stricter rules but is not fully harmonised across the UK.


    • Lack of standardised AI definitions in healthcare allows companies to market basic algorithms as “AI,” as noted in a 2024 Health Foundation report.


  4. Public and NHS Expectations


    • A 2024 Health Foundation survey found 54% of the UK public and 76% of NHS staff support AI in patient care, creating demand that startups exploit with overstated claims.


    • The NHS’s digitisation narrative, amplified by leaders like Keir Starmer, fuels expectations that all healthtech must be “cutting-edge AI.”


    It is important to note that not all AI claims amount to AI-washing. Legitimate examples of AI in UK healthtech include:


    Qure.ai: Deployed by Peninsula Imaging Network for chest CT scans, improving lung cancer detection by 15%, validated in NHS pilots.


    Newton’s Tree: Haris Shuaib’s platform ensures safe AI adoption in NHS hospitals, with peer-reviewed deployment protocols. These successes, backed by rigorous evidence, contrast with AI-washed tools, highlighting the value of validation.


AI-washing in UK healthtech and the NHS in 2025 is evident in overstated claims by mental health chatbots, diagnostic tools, workflow platforms, and wearables, driven by NHS AI investments (£300 million Lab, £36 million diagnostics), competitive funding (£1.5 billion in 2024), and regulatory gaps.

Consequences include wasted NHS resources (£500,000+ on failed pilots), investor scepticism, clinician distrust, and regulatory warnings (MHRA, EU AI Act). Mitigation via stricter regulation (AI Airlock, NICE), NHS education (Digital Academy), and public engagement (Health Foundation surveys) is curbing the issue. While genuine AI innovations like Qure.ai shine, AI-washing undermines trust and progress, necessitating vigilance to ensure the NHS's AI ambitions deliver real value.

