A study found that major AI language models, including those from OpenAI and Google, produced racially biased information when answering questions about medical care for Black and White patients. The models inaccurately offered race-based answers and fabricated equations when asked about health issues. The researchers warned that these biases could pose risks to patients and concluded that such AI systems are not ready for clinical use.
![](https://healsecurity.com/wp-content/uploads/2024/07/amber-alert-as-nhs-in-plymouth-makes-urgent-plea-for.jpg)
‘Amber alert’ as NHS in Plymouth makes urgent plea for people with certain blood type
The NHS has issued an urgent call for O type blood donors following increased demand after the recent cyber attack. The attack led to reduced capacity to match patients' blood types, increasing reliance on O type blood, which can be safely given to most patients.