A study found that major AI language models, including those from OpenAI and Google, produced racially biased information when answering questions about medical care for Black and White patients. The models gave inaccurate, race-based answers and repeated debunked race-based equations when asked about health issues. The researchers warned that these biases could pose risks to patients and concluded that the AI systems aren't ready for clinical use.
UK: Woman charged after NHS patients’ records accessed in data breach
Today's reminder of the insider threat comes to us from the National Health Service in the U.K. Craig Meighan and Billy Gaddi report that a woman has been charged after NHS patients' records were accessed in a data breach.

