A study found that major AI language models, including those from OpenAI and Google, produced racially biased information when answering questions about medical care for Black and White patients. The models gave inaccurate, race-based answers and even fabricated equations when asked about certain health issues. The researchers warned that these biases could pose risks to patients and concluded that such AI systems are not ready for clinical use.

Smart Electric Vehicles Face Hidden Cyber Vulnerabilities Exposing Drivers to Risks
The rise of electric vehicles (EVs) has heightened cybersecurity risks, exposing vulnerabilities in charging infrastructure and vehicle systems. Insecure public charging stations and outdated protocols are among the main points of exposure.