Chatbots can be manipulated by attackers to yield malicious outputs, such as code for malware or illegal activities, through “prompt engineering” and “prompt injections”. Cybersecurity researchers have created a ‘Universal LLM Jailbreak’ that can thwart restrictions of major AI systems like ChatGPT, Google Bard, Microsoft Bing, and Anthropic Claude. As more organizations integrate these systems, susceptibility to these attacks rises. Although developers frequently update models, identifying malicious prompts remains challenging. Establishing trust boundaries and implementing security guardrails are necessary preventive steps.

Novo Nordisk Gets New Complement System Approach to Rare Diseases With Deal for Omeros Drug
Novo Nordisk is paying $240 million up front for global rights to zaltenibart, an Omeros drug that blocks the MASP-3 protein to treat diseases caused