As AI innovation rapidly progresses, experts are urging organisations to establish security standards and protocols by 2024 to mitigate risks. Large language models (LLMs) such as OpenAI's GPT-4 and GPT-5 offer significant productivity gains but also pose security risks, including data leaks, misuse for malicious activity, and misleading outputs. Experts suggest combating these risks with AI-based security solutions, a thorough understanding of AI capabilities, the implementation of security policies before deployment, and a collective global effort to establish security standards for AI technology.

Microsoft to Offer Rewards Up to $30,000 for AI Vulnerabilities
Microsoft has launched a bug bounty program offering up to $30,000 for identifying critical AI vulnerabilities in Dynamics 365 and Power Platform. The initiative, part