95% of cybersecurity experts express low confidence in GenAI security measures while red team data shows anyone can easily hack GenAI models, according to Lakera. Attack methods specific to GenAI, or prompt attacks, are easily used by anyone to manipulate the applications, gain unauthorized access, steal confidential data and take unauthorized actions. Realizing this, only 5% of the 1,000 cybersecurity experts surveyed have confidence in the security measures protecting their GenAI applications, even though 90% … More

The post GenAI models are easily compromised appeared first on Help Net Security.

By

Leave a Reply

Your email address will not be published. Required fields are marked *