AI Application Security Tools
According to a Gartner report, about 34% of organizations are already using AI application security tools. This number is expected to grow significantly in the coming years as organizations become more aware of the benefits of AI for application security.
AI application security tools can help organizations in a number of ways, including:
· Identifying vulnerabilities more accurately and efficiently: AI tools can analyze large volumes of code and data to identify vulnerabilities that traditional tools may miss.
· Prioritizing vulnerabilities based on risk: AI tools can help organizations prioritize vulnerabilities based on their likelihood of being exploited and the potential impact of an exploit (a minimal scoring sketch follows this list).
· Automating security tasks: AI tools can automate many time-consuming and repetitive security tasks, such as scanning code for vulnerabilities and remediating known vulnerabilities.
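To make the risk-based prioritization idea concrete, here is a minimal sketch of a likelihood-times-impact triage score. The field names, weights, and CVE identifiers are illustrative assumptions, not drawn from any particular product; real tools fold in far richer signals such as asset criticality, reachability, and threat intelligence.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A single vulnerability finding (all fields are illustrative)."""
    cve_id: str
    exploit_likelihood: float  # EPSS-style probability of exploitation, in [0, 1]
    impact: float              # business-impact weight, in [0, 10]

def risk_score(finding: Finding) -> float:
    # Simple likelihood-times-impact model; production tools combine
    # many more inputs, but the triage principle is the same.
    return finding.exploit_likelihood * finding.impact

findings = [
    Finding("CVE-2024-0001", exploit_likelihood=0.02, impact=9.0),
    Finding("CVE-2024-0002", exploit_likelihood=0.70, impact=6.5),
    Finding("CVE-2024-0003", exploit_likelihood=0.10, impact=3.0),
]

# Triage order: highest combined risk first.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.cve_id}: risk={risk_score(f):.2f}")
```

Sorting by a single combined score keeps the triage queue simple; a highly impactful but rarely exploited flaw can rank below a moderate one that attackers actively target, which is the behavior risk-based prioritization is meant to produce.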
Overall, AI application security tools can help organizations improve their security posture and reduce the risk of data breaches and other cyberattacks.
AI application security tools are still in their early stages of development, but they have the potential to revolutionize the way that organizations secure their applications. As AI tools become more sophisticated and affordable, they are likely to become more widely adopted by organizations of all sizes.
The report further notes that while 93% of IT and security leaders surveyed said they are at least somewhat involved in their organization’s GenAI security and risk management efforts, only 24% said they own this responsibility.
Among the respondents who do not own the responsibility for GenAI security and/or risk management, 44% reported that ultimate responsibility for GenAI security rested with IT. For 20% of respondents, their organization’s governance, risk, and compliance department owned the responsibility.
The risks associated with GenAI are significant, continuous, and constantly evolving. Survey respondents indicated that undesirable outputs and insecure code are among their top-of-mind risks when using GenAI:
· 57% of respondents are concerned about leaked secrets in AI-generated code (illustrated by the scanning sketch after this list).
· 58% of respondents are concerned about incorrect or biased outputs.
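To make the leaked-secrets concern concrete, below is a minimal sketch of the kind of pattern-based scan a security tool might run over AI-generated code before it is committed. The regexes and the sample snippet are simplified assumptions; real scanners use far larger rule sets plus entropy analysis.

```python
import re

# Simplified, illustrative patterns; production scanners maintain
# hundreds of rules and verify candidate hits to cut false positives.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
    "Private key header": re.compile(
        r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"
    ),
}

def scan_for_secrets(source: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) pairs for suspected secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

# Hypothetical AI-generated snippet with a hardcoded credential.
generated_code = 'api_key = "s3cr3tS3cr3tS3cr3t42"\nprint("hello")\n'
for lineno, name in scan_for_secrets(generated_code):
    print(f"line {lineno}: possible {name}")
```

A regex pass like this is cheap enough to run on every generated snippet; the trade-off is false positives, which is why production scanners pair patterns with entropy checks and, where possible, verification against the issuing service.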
“Organizations that don’t manage AI risk will witness their models not performing as intended and, in the worst case, can cause human or property damage,” said an expert. “This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical or biased outcomes. AI malperformance can also cause organizations to make poor business decisions.”