WEBINAR

Security and testing tools for ai developers

This webinar helps AI developers avoid common security pitfalls in AI applications and shows how automated testing can be used to identify and mitigate vulnerabilities.

Securing AI applications in practice

The use of large language models (LLMs) in applications has grown rapidly in recent years. More recently, sophisticated AI agents have been deployed to automate the development of commercial software. However, security is often overlooked or insufficiently addressed.

In this webinar, we explain how prompt injection can be used to exploit AI-based applications. We also demonstrate how AI testing tools can be used to automatically discover vulnerabilities, enabling developers identify risks early and implement effective mitigations.

Security expert Benjamin Salling Hvass will be joined by two AI-based startups who will share real-life insights into securing their AI-based products.

Who can participate?

Developers, founders and product owners working with AI applications.

Learning outcomes

In this webinar, you will gain an overview of the most common security pitfalls in AI-based applications. You will also learn how other companies have strenthened the security of their AI solutions.

As a result, you will leave the webinar with practical tools and actionable insights that you can apply in your own work.