Thoughtworks’ Principal Data Scientist, Emily Gorcenski, shares her tips to mitigate the risk of leaking information and introducing code vulnerabilities when using GenAI.
Generative AI tools, particularly Large Language Models (LLMs) such as ChatGPT, offer immense potential for solving all kinds of business problems, from creating documents to generating code. However, they can also introduce security risks in two novel ways: leaking sensitive information and introducing code vulnerabilities. This article explores how these challenges typically arise across organizations and provides mitigation strategies to minimize negative outcomes.