Code Patrol
Contrast open sources its Generative AI Policy to keep us all safe
Are your employees madly chatting with ChatGPT? Are you even aware of what company or customer data they might be feeding into the proliferation of generative AI gullets? Maybe it's time your company put a leash on the wild dog of generative AI, because that's exactly what Contrast Security has done: We've put up electric fencing around the precocious puppies that are rapidly growing into a pack of all-data-consuming, non-regulated, slobbery St. Bernards.

On July 11, Contrast announced the launch of the Contrast Responsible AI Policy Project, a pioneering initiative in the realm of AI use. In our commitment to democratizing responsible AI practices, we open-sourced our company's internal AI policy under the Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0) license. Why? Because it's essential that we keep our data and our customers' data safe, and we know that other organizations are in the same boat.

In this Code Patrol episode, we invited the authors of the policy, Contrast Chief Information Security Officer David Lindner and Sharron Reed Gavin, Contrast's Vice President of Operational Risk and Data Privacy Officer, to join us to discuss why they thought we needed a policy around responsible use of this exciting, rapidly evolving technology … and why they think you need one, too.