AI Data Privacy, Data Security, Rule 1.6 Compliance & Shadow AI


About this episode

What really happens to client data when you use tools like ChatGPT, especially if you click “delete” or disable training? In this episode, I’m joined by Cathy Miron, CEO of eSilo and a nationally recognized expert in data protection and cybersecurity, to unpack the privacy, security, and governance realities behind modern LLMs. We discuss how litigation and vendor policies can complicate “private” chats, why backups, logs, and engineering choices often outpace contract language, and practical ways lawyers and regulated organizations can use AI without compromising confidentiality or privilege. We cover de-identification workflows, BAAs and vetted tools (think FedRAMP/CMMC contexts), API nuances around zero data retention (ZDR), and the human firewall: governance through acceptable-use policies, training, and curbing “shadow AI” with sanctioned enterprise subscriptions. It’s a candid, pragmatic guide to balancing innovation with risk.

Views expressed on Moral Machine are the author’s own and do not reflect those of the New Jersey Supreme Court Attorney Ethics Committee (District VI) or Falcon Rappaport & Berkman LLP.
