What you need to know & how to comply [ ref: www.BigID.com ]
Companies leveraging AI will have to meet compliance obligations in risk management, data governance,
transparency, and IT security. The key takeaways from the article:
- Follow the Data: Businesses and organizations adopting AI solutions (LLMs/SLMs) need to:
  - Catalog AI Use Cases: Document the AI systems, the data processed by those systems, the purposes of processing, and the business processes involved.
  - Categorize AI Systems: Inventory your AI systems and understand what you have, what data each system is accessing, and what access rights it has.
  - Classify Data that AI Uses: Classify your data and put controls around it based on risk, context, content, and type, so you can easily manage, track, and report on what data AI has used for training, what data it can access, and how sensitive and high-risk that data is. (A minimal catalog sketch appears after this list.)
- Data at the Core of Risk: According to Gartner, the key to managing these risks is to "follow the data." AI risks need to be classified as Unacceptable, High, Limited, or Minimal. Identify what data is safe for AI use, determine which data falls under specific risk policies, and classify data to ensure compliance with confidentiality and regulatory requirements. (See the risk-tier sketch after this list.)
- Data Visibility and Control: Inventory both AI models and data across the organization. Maintain clear records of the data that AI systems can access and have accessed. Monitor what data AI systems have accessed to ensure compliance and security: what its sensitivity is, what policies it falls under, and what risk it represents. Implement security controls around both data and models. (See the access-log sketch after this list.)
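
To make the "Catalog AI Use Cases" and "Classify Data" items concrete, here is a minimal sketch of what a single catalog entry might look like. The `AISystemRecord` class, its field names, and the sensitivity labels are illustrative assumptions, not part of the BigID article or any specific product.

```python
from dataclasses import dataclass, field
from enum import Enum


class Sensitivity(Enum):
    """Illustrative sensitivity labels; adapt to your own classification scheme."""
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    REGULATED = "regulated"  # e.g. data subject to GDPR or similar rules


@dataclass
class AISystemRecord:
    """One catalog entry: an AI system, the data it touches, and why."""
    name: str                          # e.g. "support-chatbot"
    model: str                         # e.g. "fine-tuned SLM"
    purpose: str                       # purpose of processing
    business_process: str              # business process the system supports
    datasets_used_for_training: list[str] = field(default_factory=list)
    datasets_accessible_at_runtime: list[str] = field(default_factory=list)
    data_sensitivity: Sensitivity = Sensitivity.INTERNAL
    access_rights: list[str] = field(default_factory=list)  # roles/permissions granted


# Hypothetical example entry in the catalog.
catalog = [
    AISystemRecord(
        name="support-chatbot",
        model="fine-tuned SLM",
        purpose="answer customer support questions",
        business_process="customer service",
        datasets_used_for_training=["support_tickets_2023"],
        datasets_accessible_at_runtime=["kb_articles", "customer_profiles"],
        data_sensitivity=Sensitivity.REGULATED,
        access_rights=["read:kb_articles", "read:customer_profiles"],
    )
]
```

Keeping training data, runtime-accessible data, and access rights as separate fields makes it straightforward to report on what data AI has used versus what it can still reach.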
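For the "Data at the Core of Risk" item, the sketch below shows one way to encode the Unacceptable/High/Limited/Minimal tiers and check whether a given data classification is safe for a system in a given tier. The classification labels and the tier-to-data mapping are assumptions for illustration; your actual policies will differ.

```python
from enum import Enum


class AIRiskTier(Enum):
    """Risk tiers named in the article (mirroring the EU AI Act categories)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Hypothetical policy table: which data classifications each tier may use.
ALLOWED_CLASSIFICATIONS = {
    AIRiskTier.MINIMAL: {"public", "internal"},
    AIRiskTier.LIMITED: {"public", "internal"},
    AIRiskTier.HIGH: {"public"},        # high-risk systems get the most restricted data
    AIRiskTier.UNACCEPTABLE: set(),     # prohibited systems get no data at all
}


def data_safe_for_ai(classification: str, tier: AIRiskTier) -> bool:
    """True if data with this classification may be used by a system in this tier."""
    return classification in ALLOWED_CLASSIFICATIONS[tier]


print(data_safe_for_ai("internal", AIRiskTier.HIGH))   # False
print(data_safe_for_ai("public", AIRiskTier.MINIMAL))  # True
```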
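Finally, for the "Data Visibility and Control" item, this access-log sketch records what data an AI system has accessed and flags regulated access for review. The `AIDataAccessEvent` record and the `flag_for_review` rule are hypothetical examples of the kind of monitoring described, not a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AIDataAccessEvent:
    """One audit record: which AI system touched which dataset, and how risky it was."""
    system_name: str
    dataset: str
    sensitivity: str        # e.g. "public", "internal", "regulated"
    policies: list[str]     # policies the dataset falls under, e.g. ["GDPR"]
    timestamp: datetime


def flag_for_review(event: AIDataAccessEvent) -> bool:
    """Illustrative rule: surface regulated-data access for compliance review."""
    return event.sensitivity == "regulated" or "GDPR" in event.policies


log = [
    AIDataAccessEvent(
        system_name="support-chatbot",
        dataset="customer_profiles",
        sensitivity="regulated",
        policies=["GDPR"],
        timestamp=datetime.now(timezone.utc),
    )
]

for event in log:
    if flag_for_review(event):
        print(f"review: {event.system_name} accessed {event.dataset} ({event.sensitivity})")
```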