What are we getting ourselves into?
Hopefully you have paused to ask this question if you're using generative artificial intelligence (AI) tools in your business. Despite promises that AI tools will transform the way we work, they remain relatively unknown quantities. Unlike with traditional enterprise software, when a business adopts one of these tools, it is not just equipping its employees with new ways of performing tasks; it is handing them virtually infinite possibilities.
This open-endedness also creates uncertainty. Because our understanding of what large language models (LLMs) can do is still rapidly evolving, it is impossible to predict all the ways these tools will be used in the workplace. That presents challenges when adopting security measures to control legal and business risks. While businesses can and should put guardrails in place to control the use of AI, they will be unable to predict and prepare for every eventuality. Ultimately, they will have to exercise oversight as employees explore the capabilities of these tools and unearth both their risks and their rewards.
Thankfully, this does not mean standing over employees' shoulders. AI audit logs can provide visibility into, and traceability of, AI use. These logs are records of the activities and events that occur within an AI system. Some businesses are required by law to maintain them, although their benefits extend far beyond legal compliance. A comprehensive audit trail can show businesses how their employees are using AI, including the prompts they submit, any data they shared, and any security policies that were triggered. This information can help businesses analyze risks, review and enhance their security measures, and surface valuable insights into how their employees are using AI most effectively.
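To make this concrete, the sketch below shows what a single audit log entry might capture, following the description above. The schema and field names are illustrative assumptions for this post, not any particular vendor's format.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class AuditLogEntry:
    """One record of an employee's interaction with an AI tool (hypothetical schema)."""
    timestamp: datetime                  # when the interaction occurred
    user_id: str                         # which employee made the request
    model: str                           # which LLM or AI tool was used
    prompt: str                          # the text the employee submitted
    shared_data_refs: List[str] = field(default_factory=list)    # documents or records included as context
    policies_triggered: List[str] = field(default_factory=list)  # acceptable use policies that fired, if any
    action_taken: Optional[str] = None   # e.g. "allowed", "redacted", "blocked"
```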
Enterprises leveraging AI tools must navigate a complex landscape of legal requirements, potentially across multiple jurisdictions. These include data protection laws, industry-specific laws, and emerging legislation that directly regulates AI use. They must also comply with the terms of use set by LLM providers and third-party service providers, all while managing their own business risks, including safeguarding sensitive or proprietary information. Visibility into how AI is being used becomes an important internal line of defense against potential breaches of these obligations.
Audit logs are a valuable asset in this regard. In fact, simply informing employees that audit logs exist has the potential to enhance compliance with policies governing AI use. This is because when people know they are being observed, they are more likely to follow the rules.[i] In one study of this concept (termed the "Hawthorne effect"), researchers looked at how much people paid for drinks when using an honesty box rather than interacting with a human. They found that people paid more when a set of eyes was displayed on the box than when a neutral image was shown.[ii] This tells us that even subtle indications that people are being watched enhance compliance. Extending these findings to the context of AI use, employers can bolster adherence to company policies by telling their employees that audit logs are being reviewed.
Of course, there are also benefits to actually reviewing the logs. Legal documents, such as acceptable use policies, can be tricky to comprehend, and combining them with a new category of technology leaves plenty of room for misunderstanding. It is easy to imagine that employees may need direction on how to put these policies into practice. By reviewing audit logs, companies can pinpoint areas of non-compliance and identify where policies could be modified or where further training is needed. For example, if employees are repeatedly triggering the same acceptable use policies, that may be a signal that they do not adequately understand them.
Through this feedback loop, enterprises can make intelligent use of audit logs to gradually improve their security over time.
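As a rough illustration of that feedback loop, the snippet below (reusing the hypothetical AuditLogEntry sketched earlier) counts how often each acceptable use policy is triggered and flags the ones that keep recurring; the threshold is an arbitrary assumption chosen for illustration, not a recommendation.

```python
from collections import Counter
from typing import Iterable, List

def repeated_policy_triggers(entries: Iterable[AuditLogEntry], threshold: int = 10) -> List[str]:
    """Return policies triggered at least `threshold` times: a rough signal that
    employees may not understand them and that training or a policy rewrite is needed."""
    counts = Counter(
        policy
        for entry in entries
        for policy in entry.policies_triggered
    )
    return [policy for policy, count in counts.items() if count >= threshold]
```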
Audit logs are not just a security tool; they are a treasure trove of information that can help businesses refine their AI strategy and empower their workforce with uniquely relevant insights.
In the world of knowledge work, precedent is a familiar concept. Rather than starting from scratch each time we begin a new task, we often use a template or an exemplar to guide us. This practice, which has created huge value for many enterprises, leverages expertise and work product accumulated over time.
Use of AI systems should evolve in a similar way. Given the novelty of these tools in the workplace, the best and most effective use cases are still being discovered. Experimentation by end users, rather than top-down mandates, will typically generate the most helpful and uniquely relevant insights into how AI can be leveraged within a business. Tapping into end users' perspectives and practices will therefore be an important way to refine AI strategies. As AI systems become more integrated into workflows, judicious enterprises will harness the knowledge, expertise, and best practices developed by their workforce and use them as precedents for the future.
Audit logs are rich sources of information on AI use. Analysis of log data can produce insights into user trends, common use cases, and how a company's adoption stacks up against the wider industry.
By analyzing log data, organizations can:
- identify which teams and employees are adopting AI tools, and how usage is trending over time;
- surface the use cases that are delivering the most value, so they can be shared as precedents across the business;
- benchmark their adoption against the wider industry; and
- spot where policies are being triggered and target training or policy updates accordingly.
Thus, audit logging allows decision-makers to make data-driven choices regarding AI adoption and share knowledge on the best use cases and practices among employees.
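A first pass at this kind of analysis might look something like the sketch below, which aggregates a few simple adoption metrics from entries like the hypothetical AuditLogEntry above. Real analyses would go further (use-case tagging, industry benchmarks), but the shape is the same.

```python
from collections import Counter
from typing import Dict, Iterable

def usage_summary(entries: Iterable[AuditLogEntry]) -> Dict[str, Counter]:
    """Aggregate simple adoption metrics: how often each employee uses AI tools,
    and which models or tools they reach for most."""
    by_user: Counter = Counter()
    by_model: Counter = Counter()
    for entry in entries:
        by_user[entry.user_id] += 1
        by_model[entry.model] += 1
    return {"interactions_per_user": by_user, "interactions_per_model": by_model}
```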
Rather than analyzing audit logs, organizations may turn directly to their employees to find out how they are putting AI tools to work. While conversations or surveys can be efficient ways to collect information, they have drawbacks. Responses may be colored by memory or by the availability bias, whereby we over-index on information that readily comes to mind rather than responding in the most helpful or accurate way.[iii] The result is that employees will likely report on their most recent, or perhaps most memorable, experiences with AI tools rather than their most useful ones. The good news is that these constraints can be overcome by referencing relevant data.[iv] So if employees are instructed to review their audit logs before responding, they will likely provide a better answer.
Consider an everyday example: if a friend asks you to recommend some good books, chances are you will talk about something you read recently. The answer could be very different from the one you would have given six months ago, or even the one you would give next month. If you review your bookshelf, Kindle, or Audible library, however, you will probably come up with a better answer. Likewise, equipped with their AI activity library, employees and businesses will have the best information to determine which practices are working for them.
By embracing AI audit logs, organizations can make informed decisions, optimize AI usage, and cultivate a culture of shared learning and progress. Businesses that understand this will be best positioned to capture and harness the benefits of AI on an enterprise-wide level.
[i] See T. Eckmanns et al., Compliance With Antiseptic Hand Rub Use in Intensive Care Units: The Hawthorne Effect, Infect. Control Hosp. Epidemiol., 2006 (study of hand-washing among medical staff found that when the staff knew they were being watched, compliance with hand-washing was 55% greater than when they were not being watched).
[ii] See M. Bateson, D. Nettle and G. Roberts, Cues of Being Watched Enhance Cooperation in a Real-World Setting, Biol. Lett., 2006, 2(3).
[iii] See e.g. A. Tversky and D. Kahneman, Judgment Under Uncertainty: Heuristics and Biases, Science, New Series, Vol. 185, No. 4157 (Sep. 27, 1974); Drew Erdmann, Bernardo Sichel, and Luk Yeung, Overcoming Obstacles to Effective Scenario Planning, McKinsey & Company, June 2015.
[iv] See e.g. J. Nikolic, Biases in the Decision-Making Process and Possibilities of Overcoming Them, Econ. Horizons, 2018, Vol. 20:1; J. Edward Russo and Paul J.H. Schoemaker, Decision Traps: Ten Barriers to Brilliant Decision-Making and How to Overcome Them (New York: Simon & Schuster, 1989); J. Hammond, R. Keeney, and H. Raiffa, The Hidden Traps in Decision Making, Harv. Bus. Rev., 1998.
Credal gives you everything you need to supercharge your business using generative AI, securely.