Lattice's Allen Jeter on Building Practical AI Assistants That Transform Enterprise Operations

March 5, 2025

Introduction and Background

Q: Tell the audience who you are and what you do.


Allen: My name is Allen Jeter and I've been with Lattice for about a year and a half. I'm the director of IT. I've been in IT for the last 12 to 13 years, managing enterprise security and helping companies scale their internal IT operations. My sweet spot is companies at that 200-to-500-person scale, when they're really going through rapid growth.

Q: For those that don't know, what does Lattice do?


Allen: We are a people-powered platform, trying to drive better performance management and build really awesome teams using human-feeling products. So enhancing performance reviews, driving better one-on-one conversations, and generally managing that human layer of interactions that every company goes through.

AI Adoption Journey

Q: What do you do when the CEO comes to you and says we should be doing more with AI?


Allen: One thing I should give context on: one of my passions is where technology meets business problems and solves them. If you do it right, the process or system you put in place will outlive you; it will probably be around long after I'm gone. Being able to drive that kind of change and innovation internally in organizations is what I thrive on.

When the CEO came to me and asked how we were building an AI strategy to uplift our operations and really uplift where we are as a company, I was like "put me in, coach." I definitely agree with what you're saying - initially, let's call it 12 months ago, there was a lot of hype around AI. And "AI" is a very broad term in general - there are LLMs, machine learning, and all of these other kinds of interfaces to AI.

For us, the ability to leverage that natural-language dynamic, where I can talk to a system and it can give me information back, was a very powerful value prop. Being able to bridge the non-technical workers on our team so they can extract specific data and insights the same way a data engineer or a machine learning scientist would was very intriguing.

Early Experimentation with AI Assistants

Q: What kind of experiments were you running? How were you thinking about even choosing what experiments to run?


Allen: Initially we tried to build our own chatbot. We saw a lot of offerings - the product landscape was dense, we're talking thousands of AI solutions. So we tried building one internally, and we did. I wouldn't consider it scalable, but we created essentially a front end where you could ask questions, it would pass that on to OpenAI, and it would then give you an answer. That's kind of V1. V2 was adding RAG, so adding a vector database into it - taking your business data, converting it into vectors, and then providing relevant parts of that vector database as context to your query to OpenAI.
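As a rough sketch of that V1-to-V2 progression, assuming the OpenAI Python SDK and a simple in-memory vector store standing in for the production database (document contents and model choices here are illustrative, not Lattice's actual implementation):

```python
# Illustrative only: V1 passes the question straight to the model;
# V2 retrieves relevant internal docs first (RAG) and adds them as context.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

# "Vector database": here just a list of (embedding, text) pairs.
docs = [
    "PTO policy: employees accrue 15 days of PTO per year...",
    "Benefits enrollment opens every November...",
]
index = [(embed(d), d) for d in docs]

def answer(question: str, use_rag: bool = True) -> str:
    messages = []
    if use_rag:  # V2: retrieve the most similar docs and prepend them as context
        q = embed(question)
        def cosine(v):
            return float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q)))
        top = sorted(index, key=lambda pair: -cosine(pair[0]))[:2]
        context = "\n".join(text for _, text in top)
        messages.append({"role": "system",
                         "content": f"Answer using this internal context:\n{context}"})
    messages.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

print(answer("How many PTO days do I get?"))
```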

Q: What were the sources of data? What departments, what type of problems, what types of questions were you hoping people could ask of that system?


Allen: Our two primary use cases were a people bot and a security bot. Our people team answers a lot of questions for our employees and is kind of the life force in terms of supporting them. There are going to be critical life questions that people need answers to, and if you don't have an HR team that's ready and available to service those, it can be a true detriment to people's lives.

The thought behind building an AI chatbot for the people team was a 24/7 service that could consume all of the data and internal FAQs we have about how we operate, our benefits, all of that, and be this kind of pseudo people-ops person that can answer questions around the clock.

Selecting Initial AI Use Cases

Q: How did you go through the process of picking that people ops use case?


Allen: One of our initial metrics in the very early days was driving down manual work - essentially building a metric around response time: how long it took a human to respond versus an automation or an AI. Really, the main goal was to free up the team to do other work while the bot answered questions. We applied that to the people bot as well as the security bot.
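One way to compute that comparison, sketched with pandas over a hypothetical ticket export (the file and column names are assumptions, not Lattice's actual schema):

```python
# Hypothetical export with columns: created_at, first_response_at, responder ("human" or "bot").
import pandas as pd

tickets = pd.read_csv("tickets.csv", parse_dates=["created_at", "first_response_at"])
tickets["response_minutes"] = (
    (tickets["first_response_at"] - tickets["created_at"]).dt.total_seconds() / 60
)

# Median first-response time for human answers vs. bot answers.
print(tickets.groupby("responder")["response_minutes"].median())
```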

Q: Did you guys pick the people bot because you believed that would be the place where you could get the biggest kind of reduction in manual effort?


Allen: We did. We were right on target with that. Since then we've actually expanded bots to other teams as well and we've seen the same kind of benefit. The IT team is now using the Credal bot for first responses as well, and other parts of the business are following suit. It's really a paradigm shift from operating in a reactive state as a team to moving towards more of a proactive state.

Building the Initial Proof of Concept

Q: Am I remembering correctly that you built that initial POC on Okta Workflows?


Allen: We built it within Okta Workflows. I want to say our vector database was in Pinecone. So we connected some of the Pinecone APIs into Okta Workflows and connected that to the OpenAI APIs. What we found was that it works, but it doesn't work at scale, which is fine. That kind of led us into looking for a third-party solution.
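Each run of that workflow was doing roughly the equivalent of the calls below, shown here with the Pinecone and OpenAI Python clients rather than Okta Workflows connectors (the index name is hypothetical, and the exact client interfaces can vary by version):

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("internal-faq")  # hypothetical index name

question = "What does our SOC 2 report cover?"

# 1. Embed the question.
q_vec = openai_client.embeddings.create(
    model="text-embedding-3-small", input=question
).data[0].embedding

# 2. Retrieve the closest chunks of business data from the vector database.
res = index.query(vector=q_vec, top_k=3, include_metadata=True)
context = "\n".join(match.metadata["text"] for match in res.matches)

# 3. Ask OpenAI with that context attached.
answer = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Use this context to answer:\n{context}"},
        {"role": "user", "content": question},
    ],
).choices[0].message.content
print(answer)
```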

Along that development process, we started learning some really cool things. One was using Slack data. About 12 months ago, Slack AI was introduced, and their sales team reached out championing it. They said it will index all of your Slack data from your channels, you will get the answer that you need wherever it is, it will solve all of your problems. Then the price tag moment came, and it was essentially 100 to 200 percent of what we're currently paying. So my procurement team was like, "that's not really an option at this point."

That actually widened our requirements for what we were looking for in an enterprise AI solution. We didn't want to go with Slack AI, but we did want to tap into that Slack data. So whatever solution this was, it needed to have that integration.

AI Vendor Selection Process

Q: How do you cut through the noise and figure out which of these vendors are actually people that you should speak to versus the long tail of a million people telling you that they have the AI solution?


Allen: That's a great question. I would say a lot of it is having high requirements right off the jump and knowing those requirements, a little bit of luck, and equal parts passion and curiosity from the people looking for the products.

I happened to stumble onto Credal based on a blog post that you published. Funny enough, you were describing the journey of most enterprises on their AI adoption curve. In it, you have this graph that follows a hockey-stick trajectory. In the beginning, you say, enterprises are starting to experiment, dabbling with some AI solutions but haven't really done much. Fast forward about 6-7 months, and they're starting to actually implement it for some use cases. Then comes the hockey-stick part of the curve - they are fully productionizing it, and it is within the operational workflow.

That was spot on - that was exactly what we were doing. We were experimenting, I think a lot of teams were. Then I realized the author had an AI company called Credal where you're solving some of these problems. I wanted to set up a demo as soon as I could.

The first demo I saw of the platform - co-pilots being able to integrate into Google, into Jira and Slack - it was just that aha moment where I said "we found it, this is it."

Why Credal Was the Right Fit

Q: What made Credal stand out from other solutions you evaluated?

Allen: We tried building our own solution, we looked at Slack AI but the price was prohibitive, and we evaluated Stack AI as well. With Stack AI, what we found challenging was deploying it to non-technical teams - who was going to own the management of it. We saw this problem quickly starting to get bigger as more teams would utilize it. That was kind of a limiter to us fully adopting Stack AI, and we paused on that evaluation and started pursuing other products. That's when we landed on Credal.

One of the things I want to call out on Credal was the ability to deploy co-pilots into tools that we already use - communication tools. You're able to not only consume Slack data but also push and publish these co-pilots into a Slack channel for usage, which is just incredible from an end-user consumption perspective. They don't even need to leave the tools that they're already comfortable using. That was just a huge awesome checkbox that we didn't even know we were looking for until we saw Credal.

Being able to deploy to public and private channels and also consume that channel history - it was pretty awesome. The beautiful thing about publishing a co-pilot into a Slack channel is it can just look at the messages, decide if it can answer one, and then if it can, it just gives you the answer. The end user's experience is literally unchanged - there's no education or workflow management or change management you need to do. You just keep doing the same thing that you were doing before, and now you get an answer from an AI.
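This isn't how Credal is built, but as a rough sketch of the in-channel pattern being described - a bot that watches channel messages, decides whether it can answer, and otherwise stays silent - a Slack Bolt handler might look like this, with can_answer and generate_answer as placeholder functions:

```python
# Sketch of the "answer in place" pattern: the bot listens in a channel,
# only replies when it thinks it can answer, and never changes the user's workflow.
import os
from slack_bolt import App

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])

def can_answer(text: str) -> bool:
    # Placeholder: in practice this would be a retrieval-confidence check.
    return "pto" in text.lower() or "benefits" in text.lower()

def generate_answer(text: str) -> str:
    # Placeholder for the RAG call sketched earlier.
    return "Here's what our internal docs say about that..."

@app.event("message")
def handle_message(event, say):
    if event.get("bot_id"):  # ignore messages from bots, including ourselves
        return
    text = event.get("text", "")
    if can_answer(text):
        say(text=generate_answer(text), thread_ts=event.get("ts"))

if __name__ == "__main__":
    app.start(port=3000)
```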

Another win for Credal is that it respects existing permissions. We had an existing permission structure within Google Workspace that was really robust, so we were able to integrate with it and rest assured that this was not going to interfere, and that we weren't going to have edge cases where people had access to sensitive data they shouldn't.
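The general pattern behind that behavior is to filter retrieved documents down to what the asking user can already see before anything reaches the model. A minimal sketch of that pattern (not Credal's actual mechanism; user_can_view stands in for a real ACL check against the source system):

```python
# Generic pattern: enforce source-system permissions at retrieval time,
# so the model only ever sees documents the requester could open themselves.
def user_can_view(user_email: str, doc: dict) -> bool:
    # Placeholder ACL check; in practice this would consult the source
    # system's sharing settings (e.g. Google Drive permissions).
    return user_email in doc["allowed_users"]

def retrieve_for_user(user_email: str, candidates: list[dict], k: int = 3) -> list[dict]:
    # candidates are assumed to be pre-sorted by relevance
    permitted = [d for d in candidates if user_can_view(user_email, d)]
    return permitted[:k]

docs = [
    {"text": "Compensation bands for 2025...",
     "allowed_users": ["hr-lead@example.com"]},
    {"text": "PTO policy: 15 days per year...",
     "allowed_users": ["hr-lead@example.com", "alice@example.com"]},
]
print(retrieve_for_user("alice@example.com", docs))  # only the PTO doc comes back
```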

Implementing the Security Bot

Q: How did you land on the security bot as a use case?


Allen: Our security team is lean but mighty, and what they were experiencing was a lot of sales folks coming to them, passing down questions from customers about various security requirements or questionnaires. A lot of the time these were repetitive questions - a week earlier someone on the security team had answered exactly the same thing, to the point where they would just share their response from the week prior.

So it kind of clicked in our head that if we can reduce the time for our sales team to come back to the customers with the right answers from the security team, we can drive more deals and close deals quicker. That was primarily the success criteria for us deploying the security bot.

Quantifying Value with AI

Q: One thing you guys did really well, especially for the security bot, was think through: how do I quantify this in a way that the CFO will really care about?


Allen: That comes back to the idea that there's no one-size-fits-all AI solution. You really have to understand your business needs and the problems your business is experiencing, pinpoint those, and then work backwards from that.

That was a problem that I quickly identified with the security team. They're a small team, we partner with them a lot, and so it was a little bit in our interest to support them. If they could work more efficiently, then they could support us a little bit more efficiently as well.

Q: Did they come to you with being like "I wish I could use AI for this" or did you sort of see this process in their workflow and you were like "I wonder if you could use AI for this"?


Allen: It was us coming to them. It was us saying, "We spotted this opportunity, we see a lot of the same questions getting answered by your team, and this is something we think we could line up and hopefully solve in perpetuity." They were bought in immediately - "say less" is exactly what they said. It then moved to the POC; there was rigorous testing and collaboration with them during the POC, and some fine-tuning, but I think that made the end result that much better.

Refining the AI Solution

Q: One of the things that is important here is that the tooling has to help people do this, right?


Allen: Absolutely. What you were mentioning around that process was getting familiar with prompts. I think we all became novice prompt engineers through using this product, and it's very important. I would say prompt engineering is critical to any chatbot's success.

Coming back to the natural-language dynamic of AI - rather than having code influence the way an app works, now we have literal English influencing the way your app works. At first it doesn't really click with people - they get it in theory but not in practice. Through training the security bot - "don't respond to 'sock' spelled S-O-C-K the same way you would to SOC 2" - once they were comfortable with that, they were quickly able to gear it exactly to the responses they wanted. It was definitely a learning curve for a lot of the teams, but very critical.
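As a concrete, hypothetical example of the kind of plain-English instruction being described, a system prompt for the security bot might include lines like these (the wording is illustrative, not Lattice's actual prompt):

```python
# Hypothetical system-prompt excerpt: plain English now shapes app behavior
# the way code used to.
SECURITY_BOT_PROMPT = """
You answer questions about Lattice's security posture for the sales team.
- Only answer from the approved security FAQ and questionnaire library.
- "SOC 2" refers to the compliance framework; do not treat the word
  "sock" (S-O-C-K) as a security question.
- If you are not confident in an answer, say so and tag the security team
  instead of guessing.
"""
```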

Data Security and Compliance

Q: How do you think about approaching the obvious productivity gains to be realized with AI in the context of having a responsible AI program that protects your customers' privacy and data security?


Allen: Compliance is paramount. Our customers trust us with their data, so it's really on us to maintain that trust and maintain privacy through our compliance systems and how we use them.

We have a VP of compliance at Lattice, and we take it very seriously. That individual was actually part of the early discussions as we rolled out Credal. We were very intentional about using models that weren't trained on the data that we put through it. We were very intentional about what attributes of the data we're using would actually flow into some of these co-pilots.

Overall, we had that approach before AI and before LLMs, which actually benefited us even more. We kind of had this segregation around sensitive data already in place. So when we rolled out Credal, it was really just maintaining that same perimeter and framework and then just scaling that over to Credal as well.

To answer your question plainly, it's really about focusing on compliance and having a specific subject matter expert in your organization who owns and leads that. This idea of "move fast and break things" - maybe that's fine for a startup, but once you start working at enterprise scale or you're in HIPAA-regulated kinds of industries, you don't want to move fast and break things. You want to move fast, ensure privacy, and scale.

The cherry on top with Credal is that you have some limited DLP functionality where you can identify various internal or sensitive information flowing through the platform and stop it from going anywhere. We leverage the defaults in place, which cut down on specific PII. If someone were to drop their social security number in there or something like that, we have those protections in place by default just to protect folks as well.
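As a generic illustration of that kind of DLP check (not Credal's implementation), a pre-send filter might redact obvious SSN patterns before a message ever reaches a model:

```python
# Generic DLP-style pre-filter: redact obvious US SSN patterns before a
# prompt is sent to any model. Real products use much richer detection.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_pii(text: str) -> str:
    return SSN_PATTERN.sub("[REDACTED-SSN]", text)

print(redact_pii("My SSN is 123-45-6789, can you update my benefits?"))
# -> "My SSN is [REDACTED-SSN], can you update my benefits?"
```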

Future Plans and Metrics

Q: How are you thinking about 2025? What are you guys trying to get done this year?


Allen: It's a little more complex than our initial metrics, where we were driving down essentially human work hours. Now we're focusing on other metrics around reducing time to answer across the business. By reducing the inherent time to answer, regardless of which team is asking, we're improving overall business efficiency. That's the metric we're currently tracking: how teams are using AI to solve their problems and bring the time to answer down for those cross-partner stakeholders.

Q: What's an example of a use case that you guys are imagining?


Allen: Accounting is a team that interacts a lot with our CFO. They're formulating a lot of weekly reports, so distilling and automating that so they aren't manually going through Excel and publishing all of those rows. That's been a focus as well.

Another is insight into the procurement status of software. We leverage Zip (shout out to Zip), and we wanted to give employees visibility into the status of their SaaS software requests - what stage it's in, "oh, it's in security review right now." Rather than going into Zip and clicking through the request to see where it's at, we've plugged an export of that Zip data into a co-pilot, so people can just ask the co-pilot "what's the status of this?" Fifteen seconds later they get the answer, and it's reduced click ops in a sense. They haven't left the tool they're in, and they can jump to the next conversation and say "hey, it's approved, we're ready to go."

That use case of taking business data from a data source, plugging it into a co-pilot, and creating this NLP layer to reduce that time to answer for many parts of the business is something we're exploring with a lot of different teams.
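A minimal sketch of that "business data behind a natural-language layer" pattern, using a hypothetical CSV export of procurement requests (the file, column names, and example request are illustrative, not Zip's or Credal's actual integration):

```python
# Sketch: answer "what's the status of X?" questions from a flat export of
# procurement requests by handing the table to the model as context.
import pandas as pd
from openai import OpenAI

client = OpenAI()
requests = pd.read_csv("zip_requests_export.csv")  # e.g. columns: software, stage, requester

def status_answer(question: str) -> str:
    context = requests.to_csv(index=False)  # small export, so it fits in the prompt
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Answer questions about software request status using this table:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(status_answer("What's the status of the Figma request?"))
```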

Thoughts on Agentic AI

Q: How are you guys at Lattice thinking about how agents fit into the Lattice productivity ecosystem?


Allen: We're still in an experimentation phase. If the earlier challenge of bringing AI into your business was understanding the specific need, I think it's even more complex for agentic AI. You need to be very aware of the operations of your business and where you want to insert agentic AI, because there are a lot of opportunities.

On sales outreach, you can have agentic AI reaching out and talking to specific potential customers, or maybe doing automated research for you about your prospects. Moving over to the people side, on recruiting, you have a job description and it can scour LinkedIn 24 hours a day, find the candidates you're looking for, and augment the human resourcing on your recruiting team.

One of the ways we're using it in Okta Workflows is enriching the data of tickets that come in, without manual intervention. That allows us to get a lot more insight about our tickets than we would at face value. We can categorize what type of request it is, what app it's related to, what time of day it came in. The end result is that every ticket that gets created is enriched with metadata we can look back on to build out a whole data analysis and say, "wow, we've really been getting a lot of these types of requests around these types of apps, maybe we should focus on solving problems around those things."
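A rough sketch of that enrichment step - the kind of call an Okta Workflows flow might make out to a model; the field names and categories are assumptions, not Lattice's actual schema:

```python
# Sketch: enrich a new IT ticket with categorical metadata via an LLM call,
# so tickets can later be analyzed by request type and related app.
import json
from openai import OpenAI

client = OpenAI()

def enrich_ticket(ticket_text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": ("Classify this IT ticket. Return JSON with keys "
                         "'category' (access, hardware, software, other) and "
                         "'related_app' (best guess, or 'unknown').")},
            {"role": "user", "content": ticket_text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

print(enrich_ticket("I can't log into Jira after the password reset."))
# -> e.g. {"category": "access", "related_app": "Jira"}
```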
