Optimal AI banks $2.25M to deploy privacy-first AI agents for code review and compliance
Raising money for a new startup in the current environment can feel like juggling flaming torches — while riding a unicycle — through a downpour. But repeat founders Iba Masood and Syed Ahmed, best known for their previous venture, Tara AI, are at it again. Their new company, Optimal AI, just secured $2.25 million in pre-seed funding co-led by Act One Ventures and Chingona Ventures — and they’re determined not to let any flaming torches drop this time around.
The company’s goal is to build AI agents that manage the constantly growing mound of code reviews, security checks, and compliance intricacies that plague modern engineering teams — all while claiming “zero data retention.” In other words, the company aims to help developers maintain privacy and compliance without giving away sensitive IP.
In my recent conversation with Masood, she shared the scrappy story of going from the heartbreak of winding down her previous startup to feeling the “excitement of what’s possible” with a brand-new approach in generative AI. If the gambit works, Optibot, their first code-focused AI agent, might become a little coworker who quietly checks for security risks and compliance pitfalls — plus fixes them before you can say “pull request.”
From heartbreak to high hopes
It’s not every day you see founders shut down one venture, turn around, and gear up for another. For Masood, that transition wasn’t exactly a cakewalk.
“I had a lot of tears,” she tells me. “If it wasn’t for Syed, I don’t think I would have gone back into doing it again. But this time around, I’m trying to be more deliberate. I want to be more stoic — even though startups can feel like a rat race.”
She’s referring to the years she and Ahmed spent at their previous startup, Tara AI, which sought to streamline software development workflows. Tara eventually ran out of runway. The founders said enough was enough.
That experience taught Masood an important lesson: even if a startup doesn’t work out, you learn from it, you pick yourself back up, and if the right opportunity strikes, you dive in again.
“I did really try to be more thoughtful this time,” she says. “I realized it’s more about the excitement of what’s possible. Otherwise, everything else — like compliance or security — can be so tedious, but we see the potential for AI to actually help here.”
Why code review and compliance?
Software is gobbling up the world, but ironically, software developers are among the first to burn out on the demands placed on them. In 2025, an estimated 25% (and climbing) of Google’s new code is generated by AI, and the GitHub CEO has said that figure could reach 80% at some companies before long. That’s a lot of new lines of code, and human developers still need to review, secure, and maintain all of it.
At the same time, compliance obligations aren’t getting any lighter. Global compliance costs are headed toward $72 billion this year. So how do you ensure that the code you’re shipping — from large enterprises down to scrappy startups — is robust, secure, and ticks every box on your compliance checklist? That’s where Masood and Ahmed see a massive pain point.
“The volume of code is exploding,” Masood notes. “Engineers are now overwhelmed by code reviews, but also by questions from compliance teams and security teams. So we’re building agents that can actually handle both the generation of code and the compliance side of things — before that code hits production.”
Meet Optibot
The flagship product, Optibot, is an AI agent that sits on top of your codebase — like a virtual senior engineer who never sleeps and always has the relevant regulations on hand. If your code contains a potential vulnerability, the bot flags it. If there’s a compliance standard to uphold, the bot can step in before the offending lines make it into production.
“We’re building an agentic workforce of domain-specific AI assistants,” Masood explains. “Optibot is basically the first AI agent with a deep understanding of security and compliance considerations. It can even push its own pull requests with recommended fixes.”
This kind of domain-driven approach contrasts with the large language model mania that tries to do everything, often sacrificing nuance. Optimal AI wants to train and fine-tune specialized AI that can do one thing really well: keep engineers and compliance teams from wanting to tear their hair out.
Early adopters, including MongoDB and Prometric, have put Optibot to work analyzing engineering workflows, creating new pull requests, and scanning for code vulnerabilities. The lure for big-name companies? The zero-data-retention approach. If a developer at a major financial institution is merging sensitive code, the last thing they want is for that code to be feeding an AI somewhere in the great GPU-laden clouds.
“We actually use ephemeral processing,” Masood explains. “Data is never stored anywhere. Enterprises can adopt these AI agents within their own infrastructure, or a virtual private cloud, so it’s secure by design.”
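To make that “zero data retention” claim a bit more concrete, here is a minimal sketch of the general idea, under my own assumptions rather than anything Optimal AI has published: the code under review is analyzed entirely in memory, and only the verdicts come back out.

```python
# Illustrative sketch of "ephemeral processing" -- an assumption for
# this article, not Optimal AI's implementation. The point: the code
# under review lives only in memory for the duration of the analysis
# and is never written to disk, logged, or shipped to a third party.

from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str   # e.g. "HARDCODED_SECRET"
    line: int      # line number within the submitted snippet
    message: str   # human-readable explanation

def analyze_snippet(source: str) -> list[Finding]:
    """Scan source text in memory and return only the findings.

    A real agent would run far richer checks; the zero-retention
    property is the same either way -- nothing derived from `source`
    is persisted, only the verdicts are returned.
    """
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if "aws_secret" in line.lower() or "password=" in line.lower():
            findings.append(Finding("HARDCODED_SECRET", lineno,
                                    "Possible credential in source."))
    return findings   # `source` goes out of scope after this call

print(analyze_snippet('db_password="hunter2"\n'))
```

Running the agent inside a customer’s own infrastructure or a virtual private cloud, as Masood describes, layers a second guarantee on top of this: even the in-memory analysis never leaves the customer’s network.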
A fresh war chest… with domain expertise
This approach wasn’t lost on investors. Act One Ventures — fresh from a 2024 exit with AuditBoard, a compliance software heavyweight — co-led the round. That means they’ve already had a front-row seat to big returns in the compliance space, and they’re betting that domain-specific AI is the next wave.
“The AI agent era is just beginning,” says Alejandro Guerrero, Managing Partner at Act One Ventures, in the company’s press release. “We invested in Optimal AI because Iba, Syed, and their team deeply understand the challenges facing engineering and compliance teams today. We’ve seen how compliance solutions become essential infrastructure — AuditBoard proved that.”
Chicago-based Chingona Ventures, led by founding partner Samara Mejía Hernandez, also joined the ride. In Masood’s telling, it took eight weeks from initial pitch to term sheet to get this round locked in — lightning fast by normal standards, and certainly quicker than the months-long fundraising struggle at her previous company.
“We actually set out to raise only $1 million,” Masood admits. “But because it was so competitive, we ended up at $2.25 million. A big difference from the last time around.”
It also helps that Optimal AI is already generating revenue, an unusual position for a pre-seed company. By focusing on enterprise security from the get-go, the team was able to land paying customers earlier than most code-focused AI startups.
The personal toll — and choosing to do it again
Of course, raising money doesn’t magically wipe away the day-to-day stress of running a young company. Shutting down Tara AI was emotional for Masood, and the journey to launching Optimal AI required a shift in perspective. She cites meditation and a “dash of stoicism” as the coping mechanisms that make it more sustainable this time around.
“We’re all human,” she says. “This can feel so all-consuming, but if you realize you’re resilient, you can pick yourself back up. That’s the biggest difference for me now.”
She recalls that after months of not drawing any salary, the fear of trying again was real. But Masood found the promise of building specialized AI agents too compelling to ignore.
“We learned so much from building on GitHub with Tara,” she points out. “That experience is what let us quickly build an AI that can spot vulnerabilities, push its own code, and keep big teams happy. This time, we’re not just automating the same old workflows; we’re tackling an entirely new challenge around ephemeral, domain-specific AI.”
So, how does Optibot actually work?
Picture this scenario (a hypothetical code sketch of the same loop follows the list):
1. A developer opens a pull request for a new feature.
2. Optibot scans the lines of code, referencing both your internal compliance guidelines and any relevant external frameworks (think HIPAA if you’re in healthcare, or GDPR if you’re dealing with European customer data).
3. If it spots a potential vulnerability or a mismatch with compliance rules, it flags the issue — and can even propose a rewrite.
4. The developer or security lead can approve or dismiss the change.
5. The agent never retains that raw data, so it can’t leak it to a third party.
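Here is a rough, hypothetical sketch of that five-step loop in Python. None of this is Optimal AI’s actual code or API; the Git-host helpers and the keyword-based policy map are invented stand-ins for whatever a real agent would do.

```python
# Hypothetical end-to-end sketch of the five steps above -- not
# Optimal AI's code. The Git-host helpers are stubs standing in for
# whatever GitHub/GitLab API calls a real agent would make.

def fetch_diff(repo: str, pr_number: int) -> str:
    """Stub: in practice, pull the unified diff from the Git host."""
    return "+ customer_email = row['email']  # line added by the PR\n"

def post_review(repo: str, pr_number: int, issues: list) -> None:
    """Stub: in practice, leave a review comment on the pull request."""
    for framework, lineno, _ in issues:
        print(f"[{repo}#{pr_number}] line {lineno}: possible {framework} concern")

def propose_fix(repo: str, pr_number: int, issues: list) -> None:
    """Stub: in practice, open a follow-up PR with a suggested rewrite."""
    print(f"[{repo}#{pr_number}] opened a draft PR with {len(issues)} suggested fixes")

# Step 2: internal guidelines plus external frameworks, reduced here
# to a toy keyword map purely for illustration.
POLICIES = {
    "GDPR": ["email", "ip_address"],
    "HIPAA": ["patient_id", "diagnosis"],
}

def review_pull_request(repo: str, pr_number: int) -> None:
    diff = fetch_diff(repo, pr_number)                 # step 1: PR opened
    issues = []
    for framework, terms in POLICIES.items():          # step 2: scan against policies
        for lineno, line in enumerate(diff.splitlines(), start=1):
            if line.startswith("+") and any(t in line for t in terms):
                issues.append((framework, lineno, line))
    if issues:
        post_review(repo, pr_number, issues)           # step 3: flag the issue
        propose_fix(repo, pr_number, issues)           #         and suggest a rewrite
    # step 4: a human approves or dismisses on the Git host
    # step 5: `diff` goes out of scope here -- nothing is retained

review_pull_request("acme/payments", 42)
```

Step 5 is the part Optimal AI leans on commercially: the verdicts and suggested fixes live on in the pull request, but the raw diff itself is treated as disposable.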
The ultimate idea is to reduce the time engineers spend doing mundane checks and to help compliance teams close that pesky communication gap with devs. By automatically generating compliance-ready code, they’re hoping to mitigate risk in an era where AI is spitting out code at unprecedented speed.
“Most automation solutions today aren’t agentic — they do one step here or there,” Masood says. “But we think you need an AI that’s actually acting on your behalf, combining domain knowledge in compliance with actual coding capabilities. That’s how you spot issues right at the source.”
What’s next?
With the pre-seed round closed, Optimal AI is ramping up hiring: the team has grown from the two co-founders to seven people, with more growth on the horizon. They’re also gearing up for an open beta of Optibot. That means smaller startups — at first, by invitation — can integrate the AI agent into their GitHub or GitLab environments without the friction of endless onboarding calls.
“We found that smaller teams have the same compliance headache, but with fewer resources,” Masood says. “They can’t afford a dedicated compliance manager or even a senior security engineer. If we can help them get code review done in half the time, that’s exciting.”
The big question is whether dev teams, sometimes known for their healthy skepticism of “AI wizards,” will trust an agent to do the heavy lifting. Security professionals, too, might want deeper transparency. But with multiple enterprise pilot customers already on board, Optimal AI is betting that the promise of ephemeral data processing — where your code is never stored or used to train outside models — will smooth that path.
A new wave of domain-specific AI
While the generative AI space is swarming with solutions that can churn out anything from marketing copy to Python scripts, Masood and Ahmed are focusing all their energy on code review, compliance, and risk management. Yes, they face competition from a wide range of code-focused tools, but Optimal AI is leaning heavily on specialized domain knowledge, privacy-by-design, and the hands-on experience gleaned from their first foray into developer tooling.
“I think the reason it resonated with investors,” Masood says, “is that a domain-specific AI agent can become infrastructure. Like how AuditBoard became essential for compliance. If we get this right, companies will treat Optibot as part of their core development workflow — something that you install on day one, especially in regulated industries.”
For now, they’re forging ahead, buoyed by a round that was oversubscribed and by the promise of a brand-new wave of AI. If the hype is real and the demand for domain-specific compliance solutions is as hot as Masood predicts, Optimal AI might just wind up in the eye of the storm, helping developers keep their codebases squeaky clean — and themselves out of the auditor’s crosshairs.
In the end, it’s not just about analyzing code. It’s about building an entirely new class of AI “workers” that can shoulder specialized tasks and free up humans to do the creative and strategic work we actually want to do. If that can happen without anyone’s sensitive data ending up in the wild, and with a founder who’s found her own sense of stoicism? Well, that’s a story worth watching.
Optimal AI’s Optibot is currently in closed beta for code review and compliance tasks. For those curious about letting an AI agent join your dev team — sans fear of data leaks — keep an eye on their open beta roll-out in the coming weeks. Masood and Ahmed are betting big that this is the next generation of code reviews, and if they’re right, the era of painstakingly manual compliance checklists might finally come to an end.