Alexander Lampaert on building secure, accurate, and enterprise-ready legal AI

Read time: 5 min
Written by: Alexander Lampaert
Published on: December 9, 2025

Enterprise legal teams tend to test AI on three things:

  1. Can it be trusted with sensitive information?
  2. Can its reasoning be explained and justified?
  3. Will it behave consistently at scale?

Those questions shape Alexander Lampaert’s work. As LEGALFLY’s lead AI engineer and data scientist, he defines how instructions are interpreted, how data flows through the platform, and how large language models are applied to real legal tasks.

“Everything we build has to be anchored in data, domain knowledge and real insight,” he says.

From regulated banking to legal AI

Before joining LEGALFLY, Alexander worked at KBC, one of Belgium’s largest banks, where he collaborated with legal, data and security teams on early generative AI use cases.

“I was responsible, together with group legal, for looking into how we could create value with generative AI when GPT-3.5 first emerged,” he says.

That work made the limits of internal solutions clear.

“There are so many external data sources. If every organisation had to parse that volume of information, it would require a full division of its own. UX and UI are also key selling points. Large organisations can’t always prioritise that. They don’t have the flexibility or focus.”

At the same time, Alexander was introduced to LEGALFLY. “Seeing how quickly LEGALFLY progressed, and how closely it matched the platform we were trying to build internally, convinced me to move. Joining a team whose entire focus is advancing legal intelligence every day was the logical step.”

His experience in a regulated institution now shapes how LEGALFLY is built for the standards that enterprise legal teams need.

Trust with sensitive information

Confidentiality is non-negotiable in legal work, and Alexander’s background in regulated finance is reflected in LEGALFLY’s secure design.

“Banks work with very sensitive information,” he says. Personal data is not limited to obvious identifiers. “A combination of a job and a medical condition is also personal data for a bank insurer.” These nuances inform how anonymisation, metadata, retention, and internal controls are built.

All documents are fully anonymised before any processing happens. Names, roles, locations, signatures, and other identifying details are removed at ingestion. The model analyses only anonymised content, with granular control over fields, entities, and contextual identifiers.
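
As a rough sketch of that pattern, not LEGALFLY’s actual pipeline, an ingestion step might swap identifiers for placeholders before any text reaches a model. The entity labels, regex patterns, and function names below are assumptions for illustration only:

```python
import re
from dataclasses import dataclass, field

# Hypothetical sketch only: redact identifying details at ingestion,
# before any text is sent to a language model.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

@dataclass
class AnonymisedDocument:
    text: str                                    # anonymised content the model may see
    mapping: dict = field(default_factory=dict)  # placeholder -> original, kept platform-side

def anonymise(text: str, known_entities: dict[str, list[str]]) -> AnonymisedDocument:
    """Replace names, roles, locations and other identifiers with placeholders."""
    mapping: dict[str, str] = {}
    # Entities found upstream (e.g. by a named-entity step): names, roles, locations, signatories.
    for label, values in known_entities.items():
        for i, value in enumerate(values):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = value
            text = text.replace(value, placeholder)
    # Pattern-based identifiers such as email addresses and phone numbers.
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(sorted(set(pattern.findall(text)))):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return AnonymisedDocument(text=text, mapping=mapping)

# Example (hypothetical input):
doc = anonymise(
    "Signed by Jane Smith, General Counsel, Brussels.",
    known_entities={"NAME": ["Jane Smith"], "ROLE": ["General Counsel"], "LOCATION": ["Brussels"]},
)
# doc.text -> "Signed by [NAME_0], [ROLE_0], [LOCATION_0]."
```

In a setup like this, the placeholder mapping would stay on the platform side, so the model only ever works with the anonymised text.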

Trust also depends on how the system learns, and Alexander is clear: “We are not training on your data. We are not fine-tuning models on it.”

LEGALFLY learns from instructions, actions and verified legal sources. Client material never becomes training data.

For organisations with strict confidentiality obligations, this ensures protection of internal knowledge, supports audit and compliance, and keeps behaviour predictable at scale.

Explainable reasoning grounded in law

Explainability begins with context. For Alexander, accurate legal reasoning means the model has to start with the right assumptions: jurisdiction, policies, risk thresholds, and drafting standards.

“A large language model absorbs everything it is shown at once. That has consequences for how you brief it.”

With tools like ChatGPT, negative examples can produce confusing outputs. “If you show a clause and say ‘do not draft this,’ the model still treats it as relevant. Clear, positive instructions work better.”
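
A small invented illustration of the difference (the briefing text is mine, not a LEGALFLY prompt): the unwanted clause in the first version still sits in the model’s context, while the second describes only the desired outcome.

```python
# Hypothetical briefing strings, illustrating the principle only.

# Negative example: the unwanted clause is still in the context window,
# so the model is likely to echo its structure anyway.
negative_briefing = """
Draft a limitation-of-liability clause.
Do NOT draft it like this:
"Liability is unlimited for all direct and indirect damages."
"""

# Positive instruction: state only what the clause should contain.
positive_briefing = """
Draft a limitation-of-liability clause that:
- caps liability at twelve months of fees,
- excludes indirect and consequential damages,
- carves out gross negligence and wilful misconduct.
"""
```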

Identity matters for the same reason. In generic tools, every question begins without context. Unless the user restates their organisation, jurisdiction, and appetite for risk, the model defaults to generic patterns.

That’s not something users have to think about with LEGALFLY because it embeds identity at platform level. “Users upload their company information. We extract that to build a picture of who the company is and who the user is. That context is always present.”
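
A minimal sketch of that idea, with field names and wording assumed rather than taken from LEGALFLY’s schema: the profile is captured once and prepended to every request, so the user never has to restate who they are.

```python
from dataclasses import dataclass

# Hypothetical sketch: identity context captured once, injected into every request.
@dataclass
class OrganisationProfile:
    name: str
    jurisdiction: str
    industry: str
    risk_appetite: str          # e.g. "conservative"
    drafting_standards: str     # e.g. a summary of the house style or playbook

def build_context(profile: OrganisationProfile, user_role: str) -> str:
    """Turn the stored profile into a context block that precedes the user's question."""
    return (
        f"You are assisting {profile.name}, a {profile.industry} organisation "
        f"operating under {profile.jurisdiction} law.\n"
        f"The user is a {user_role}. Risk appetite: {profile.risk_appetite}.\n"
        f"Follow these drafting standards: {profile.drafting_standards}."
    )

def brief_model(profile: OrganisationProfile, user_role: str, question: str) -> str:
    # The identity context is always present, whatever the question.
    return build_context(profile, user_role) + "\n\n" + question
```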

With the correct identity, sources, and jurisdiction in place, reasoning is traceable. Outputs can be linked to a statutory requirement, an internal policy, a clause library, or a recognised publisher source.
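
One way to picture that traceability, again as an assumed structure rather than LEGALFLY’s actual output format, is a finding that always carries the reference it rests on:

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical structure: every finding is linked to the source it is grounded in.
SourceType = Literal["statute", "internal_policy", "clause_library", "publisher"]

@dataclass
class SourceReference:
    source_type: SourceType
    identifier: str      # e.g. an article number, policy name, or clause ID
    excerpt: str         # the passage the finding relies on

@dataclass
class Finding:
    statement: str                      # the conclusion shown to the user
    references: list[SourceReference]   # what it can be traced back to
```

A review finding over a clause would then point back to the statute article, internal policy, or clause-library entry it was checked against.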

The underlying principle is the same as it has always been in machine learning. “Garbage in, garbage out,” Alexander says. LEGALFLY’s architecture exists to make the input clear and structured.

Reliability and consistency at scale

Enterprise legal teams need tools that behave predictably across tasks, users, and regions. Alexander designs for that expectation.

LEGALFLY’s intelligence layer is built around controlled data flows, vetted legal sources, structured context learning, and prompting frameworks that reflect real legal work. 

In-house teams work with shared templates, recurring clause patterns, defined risk positions and approval processes. The platform mirrors those patterns rather than individual habits.

As a result, outcomes are repeatable. A review completed in one jurisdiction is handled the same way when repeated in another. A matter run by one lawyer follows the same logic when run by a colleague.

For legal teams, that consistency means AI-assisted decisions can be defended, audited, and reproduced. Reliability is built-in, not left to chance.

Legal intelligence for enterprise reality

Much of Alexander’s work lives beneath the surface, but it defines the experience on top. Confidential data handled responsibly. Reasoning tied to standards that can be traced. Outputs that behave consistently across the organisation.

“Everything we build has to be anchored in data, domain knowledge and real insight,” he says.

It’s the foundation of LEGALFLY’s design philosophy. Legal intelligence engineered for security, explainability, and reliability, shaped by how legal teams actually work, and built to meet the standards they need.

To find out more about LEGALFLY, speak to the team.