AI and the Legal Profession: general trends in adoption.

Interview by Ariane Malmanche for the Village de la Justice

In this interview, Sergio Liscia, General Manager at Wolters Kluwer Legal Software, discusses AI trends on a European scale (Sergio is based in Italy, which is why this interview was conducted and is published in English).
Key takeaways are summarized at the end of the article.

Vice President and General Manager, Legal Software, Wolters Kluwer Legal & Regulatory.

Village de la Justice: How do you see law firms and legal professionals using AI in their daily work today, and what are the general trends in adoption?

Sergio Liscia: Let me start with some data, because we regularly ask the market and our clients how they use these tools. We recently released the Legisway Benchmark, a survey conducted across several corporate legal departments throughout Europe, and this year, also in the U.S.

What we’ve understood from this survey is that interest in GenAI is growing significantly. Last year, adoption was at 40%; this year, it’s already around 60%. And when we say “adoption”, we mean frequent, recurring use: AI being used structurally within departments to improve productivity.

"Only around 14% are using these specialized tools. This shows that while interest is high, most of our customers are still in an experimentation phase."

However, most legal departments refer to AI with generalist tools in mind: ChatGPT, Gemini, Claude, tools that are very useful in the consumer world. But when we narrow the question to specific legal applications, like predictive tools, contract drafting, or data extraction, the numbers change. Only around 14% are using these specialized tools. This shows that while interest is high, most of our customers are still in an experimentation phase. Adoption is still in its early stages.

One reason for this slower adoption is concern around privacy and data security. That’s a key point. At Wolters Kluwer, we’ve been working on this for over ten years, long before GenAI, with machine learning and natural language processing. These technologies laid the groundwork for GenAI. Our goal has always been to create what we call “Expert AI”, which means integrating AI into our expert portfolio.

We’ve built a shared platform across Wolters Kluwer that serves as the backbone for our AI features. This platform combines technology with standard safety practices: guardrails, logging, tracing, and governance. We also apply what we call “expert in the loop”, ensuring that the data we deliver is as safe and reliable as possible. We want to make sure AI is delivered in the most convenient and trustworthy way.

AI introduces a non-deterministic dynamic. In traditional software, if the input is A, the output is always B. With AI, the output might be B+, B−, or even C. So we work closely with customers to help them understand and feel comfortable with this shift. We also design the right user experience to show that this non-deterministic world can be trusted, under certain conditions.
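
To make this contrast concrete, here is a deliberately toy Python sketch (the function names, clause texts, and weights are all hypothetical, not drawn from any real product): a deterministic lookup returns the same output for the same input every time, while a sampling-based generator can return different, equally plausible outputs across runs.

```python
import random

def deterministic_lookup(clause_type: str) -> str:
    # Traditional software: if the input is A, the output is always B.
    templates = {"nda": "Standard NDA confidentiality clause"}
    return templates[clause_type]

def generative_draft(clause_type: str, temperature: float = 0.7) -> str:
    # A generative model samples from a distribution over plausible outputs,
    # so the same input may come back as B+, B-, or even C on different runs.
    candidates = [
        "Standard NDA confidentiality clause",  # "B"
        "NDA clause with an added carve-out",   # "B+"
        "Looser confidentiality wording",       # "C"
    ]
    weights = [1.0, temperature, temperature]  # higher temperature flattens the distribution
    return f"[{clause_type}] " + random.choices(candidates, weights=weights, k=1)[0]

print(deterministic_lookup("nda"))  # identical on every run
print(generative_draft("nda"))      # may differ from run to run
```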

VJ: How do you see AI transforming legal work in five years?

Sergio Liscia: My personal opinion is that there will be a clear distinction between professionals who know how to use AI and embed it into efficient workflows, and those who are not yet properly trained or ready for this change.

The impact will be massive. I also think it’s interesting that you asked about five years, not six months, because we tend to overestimate short-term impact and underestimate long-term impact. We saw this with the internet. In 2000–2001, there was a lot of hype and expectation of disruption. But by 2002–2003, there was disillusionment. Twenty years later, we see it was just a matter of timing. I believe AI will follow a similar cycle, but faster.

Today, we see many use cases in the legal space. The most common is contract work: automated review, drafting, redrafting. These practices are widespread, but there’s no gold standard yet. For example, some use precedent contracts from their repository to create new ones, but the approaches vary. We’re still in an experimentation phase, both on the supply and demand sides.

So while we can identify the most common use cases today, it’s not guaranteed that these will be the dominant ones in five years.

VJ: Why do you think contract-related tasks are the most common use case for AI in legal today?

Sergio Liscia: I link these contract use cases to a very tangible need. If you analyze the work of a corporate legal department today, it’s spread across many workflows: contracts, privacy, litigation, IP, compliance. But one of the main areas with a lot of manual work and time spent is contracts.

On average, no corporate legal department spends less than 20% of its time on contract lifecycle management. So AI is being applied to an immediate pain point. Clients see a lot of effort in this area and an opportunity to make it more efficient.

Over time, AI will shift to other areas to improve efficiency. Today, it’s impacting areas with routine and repetitive tasks. As we identify similar patterns in other domains, automation will follow.

VJ: How do you think the EU AI Act will affect the development and deployment of AI for legal professionals?

Sergio Liscia: I’ve been discussing this with our legal software team recently. I believe the EU is pioneering regulation around AI. We’re the first to deliver a framework to govern AI production.

"The EU AI Act has a clear impact. On one hand, it imposes more work on vendors and customers, (...) On the other hand, it increases trust."

If we look closely at the EU AI Act, it’s still about practices and frameworks; the detailed rules aren’t fully delivered yet. It’s more about creating ethical production, transparency, and guardrails.

This has a clear impact. On one hand, it imposes more work on vendors and customers: we have to comply and implement new practices. On the other hand, it increases trust. It creates a framework where customers can operate in a way they perceive as safe.

Sophisticated customers are not only aware of the EU AI Act; they’re also creating internal project organizations or task forces to adopt it. Some large law firms are even offering services to help others implement the framework. Over time, I believe we’ll see certifications emerge to demonstrate compliance.

VJ: How do you gain the trust of legal professionals when it comes to AI?

Sergio Liscia: I think about the difference between the two data points I mentioned earlier: 60% of customers using general AI tools like ChatGPT or Gemini, but only 14% using proper legal tech AI tools.

The difference lies in trust. We need to communicate that there’s a productivity gain and that it’s safe.

We do this by applying all the necessary frameworks, and even going beyond. We define internal policies to safeguard data: technically, through logging, tracing, and anonymization, and through governance practices like expert oversight.
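
As a purely illustrative sketch of what such technical safeguards can look like in practice (the patterns, function names, and log format below are hypothetical, not Wolters Kluwer’s implementation), a pre-processing step might mask obvious personal data and record an audit trail before a prompt ever reaches a model:

```python
import re

# Minimal illustrative guardrail: mask obvious PII before a prompt leaves
# the organization, and keep a lightweight audit log of the call.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(text: str) -> str:
    # Replace detected e-mail addresses and phone numbers with placeholders.
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

def logged_prompt(user: str, prompt: str) -> str:
    # Anonymize, then log metadata only (never the raw content) as a stand-in
    # for real tracing infrastructure.
    safe = anonymize(prompt)
    print(f"audit: user={user} prompt_chars={len(safe)}")
    return safe

print(logged_prompt("jurist-42", "Call me at +33 6 12 34 56 78 or jane@example.com"))
```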

But it’s also about communication and usage. We’re increasingly focused on training, webinars, and promoting ethical, conscious use of AI. This combination of product, service, and communication is essential.

VJ: How does Wolters Kluwer differentiate its AI tools from other providers in the European market?

Sergio Liscia: Internally, we always start from the value we offer to our customers. We see the market as a dialogue, where we learn from our clients what truly brings value. Having been in the legal tech space for many years, we know that the value doesn’t lie in delivering AI alone. The real value begins with understanding the workflow, the data, and the domain expertise that surrounds our customers.

It’s not about AI in isolation; it’s about AI embedded in the workflow and in the expert solutions we already provide. Many of our customers want to use the products they already have, but enhanced with AI. They’re not just looking for software; they want deeper insights from the data they’ve stored and worked with in our products. They also want to make the legal content we provide more actionable.

So, our approach is to deliver AI at the point of use, within existing workflows. That’s how we see AI, not as a standalone tool, but as part of a portfolio of solutions that already delivers value.

VJ: One of the main concerns legal professionals have with AI is the reliability of the data. How do you ensure that your tools provide accurate and trustworthy results?

Sergio Liscia: That’s a very good question, and I appreciate it because it touches on a strong internal discussion we’ve had. We’re working hard to give our customers as much certainty as possible.

To do this, we start with the platform we’ve built, which I mentioned earlier. This platform acts as a filter for whichever large language model we use, whether it’s OpenAI, Anthropic, or Gemini. We don’t just use these models as-is; we apply them through our own infrastructure.

In most of our products, we use what’s called retrieval-augmented generation (RAG). This means that while the algorithm is trained on a large language model, the data it applies to is a pool defined by us. This helps minimize the risk of hallucinations and ensures that the information is relevant and accurate.
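
Schematically, the RAG pattern described here looks something like the sketch below; the document pool, the toy word-overlap scorer, and the call_llm stand-in are hypothetical placeholders (real systems typically use embedding-based retrieval), not the platform’s actual code.

```python
# Schematic retrieval-augmented generation (RAG): retrieve from a pool we
# define, then ask the model to answer using only that retrieved context.

DOCUMENT_POOL = [
    "Clause 4.2: either party may terminate with 30 days written notice.",
    "GDPR art. 28 requires a written data processing agreement with processors.",
]

def retrieve(question: str, pool: list[str], k: int = 1) -> list[str]:
    # Toy relevance score: count shared lowercase words between question and document.
    q_words = set(question.lower().split())
    scored = sorted(pool, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def call_llm(prompt: str) -> str:
    # Stand-in for whichever large language model sits behind the platform,
    # so the sketch runs without an API key.
    return f"[model answer grounded in retrieved context]\n---\n{prompt[:80]}..."

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, DOCUMENT_POOL))
    prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("What notice period applies to termination?"))
```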

"We’re not building AI for consumers, we’re building it for professionals. That means it has to be reliable, accurate, and safe."

There are many other technical practices we apply, some of which are quite advanced and handled by our engineering teams. I’m not a technician myself, but I’m very curious and closely involved in these discussions. What I can say is that the internal work we do to protect data, ensure safety, and reduce hallucinations is substantial. It’s a key part of how we differentiate our legal AI from generalist tools.

We’re not building AI for consumers, we’re building it for professionals. That means it has to be reliable, accurate, and safe.

VJ: Pricing is a major issue, especially for smaller firms. How do you approach pricing for AI tools given the wide range of firm sizes and budgets?

Sergio Liscia: Thank you for raising that; it’s a fundamental point. Internally, we refer to this as segmentation, which means there’s no one-size-fits-all model.

From a market perspective, we’re still in an experimentation phase. There’s no standard pricing model yet. We’ve mapped a wide variety of approaches: some providers price AI as an add-on, others embed it in existing products, some offer standalone tools, and others integrate it with separate packages.

One reason for this variability is the cost of tokens and the consumption of AI. These factors directly impact the cost for providers. As a result, many vendors adjust their pricing frequently to reflect their own operational costs.

So yes, pricing models are still evolving. What we’re doing is trying to define the model that delivers the best value to both our customers and to us. And that means adapting our offering to different segments of the market.

VJ: Do you think legal professionals are aware that using AI tools comes with a cost, even when some platforms offer free access?

Sergio Liscia: Absolutely. That’s a key point. Many people don’t realize that every time you ask an AI tool a question, it consumes resources, and those resources have a cost.

Platforms like ChatGPT or Mistral may offer limited free usage, but behind the scenes there’s real infrastructure and token consumption involved. This creates instability in pricing models, because providers have to constantly monitor and adapt to usage patterns and costs.
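
As a back-of-the-envelope illustration of why token consumption matters (all figures below are hypothetical, chosen only to show the arithmetic, not any vendor’s actual rates):

```python
# Hypothetical yearly token-cost estimate for a mid-sized legal department.
price_per_million_tokens = 5.00   # USD, illustrative rate only
tokens_per_query = 2_000          # prompt + retrieved context + answer
queries_per_lawyer_per_day = 30
lawyers = 50
working_days = 220

yearly_tokens = tokens_per_query * queries_per_lawyer_per_day * lawyers * working_days
yearly_cost = yearly_tokens / 1_000_000 * price_per_million_tokens
print(f"{yearly_tokens:,} tokens/year -> about ${yearly_cost:,.0f}/year at this illustrative rate")
```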

That’s why I believe pricing will eventually be segmented, not just by firm size, but also by usage and domain. A top-100 law firm has very different needs and pricing power compared to a small firm. And the kind of legal practice matters too.

VJ: Can you elaborate on how the type of legal practice affects AI adoption?

Sergio Liscia: The domain makes a big difference. For example, in criminal law, even if you’re a small firm, you deal with cases that are always different. That makes it harder to automate or templatize the work.

But if you’re working in insurance law, you often deal with recurring cases. That repetition allows you to create templates and apply AI more effectively.

So yes, the kind of legal practice you have influences how AI can be used and the value it can deliver.

VJ: Have you noticed any differences in how the French market is adopting AI compared to other European countries?

Sergio Liscia: I think the French market has a key characteristic: it’s very dynamic. You mentioned Mistral earlier, and that’s a great example. Mistral was born in an environment that’s very forward-thinking, not just in tech, but in legal tech specifically.

The legal tech ecosystem in France is very lively. That’s partly because the government has supported the sector over many years, including funding initiatives. As a result, we’ve seen the emergence of strong legal tech startups and providers.

This supply-side dynamism has had an impact on the demand side as well. The availability of sophisticated, cutting-edge technologies has encouraged the market to experiment and to expect high-quality solutions. Compared to other European markets, France stands out in terms of both innovation and adoption.

VJ: What advice would you give to legal professionals who are still hesitant to use AI tools?

Sergio Liscia: I’ve seen many different cases, and my advice is simple: experiment, but do so responsibly.

Use specialized providers. The risk of using general AI tools without understanding their limitations is real.

So yes, experiment, but within a safe and well-defined framework. Professional providers can help you gain productivity in a secure environment. That’s the key: responsible experimentation with the right partners.

"AI should be embedded in the existing practices of legal professionals. That’s where it delivers the most value."


VJ: Is there anything you’d like to emphasize before we wrap up?

Sergio Liscia: Yes. AI is a great opportunity for legal professionals and for the market as a whole. I believe it will continue to evolve over the coming years.

But the real power of AI is unleashed when it’s used in combination with the customer’s content, data, and workflows. That’s what I want to highlight : AI should be embedded in the existing practices of legal professionals. That’s where it delivers the most value.


Key takeaways
Here are five points from the interview, chosen for their general interest:

  • "My personal opinion is that there will be a clear distinction between professionals who know how to use AI and embed it into efficient workflows, and those who are not yet properly trained or ready for this change."

This quote underlines AI’s transformational impact, suggesting it will create a clear divide among legal professionals based on their ability to integrate these tools into their work.

  • "Only around 14% are using these specialized tools. This shows that while interest is high, most of our customers are still in an experimentation phase."

This point highlights the current gap in AI adoption: while interest in AI in general is high (60%), use of specialized legal tools remains low, indicating the market is still in its early days.

  • "We’re not building AI for consumers, we’re building it for professionals. That means it has to be reliable, accurate, and safe."

This quote defines the fundamental requirement for specialized legal AI. Reliability, accuracy, and safety are essential to earn professionals’ trust, in contrast with consumer tools.

  • "AI should be embedded in the existing practices of legal professionals. That’s where it delivers the most value."

This is a key conclusion on how to maximize the benefits of AI. For Sergio Liscia, AI should not be a standalone tool but should be integrated into customers’ existing workflows, data, and content.

  • "On average, no corporate legal department spends less than 20% of its time on contract lifecycle management. So AI is being applied to an immediate pain point."

This quote clearly explains why contract-related tasks are currently the most common use case for legal AI: they represent a demanding, repetitive area where AI can be applied immediately to solve a tangible efficiency problem.

Interview by Ariane Malmanche for the Village de la Justice
