
Lightkeeper Lumina Layers AI Intelligence into the Portfolio Analytics Platform

By Press Release

Lumina marks the next chapter in Lightkeeper’s AI journey, embedding intelligence into the platform so investment teams can surface deeper insights and focus on decisions, not data

BOSTON, MA — April 2026 — Lightkeeper, a leading provider of data and analytics solutions for investment managers, today announced the release of Lightkeeper Lumina (“Lumina”), a context-aware AI layer embedded directly within the Lightkeeper platform.

Investment teams spend too much time aggregating data and navigating complex interfaces, and not enough time on the analysis that actually drives decisions. Lumina changes that equation, allowing users to simply ask questions in natural language within Lightkeeper, and get insights in context without ever having to leave what they are doing.

AI That Works Where You Already Work

Lumina is a context-aware AI tool embedded within the Lightkeeper interface. Unlike standalone AI tools that require users to switch platforms or re-explain context, Lumina understands where a user is within the platform: which statistics they’re looking at, which date ranges are active, and which views are open. It surfaces guidance, answers, and insights in real time.

In practice, this means a user who wants to know, for example, when in their fund’s history they have experienced certain levels of portfolio performance, drawdown, or trading activity (the kind of cross-sectional time-series analysis that typically requires manually changing date ranges, navigating multiple views, and assembling results by hand) gets the complete table back in seconds, along with qualitative context and key insights they hadn’t explicitly requested, ready to analyze rather than compile.

Users can ask questions about the data directly in front of them, get instant explanations of statistics and methodologies, uncover cross-sectional and time-series insights without manually reconfiguring views, and request qualitative analysis that contextualizes the numbers they’re reviewing, all without leaving the platform or interrupting their workflow. Users can also ask Lumina to identify gaps in their current Lightkeeper configuration, surfacing statistics they aren’t yet leveraging that could provide additional insight given their portfolio context.

How Lumina Differs from Lightkeeper Beacon

Lightkeeper Beacon (“Beacon”), which became available to all clients in February 2026, enables investment professionals to ask questions about portfolio data in plain English from outside the platform, via large language models such as Anthropic’s Claude, and receive answers backed by Lightkeeper’s validated, institutional-grade analytics.

Lumina complements Beacon by solving a different problem. Where Beacon is designed for broad access, allowing users across the firm to interact with portfolio analytics from anywhere, Lumina is designed for depth and privacy, enhancing the experience of users who are already inside Lightkeeper doing active analytical work without sending data outside the platform.

Beacon expands who can interact with portfolio data and enables powerful cross-platform workflows that leverage the graphical and third-party data capabilities of leading LLMs. Lumina accelerates and enriches the work already happening within the platform. Together, they form a complementary AI capability that meets investment professionals where they are, whether that’s inside or outside the system.

“Lumina doesn’t ask investment professionals to change how they work; it meets them where they already are by providing a best-of-breed intelligence layer within their platform.” — Danny Dias, Co-Founder and Chief Product Officer, Lightkeeper

Replacing Busy Work with Better Analysis

Lumina is purpose-built around the most common points of friction in a user’s day: answering questions that require navigating multiple views, assembling data by hand, or reading through documentation for methodology explanations. Lumina handles these tasks instantly, in context.

One example from beta testing illustrates the point: a user logs in each morning and asks what happened in the portfolio yesterday. Lumina surfaces a concise summary of portfolio performance, notable moves within the book, and metrics worth attention in seconds, before the day has begun.

The result is the same across use cases: Lumina provides a base layer of aggregation and analysis so that investment professionals can focus on what requires their judgment.

Intelligence Grounded in Institutional Data

Lumina operates on the same validated data foundation as the rest of the Lightkeeper platform. Calculations are performed by Lightkeeper’s analytics engine, not generated by the model, meaning every answer is accurate, reproducible, and traceable back to source data. This architecture ensures that speed does not come at the cost of rigor.
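The release does not disclose implementation details, but the “engine computes, model narrates” architecture it describes can be sketched in general terms. In the hypothetical Python sketch below, every name (`AnalyticsEngine`, `answer_question`, the `POSITIONS` data) is illustrative rather than Lightkeeper’s actual API: each number comes from a deterministic calculation that carries its own provenance, and a language model’s only role would be to route the question and narrate the result.

```python
# Hypothetical sketch of the "engine computes, model narrates" pattern.
# All names and data are invented for illustration.

POSITIONS = {"MSFT": 1_200_000.0, "AAPL": -400_000.0}  # signed market values (USD)

class AnalyticsEngine:
    """Deterministic calculations with provenance, independent of any model."""

    def gross_exposure(self) -> dict:
        value = sum(abs(v) for v in POSITIONS.values())
        return {"value": value, "method": "sum(|market value|)", "source": "POSITIONS"}

    def net_exposure(self) -> dict:
        value = sum(POSITIONS.values())
        return {"value": value, "method": "sum(market value)", "source": "POSITIONS"}

def answer_question(question: str, engine: AnalyticsEngine) -> dict:
    # In production an LLM would map the question to a tool call;
    # a keyword router stands in here. The number itself always comes
    # from the engine, never from generated text.
    q = question.lower()
    if "gross" in q:
        return engine.gross_exposure()
    if "net" in q:
        return engine.net_exposure()
    raise ValueError("no matching analytic")

result = answer_question("What is my gross exposure?", AnalyticsEngine())
print(result["value"])   # computed by the engine: 1600000.0
print(result["method"])  # audit trail: how the number was produced
```

Because the answer dictionary records the method and source alongside the value, each response stays reproducible and traceable, which is the property the architecture is designed to guarantee.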

“Our clients need AI that accelerates their work without introducing any doubt about the numbers underneath. Lumina is built on the same data foundation clients have trusted for over 15 years. The speed is new. The trust is not.” — Dean Schaffer, CEO, Lightkeeper

Availability

Lumina is now available to Lightkeeper clients. It was developed in close partnership with clients through an iterative beta process in which participant feedback shaped both functionality and design.  Beacon remains available to all clients, and together the two products form a complementary AI toolset that meets investment professionals wherever they work. Firms interested in learning more can contact Lightkeeper at info@lightkeeper.com or visit www.lightkeeper.com.

About Lightkeeper

Lightkeeper is a trusted partner for investment managers, serving over 160 firms managing more than $600 billion in assets. Through purposeful innovation and tech-enabled service, Lightkeeper unifies data from multiple sources into a clean, reliable platform. Built by industry veterans and continually refined through client feedback, it helps teams across the organization unlock actionable insights and scale with confidence. Learn more at www.lightkeeper.com.

 

Media Contact:

Claudine Martin
VP, Head of Marketing
cmartin@lightkeeper.com
508-341-2123

From the Other Side of the Table: Mary Viviano on What Investment Teams Can Learn From Their Own Data

By Articles

Named Data Science Professional of the Year at the Waters Women in Technology & Data Awards, Lightkeeper’s Managing Director of Analytics has spent a decade helping investment teams turn historical portfolio data into decisions that actually move the needle.

Before Mary Viviano ever helped a client understand their trading patterns, she was the one trying to figure them out herself.

Early in her career, Mary was doing the kind of portfolio analysis she now builds sophisticated tools for, only she was doing it in Excel, manually, with whatever data she could get her hands on. Later, at another firm, she discovered Lightkeeper and realized how much more was possible when the data was properly structured. She became a user before she ever became an employee.

That experience shapes everything about how she works today.

“I’ve sat in their seat,” she says simply. “I know what they’re trying to solve for.”

The Problem with a Single Number

Mary joined Lightkeeper in 2016 and has spent the decade since helping clients unlock something most investment firms are sitting on without fully realizing it: the story inside their own portfolio history.

The starting point, she explains, is recognizing what a standard P&L number doesn’t tell you.

“I made 300 basis points in Microsoft, but that doesn’t tell you anything. What would you have earned if you’d just invested in the market for that time period? What if you’d doubled down here, gone in more quickly or more slowly, exited differently?”

A single outcome number, she argues, can’t help you improve. To do that, you need to understand how the decision was made: the entry, the adjustments along the way, the exit, and whether each of those steps actually added value.

That’s the foundation of Trade Decision Analytics, the framework Mary developed at Lightkeeper that has become one of the platform’s most significant analytical advances. By decomposing trading activity into distinct phases and measuring how each contributes to long-term portfolio P&L, investment teams can start to identify consistent patterns in their own behavior and use them to make more informed decisions.

“If people understand that they have a pretty consistent bias in how they open positions or close positions, or how they react when certain things happen in the market, they can lean in,” she says. “Maybe: I’m really convicted in this name; I should put the full position on all at once and save myself money. Or: when a stock moves against me by 25% and I double down, that’s usually a bad idea. Understanding that and incorporating it into the investment process, that’s how you maximize returns.”
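Lightkeeper has not published the methodology behind Trade Decision Analytics, but the core idea of attributing P&L to distinct decision phases can be shown with a toy example. Everything below is hypothetical: a single long trade whose fills are tagged by phase, with each buy’s contribution measured from its fill price to the eventual exit.

```python
# Illustrative only: a toy decomposition of one long trade into decision phases.
# This is not Lightkeeper's actual methodology; fills and prices are invented.

fills = [
    ("entry", +100, 50.0),  # initial position
    ("add",   +100, 45.0),  # doubled down after a pullback
    ("exit",  -200, 55.0),  # closed the whole position
]

exit_price = 55.0
pnl_by_phase = {}
for phase, qty, price in fills:
    if qty > 0:  # buys: P&L from fill price to eventual exit
        pnl_by_phase[phase] = qty * (exit_price - price)

print(pnl_by_phase)  # {'entry': 500.0, 'add': 1000.0}
```

Even in this simplified form, the decomposition shows what a single outcome number hides: the decision to add on weakness contributed twice as much P&L as the initial entry, exactly the kind of repeatable behavioral pattern the framework is meant to surface.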

The Work Behind the Insight

What doesn’t come through in an award citation is how hands-on Mary’s work actually is.

When a client needs analysis, she goes into their data manually, works through the statistics she thinks will tell the most useful story, and builds out a deck they can review together. It’s time-intensive, deliberate work: less data engineering, more portfolio coaching.

“The most rewarding part is helping clients see patterns in their own data that they hadn’t noticed before,” she says.

New analytics at Lightkeeper usually start with a client conversation: not a specific feature request, but a question a client is wrestling with. Mary and her colleagues dig into it, develop an approach, and then take it back to clients for feedback.

“We can’t just create new analytics in a vacuum. It has to be based on discussions with clients, making sure it’s going to be useful, that it’s understandable, that clients can access it without it taking 35 minutes to get an answer.”

Often, one question becomes five. “Someone might ask for something specific and when you dig in, it turns into several things, because you realize someone else was asking a similar question, just in a different way. It’s iterative.”

Building for the Long Term

Beyond the analytics themselves, Mary has worked to make Lightkeeper’s knowledge more accessible to clients directly. About two years ago, she and colleague Stephen Scherock developed the Knowledge Center, a searchable resource library where clients can find detailed explanations of analytics and methodologies without needing to pick up the phone or send an email.

It grew out of a practical problem: Mary and Stephen had built up a library of explanatory documents they’d send out whenever a client asked a question. Putting them somewhere clients could find on their own just made sense.

It’s also, she notes, the kind of well-structured content that will matter more as AI plays a larger role in how clients interact with data. “In the age of AI, for that content to be searchable and usable, not just a stat definition but a real explanation of how something works, I think that’s really valuable.”

On AI more broadly, Mary is measured. She sees real potential for it to make her own work more efficient, particularly the manual, server-by-server portfolio analysis she currently does by hand. But she’s clear-eyed about what won’t change.

“People still need to be able to understand it and translate the data to other people. If you’re in a position where you can do that, that’s a real benefit.”

Recognition She Didn’t See Coming

When asked what it felt like to win the Data Science Professional of the Year award at the Waters Women in Technology & Data Awards, Mary is characteristically understated.

“It’s not something I would ever even think about. The fact that colleagues took the time to think about me, to put my name in and do all the work for the nomination, that’s what I’m really appreciative of.”

She’s not someone who seeks the spotlight, she admits. But for those who work with her, the recognition landed exactly right.

“Mary has a unique ability to bridge the gap between sophisticated analytics and real-world investment decisions,” says Dean Schaffer, CEO of Lightkeeper. “She understands the technical complexity of the data, but just as importantly, she understands how investment teams actually work. That combination allows her to turn analytics into insights that genuinely help clients improve their investment process. One of the greatest value-adds that Lightkeeper can provide to our clients is access to and insights from Mary.”

Greg Johnson, Senior Managing Director, Client Solutions of Lightkeeper, puts it this way: “She doesn’t just build analytics, she works closely with clients to understand their challenges and helps them apply the insights in meaningful ways.”

She also has a message for young women considering a career in data science. “If you like math and turning data into actionable insights, I’d definitely suggest it,” she says. “It’s market adjacent, which means it’s constantly changing and exciting. And with AI making such a difference, if you’re in a position to understand it and translate it to other people, that’s a real benefit.”

For Mary, the work and the recognition point to the same thing: data is only as useful as what you do with it.

“The data is already there,” she says. “The real value is helping teams understand what it’s telling them.”

AI Is Hedge Funds’ Top Priority — But Data May Be the Real Bottleneck

By Articles

According to Hedgeweek’s Q1 2026 Global Outlook Survey of more than 100 hedge fund managers, 41% now rank AI integration as their biggest priority for the year, surpassing both cost optimization and talent acquisition. Nearly a third report significant AI integration already underway across research and trading.

But a closer look reveals a critical blind spot.

The promise of large language models is straightforward: ask a question in plain English and get an answer in seconds. The challenge is that the questions that matter most to a portfolio manager (positions, attribution, and risk exposures) live in proprietary systems that general-purpose AI tools cannot easily access or verify.

When AI works from disorganized or inconsistent data, the result isn’t just an inconvenience. A plausible but incorrect number in a risk report or investor letter is a material risk, and potentially a career risk.

This highlights an important distinction discussed in the article: generative AI can produce useful insights, but investment teams still rely on deterministic analytics when it comes to their own portfolio data. For market commentary or macro analysis, approximate answers may be acceptable. For your own book, they are not.

The firms best positioned to benefit from AI may not be the ones deploying the most sophisticated models, but those that have invested first in clean, validated, well-structured data infrastructure.

Read the full Hedgeweek article: Hedge funds rank AI as their number-one priority — but experts say they may be ignoring this blind spot.

 

Lightkeeper Launches “Lightkeeper Beacon” To Deliver Verifiable AI Answers to Institutional Investment Data

By Press Release

Lightkeeper, a leading provider of data and analytics solutions for investment managers, today announced that Lightkeeper Beacon (“Beacon”) is available to all clients. Beacon enables investment professionals to ask questions about their portfolio data in plain English via large language models (LLMs) and receive answers based on Lightkeeper’s validated, institutional-grade data and analytics, complete with full audit trails.


Feed Your AI Well: Why Data Foundations Compound the Benefits of AI Models

By Articles

Most conversations about AI in asset management begin with capability: what the models can do, how quickly they are improving, and where productivity gains might appear. Those discussions are understandable, but they skip the more determinative question: what kind of data environment is the technology being asked to operate in?

AI is only as effective as what it is fed. And in many firms, that data diet has been shaped by years of manual processes, informal corrections, and workarounds that made traditional workflows workable, but fragile.

For a long time, that fragility was tolerable.

AI Does Not Like Band-Aids

Historically, most data environments were imperfect but serviceable. Information flowed through a mix of systems, spreadsheets, reconciliations, and institutional knowledge. Reporting cycles allowed time for review, adjustment, and interpretation. Teams knew where the gaps were and how to manage them.

That buffer is becoming more costly.

AI collapses the distance between questions and data, removing much of the time and human intervention. That is a huge benefit, but it also means losing the buffer that previously absorbed inconsistency. When AI is introduced, answers arrive immediately, whether the foundations are ready or not.

AI doesn’t reconcile ambiguity. It magnifies it. Inconsistent definitions, unclear lineage, or fragile dependencies show up directly and inconsistently in the outputs. What once required experience to detect now appears in front of users, often without warning.

This is why AI can feel unreliable on enterprise data sets. Not because the technology is immature, but because it exposes environments that were never designed to operate without human arbitration.

What AI Reveals About How Firms Actually Operate

Most firms rely on more operational workarounds than they tend to acknowledge. Manual reconciliations embedded in workflows, spreadsheets that standardize data after the fact, and unwritten rules known only to certain teams are common features of day-to-day operations. In many cases, these practices were developed for good reasons. Often, they were the only practical way to keep things moving.

As firms try to capture the benefits of AI, those dependencies become visible. The technology has no awareness of which data source is considered authoritative, which adjustments are provisional, or which exceptions apply only in specific contexts. Logic that exists outside formal systems cannot be inferred.

Consider a simple example: a portfolio manager asks an AI tool for exposure across several strategies ahead of a client conversation. The system returns different numbers, each technically correct, but based on different definitions embedded across legacy systems: market value, notional value, and delta-adjusted exposure. The problem isn’t the AI; it’s that “exposure” was never defined consistently in the first place.
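To make the ambiguity concrete, here is a hypothetical option position (all numbers invented) evaluated under three common exposure definitions. Each result is technically defensible, and each is different.

```python
# One hypothetical long option position, three definitions of "exposure".
contracts, multiplier = 100, 100
underlying_px, option_px, delta = 50.0, 2.5, 0.4

market_value   = contracts * multiplier * option_px              # what it's worth today
notional       = contracts * multiplier * underlying_px          # what it controls
delta_adjusted = contracts * multiplier * underlying_px * delta  # equity-equivalent risk

print(market_value)    # 25000.0
print(notional)        # 500000.0
print(delta_adjusted)  # 200000.0
```

A 20x spread between the smallest and largest answer for the same position is exactly the kind of inconsistency that humans quietly arbitrate today, and that an AI tool will surface to every user at once.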

Moments like this reveal how much reliable output has depended on people bridging gaps rather than systems providing consistency. The shortcuts that worked in the past become constraints on what is possible going forward.

Feeding AI Well Requires More Than Clean Data

When firms talk about preparing for AI, the conversation often defaults to data cleanliness. Records should reconcile. Fields should be populated. Numbers should tie. These things matter, but they’re not enough.

Much of the information firms use every day consists of computed data elements that are often too diverse to store. A manager may want returns across an arbitrary date range or an average exposure across time. This requires correct data, but also a system that can consistently provide those values without the model resorting to “computation,” which generally means writing calculations in custom code.

So, reliability is not just about data quality, but about the data breadth needed to support the far-ranging inquiries that natural language makes possible.

Feeding AI well isn’t about hygiene. It’s about readiness.

Data readiness is an architectural property. It reflects whether data is structured consistently, whether relationships are explicit, and whether the supporting platforms can answer the same question repeatedly without manual adjustment. Firms that focus only on cleanup often discover they’ve addressed symptoms rather than causes. AI simply makes that distinction harder to ignore.

Why Context Determines Whether AI Adds Value

Even well-structured data has limits without context. Metrics don’t exist in a vacuum. Exposure, performance, and risk take on meaning only when interpreted through strategy, mandate, and intent.

Human teams apply this context intuitively for their investment strategies. They know which comparisons make sense and which assumptions apply. Generic AI models do not, unless that context is explicitly embedded in the data and workflows they rely on.

Without it, AI can summarize information without producing insight. Outputs may be accurate yet still fail to support decisions. Confidence erodes not because the answers are wrong, but because their relevance is unclear.

Context is what separates AI that reinforces existing workflows from AI that introduces more noise than clarity.

AI Will Compound Operational Alpha

Operational alpha has always been about leverage: turning data, systems, and workflows into more valuable outcomes without proportional increases in effort or risk. Firms with strong foundations were able to scale complexity, respond faster, and operate with confidence long before AI entered the conversation.

That hasn’t changed. What has changed is how quickly benefits can be reaped and weaknesses exposed.

AI doesn’t create operational alpha but can compound it. Gaps that once appeared only under stress (growth, customization, heightened scrutiny) now show up earlier and more often. As a result, operational capability becomes a more immediate factor in whether AI becomes an asset or a liability.

Models will continue to improve. That part is largely out of a firm’s control. What firms can control is what they choose to feed these systems.

What to Ask Before Your Next AI Initiative

Before moving forward, operators should be able to answer a few basic questions:

  • Would two teams asking the same question get the same answer?
  • Are key definitions enforced in systems, or explained after the fact?
  • When outputs differ, do teams trust the data or start reconciling?

In the end, feeding AI well isn’t about ambition. It’s about process and systematic capability. And increasingly, it’s the underlying infrastructure, not the model, that will determine whether AI delivers leverage or exposes limits.

 

SMAs Are Easy to Launch but Harder to Scale Than They Look

By Articles

Over the past several years, many hedge funds have expanded their use of Separately Managed Accounts (SMAs) to meet growing investor demand for transparency, control, and flexibility. Early on, the implementation of SMAs works well. Investors have greater transparency, and client demand accelerates.

But as the number of accounts grows, operational complexity compounds quickly. What begins as a manageable level of customization can become a growing burden across data management, reporting, and oversight, particularly for teams built around commingled fund workflows.

Compared to launching a new commingled fund, SMAs often feel easier at the outset because investors are typically allocating to the manager’s existing strategy, which means the SMA is built using largely the same underlying securities.

Once an SMA is live, complexity compounds. Each account brings its own mandates, constraints, reporting expectations, and oversight requirements. Data aggregation challenges increase as the number of data sources grows with additional fund administrators being used by investors. Customization becomes the norm. And tolerance for delays or inconsistencies drops quickly, particularly in larger, more sophisticated mandates.

This is often where operational alpha becomes visible, particularly for firms trying to scale customization without introducing friction.

Why SMAs Are Gaining Momentum and What That Can Obscure

The appeal of SMAs is straightforward. Institutional investors increasingly seek transparency, control, and flexibility, whether around risk limits, liquidity, tax treatment, or governance. OCIOs, pensions, and multi-manager platforms favor SMA structures as a way to meet those needs while maintaining access to differentiated strategies.

For managers, SMAs can accelerate distribution and strengthen allocator relationships. The challenge isn’t the structure itself. It’s what the structure demands operationally as it scales.

What Actually Breaks as SMAs Scale

The investment strategy often scales.

What tends to break are the operational protocols that surround it.

Data Stops Lining Up Cleanly

Different administrators with different data delivery times, inconsistent security masters, and varying reporting formats are among the challenges. Teams spend more time reconciling numbers that aren’t wrong, just inconsistent. What was once a periodic cleanup becomes a constant effort.

Reporting Quietly Consumes More Time

What was once a single monthly fund report turns into a growing set of bespoke deliverables. Exposure is grouped differently. Risk metrics are calculated differently. Attribution must align with allocator-specific frameworks. Over time, senior team members find themselves producing reports rather than generating insight.

Reconciliation Becomes Ongoing

In pooled vehicles, reconciliation has a predictable rhythm. In SMA environments, it becomes continuous. Manual processes that once felt manageable begin to stretch thin, and firms often describe reconciliation work that once took hours each month growing into a significant ongoing effort as SMA complexity increases.

Headcount Becomes the Default Fix

When operations teams fall behind, the instinct is to hire. Additional analysts and specialists help keep things moving, but firms often see margin pressure increase, staff-turnover risk rise, and responsiveness fail to improve in proportion to cost.

When Operations Enter the Investment Conversation

Operational execution is no longer invisible to investors.

As SMA mandates grow larger and more complex, allocators perform deeper operational due diligence than they would for commingled funds. They ask how data flows through the organization, how reconciliation is handled, how quickly ad-hoc requests can be answered, and what happens operationally when the next SMA comes onboard.

Operational fragility tends to surface early, through delays, inconsistencies, or uncertainty, long before it appears in performance.

And when operations rely heavily on manual work or institutional knowledge, that becomes clear.

This is no longer just an efficiency conversation. As SMAs proliferate, allocators increasingly evaluate how consistently firms deliver under customization pressure. In many cases, operational confidence has become a gating factor in mandate decisions alongside performance.

What Operational Alpha Really Means

In SMA-heavy environments, operational alpha shows up in a firm’s ability to support customization at scale without incremental headcount, inconsistent reporting, or growing operational risk. In practice, that often means teams can absorb additional mandates without slowing response times or increasing operational strain, even as SMA complexity grows.

At its core, this comes down to designing operations that flex as complexity grows rather than breaking under it. By building this discipline early, firms can deliver consistent reporting across customized mandates while preserving margins as SMA complexity increases. It reduces operational noise and risk, allowing teams to respond confidently during allocator due diligence rather than scrambling to explain inconsistencies. Over time, firms with stronger operational alpha tend to feel the benefits most clearly as complexity increases, while those without it often experience growing friction instead.

The Operational Reality of Scaling SMAs

As SMAs continue to grow, we’re starting to see meaningful differences emerge in how firms absorb that complexity.

Some firms built operations for a different era, when customization was limited, reporting cycles were slower, and complexity was easier to manage. For them, SMA growth increasingly feels constraining.

Others invested earlier in data discipline, integrated workflows, and operational leverage, not because it was fashionable, but because it made long-term sense. They understood that every manual process caps scale, and every workaround eventually becomes visible to investors.

The difference between those firms isn’t strategy or performance.

It’s whether SMA growth feels like momentum or like drag.

The question for managers increasingly isn’t whether to pursue SMA growth, but whether their operating model is built to support it.

Operational Alpha: The Resolution Asset Managers Can’t Afford to Postpone

By Articles

Why AI is the next test of operational foundations

The Resolution We Keep Postponing

Every year, asset managers tell themselves the same things.

We need to clean up our data.
We need to automate more.
We need to find ways to stay ahead.

And every year, most firms push those conversations just far enough to feel responsible, then get pulled back into the day-to-day reality of running portfolios, supporting teams, and keeping the infrastructure moving.

The problem isn’t a lack of awareness. It’s that operational change is easy to postpone when the data flows are still “good enough.”

But that cushion is getting thinner.

As investment alpha remains as hard as ever to generate and even harder to sustain, the operational foundation of a firm is increasingly shaping how effectively firms can adapt and scale. The firms pulling ahead aren’t simply running tighter operations; they’re deliberately building operational alpha: the ability to leverage data, integrated workflows, and scalable infrastructure into faster decisions and better outcomes.

And AI will only widen the gap between firms with the operational infrastructure to leverage technology and those reliant on legacy manual processes.

That’s why 2026 feels different. There is a technology paradigm shift going on that is both exciting and uncertain.

This isn’t another cycle of incremental improvement, or a new system layered on top of old processes. While technology like AI provides exciting possibilities, it also pressures every shortcut, workaround, and fragile dependency firms have been carrying for years because they become the blockers. What used to be manageable friction is now a real constraint on growth.

The question facing asset managers isn’t whether operational transformation is necessary.

It’s whether it will be an asset or a liability.

When “Good Enough” Stops Being Enough

Most New Year’s resolutions fail for the same reason: they’re framed as things we should do, rather than commitments we are actually ready to make. In asset management, operational transformation often falls into this exact category. It’s acknowledged as important, discussed at off-sites, and revisited just often enough to keep it on the radar, but it fades into the background as other priorities take over. The risk-versus-reward equation, however, is changing.

This technology cycle is tipping the scale towards action; the operational realm now stands in the prime position to benefit, fundamentally shifting the math on transformation. The potential upside has increased, as well as the cost of delay, turning inaction into a growing liability. The firms that can best leverage this new technology on the data that matters the most to their business will have a material and likely growing advantage.

AI provides a force multiplier on good data and well-architected systems and a force divider on bad data and poorly structured systems. Earlier tools made firms incrementally more efficient; AI-powered data sets open entirely new possibilities. This distinction matters because, until recently, the real bottleneck was not only access to data; it was having the time and capacity to turn that data into something actionable.

That latter bottleneck was almost always a human capital expense and is now being broken down. AI excels at synthesizing information and surfacing insight at speed, but it stumbles when data is fragmented, inconsistent, or difficult to retrieve. As a result, clean data that can be accurately accessed can become an immediately leverageable asset, enabling a level of speed, flexibility, and scale of insight that simply wasn’t possible before.

Another Year Over, a New One Just Begun

As you close the books on 2025, ask yourself: what insights do you wish you had the time to pursue? What ideas are left unexplored because digging into them still requires too much manual effort?

In 2026, the landscape is changing. Insights that once required dedicated projects are increasingly answerable with a well-constructed prompt. The bottleneck is shifting from human time to system readiness.

But that shift won’t be evenly distributed. The firms able to turn questions into insight at speed will be the ones with data and workflows that AI can leverage and users can trust. For everyone else, the promise of productivity will remain just out of reach, limited not by the technology itself, but by the foundations beneath it.

It is going to be an exciting time around internal data sets and the evolving landscape of tools to make them more valuable to your firm. This is not the year to let operational alpha slip off your resolution list.