
How Business Intelligence (BI) is Evolving in a Web3 World

We don’t need a crystal ball to look into the future—we just need analytics. And the general consensus is that business intelligence is about to get even more intelligent, especially as we start to see the emergence of a Web3 world.

Web 1.0 was the first generation of the Internet, though at the time we just called it the World Wide Web. Back in the early ’90s, this iteration of the Web consisted of static webpages where we could search for information online and … well, read it.

Those were the days of AltaVista, Netscape and Yahoo, when we could read data but not interact with it. ‘E-commerce’ consisted of finding a phone number on a static web page and then calling that phone number. From a landline.

This eventually evolved to Web 2.0, powered by distributed IT architecture and cloud computing. We saw the rise of true e-commerce, along with mobile phones, social media and apps. We’re still in the Web 2.0 era—one of centralization, economies of scale and user-generated content (albeit controlled by a relatively small group of companies).

Web3 is the next iteration of the Web—and while it’s still in its fledgling stage, it has the potential to flip Web 2.0 on its head and drive advancements in data analytics.


What exactly is Web3?

There’s a lot of confusion about what ‘Web3’ actually means. The term itself was coined by Ethereum co-founder Gavin Wood back in 2014 in reference to blockchain and cryptocurrency. Since then, however, it’s evolved into a catch-all phrase that refers to the next generation of the Internet (hence, the confusion).

“The moniker is a convenient shorthand for the project of rewiring how the web works, using blockchain to change how information is stored, shared, and owned,” according to an article in the Harvard Business Review.

Unlike Web 2.0, Web3 takes a decentralized approach to the web. It’s built on blockchain and token-based technologies, including cryptocurrencies and non-fungible tokens. And it will make the Web much more immersive—like the metaverse. But it will also attempt to address the security and privacy issues that have plagued Web 2.0, such as the collection and misuse of user data.

Deloitte describes Web3 as the Spatial Web, “which will eventually eliminate the boundary between digital content and physical objects that we know today. We call it ‘spatial’ because digital information will exist in space, integrated and inseparable from the physical world.”

A number of technologies will contribute to the Spatial Web, including augmented, virtual and mixed reality (AR/VR/MR), artificial intelligence and machine learning (AI/ML), advanced networking (such as 5G), IoT devices and sensors, as well as distributed ledger technology (such as blockchain).

“By vastly improving intuitive interactions and increasing our ability to deliver highly contextualized experiences—for businesses and consumers alike—the Spatial Web era will spark new opportunities to improve efficiency, communication, and entertainment in ways we are only beginning to imagine today. For forward-thinking leaders, it will create new potential for business advantage—and, of course, new risks to monitor,” according to the Deloitte article.

Web3, or the Spatial Web, will also be powered by much faster networks, including 5G, LTE and Wi-Fi 7, which will increase the speed at which we can collect and sort through data.


Web3 and big data

So what could this mean for big data analytics?

Web3 could help to tackle the issue of “dirty” data, which is inaccurate, inconsistent or incomplete—or non-compliant.

“Digital consent, opt-ins, and privacy notifications are the new norm in an increasingly consumer-centric business landscape. For that reason, non-compliance with privacy regulations like GDPR or CCPA end up costing organizations more in the long run if ignored,” according to The Pipeline.

At the same time, third-party data is regularly bought and sold on the open market through data brokers. This data can come in many forms, from email lists to consumer profiles, but there’s no guarantee it’s accurate or complete (or even compliant). Data brokers often piece data together from a variety of unrelated sources, which leads to poor data quality.

Cleaning this data is costly. Worse, dirty data can damage customer trust—and it’s hard to put a price tag on that.
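
To make ‘dirty data’ concrete, here is a minimal sketch of the kind of basic quality checks analysts run today, using pandas. The column names, sample records and rules are made-up examples, not a prescribed framework.

```python
# A minimal sketch of basic data-quality checks with pandas.
# The column names and validation rules are hypothetical examples.
import pandas as pd

records = pd.DataFrame({
    "email":   ["ana@example.com", "ana@example.com", "not-an-email", None],
    "country": ["CA", "CA", "ca", "US"],
    "age":     [34, 34, -1, 29],
})

issues = {
    "missing_email":     int(records["email"].isna().sum()),
    "duplicate_rows":    int(records.duplicated().sum()),
    # entries without an '@' (missing values count as invalid here)
    "invalid_email":     int((~records["email"].str.contains("@", na=False)).sum()),
    "negative_age":      int((records["age"] < 0).sum()),
    # country codes that differ only by letter case
    "inconsistent_case": int(records["country"].nunique()
                             - records["country"].str.upper().nunique()),
}
print(issues)
```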

That’s where Web3 comes in. Web3 is based on blockchain, and a blockchain ledger is immutable: once data is recorded, it can’t be altered or tampered with.
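
To illustrate why immutability matters, here is a minimal, simplified sketch of a hash-chained ledger in Python: each record stores the hash of the one before it, so altering any earlier entry is immediately detectable. A real blockchain adds distributed consensus on top of this idea.

```python
# A minimal sketch of a hash-chained ledger: each block stores the hash of the
# previous block, so altering any earlier record breaks every hash that follows.
# This is a simplification for illustration, not a real blockchain.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def verify(chain: list) -> bool:
    prev = "genesis"
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev_hash": prev}):
            return False
        prev = block["hash"]
    return True

ledger: list = []
append_block(ledger, {"customer": "c-001", "consent": "opt-in"})
append_block(ledger, {"customer": "c-002", "consent": "opt-out"})

print(verify(ledger))                       # True: untouched ledger verifies
ledger[0]["data"]["consent"] = "opt-out"    # tamper with an earlier record
print(verify(ledger))                       # False: tampering is detectable
```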

“Blockchain eliminates third-party involvement, just like how cryptocurrencies have largely eliminated banks’ participation in monetary transactions. Similarly, Web3 is expected to eradicate third-party data brokers by enabling P2P connections and decentralized systems that allow businesses to connect with their customers directly,” according to an article in Entrepreneur.

With Web3, businesses will have direct access to accurate, reliable data—right from the source. This will also help to boost transparency and protect consumer privacy by eliminating data brokers from the equation.

Indeed, down the road we might even see synthetic data generation, which would help to combat data privacy concerns. “In the future, we may see large enterprises implementing projects to draw patterns and distributions from real data to generate a large volume of synthetic data for machine learning model training,” according to the Forbes Technology Council.
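
As a rough illustration of that idea, the sketch below fits simple distributions to “real” data and samples a larger synthetic set from them. Production approaches use far richer generative models, and the column names here are hypothetical.

```python
# A minimal sketch of synthetic data generation: fit simple per-column
# distributions from "real" data, then sample new records from them.
# Real projects use richer methods (copulas, GANs, etc.).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Stand-in for real customer data.
real = pd.DataFrame({
    "monthly_spend": rng.gamma(shape=2.0, scale=50.0, size=1_000),
    "sessions_per_week": rng.poisson(lam=4.0, size=1_000),
})

# Estimate simple distribution parameters from the real data ...
spend_mu, spend_sigma = real["monthly_spend"].mean(), real["monthly_spend"].std()
session_rate = real["sessions_per_week"].mean()

# ... and sample a larger synthetic set for model training.
synthetic = pd.DataFrame({
    "monthly_spend": rng.normal(spend_mu, spend_sigma, size=10_000).clip(min=0),
    "sessions_per_week": rng.poisson(lam=session_rate, size=10_000),
})

print(synthetic.describe().round(1))
```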


The evolution of business intelligence

In the early days of business intelligence, you required a background in IT, analytics or data science to use BI tools. For business users, it meant sending a query to the IT department and waiting days, sometimes weeks, to get a response.

Most business users relied instead on static spreadsheets to compile and analyze data—which was typically a painstaking process prone to error and duplication. And decisions were often based on old, sometimes outdated, data.

But BI has evolved along with the Web. Over the past decade, we’ve seen the rise of big data and distributed computing. Thanks to the cloud, organizations large and small have access to scalable computing power for near real-time processing and near instantaneous insights.

And, as static spreadsheets have evolved into interactive business dashboards and visualizations, business users are now able to answer their own queries—with minimal hand-holding from the IT department.

Self-service BI has emerged to empower those users with the ability to design and share interactive dashboards and reports. Today’s tools are intuitive and easy to use, and users don’t require a background in statistical analysis to make timely decisions on in-the-moment data.

We’ve also seen the rise of embedded analytics, which allows users to analyze data without leaving the applications they use every day, such as customer relationship management (CRM), enterprise resource planning (ERP) and human resources (HR) systems.

Embedded analytics is a game-changer for BI, since it’s seamlessly integrated into a user’s everyday workflow. And the market for embedded analytics is growing: It’s projected to reach $77.52 billion by 2026, according to Allied Market Research, making it one of the fastest-growing BI trends.
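
The sketch below shows one common embedding pattern in simplified form: the host application signs a short-lived token and drops the dashboard into an iframe. The endpoint, claims and signing scheme are hypothetical placeholders, not any particular vendor’s API.

```python
# A minimal sketch of one common embedded-analytics pattern: the host
# application signs a short-lived embed token and renders the dashboard in an
# iframe. The URL, claims and signing scheme below are hypothetical, not any
# specific vendor's API (which would typically use JWTs or SDK helpers).
import hashlib
import hmac
import json
import time
from urllib.parse import quote

EMBED_SECRET = b"shared-secret-with-the-bi-platform"            # hypothetical
DASHBOARD_URL = "https://bi.example.com/embed/sales-overview"   # hypothetical

def make_embed_token(user_id: str, dashboard_id: str, ttl_seconds: int = 300) -> str:
    claims = {"user": user_id, "dashboard": dashboard_id,
              "expires": int(time.time()) + ttl_seconds}
    payload = json.dumps(claims, sort_keys=True)
    signature = hmac.new(EMBED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{signature}"

def embed_iframe_html(user_id: str) -> str:
    token = quote(make_embed_token(user_id, "sales-overview"))
    return f'<iframe src="{DASHBOARD_URL}?token={token}" width="100%" height="600"></iframe>'

print(embed_iframe_html("analyst-42"))
```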

Not only does this democratize data, it helps to improve data literacy, create a data-driven culture and save IT resources.


So what’s next?

Artificial intelligence (AI) and machine learning (ML) will play a big role in the future of BI. An important component of AI/ML is natural language processing (NLP), which allows computers to ‘understand,’ interpret and learn from human language, either in written or spoken form.

Another component is computer vision, which uses cameras to gain a high-level understanding of visual information such as images and videos. Add adaptive AI into the mix, which can learn from patterns and adjust behaviour, and business intelligence is set to become even more intelligent.

But Web3 isn’t just about advanced technologies. Web3 has the potential to change the way we interact with data.

Imagine slipping on a virtual reality headset to meet up with your colleagues in the metaverse—colleagues that could be physically located in other cities or even other countries—where you could explore data sets together as a team.

We’re already starting to see the democratization of data, with self-service tools making analytics available to all—including non-technical users. In a Web3 world, that democratization will permeate organizations even further and contribute to building data-driven cultures.

But we still have a long way to go.

In Forrester’s 2021 Data and Analytics Survey, only 7 per cent of organizations claimed to be advanced, insights-driven businesses. “Our experts believe that this ‘pause’ is the calm before the storm, and that the decisions made in 2023 will fuel or extinguish a world of insights opportunity,” according to Forrester.

And in the coming year, “data teams that can configure their content and communications best will outperform (and outlast) their competitors in the long run,” says Forrester senior analyst Kim Herrington. For some, she says, “this ride will be a boring kiddie coaster; while for others, it will be a high-terror twister of a ride.”


BI and the metaverse

Perhaps one of the biggest game changers on the horizon is the metaverse. While still in its infancy, it holds a lot of potential for BI—not just in creating more data for analysis, but also in offering new spatial analytic techniques.

According to KPMG, the metaverse introduces a new era in big data analytics. “The integration of structured and unstructured data in 3D and application of advanced spatial analytic techniques will enable new customer and market insights, opening digital economies like never seen before,” according to Aranka Anema, director of science & innovation with KPMG in Canada.

The metaverse is a connected, three-dimensional environment that brings together physical and virtual worlds, accessible through devices such as virtual reality headsets and augmented reality glasses. While we’re still waiting to see exactly how the metaverse will evolve, there’s no doubt it will impact the way we collect data and use analytics.

“Today, it’s possible for companies to learn actionable insights surrounding swathes of customers as they browse online, but in the age of the metaverse, the sheer volume of data that individuals will produce will vastly multiply,” according to an article in VentureBeat.

But it’s not just about the sheer volume of data. As VentureBeat points out: “As individuals ditch their keyboards for virtual avatars in an immersive virtual environment, we’re likely to see far greater volumes of dependence on big data analysis in building predictive models and decision-making activities.”

Consider that “just 20 minutes of VR use can produce about 2 million unique data elements,” according to an article on Spiceworks. “Think about all the data that can then be generated by AR and MR use.”


Collaborative BI and augmented insights

The metaverse will give rise to more collaborative BI, which takes self-service to the next level. By combining collaboration tools with cloud-based BI tools, a business user will be able to connect to a larger set of users—not just to share insights from their own analyses, but to explore data together at various stages of work.

Augmented insights or experiences will allow us to interact with data in ways that aren’t possible in a two-dimensional world—like a computer screen. And this will eventually allow us to explore data in a virtual environment, like the metaverse.

“Whilst fully immersive virtual reality may be a step too far for everyday business users, it is not too difficult to imagine wearable tech interacting with your dashboard of choice to provide a depth of insight never seen before. The prospect of groups of people sharing the same augmented experience will take team working and even customer interactions to a whole new level,” according to a blog by Simon Livings, head of data & analytics for national markets with KPMG in the UK.

Dashboards will become more dynamic, generating insights anytime, anywhere.

Software engineer and data science enthusiast Mike Schnettler believes that the metaverse will advance data visualization techniques, since “virtual reality data visualizations are superior to the standard two-dimensional data visualizations that we use today,” he writes. “While traditional data visualization techniques are limited to an X-Y plane, data visualized in virtual reality can leverage the Z dimension.”
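
As a small taste of that extra dimension, the sketch below plots three measures on X, Y and Z axes with matplotlib and synthetic data; a VR environment would let you step inside a plot like this rather than rotate it on a flat screen.

```python
# A minimal sketch of leveraging a third (Z) dimension in a data visualization,
# using matplotlib's 3D axes and synthetic data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
revenue = rng.normal(100, 20, 200)        # X: revenue per account
engagement = rng.normal(50, 10, 200)      # Y: engagement score
churn_risk = rng.random(200)              # Z: modelled churn risk

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(revenue, engagement, churn_risk, c=churn_risk, cmap="viridis")
ax.set_xlabel("Revenue")
ax.set_ylabel("Engagement")
ax.set_zlabel("Churn risk")
plt.show()
```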


Data on the edge

While we’ll produce exponentially more data in the metaverse, we’re already producing exponentially more data via the Internet of Things (IoT) and Industrial Internet of Things (IIoT). It’s predicted that by 2025—as supply constraints ease—there will be 27 billion connected IoT devices around the world, according to IoT Analytics.

Various industries—from manufacturing and logistics to healthcare and defense—are already collecting data from connected sensors. But if you’re collecting a continuous stream of data from thousands of sensors in real time, you need a way to analyze that data, which is why we’ll see more BI dashboards connected to IoT solutions.

Today, legacy infrastructure makes it difficult to ingest, process and analyze data in real time, especially for intensive analyses. But vast networks of connected devices, capable of transmitting data and insights, will change this, according to McKinsey.

By 2025, McKinsey says even the most sophisticated advanced analytics will be “reasonably available to all organizations as the cost of cloud computing continues to decline and more powerful ‘in-memory’ data tools come online (for example, Redis, Memcached). Altogether, this enables many more advanced use cases for delivering insights to customers, employees, and partners.”

With IoT, data is typically collected and processed at the ‘edge’ of the network, with aggregated results stored in the cloud rather than in on-premises data centers. Keeping computing power and storage closer to where data is generated reduces latency and network-related issues. And that’s good news for big data analytics.
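
The sketch below shows that edge pattern in miniature: raw sensor readings are summarized locally, and only compact aggregates (and alerts) are forwarded upstream. The sensor names and thresholds are hypothetical.

```python
# A minimal sketch of edge-style pre-aggregation: summarize raw sensor readings
# locally and forward only compact aggregates upstream, cutting bandwidth and
# latency. Sensor names and thresholds are hypothetical.
from collections import defaultdict
from statistics import mean

def aggregate_window(readings: list[dict]) -> list[dict]:
    """Collapse one time window of raw readings into per-sensor aggregates."""
    by_sensor = defaultdict(list)
    for r in readings:
        by_sensor[r["sensor_id"]].append(r["temperature_c"])
    return [
        {"sensor_id": sensor,
         "count": len(values),
         "avg_temp_c": round(mean(values), 2),
         "max_temp_c": max(values),
         "alert": max(values) > 80.0}          # flag overheating at the edge
        for sensor, values in by_sensor.items()
    ]

window = [
    {"sensor_id": "press-07", "temperature_c": 71.2},
    {"sensor_id": "press-07", "temperature_c": 84.9},
    {"sensor_id": "pump-03",  "temperature_c": 42.5},
]
# A few summary rows travel upstream instead of thousands of raw points.
print(aggregate_window(window))
```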

Still, 27 billion connected devices (and counting) will produce a lot of data, which is why automation is so critical.

In a Web3 world, we could start to see self-verifying data. “Whoever taps into the global data fabric has to know that data is authentic and that nobody tampered with it. The end result would be like the little SSL lock icon but for data—and actually secure,” according to an article in Forbes.

This data fabric—sometimes referred to as data mesh—is about connecting and interconnecting this data “in a way that machines can navigate and understand it unaided.” But, as the Forbes article points out, “we’re just barely getting started.”

A data fabric platform can help to automate ingestion, discovery and integration of data. And if the promise of self-verifying data in a Web3 world becomes reality, it will be a game-changer for BI.
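
As a simplified illustration of what ‘self-verifying’ could mean in practice, the sketch below has a consumer recompute a dataset’s fingerprint and compare it with the digest its publisher advertised. In a Web3-style data fabric, that digest would live in a shared, tamper-evident ledger; here everything is local for illustration.

```python
# A minimal sketch of "self-verifying" data: a consumer recomputes a dataset's
# fingerprint and compares it to the digest the publisher advertised.
import hashlib

def fingerprint(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

# Publisher side: release a dataset together with its digest.
dataset = b"customer_id,region,ltv\nc-001,EMEA,1200\nc-002,APAC,815\n"
published_digest = fingerprint(dataset)

# Consumer side: verify what arrived before trusting it.
received = dataset
print(fingerprint(received) == published_digest)    # True: data is authentic

tampered = received.replace(b"1200", b"9999")
print(fingerprint(tampered) == published_digest)    # False: tampering detected
```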


Prescriptive, adaptive analytics

Predictive analytics uses both current and historical data to identify patterns and make predictions about the future, from inventory and maintenance to customer behaviour. But the next evolution of this will be ‘prescriptive’ and adaptive analytics based on machine learning and natural language processing.

While predictive analytics forecasts future probabilities, prescriptive analytics examines the data and offers options on how to achieve a goal or solve a problem using technologies such as simulations, neural networks, complex event processing and machine learning.

So it’s not just a prediction of what might happen, but a prescribed set of potential actions or solutions—helping business users be proactive instead of reactive.
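
A toy example makes the distinction concrete: the sketch below first forecasts demand from recent history (the prediction), then searches candidate order quantities for the lowest expected cost (the prescription). The numbers and the cost model are invented for illustration.

```python
# A minimal sketch of the predictive-to-prescriptive step: forecast demand from
# history (predictive), then choose the order quantity with the lowest expected
# cost (prescriptive). Numbers and cost model are made up.
import numpy as np

# Predictive: a naive trend forecast from recent weekly demand.
weekly_demand = np.array([120, 135, 128, 141, 150, 158])
trend = np.polyfit(np.arange(len(weekly_demand)), weekly_demand, deg=1)
forecast = float(np.polyval(trend, len(weekly_demand)))   # next week's expected demand

# Prescriptive: evaluate candidate order quantities against a simple cost model.
HOLDING_COST = 2.0      # per unit left over
STOCKOUT_COST = 9.0     # per unit of unmet demand

def expected_cost(order_qty: float, demand: float) -> float:
    leftover = max(order_qty - demand, 0)
    shortfall = max(demand - order_qty, 0)
    return HOLDING_COST * leftover + STOCKOUT_COST * shortfall

candidates = range(100, 201, 5)
best = min(candidates, key=lambda q: expected_cost(q, forecast))

print(f"Forecast demand: {forecast:.0f} units")
print(f"Recommended order quantity: {best} units")
```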

Some of these actions or solutions could be automated through augmented analytics. While we typically glean insights from data through queries, augmented analytics uses machine learning and natural language processing to generate insights automatically.

This doesn’t mean machines will replace the human beings behind the queries; rather, it will make it easier and faster to generate real-time insights from operational data, while providing possible solutions that a business user could then share with their team as part of the decision-making process. And data scientists can focus on more complex tasks.

Augmented intelligence will allow users to do many of the tasks currently performed by data scientists—from augmented data preparation to guided analysis to smart predictions. It will also help to reduce tedious manual processes around extracting and transforming data.

This could potentially take the form of a BI data assistant—like Alexa or Siri, but for data analysis. By integrating this capability into a BI platform, users could query their AI-based ‘assistant’ to generate answers and insights. This will help to further democratize data, making it even easier for non-technical users to benefit from self-service analytics.
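
In miniature, such an assistant maps a plain-language question onto a query. The sketch below uses simple keyword matching as a stand-in for real natural language processing, and the dataset and column names are hypothetical.

```python
# A minimal sketch of a BI 'data assistant': map a natural-language question to
# a simple aggregation over a DataFrame. Keyword matching stands in for real
# NLP parsing; column names are hypothetical.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["EMEA", "EMEA", "APAC", "APAC", "AMER"],
    "revenue": [120_000, 95_000, 80_000, 110_000, 150_000],
})

def ask(question: str) -> pd.Series:
    q = question.lower()
    metric = "revenue"                     # the only metric in this toy model
    agg = "mean" if ("average" in q or "mean" in q) else "sum"
    return sales.groupby("region")[metric].agg(agg)

print(ask("What is the total revenue by region?"))
print(ask("Show me average revenue by region"))
```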


Hyperautomation

Automation is already commonplace in the analytics space—in fact, it’s essential to finding the needle in a haystack of data. But the new buzzword, at least according to Gartner, is hyperautomation. The analyst firm defines hyperautomation as a “business-driven, disciplined approach that organizations use to rapidly identify, vet and automate as many business and IT processes as possible.”

Hyperautomation goes beyond automating manual, repetitive tasks; it’s about automating business processes through advanced technologies, including artificial intelligence, machine learning, robotic process automation (RPA), intelligent business process management suites (iBPMS) and low-code/no-code tools, among others.

“Technologies to automate content ingestion, such as signature verification tools, optical character recognition, document ingestion, conversational AI and natural language technology (NLT) will be in high demand. Organizations will need such tools to automate the digitalization and structuring of data and content,” according to Gartner.

This is good for digital transformation, but it’s also good for the pocketbook: Gartner expects that by 2024, hyperautomation combined with redesigned operational processes will help organizations lower their operational costs by 30 per cent.


Composable data and analytics

The events of the past couple of years, including a global pandemic, geopolitical strife and economic instability, mean that historical data isn’t necessarily reliable for predictive and prescriptive analytics.

Composable data analytics is a new architectural approach that makes analytics modular, like building blocks for tailored analytics, using low- and no-code development platforms. This makes analytics capabilities more flexible and easier to integrate into applications.

“Composable data analytics is a process by which organizations combine and consume analytics capabilities from various data sources across the enterprise for more effective and intelligent decision-making. Such tools can provide greater agility than traditional approaches and feature reusable, swappable modules that can be deployed anywhere, including containers,” according to an article in Forbes.
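
The sketch below shows the composability idea at its simplest: small analytics steps that share one interface and can be swapped or recombined per application. The step names and sample data are hypothetical.

```python
# A minimal sketch of composable analytics: small steps that share one
# interface (DataFrame in, DataFrame out) and can be swapped or recombined.
from typing import Callable
import pandas as pd

Step = Callable[[pd.DataFrame], pd.DataFrame]

def drop_incomplete(df: pd.DataFrame) -> pd.DataFrame:
    return df.dropna()

def add_margin(df: pd.DataFrame) -> pd.DataFrame:
    return df.assign(margin=df["revenue"] - df["cost"])

def top_products(df: pd.DataFrame) -> pd.DataFrame:
    return df.nlargest(2, "margin")

def compose(*steps: Step) -> Step:
    def pipeline(df: pd.DataFrame) -> pd.DataFrame:
        for step in steps:
            df = step(df)
        return df
    return pipeline

data = pd.DataFrame({
    "product": ["A", "B", "C", "D"],
    "revenue": [100, 250, 180, None],
    "cost":    [60, 200, 90, 40],
})

# One application composes these modules for an executive view ...
exec_view = compose(drop_incomplete, add_margin, top_products)
print(exec_view(data))

# ... another reuses only the pieces it needs.
ops_view = compose(drop_incomplete, add_margin)
print(ops_view(data))
```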

Gartner predicts that by 2023, 60 per cent of organizations “will compose components from three or more analytics solutions to build business applications infused with analytics that connect insights to actions.”


The rise of DataOps

DevOps is a methodology for agile software development that shortens development lifecycles, helping organizations rapidly deploy new applications. In a similar vein, DataOps is an architectural framework—part of a broader data fabric—that can help to drive faster insights.

As defined by Gartner, the goal of DataOps “is to deliver value faster by creating predictable delivery and change management of data, data models and related artifacts. DataOps uses technology to automate the design, deployment and management of data delivery with appropriate levels of governance, and it uses metadata to improve the usability and value of data in a dynamic environment.”

The concept of DataOps has been around for a few years. But combining DataOps with advances in composability and artificial intelligence will ensure data is always at the ready for analysis, providing faster time to value.

“Data-centric organizations will likely make their DevOps teams work with data scientists and engineers to provide the tools, processes and organizational structures to support the data business arm. Fundamentally, the goal of DataOps is to deliver new insights with increasing velocity and provide an observability framework to monitor the health of data and its usability by reducing data downtime,” according to the Forbes Technology Council.
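
As a minimal illustration of that observability idea, the sketch below runs automated freshness and volume checks and flags ‘data downtime’ before stale or incomplete data reaches a dashboard. The table name and thresholds are hypothetical.

```python
# A minimal sketch of DataOps-style observability: automated freshness and
# volume checks that flag "data downtime" before stale or incomplete data
# reaches a dashboard. Table names and thresholds are hypothetical.
from datetime import datetime, timedelta, timezone

def check_health(table: str, last_loaded: datetime, row_count: int,
                 max_age: timedelta, min_rows: int) -> dict:
    age = datetime.now(timezone.utc) - last_loaded
    issues = []
    if age > max_age:
        issues.append(f"stale: last load {age} ago (limit {max_age})")
    if row_count < min_rows:
        issues.append(f"low volume: {row_count} rows (expected >= {min_rows})")
    return {"table": table, "healthy": not issues, "issues": issues}

report = check_health(
    table="sales_daily",
    last_loaded=datetime.now(timezone.utc) - timedelta(hours=26),
    row_count=150,
    max_age=timedelta(hours=24),
    min_rows=1_000,
)
print(report)   # flags both staleness and an unexpectedly small load
```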


Data-as-a-Service

At the same time, we’ll see collaboration expand beyond organizational boundaries, with data and insights shared with other organizations and industry groups—even competitors. In today’s world, data remains siloed, even within organizations, and data sharing with external stakeholders is limited. But this is changing.

“Organizations with homegrown intellectual property developed through decades of research and innovation, such as those in financial services or the energy sector, will now look to market their tools to their peers. This will prompt companies to build data-as-a-service (DaaS) platforms with a SaaS-like experience,” according to the Forbes Technology Council.

In the not-too-distant future, large organizations will use data-sharing platforms within and between organizations.

“Data-driven companies [will] actively participate in a data economy that facilitates the pooling of data to create more valuable insights for all members,” according to McKinsey. And by 2025, the global management consulting firm says data marketplaces will enable the exchange, sharing and supplementation of data, “ultimately empowering companies to build truly unique and proprietary data products and gain insights from them.”


Quantum computing

Quantum computing is very much in its infancy—it’s not expected to go mainstream for another decade, possibly longer. But organizations should keep it on their radar.

Indeed, according to Accenture, the industrialization of quantum computing has already begun to transform data analytics and AI, with benefits expected in two to three years. “The primary challenge will still be to identify the business problems and use cases that can be translated efficiently into quantum-computable structures leveraging quantum computable routines. This is where a clear advantage for first movers and early adopters will arise,” says Accenture.

By harnessing quantum mechanics, quantum computers use qubits that can exist in a superposition of states. So, while a classical bit has a value of 0 or 1, a qubit (or quantum bit) can represent both 0 and 1 at the same time.

So, unlike today’s computers, which work through possibilities one at a time, quantum computers can, in effect, explore many possibilities at once. That means they’re capable of solving certain problems that would otherwise take an enormous amount of time and compute resources.
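
To make superposition a little less abstract, here is a minimal classical simulation with numpy: a qubit’s state is a pair of amplitudes over 0 and 1, and a Hadamard gate turns a definite 0 into an equal mix of both. This only illustrates the idea; it isn’t how real quantum hardware is programmed.

```python
# A minimal classical simulation of superposition: a qubit's state is a vector
# of amplitudes over |0> and |1>; a Hadamard gate puts a definite 0 into an
# equal superposition of both outcomes.
import numpy as np

ket0 = np.array([1.0, 0.0])                 # a qubit that is definitely 0
hadamard = np.array([[1, 1],
                     [1, -1]]) / np.sqrt(2)

superposed = hadamard @ ket0                # amplitudes for |0> and |1>
probabilities = np.abs(superposed) ** 2     # chance of measuring 0 or 1

print(superposed)       # [0.707 0.707]
print(probabilities)    # [0.5 0.5] -- both outcomes equally likely until measured
```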

They’re also capable of storing vast amounts of information in qubits while using less energy—which could help to ease the power demands of storing the world’s exponentially growing data.

When it comes to data analytics, the possibilities are truly mind-bending. For example, quantum computing could easily handle big data across scattered data sets, rapidly locating patterns by viewing everything in a database or data lake simultaneously. What could potentially take years on a classical computer could be done in mere seconds on a quantum computer.

Quantum computing could also boost the speed and performance of natural language processing, machine learning and predictive analytics. But qubits are notoriously fragile, so there’s a long journey ahead.

The near future will likely see hybrid methods, which will be “orchestrated efficiently on a combination of quantum computers interfaced with classical computers,” according to Accenture. “This approach uses the best of both worlds, combining easy pre-processing and highly flexible cloud setup with new quantum algorithms.”


The evolution of data-driven organizations

The evolution of BI and data-driven organizations isn’t just about technology. Ultimately, it’s about improving the way we interact with data to make better informed, timelier decisions. It’s about processes and workflows.

But while there’s a lot of talk about becoming a data-driven organization and building a culture of innovation, few organizations are there yet.

In many cases, efforts to become data-driven are applied sporadically, in combination with more traditional approaches. Data is often siloed and isolated in different departments, which limits the ability of that data to contribute to data-driven analyses.

So what exactly is a data-driven enterprise? According to Accenture, it’s one that has integrated data analysis “into the core of its business processes” and “uses the insights it derives from this data to transform its business processes.”

Key characteristics include “a focus on automation, continual improvement and optimization, the ability to anticipate internal and external changes, an adaptive mindset, and, most of all, a culture that fully embraces data and its potential.”

This has tangible benefits: Accenture reports that, on average, data-driven enterprises generate more than 30 per cent growth per year. Plus, there are innumerable intangible benefits, such as the ability to rapidly respond to changes in the market, become more competitive and boost customer satisfaction.

By 2025, employees will “naturally and regularly leverage data to support their work,” according to McKinsey. “Rather than defaulting to solving problems by developing lengthy—sometimes multiyear—road maps, they’re empowered to ask how innovative data techniques could resolve challenges in hours, days or weeks.”

This will become increasingly important as data-driven organizations are able to act on business intelligence almost instantaneously, leaving their competitors in the dust.

“Business happens in real time. People buy things in real time, banking happens in real time, goods are shipped in real time, and bad players try to access your data in real time,” says IDC research manager Amy Machado in a company blog post.

“Enterprise intelligence means making decisions based on data, so don’t we want to use the freshest data available? Companies with higher levels of enterprise intelligence report greater revenue growth and customer acquisition,” writes Machado.

While it’s still early days, the evolution of business intelligence in a Web3 world will help to democratize data, so all users—regardless of their technical abilities—will be able to benefit from advanced analytics and business intelligence. And businesses can start the journey toward becoming truly data-driven.

Vawn Himmelsbach

Vawn Himmelsbach is a writer and editor specializing in enterprise IT, writing for national newspapers and technology trade magazines on everything from AI to zero-day threats. She also spent three years working abroad as an Asian correspondent, covering all things tech.
