While OpenAI has focused its efforts on text and images, iGenius has been dedicated to advancing GPT in the realm of numerical data.

Within five days of its launch, ChatGPT, the AI-powered chatbot developed by OpenAI, attracted more than 1 million users, and within two months its user base grew to an estimated 100 million. This remarkable surge in popularity drew significant attention from media and consumers alike. The software's appeal lies in its capacity to provide human-like responses across a wide range of tasks, from generating long-form content and engaging in in-depth conversations to searching documents, performing analyses, and more.

Uljan Sharka, the CEO of iGenius, sees transformative potential for generative AI in the business world: for the first time, it enables true democratization of data. GPT, which stands for Generative Pretrained Transformer, is a family of language models pretrained on massive text corpora (reportedly around 45 terabytes of text in GPT-3's case) and, for ChatGPT, further refined with supervised fine-tuning and reinforcement learning from human feedback to power its content creation capabilities.

However, Sharka believes that the potential of generative AI extends beyond content generation to addressing essential data-related queries in the business domain. Historically, data, analytics, and even the concept of “data democratization” have primarily been geared toward individuals with strong data-related skills. This has left business users facing barriers when trying to access the information necessary for data-driven decision-making. People don’t want just raw data; they seek actionable business insights. The opportunity today lies in shifting the user interface towards language-based interfaces that humanize data and make it more people-centric.

Nevertheless, creating an effective interface is only a small part of the equation for a comprehensive system aimed at integrating, certifying, safeguarding, equalizing, and facilitating access to business-critical information. This is where Composite AI comes into play, as it combines data science, machine learning, and conversational AI into a unified system.

Uljan Sharka likens Composite AI to the “iPhone of the category,” providing an integrated and safe experience. This approach is seen as the key to realizing the potential impact of generative AI within the enterprise.

As the divide between consumer-facing (B2C) and business-focused (B2B) applications has grown, business users have often been left behind. B2C applications receive significant investments to create user-friendly, intuitive experiences, while at work, valuable data remains locked away in complex and underutilized dashboards.

Generative AI can bridge this gap by connecting disparate data sources worldwide and indexing them into an organization's "private brain." Through a combination of algorithms, natural language processing, and user-generated metadata, sometimes described as advanced conversational AI, data quality can be enhanced and elevated, an approach Gartner refers to as "conversational analytics."

This virtualization of complexity unlocks the potential to cleanse, manipulate, and serve data for a wide range of use cases, from cross-correlation to establishing a single source of truth for specific departments.
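The routing step behind conversational analytics can be pictured as matching a user's question against user-generated metadata to pick the right underlying source. The sketch below is purely illustrative (the catalog, synonyms, and source names are invented, not an actual iGenius API):

```python
# Hypothetical metadata catalog: each topic carries user-generated
# synonyms and points at a governed data source.
CATALOG = {
    "sales": {"synonyms": {"revenue", "transactions", "sales"},
              "source": "warehouse.sales"},
    "marketing": {"synonyms": {"campaign", "clicks", "marketing"},
                  "source": "warehouse.campaigns"},
}

def route_question(question: str) -> str:
    """Return the data source whose metadata best matches the question."""
    words = set(question.lower().split())
    best, best_score = "unknown", 0
    for topic, meta in CATALOG.items():
        score = len(words & meta["synonyms"])  # keyword overlap
        if score > best_score:
            best, best_score = meta["source"], score
    return best
```

A production system would use embeddings and entity resolution rather than keyword overlap, but the principle is the same: metadata, not the user, does the work of finding the data.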

On the backend, generative AI streamlines system integration, leveraging natural language to create an AI brain composed of private data sources. No-code interfaces streamline that integration and democratize data science even before business users begin consuming the information. This approach accelerates innovation and significantly reduces the time and cost involved in identifying and developing use cases.

On the frontend, business users can engage in natural language conversations with data, receiving actionable insights in plain language. The goal is to further enhance the user experience, making it more consumer-oriented. Instead of a reactive, single-task platform that responds to text-based queries, it can become multi-modal, presenting charts and visual representations to enhance data comprehension. Over time, the AI can adapt to user preferences and proactively provide the knowledge users need, similar to the personalized experiences offered by services like Netflix or Spotify.
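A multi-modal answer of the kind described above can be thought of as a plain-language insight paired with a chart specification the frontend can render. The shape below is an assumption for illustration, not an actual product interface:

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    text: str                      # plain-language answer
    chart_type: str = "none"       # e.g. "bar", "line"
    series: dict = field(default_factory=dict)

def answer(question: str, data: dict) -> Insight:
    """Toy responder: attach a chart whenever the data is numeric."""
    if data and all(isinstance(v, (int, float)) for v in data.values()):
        return Insight(text=f"Results for '{question}'",
                       chart_type="bar", series=data)
    return Insight(text=f"No chartable data for '{question}'")
```

Returning a structured object rather than raw text is what lets the same answer be spoken, displayed as prose, or drawn as a chart.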

Generative AI has the potential to transform the way businesses leverage data. By integrating data from various sources and enabling real-time analysis and decision-making, it empowers users across different departments to make impactful decisions and monitor performance in real time.

For instance, by connecting marketing data with sales data, organizations can monitor campaigns in real time and correlate their results with transactions, conversions, and sales cycles. This provides clear performance indicators and allows for immediate campaign adjustments based on real-time insights. Additionally, the interface can suggest further inquiries and questions to deepen users’ understanding of the data.
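The marketing-to-sales correlation described above amounts to a join between two sources followed by a simple ratio. A minimal sketch, with invented column names and figures:

```python
# Hypothetical campaign and transaction records from two sources.
campaigns = [
    {"campaign_id": 1, "channel": "email", "impressions": 10_000},
    {"campaign_id": 2, "channel": "social", "impressions": 25_000},
]
transactions = [
    {"campaign_id": 1, "amount": 120.0},
    {"campaign_id": 1, "amount": 80.0},
    {"campaign_id": 2, "amount": 300.0},
]

def revenue_per_mille(campaigns, transactions):
    """Join sales totals onto campaigns: revenue per 1,000 impressions."""
    totals = {}
    for t in transactions:
        totals[t["campaign_id"]] = totals.get(t["campaign_id"], 0.0) + t["amount"]
    return {
        c["campaign_id"]: totals.get(c["campaign_id"], 0.0) / c["impressions"] * 1000
        for c in campaigns
    }
```

The point of a conversational layer is that a marketer asks "which campaign earns the most per impression?" and never sees this join at all.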

At companies like Enel, Italy’s leading energy company with a focus on sustainability, engineers use generative AI to consume real-time IoT data, combining it with financial data from production plants. They engage in real-time conversations with this data, facilitating preventive maintenance, activity planning, and budget comparisons. The synthesized information provided by the interface enables powerful operational analytics and immediate action.
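Preventive maintenance of the kind described, combining sensor readings with budget data, reduces to flagging assets that drift past an operating threshold. The sketch below is hypothetical (asset names, thresholds, and costs are all invented, not Enel data):

```python
THRESHOLD_C = 85.0  # assumed maximum safe operating temperature

def flag_for_maintenance(readings, budgets):
    """Return (asset, budgeted cost) pairs for assets above threshold.

    readings: asset name -> latest IoT temperature reading (Celsius)
    budgets:  asset name -> budgeted maintenance cost
    """
    return [
        (asset, budgets.get(asset, 0.0))
        for asset, temp in readings.items()
        if temp > THRESHOLD_C
    ]
```

Pairing the operational signal with the financial one in a single answer is what turns a raw alert into a plannable maintenance decision.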

Looking to the future of generative AI, it’s clear that ChatGPT has generated significant interest, but iGenius and OpenAI, both established in 2015, have taken distinct paths. While OpenAI focused on text-based applications, iGenius developed Crystal, a GPT for numerical data. Crystal’s private AI brain connects proprietary information to its machine learning model, allowing users to train it from scratch. It employs more sustainable small and wide language models, providing organizations with greater control over their intellectual property.

Furthermore, Crystal facilitates large-scale collaboration, enabling companies to leverage expertise and knowledge workers to certify data used for training models. This reduces bias at scale and results in more localized and hyper-personalized experiences. It also eliminates the need for prompt engineering, making it safer and easier to work with the data generated by these algorithms.

Uljan Sharka envisions a future of human-machine collaboration, where the knowledge possessed by individuals and traditional IT systems can be harnessed to reduce bias and create safer, more equitable, and more efficient virtual co-pilots in various domains. This approach aims to make data not just more abundant, but also more meaningful and actionable, ultimately driving impactful decision-making across organizations.