Todd Mostak
Feb 21, 2024

Introducing HeavyIQ: Conversational Analytics with your Data at Scale



The story of HEAVY.AI, from its origins as MapD, has always been about making it faster and easier to get answers from large datasets. By providing interactive, GPU-accelerated analytics over multi-billion-record tables, and surfacing that capability not only via SQL but also through an easy-to-use, no-code visual analytics interface, we have allowed users to dive deeper into their data than previously thought possible.

The advent of the conversational AI revolution, embodied in Large Language Model (LLM) technology, means we can now make asking questions and getting answers from large datasets even more effortless, and accessible to an even broader audience, than ever before. To this end, we’ve been hard at work integrating the power of conversational AI deep into our platform, and today we are excited to announce HeavyIQ, a new capability in the HEAVY.AI platform that for the first time allows users to ask questions and get answers from their data using natural language. Rather than rely on a third-party LLM, we’ve trained our own to handle a wide set of analytics use cases, including text-to-SQL and text-to-visualization, with state-of-the-art accuracy. The model runs locally on the same GPU infrastructure that powers the HEAVY.AI platform, meaning no concerns about privacy or metadata leakage.

HeavyIQ has the following advantages over using third-party LLMs like GPT-4 or Gemini Advanced to generate SQL:

  1. As mentioned above, HeavyIQ leverages a local LLM, which means users maintain full control and sovereignty over their metadata; it can even be deployed in a completely air-gapped environment if desired.
  2. By fine-tuning on tens of thousands of open and custom prompts, covering not only the text-to-SQL use case but also related analytics workflows such as natural language summaries of results, we achieve state-of-the-art accuracy (i.e. significantly higher than even GPT-4) for a wide variety of use cases, as measured both on standard benchmarks like Spider and on our own evaluations of more advanced SQL concepts like spatial and temporal joins, window functions, and histogram generation. These use cases are generally not found in open text-to-SQL datasets but are vital for advanced analytics, so particular care was taken to imbue our model with expertise in these areas.
  3. The fine-tuning process in #2 not only enables higher accuracy but also allows for smaller models, which, when run on dedicated hardware, deliver significantly lower latency than is possible with large foundation models like GPT-4. Whereas a full chain of prompts using GPT-4 could take 20-30 seconds, the same chain using HeavyIQ might take only 3-4 seconds, meaning users can ask questions of their data at conversational latencies.
  4. Dovetailing with #3, the GPU-accelerated speed of the HeavyDB SQL engine means not only that HeavyIQ-generated SQL can be executed and visualized in milliseconds, but also that the model can be fed metadata such as column ranges and top-k string values in near real time, even for vast tables (a sketch of such queries follows this list).
  5. By natively integrating with HEAVY.AI’s best-in-breed visual analytics capabilities, we let users immediately put query results in context by seeing them visualized, whether in a scatterplot, bar chart, time-series graph, or GPU-rendered map. Note, however, that we always provide the SQL and data sources used to generate any visualization, so users can easily validate the output of the LLM.
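
To make point 4 concrete, here is a minimal sketch of the kind of lightweight metadata queries a system like this might run to prime the model; the table and column names are illustrative assumptions, not HeavyIQ’s actual internals:

```sql
-- Column range for a timestamp column (assumed names).
SELECT MIN(tweet_time) AS min_time,
       MAX(tweet_time) AS max_time
FROM tweets;

-- Top-k most frequent values of a string column (again, assumed names).
SELECT source_app,
       COUNT(*) AS n
FROM tweets
GROUP BY source_app
ORDER BY n DESC
LIMIT 10;
```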

HeavyIQ will be generally available with our 8.0 release, coming later this spring, but we wanted to let you try it out for yourself beforehand via a new public demo. In a nod to our roots, which evolved from my grad school project exploring political attitudes in the Middle East using geocoded tweets, we are showcasing a dataset of approximately 400M geocoded US tweets from Twitter (now X; the data precedes the name change), spanning January 2020 to April 2021. In addition to the Twitter data, the database includes geographic metadata tables at the state and county level, as well as time-series datasets for Bitcoin, Dogecoin, and GameStop prices, allowing exploration of the relationship between what was said on Twitter and changes in the prices of these assets.

There are a number of fun things you can ask of the Twitter and other datasets provided in the demo, which you can find here.

For example, if you want to see regional linguistic variations, it’s as easy as asking a question like “Show the percentage of tweets in each state mentioning the word ‘wicked’.” While I expected the higher percentage in New England, I was surprised that Idaho seems to have embraced the term as well.
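
For the curious, here is a minimal sketch of the kind of SQL HeavyIQ might generate for this question; the table and column names (tweets, tweet_text, state_abbr) are assumptions, not the demo’s actual schema:

```sql
-- Percentage of tweets per state containing 'wicked' (case-insensitive match).
SELECT
  state_abbr,
  100.0 * SUM(CASE WHEN tweet_text ILIKE '%wicked%' THEN 1 ELSE 0 END) / COUNT(*)
    AS pct_wicked
FROM tweets
GROUP BY state_abbr
ORDER BY pct_wicked DESC;
```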

You can see patterns in the usage of a given term over the course of the week with a question like “Show the percentage of tweets by day of week and hour of day mentioning the word ‘mimosa’.” You’ll see it light up Sunday evening on the graph, but remember that the times here are in UTC, so this actually coincides with the time around Sunday brunch.
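
A question like this plausibly maps to a day-of-week by hour-of-day aggregation along these lines (again a hedged sketch with assumed names; note that EXTRACT day-of-week numbering varies by dialect):

```sql
-- Percentage of tweets mentioning 'mimosa', bucketed by day of week and hour (UTC).
SELECT
  EXTRACT(DOW FROM tweet_time) AS day_of_week,   -- e.g. 0 = Sunday in PostgreSQL-style dialects
  EXTRACT(HOUR FROM tweet_time) AS hour_of_day,  -- hours are in UTC, as noted above
  100.0 * SUM(CASE WHEN tweet_text ILIKE '%mimosa%' THEN 1 ELSE 0 END) / COUNT(*)
    AS pct_mimosa
FROM tweets
GROUP BY EXTRACT(DOW FROM tweet_time), EXTRACT(HOUR FROM tweet_time)
ORDER BY day_of_week, hour_of_day;
```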

When it makes sense, HeavyIQ can also generate natural language answers to questions for easier interpretability. Here we see that the lowest percentage of tweets was sent from the Foursquare app in April 2020, likely due to the pandemic lockdowns that were taking full effect that month.
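
Under the hood, a question like this might reduce to a monthly share calculation; a sketch assuming a source_app column recording the client each tweet was posted from:

```sql
-- Monthly share of tweets posted from the (assumed) 'Foursquare' client.
SELECT
  DATE_TRUNC(MONTH, tweet_time) AS tweet_month,
  100.0 * SUM(CASE WHEN source_app = 'Foursquare' THEN 1 ELSE 0 END) / COUNT(*)
    AS pct_foursquare
FROM tweets
GROUP BY DATE_TRUNC(MONTH, tweet_time)
ORDER BY tweet_month;
```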

You can also leverage the power of the HEAVY.AI Rendering Engine to plot large geospatial result sets, here showing where tweets mentioned the word “earthquake”.
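
The underlying query for such a map can be as simple as selecting point coordinates for matching tweets (assuming lon/lat columns on the table), with the heavy lifting done by the GPU renderer rather than the client:

```sql
-- Point locations of tweets mentioning 'earthquake'; the result set is rendered
-- server-side on the GPU rather than shipped to the browser.
SELECT lon, lat
FROM tweets
WHERE tweet_text ILIKE '%earthquake%';
```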

Getting slightly more complex, we can use the included state- and county-level demographic tables to enrich the Twitter data, here comparing the percentage of tweets mentioning Donald Trump to the median age of each county, while coloring by the state sub-region the county is in. At least from the graph (and we could confirm this with a regression), there appears to be a moderately positive correlation between median county age and the quantity of tweets about Trump per county, with some regional variation.
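
A hedged sketch of such an enrichment join; the counties table and its columns (county_id, county_name, sub_region, median_age) are assumptions, and in practice the join could instead be spatial, for example via ST_Contains on county geometries:

```sql
-- Per-county Trump mention rate, enriched with demographic attributes.
SELECT
  c.county_name,
  c.sub_region,
  c.median_age,
  100.0 * SUM(CASE WHEN t.tweet_text ILIKE '%trump%' THEN 1 ELSE 0 END) / COUNT(*)
    AS pct_trump
FROM tweets t
JOIN counties c ON t.county_id = c.county_id
GROUP BY c.county_name, c.sub_region, c.median_age;
```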

Given how useful spatiotemporal fusion can be, HeavyIQ has been extensively trained to conduct both geospatial and time-series joins, here used to join the daily number of mentions of bitcoin or btc with the daily high price of bitcoin. It’s not terribly surprising, but it does indeed appear that buzz around bitcoin on Twitter increased as the price of bitcoin skyrocketed (or was the causality at least partly in the other direction?). Note that such SQL is complex enough that advanced users might find it tedious to write, and more novice users might find it impossible.
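
For reference, a time-series join of this shape might look like the following; the bitcoin_prices table and its price_date and high_price columns are assumptions made for illustration:

```sql
-- Daily bitcoin/btc mention counts joined to the daily high price.
SELECT
  m.tweet_date,
  m.n_mentions,
  p.high_price
FROM (
  SELECT
    CAST(tweet_time AS DATE) AS tweet_date,
    COUNT(*) AS n_mentions
  FROM tweets
  WHERE tweet_text ILIKE '%bitcoin%' OR tweet_text ILIKE '%btc%'
  GROUP BY CAST(tweet_time AS DATE)
) m
JOIN bitcoin_prices p ON m.tweet_date = p.price_date
ORDER BY m.tweet_date;
```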

These are just a few of the many interesting things you can uncover using HeavyIQ to interrogate the Twitter and other data featured in this demo. We invite you to try for yourself; let us know what you find on X at @heavy_ai.

Over the course of the year, we have numerous improvements planned for HeavyIQ, including the ability to ask follow-up questions, the ability to use the HeavyML module for predictive analytics, and integration with Heavy Immerse dashboards, so users will be able to create charts, manipulate cross-filters, and ask questions of dynamically filtered data. We fundamentally believe that analytics will become increasingly conversational in nature, and we’re excited to push the boundaries of what is possible with this new modality of getting insight from data.

We're also looking to add other datasets to our HeavyIQ demo, so let us know if you'd like to see anything in particular. And in the meantime, happy exploring!

Todd Mostak

Todd is the CTO and Co-founder of HEAVY.AI. Todd built the original prototype of HEAVY.AI after tiring of the inability of conventional tools to allow for interactive exploration of big datasets while conducting his Harvard graduate research on the role of Twitter in the Arab Spring. He then joined MIT as a research fellow focusing on GPU databases before turning the HEAVY.AI project into a startup.