Summary

Large language models (LLMs) and multimodal large language models (MLLMs) are AI systems that create a “model” of knowledge by training a computer application on vast amounts of real-world information.


The AI era is already transforming computing and revolutionizing industries, and it is also giving us plenty of acronyms to figure out, like LLMs and MLLMs. The alphabet soup matters, however, for understanding how artificial intelligence (AI) and its applications can become more approachable for the non-data scientists among us.

Creating Models of Language

AI seeks to replicate and build upon human intelligence in machine form. To do this, AI creates a “model” of knowledge by training a computer application with real-world information. A model of language is built the same way, with the most familiar example being speech recognition. By feeding massive amounts of recorded speech into an AI system and pairing it with matching transcripts, the application builds a large language model, or LLM, that learns to recognize words and convert them to text.
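To make the idea concrete, here is a deliberately tiny sketch in Python of what “learning a model of language from paired real-world data” means. It is a toy word-pair counter, nowhere near a real LLM, and the sample corpus is invented for illustration:

```python
from collections import Counter, defaultdict

# A toy "model of language": count which word tends to follow which.
# This is a tiny stand-in for the statistical learning an LLM performs
# at vastly larger scale; the corpus here is made up.
corpus = "the model learns from text the model predicts the next word".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "model" (seen most often after "the")
```

A real LLM replaces these raw counts with billions of learned parameters, but the underlying principle of learning statistical patterns from data is the same.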

Much of the AI being developed today is based on LLMs that can also read, translate, classify, and analyze text, using probabilistic analysis to learn how words, sentences, and paragraphs work—and most importantly, how context creates meaning. Typically, organizations begin with an established LLM and go on to fine-tune it with training of their own to help the LLM excel at a specific function—for example, to create a chatbot that can respond to questions online.
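As a rough illustration of that fine-tuning step, the sketch below uses the open source Hugging Face transformers and datasets libraries. The base model (gpt2) and the one-line training set are placeholders; an organization would substitute its own base LLM and domain-specific data:

```python
# A minimal fine-tuning sketch using Hugging Face transformers.
# The model name and dataset below are placeholders, not a recommendation.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stand-in for an organization's own training text (e.g., support transcripts).
texts = ["Q: How do I reset my password? A: Use the account settings page."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the checkpoint in tuned-model/ now reflects the new data
```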

LLMs deal with language. AI models that train on or generate data in other modes, such as audio, images, or specialized data like DNA sequences, are known as multimodal large language models, or MLLMs. These are used for applications such as generating images from language-based prompts.
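As one concrete example of that language-in, image-out pattern, the sketch below uses the open source diffusers library with a public Stable Diffusion checkpoint (strictly a text-to-image diffusion model rather than a full MLLM, but the interface illustrates the same idea). The checkpoint name is one commonly cited example and may need to be swapped for another compatible model; an NVIDIA GPU is assumed:

```python
import torch
from diffusers import StableDiffusionPipeline

# Text prompt in, image out. The checkpoint is a placeholder for any
# text-to-image model compatible with this pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

image = pipe("a lighthouse on a rocky coast at sunset").images[0]
image.save("lighthouse.png")
```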

Behind the Scenes with LLMs and MLLMs

Today’s vast repository of digitized information, combined with powerful processing capability, is what made modern AI possible. That’s because LLMs need enormous amounts of data to train on.

The training process presents significant data challenges and can be quite laborious since the quality of the model depends on the quality of the data used to train it. Achieving that quality can require not just huge amounts of data but also a good deal of pre-processing, followed by iterations and adjustments. For many applications, training never really ends: the model continually updates itself with new information or, in the case of an advanced application like autonomous driving, keeps improving with experience.
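To give a feel for that pre-processing stage, here is a minimal, hypothetical cleaning pass in Python. Real pipelines add many more steps (language identification, PII scrubbing, quality scoring), but the normalize-filter-deduplicate pattern shown here is typical:

```python
import re

# Illustrative pre-processing pass: normalize, filter, and deduplicate
# raw documents before they reach training. A toy sketch, not a real
# production pipeline.
def preprocess(documents, min_words=5):
    seen, cleaned = set(), []
    for doc in documents:
        text = re.sub(r"\s+", " ", doc).strip()   # collapse whitespace
        if len(text.split()) < min_words:         # drop near-empty docs
            continue
        if text.lower() in seen:                  # exact-duplicate removal
            continue
        seen.add(text.lower())
        cleaned.append(text)
    return cleaned

raw = ["The   quick brown fox jumps over the dog.",
       "the quick brown fox jumps over the dog.",  # duplicate
       "too short"]
print(preprocess(raw))  # one cleaned, deduplicated document survives
```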

Where You’ll Find LLMs and MLLMs

We’re all familiar with digital assistants and chatbots that listen to us and answer questions. They illustrate the purpose of LLMs and MLLMs perfectly: taking in audio sources, deriving meaning, performing research, and turning that research into spoken replies. 
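That pipeline can be sketched structurally. The three stage functions below are hypothetical placeholders, not a real library’s API; in practice each would wrap a speech-to-text model, an LLM, and a text-to-speech model, respectively:

```python
# Structural sketch of a voice assistant loop. All three stage functions
# are hypothetical stubs standing in for real models.
def transcribe(audio_bytes: bytes) -> str:   # hypothetical: speech -> text
    raise NotImplementedError

def answer(question: str) -> str:            # hypothetical: LLM response
    raise NotImplementedError

def synthesize(text: str) -> bytes:          # hypothetical: text -> speech
    raise NotImplementedError

def handle_request(audio_bytes: bytes) -> bytes:
    question = transcribe(audio_bytes)  # take in the audio source
    reply = answer(question)            # derive meaning and perform research
    return synthesize(reply)            # turn the research into a spoken reply
```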

LLMs and MLLMs are also being used in other specialized ways, from creating images and audio to reading and processing DNA code to steering autonomous vehicles. They’re even being trained to design semiconductors and create new molecular structures. And speaking of code, MLLMs are poised to revolutionize web development, for example, by taking a screenshot of a website and generating the underlying HTML code.

High-profile LLMs and MLLMs

Here are a few examples of notable LLMs and MLLMs:

  • ChatGPT is a chatbot developed by OpenAI and launched in 2022. Based on an LLM, it lets users refine a conversation’s length, format, style, level of detail, and language. The same OpenAI models also power Microsoft’s conversational assistant and search tool Copilot (formerly Bing Chat).
  • Gemini (formerly Bard) is a family of MLLMs developed by Google DeepMind. Gemini was built to be multimodal from the start, meaning it can understand, operate across, and combine different types of information, including text, code, audio, images, and video.
  • Llama, which stands for Large Language Model Meta AI, was released by Meta in 2023. The current iteration, Llama 2, was trained on a large set of unlabeled data.


How to Optimize LLM Performance and Accuracy with Pure Storage and NVIDIA

Retrieval-augmented generation (RAG) improves and customizes large-scale, high-performance enterprise LLMs with external, more specific, and often proprietary data sources. By referencing knowledge bases beyond the one a model was trained on, RAG makes responses more accurate, timely, and relevant: at inference time, the user’s query is run through a vector search against cleaned, vectorized data, and the matching context is supplied to the model as input. Learn more about why a RAG pipeline with NVIDIA GPUs, NVIDIA networking, NVIDIA microservices, and Pure Storage FlashBlade//S is an ideal solution for enterprises adding LLMs to their production business processes.
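To show the retrieve-then-augment flow in miniature, here is a self-contained toy in Python. It uses bag-of-words vectors and cosine similarity as stand-ins for a real embedding model and vector database; the documents are invented, and the final prompt would be handed to an LLM for generation:

```python
import numpy as np

# Toy RAG lookup: embed documents and query, retrieve the closest match,
# and prepend it to the prompt. Bag-of-words vectors stand in for a real
# embedding model; a vector database would replace the linear scan.
documents = [
    "FlashBlade//S is a scale-out storage platform.",
    "RAG augments prompts with retrieved external context.",
    "NVIDIA GPUs accelerate model training and inference.",
]

vocab = sorted({w.lower().strip(".?") for d in documents for w in d.split()})

def embed(text: str) -> np.ndarray:
    words = [w.lower().strip(".?") for w in text.split()]
    return np.array([words.count(v) for v in vocab], dtype=float)

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    scores = [q @ embed(d) / (np.linalg.norm(q) * np.linalg.norm(embed(d)) + 1e-9)
              for d in documents]
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

query = "What does RAG add to prompts?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # `prompt` would then go to the LLM for grounded generation
```

A production pipeline swaps each stand-in for the real component (a trained embedding model, a vector database, and an LLM served on GPU infrastructure), but the retrieve-then-augment structure stays the same.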