Where we stand today on Large Language Models (including ChatGPT) and what to expect in the years to come.
The field of Large Language Models is nascent, LLMs having been around for just a few years. However, the release of ChatGPT in November 2022 truly catapulted LLMs into the limelight. During this conference, we will delve into the intricacies of large language models, exploring their potential, their challenges, and their role in shaping the future of IT.
Large language models (LLMs) are a new category of AI models that convert extensive text data into vectors, making it possible to manipulate massive volumes of data. They represent a significant shift from traditional Natural Language Processing and neural networks, enabled by the Transformer architecture introduced by Google in 2017. This innovation makes it possible to handle text far beyond what any human could read, allowing for sophisticated interactions and scaling human reasoning processes.
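The core mechanism behind the Transformer can be illustrated with a minimal sketch of scaled dot-product self-attention, the operation that lets every token vector incorporate context from every other token. This is a deliberately simplified toy (real Transformers apply learned projection matrices to produce queries, keys, and values; here the input is reused directly):

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of token vectors.

    X: (seq_len, d) matrix, one row per token. For clarity, this toy
    version uses X directly as queries, keys, and values; real
    Transformers apply learned projections first.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                   # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax: each row sums to 1
    return weights @ X                              # each token becomes a weighted mix of all tokens

# Three toy "token" vectors
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = self_attention(X)
print(out.shape)  # (3, 2): same shape as the input, but each row is now contextualized
```

Because every token attends to every other token in parallel, this operation scales to sequences far longer than earlier recurrent architectures could handle.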
"LLMs are scaling human reasoning"
(Loïc Boutet, Safebrain.ai)
Currently, the LLM market is polarized, dominated by major players such as OpenAI, Google, and Meta. While Google leads in performance, OpenAI is more widely recognized, and Meta focuses on optimization rather than peak performance. Niche players like LightOn cater to specific needs such as confidentiality.
But behind those approaches, there is the more fundamental question of proprietary vs. open-source. While proprietary models raise concerns about data and knowledge centralization, the open-source movement, led by Meta, fosters community development and offers diverse accessibility and customization levels. Newer models like Mistral are both open-source and highly customizable, paving the way for a personalized AI experience.
Another fundamental question when looking at putting LLMs to use is foundational vs. specialized. Foundational models (e.g., GPT) are general-purpose models capable of handling a wide range of queries. Specialized models focus on specific tasks or data sets. Foundational models, despite their energy-intensive nature, encapsulate vast human knowledge, whereas specialized models offer efficiency and accuracy in their domains. The future may need a balance of both, catering to varied requirements and purposes.
Whatever the approach, all LLMs share key advantages, chiefly their conversational interface, which provides an intuitive user experience and coherent responses. Interaction is straightforward, accommodating many languages and epitomizing user-friendliness. Users need not learn to code or navigate complex interfaces, making the technology accessible to a wide audience.
"For the first time, human can really interact with machines just by talking"
(Loïc Boutet, Safebrain.ai)
Moreover, LLMs surpass earlier conversational applications such as chatbots. They excel at automated analysis, able to review vast volumes of documents swiftly and accurately, a task that would otherwise require extensive human labor. They enable non-developers to query and interact with large bodies of data directly.
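The "query a body of documents" pattern can be sketched in a few lines: represent each document as a vector, retrieve the one closest to the question, and hand it to the LLM as context. The embedding below is a deliberately naive bag-of-words count (real systems use learned dense embeddings), and the final LLM call is left as a comment since it depends on the provider:

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words embedding: word -> count.
    Real pipelines use dense vectors from an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents):
    """Return the document most similar to the question."""
    q = embed(question)
    return max(documents, key=lambda d: cosine(q, embed(d)))

docs = [
    "Expense reports must be filed within 30 days.",
    "The VPN requires two-factor authentication.",
]
best = retrieve("how do I file an expense report", docs)
print(best)
# In a real system, `best` would be sent to the LLM as context
# alongside the user's question (retrieval-augmented generation).
```

This retrieve-then-generate pattern is what lets a non-developer "talk to" a document base without writing a single query.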
That’s why one of the most common current uses of LLMs is customer support, where they outperform previous chatbot technologies. They also prove particularly effective in HR departments, automating responses to routine questions and enhancing the onboarding process. Their patience and consistency make them well suited to repetitive tasks, improving operational efficiency.
In IT production, LLMs are proving very useful for automated error analysis and root cause analysis, especially in examining system logs. They offer unparalleled proficiency in interpreting log data, predicting potential issues, and suggesting solutions.
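The "weak signal" idea can be illustrated without any LLM at all: a crude pre-filter that flags components whose error rate is trending upward, producing a shortlist that an LLM (or a human) would then analyze in depth. The component names and the threshold below are invented for illustration:

```python
from collections import defaultdict

def rising_error_components(log_lines, ratio=2.0):
    """Flag components whose error count in the second half of the log
    is at least `ratio` times the count in the first half -- a crude
    'weak signal' that something is degrading before it fails outright.

    log_lines: list of (component, level) tuples in chronological order.
    """
    half = len(log_lines) // 2
    early, late = defaultdict(int), defaultdict(int)
    for i, (component, level) in enumerate(log_lines):
        if level == "ERROR":
            (early if i < half else late)[component] += 1
    return sorted(c for c in late if late[c] >= ratio * max(early[c], 1))

logs = [
    ("db", "INFO"), ("cache", "ERROR"), ("db", "INFO"), ("db", "INFO"),
    ("db", "ERROR"), ("db", "ERROR"), ("cache", "ERROR"), ("db", "ERROR"),
]
print(rising_error_components(logs))  # ['db']: errors accelerating late in the log
```

In practice an LLM adds value on top of such a filter by reading the flagged log lines themselves, explaining the likely root cause, and suggesting remediation in plain language.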
"LLMs are very good at identifying weak signals to see if something that is working today might not work tomorrow"
(Guillaume Besson, AI Builders)
However, LLMs are not yet able to fix production errors autonomously, so collaboration between humans and AI remains necessary. Achieving flawless operation in production remains a challenge, underscoring the need for cautious integration between systems through APIfication and thorough verification of these technologies.
"LLMs are going to APIfy so they can learn from other components of the IT, we're probably going there but it'll take longer than expected (10-15 years)" (Loïc Boutet, Safebrain.ai)
Despite their current usefulness and potential, LLMs are still nascent technologies which come with their specific challenges and risks.
First, there is a pronounced skill gap and a limited supply of engineers proficient in LLMs, making the right talent hard to find. Collaborative work with LLMs is not yet streamlined, which can lead to fragmented operations. The hardware required, particularly high-performance GPUs, is expensive and in high demand, slowing adoption.
Also, working with LLMs poses risks to data privacy and intellectual property, as the data input can be reflected in the model's output. Users must be cautious and aware of these inherent risks, especially when dealing with confidential information.
Users must be cautious not only about intellectual property but also because LLMs can generate inaccurate or hallucinated information, making it crucial to verify outputs and understand the model's limitations. Behind the user-friendly interface lies a complex technology that requires understanding and caution. Users must be trained to question and verify outputs, to use the tool as an assistant rather than a decision-maker, and to maintain a healthy skepticism to mitigate the risks associated with LLMs. Security and confidentiality are paramount, and users must be advised not to transmit all their data, especially sensitive data.
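One concrete precaution is redacting obviously sensitive fields before any text leaves the organization. A minimal sketch with a few regular expressions (the patterns below are illustrative and far from exhaustive; a real deployment would rely on a dedicated data-loss-prevention tool):

```python
import re

# Illustrative patterns only: email addresses, 16-digit card numbers,
# and simple international phone numbers. A production redactor needs
# far broader coverage (names, addresses, IDs, free-text secrets...).
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){15}\d\b"), "[CARD]"),
    (re.compile(r"\+\d{2}[ \d]{8,}"), "[PHONE]"),
]

def redact(text):
    """Replace matches of each sensitive pattern with a placeholder
    before the text is sent to an external LLM."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

msg = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(msg))  # Contact [EMAIL], card [CARD].
```

Running such a filter at the gateway level, rather than trusting each user individually, is one way to make the "don't send sensitive data" policy enforceable.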
All the challenges just mentioned, inherent to the technology itself, become even more critical when implementing LLMs in organizations.
This is complex and requires securing buy-in and navigating change management. Thus, organizations should seek expert guidance to navigate the technical, legal, and change management challenges associated with LLMs. Consultants provide valuable insights, risk assessments, and tailored strategies, helping companies make informed decisions and successfully integrate LLMs into their operations.
"Don't start to do things on your own. LLMs are very specific technologies that are very new with very specific challenges, risks and limitations. So the most important thing today is to find the right help." (Guillaume Besson, AI Builders)
LLM implementation strategies should start by adding value and enhancing existing structures. LLMs are not a silver bullet for operational or organizational issues. Organizations are advised to optimize their business processes and operational strategies before layering LLMs on top.
To avoid creating new operational and organizational issues in the process, IT departments should be onboarded early. They play a crucial role in the implementation of LLMs, as these models are resource-intensive and may require specific IT resources for fine-tuning or retraining. Proper onboarding and preparation of IT are essential for smooth integration and optimal performance.
"Shadow AI is not a risk, it's a reality. Studies show that 68% of white-collar employees are using ChatGPT without telling their managers." (Loïc Boutet, Safebrain.ai)
Onboarding IT early is also a way to address the phenomenon of "shadow AI," where employees use LLMs without supervision. Organizations should proactively offer official LLM tools to maintain control over information and usage, and lead the change through proper training and support.
Like every new technology, LLMs present challenges, but with the right external expertise to manage these systems, they are already a viable solution. In the future, there is a strong likelihood that more localized, on-premise solutions will emerge, making these models easier to operate. For now, companies should be prepared for challenges tied to adoption, training, and technical management.