Low-Code/No-Code: Why Declarative Approaches are Winning the Future of AI

Declarative machine learning is going to be the preferred way most organizations operationalize task-specific AI to solve business problems.


AI has reached every team of the modern enterprise, despite having traditionally been in the realm of expert data scientists with PhDs. In the last few years, we’ve seen multiple attempts to democratize machine learning (ML) and deep learning (DL) with the goal of reaching a broader set of personas like engineers and developers. From open-source ML libraries and frameworks to low-code platforms and APIs, ML and DL are more accessible and easier to use than ever before.

However, developing ML models that go beyond simple prompts to ChatGPT is still a massive lift for most organizations. From our observations in the field, it’s clear that no-code approaches such as AutoML hold a lot of interest from buyers and executives looking to supercharge their products with AI. But every customer we talk to who has used a no-code AI tool says the same thing: it’s great for prototyping and demos, but its black-box nature, lacking transparency, flexibility, and control, prevents the development of production-ready applications. On the other side of the spectrum, the code complexity of developing ML applications with popular DL frameworks like TensorFlow is still too high for most teams to operationalize internally. That’s why, in the last few years, we’ve pioneered the concept of declarative ML, a low-code approach that further simplifies the ML development lifecycle.

Declarative ML or low-code approaches differ from no-code approaches, in that they still require some coding and technical skills, but they drastically reduce the amount of code and complexity involved. This approach greatly simplifies the ML development lifecycle, from faster experimentation all the way to production, and accelerates innovation. It takes inspiration from tools in adjacent spaces, like DBT for data engineering and Terraform for DevOps, which have simplified and automated complex tasks with declarative languages.

Many of the most influential tech companies, such as Meta, Apple, and Uber, saw the need for a more accessible yet flexible abstraction for machine learning and developed their own internal declarative ML frameworks to accelerate innovation: Looper, Overton, and Ludwig, respectively. However, Ludwig is the first to bring this declarative approach to the masses by going open source, garnering over 10,000 stars on GitHub with an active and engaged community.

What makes Ludwig so popular? Based on the concept of declarative ML, Ludwig allows practitioners to describe what they want to achieve, rather than how to achieve it. This way, you can focus on the problem and the data, rather than the implementation details and the infrastructure. Some of our open-source users have described it as “Lego blocks for deep learning” or “deep learning with training wheels built for engineers.”

Ludwig allows practitioners to iterate faster, build reliable production-ready models, and stay in control. Specifically, Ludwig enables you to train and test DL models by simply providing a tabular file (such as a CSV) with your data and a YAML configuration file with the specifications of your desired model. If you need to change the learning rate, simply specify learning_rate: 0.001 in the configuration. There is no need to master all the intricacies of how the learning rate is implemented or how to do learning rate scheduling or warm-up; it now takes just one line of configuration. As this example shows, the main challenge is not the code itself, but our ability to work at an optimal level of abstraction that allows us to create faster and more reliable models.
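To make this concrete, here is a minimal sketch of a Ludwig-style YAML configuration. The top-level fields (input_features, output_features, trainer) follow Ludwig's documented schema, but the column names and types below are hypothetical, standing in for whatever columns appear in your CSV:

```yaml
# Declarative model definition: describe what to predict, not how.
input_features:
  - name: review_text     # a text column in your CSV (hypothetical name)
    type: text
output_features:
  - name: sentiment       # the label column to predict (hypothetical name)
    type: category
trainer:
  learning_rate: 0.001    # the one-line change discussed above
```

Everything not specified here (encoder architectures, preprocessing, scheduling) falls back to sensible defaults, which is what makes the abstraction low-code rather than no-code: any default can be overridden in the same file.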

This approach has already enabled organizations to build amazing applications with a few engineering resources. For example, Paradigm, the largest global liquidity network for cryptocurrencies, has used Predibase, a platform that leverages Ludwig, to build AI-powered features for its traders. “With over $200B in trades, Paradigm is the largest global liquidity network for cryptocurrencies. One of our top priorities is helping traders make smarter decisions with AI,” said Anand Gomes, Co-Founder and CEO of Paradigm. “By adopting a declarative approach to ML, our team of engineers has built new product capabilities that were previously not possible, and best of all, the time it takes to build production models has been reduced from months to minutes and at a fraction of the cost. With this technology, we’ve built powerful relevance scoring and in-platform intelligence that helps our customers identify trading opportunities and capture edge.”

In the new era of Generative AI, declarative ML has proven to be not only useful for building new models from scratch but also for enhancing existing ones by fine-tuning pre-trained deep learning models.

Large language models (LLMs), such as GPT-3, Mistral, and BERT, have shown remarkable performance on natural language processing (NLP) tasks such as text generation and classification. However, using LLMs in practice can be challenging and expensive, and users often find they need to fine-tune an LLM before it’s ready to be deployed for a production task. Declarative ML platforms, such as Predibase, help engineers easily use LLMs in their projects by providing pre-trained models, custom training options for fine-tuning, and seamless deployment solutions, all operating through a simple configuration-driven abstraction accessible to most engineers. Moreover, declarative ML tools can help engineers customize LLMs for their specific tasks and data by abstracting away the painful aspects of managing infrastructure and optimizing performance. This approach also provides more flexibility and control than black-box LLM APIs such as OpenAI’s.
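The same configuration-driven style extends to fine-tuning. As a hedged sketch based on the LLM fine-tuning schema Ludwig has published (the base model, adapter choice, and column names here are illustrative, not prescriptive):

```yaml
# Declarative LLM fine-tuning sketch: a parameter-efficient (LoRA) adapter
# on a pre-trained base model, driven entirely by configuration.
model_type: llm
base_model: mistralai/Mistral-7B-v0.1   # illustrative Hugging Face model ID
adapter:
  type: lora                            # fine-tune a small adapter, not all weights
input_features:
  - name: prompt                        # hypothetical instruction column
    type: text
output_features:
  - name: response                      # hypothetical target completion column
    type: text
trainer:
  type: finetune
  learning_rate: 0.0001
```

The point is the shape of the workflow: swapping the base model, the adapter strategy, or the learning rate is a one-line config change rather than a rewrite of training code.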

Declarative ML is a powerful paradigm that democratizes AI for engineers by allowing them to build and use DL models with minimal code and complexity, spending their time on the specifics of their task rather than wrangling PyTorch. Declarative ML tools, such as Ludwig and Predibase, are the preferred mechanism for engineering teams, who have become the front lines of productionizing AI, because they provide the level of control developers need while minimizing complexity. Given the fast-moving pace of the new AI wave, declarative ML is going to be the preferred way most organizations operationalize task-specific AI to solve business problems.

Devvret Rishi

Devvret Rishi is co-founder and CEO of Predibase. Built by developers, for developers, Predibase enables any software engineer to do the work of an ML engineer in an easy-to-understand declarative way.